Noisy Machines: Understanding noisy neural networks and enhancing robustness to analog hardware errors using distillation
The success of deep learning has brought forth a wave of interest in computer hardware design to better meet the high demands of neural network inference. In particular, analog computing hardware has been heavily motivated specifically for accelerating neural networks, based on electronic, optical, or photonic devices, which may well achieve lower power consumption than conventional digital electronics. However, these proposed analog accelerators suffer from the intrinsic noise generated by their physical components, which makes it challenging to achieve high accuracy on deep neural networks. Hence, for successful deployment on analog accelerators, it is essential to be able to train deep neural networks to be robust to random continuous noise in the network weights, which is a somewhat new challenge in machine learning. In this paper, we advance the understanding of noisy neural networks. We outline how a noisy neural network has reduced learning capacity as a result of loss of mutual information between its input and output. To combat this, we propose using knowledge distillation combined with noise injection during training to achieve more noise-robust networks, which is demonstrated experimentally across different networks and datasets, including ImageNet. Our method achieves models with as much as ∼2× greater noise tolerance compared with the previous best attempts, which is a significant step towards making analog hardware practical for deep learning.

1 INTRODUCTION

Deep neural networks (DNNs) have achieved unprecedented performance over a wide variety of tasks such as computer vision, speech recognition, and natural language processing. However, DNN inference is typically very demanding in terms of compute and memory resources. Consequently, larger models are often not well suited for large-scale deployment on edge devices, which typically have meagre performance and power budgets, especially battery-powered mobile and IoT devices. To address these issues, the design of specialized hardware for DNN inference has drawn great interest, and is an extremely active area of research. To date, a plethora of techniques have been proposed for designing efficient neural network hardware (Sze et al., 2017).

In contrast to the current status quo of predominantly digital hardware, there is significant research interest in analog hardware for DNN inference. In this approach, digital values are represented by analog quantities such as electrical voltages or light pulses, and the computation itself (e.g., multiplication and addition) proceeds in the analog domain, before eventually being converted back to digital. Analog accelerators take advantage of particular efficiencies of analog computation in exchange for losing the bit-exact precision of digital. In other words, analog compute is cheap but somewhat imprecise. Analog computation has been demonstrated in the context of DNN inference in electronic (Binas et al., 2016), photonic (Shen et al., 2017), and optical (Lin et al., 2018) systems. Analog accelerators promise to deliver at least two orders of magnitude better performance over a conventional digital processor for deep learning workloads in both speed (Shen et al., 2017) and energy efficiency (Ni et al., 2017). Electronic analog DNN accelerators are arguably the most mature technology and hence will be our focus in this work.
The most common approach to electronic analog DNN acceleration is in-memory computing, which typically uses non-volatile memory (NVM) crossbar arrays to encode the network weights as analog values. The NVM itself can be implemented with memristive devices, such as metal-oxide resistive random-access memory (ReRAM) (Hu et al., 2018) or phase-change memory (PCM) (Le Gallo et al., 2018; Boybat et al., 2018; Ambrogio et al., 2018). The matrix-vector operations computed during inference are then performed in parallel inside the crossbar array, operating on analog quantities for weights and activations. For example, addition of two quantities encoded as electrical currents can be achieved by simply connecting the two wires together, whereby the currents add linearly according to Kirchhoff's current law. In this case, there is almost zero latency or energy dissipation for this operation. Similarly, multiplication with a weight can be achieved by programming the NVM cell conductance to the weight value, which is then used to convert an input activation encoded as a voltage into a scaled current, following Ohm's law. Therefore, the analog approach promises significantly improved throughput and energy efficiency. However, the analog nature of the weights makes the compute noisy, which can limit inference accuracy. For example, a simple two-layer fully-connected network with a baseline accuracy of 91.7% on digital hardware achieves only 76.7% when implemented on an analog photonic array (Shen et al., 2017). This kind of accuracy degradation is not acceptable for most deep learning applications. Therefore, the challenge of imprecise analog hardware motivates us to study and understand noisy neural networks, in order to maintain inference accuracy under noisy analog computation.

The question of how to effectively learn and compute with a noisy machine is a long-standing problem of interest in machine learning and computer science (Stevenson et al., 1990; Von Neumann, 1956). In this paper, we study noisy neural networks to understand their inference performance. We also demonstrate how to train a neural network with distillation and noise injection to make it more resilient to computation noise, enabling higher inference accuracy for models deployed on analog hardware. We present empirical results that demonstrate state-of-the-art noise tolerance on multiple datasets, including ImageNet.

The remainder of the paper is organized as follows. Section 2 gives an overview of related work. Section 3 outlines the problem statement. Section 4 presents a more formal analysis of noisy neural networks. Section 5 gives a distillation methodology for training noisy neural networks, with experimental results. Finally, Section 6 provides a brief discussion and Section 7 closes with concluding remarks.

2 RELATED WORK

Previous work broadly falls under the following categories: studying the effect of analog computation noise, analysis of noise injection for DNNs, and use of distillation in model training.

Analog Computation Noise Models. In Rekhi et al. (2019), the noise due to analog computation is modeled as additive parameter noise with a zero-mean Gaussian distribution, whose variance is a function of the effective number of bits of the output of an analog computation. Similarly, Joshi et al. (2019) also model analog computation noise as additive Gaussian noise on the parameters, where the variance is proportional to the range of values that their PCM device can represent. Some noise models have included a more detailed account of device-level interactions, such as voltage drop across the analog array (Jain et al., 2018; Feinberg et al., 2018), but these are beyond the scope of this paper. In this work, we consider an additive Gaussian noise model on the weights, similar to Rekhi et al. (2019) and Joshi et al. (2019), and present a novel training method that outperforms the previous work in model noise resilience.

Noise Injection for Neural Networks. Several stochastic regularization techniques based on noise injection and dropout (Srivastava et al., 2014; Noh et al., 2017; Li & Liu, 2016) have been demonstrated to be highly effective at reducing overfitting. For generalized linear models, dropout and additive noise have been shown to be equivalent to adaptive L2 regularization to first order (Wager et al., 2013). Training networks with Gaussian noise added to the weights or activations can also increase robustness to a variety of adversarial attacks (Rakin et al., 2018). Bayesian neural networks replace deterministic weights with distributions in order to optimize over the posterior distribution of the weights (Kingma & Welling, 2013). Many of these methods use noise injection at inference time to approximate the weight distribution; in Gal & Ghahramani (2016), a link between Gaussian processes and dropout is established in an effort to model the uncertainty of the output of a network. A theoretical analysis by Stevenson et al. (1990) has shown that for neural networks with adaptive linear neurons, the probability of error of a noisy neural network classifier with weight noise increases with the number of layers, but is largely independent of the number of weights per neuron or neurons per layer.

Distillation in Training. Knowledge distillation (Hinton et al., 2015) is a well-known technique in which the soft labels produced by a teacher model are used to train a student model, which typically has reduced capacity. Distillation has shown merit for improving model performance across a range of scenarios, including student models lacking access to portions of training data (Micaelli & Storkey, 2019), quantized low-precision networks (Polino et al., 2018; Mishra & Marr, 2017), protection against adversarial attacks (Papernot et al., 2016; Goldblum et al., 2019), and avoiding catastrophic forgetting in multi-task learning (Schwarz et al., 2018). To the best of our knowledge, our work is the first to combine distillation with noise injection in training to enhance model noise robustness.

3 PROBLEM STATEMENT

Without loss of generality, we model a general noisy machine after a simple memristive crossbar array, similar to Shafiee et al. (2016). Figure 1 illustrates how an arbitrary neural network layer l, such as a typical 3×3 convolution, can be mapped to this hardware substrate by first flattening the weights into a single large 2D matrix, Wl, and then programming each element of this matrix into a memristive cell in the crossbar array, which provides the required conductances Gl (the reciprocal of resistance) to perform analog multiplication following Ohm's law, i_out = v_in G. Note that a differential pair of NVM devices is typically used to represent a signed quantity in Gl.
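To make this weight-to-crossbar mapping concrete, the following is a minimal sketch (in PyTorch; the shapes and the helper names `flatten_conv_weights` and `to_differential_conductances` are our own illustration, not code from the paper) of how a 3×3 convolution's weights could be flattened into the 2D matrix Wl and how the conductances of a differential pair might be derived from signed weights.

```python
import torch

def flatten_conv_weights(conv_weight: torch.Tensor) -> torch.Tensor:
    # conv_weight has shape (C_out, C_in, kH, kW); each output channel
    # becomes one column of the crossbar matrix W_l, so a flattened
    # activation patch of length C_in*kH*kW maps to one row of the array.
    c_out = conv_weight.shape[0]
    return conv_weight.reshape(c_out, -1).t()  # shape (C_in*kH*kW, C_out)

def to_differential_conductances(w: torch.Tensor, g_max: float = 1.0):
    # A signed weight is represented by a differential pair (G+, G-):
    # the effective conductance is G+ - G-. Weights are scaled into the
    # representable range [0, g_max]; g_max is an assumed device limit.
    scale = g_max / w.abs().max()
    g_pos = torch.clamp(w, min=0) * scale
    g_neg = torch.clamp(-w, min=0) * scale
    return g_pos, g_neg

w = torch.randn(64, 32, 3, 3)          # weights of a 3x3 convolution
W_l = flatten_conv_weights(w)          # 2D matrix programmed into the array
g_pos, g_neg = to_differential_conductances(W_l)
```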
Subsequently, input activations xl, converted into continuous voltages v(xl), are streamed into the array rows from the left-hand side. The memristive devices connect rows with columns, where the row voltages are converted into currents scaled by the programmed conductance, G, to generate the currents i(yl), which are differential in order to represent both positive and negative quantities with unipolar signals. The currents from each memristive device essentially add up for free where they are connected in the columns, according to Kirchhoff's current law. Finally, the differential currents are converted to bipolar voltages, v(yl), which are then digitized before adding the bias and performing the batch normalization and ReLU operations, which are not shown in Figure 1.

However, the analog inference hardware of Figure 1 is subject to real-world non-idealities, typically attributed to variations in 1) manufacturing process, 2) supply voltage, and 3) temperature, collectively known as PVT variation, all of which result in noise in the system. Below we discuss the two key components in terms of analog noise modeling.

Data Converters. Digital-to-analog converter (DAC) and analog-to-digital converter (ADC) circuits are designed to be robust to PVT variation, but in practice these effects do degrade the resolution (i.e., number of bits). Therefore, we consider the effective number of bits (ENOB), which is a lower bound on resolution in the presence of non-idealities. Hence, we use activation and weight quantization with ENOB data converters and no additional converter noise modeling.

NVM cells. Due to their analog nature, memristive NVM cells have limited precision, owing to the read and write circuitry (Joshi et al., 2019). Between write and read operations, their stored value is prone to drift over time. Long-term drift can be corrected with periodic refresh operations. At shorter timescales, time-varying noise may be encountered. For most of the experiments in this paper, we model generic NVM cell noise as an additive zero-mean i.i.d. Gaussian error term on the weights of the model in each particular layer, ∆Wl ∼ N(∆Wl; 0, σ²_{N,l} I). This simple model, described more concretely in Section 5, is similar to that used by Joshi et al. (2019), which was verified on real hardware. In addition, we also investigate spatially-varying and time-varying noise models in Section 5.2 (Table 1).
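As a rough illustration of the training approach the paper builds on, here is a minimal sketch (PyTorch ≥ 2.0 for `torch.func.functional_call`; the noise scaling by each tensor's maximum magnitude, and hyperparameters such as `sigma`, `T`, and `alpha`, are our own assumptions, and the exact loss weighting of the paper's Section 5 may differ) of injecting i.i.d. Gaussian weight noise at every forward pass and combining a distillation term from a clean teacher with the usual cross-entropy.

```python
import torch
import torch.nn.functional as F
from torch.func import functional_call

def noisy_forward(model, x, sigma):
    # Build perturbed copies of the weights (noise scaled relative to each
    # tensor's max magnitude, mirroring a range-proportional noise model)
    # and run a functional forward pass, so gradients still flow back to
    # the clean parameters.
    noisy_params = {
        name: p + sigma * p.detach().abs().max() * torch.randn_like(p)
        for name, p in model.named_parameters()
    }
    return functional_call(model, noisy_params, (x,))

def distillation_noisy_loss(student, teacher, x, y,
                            sigma=0.05, T=4.0, alpha=0.9):
    # The teacher runs noise-free; the student sees weight noise,
    # mimicking analog inference. The loss mixes temperature-scaled
    # soft-label KL with hard-label cross-entropy.
    with torch.no_grad():
        t_logits = teacher(x)
    s_logits = noisy_forward(student, x, sigma)
    kd = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                  F.softmax(t_logits / T, dim=1),
                  reduction="batchmean") * (T * T)
    ce = F.cross_entropy(s_logits, y)
    return alpha * kd + (1.0 - alpha) * ce
```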
The article on "Noisy Machines" addresses the issue of implementing deep neural network inference on a noisy hardware computing substrate, e.g., analog accelerators. This is an important topic because analog devices allow fast and energy-efficient inference, which is crucial for inference at the edge. Because of their analog nature, such devices suffer from noisy computations, and this article studies the case of noisy weights.
The manuscript illustrates how noise reduces the learning capacity of a neural network. To mitigate this loss, the authors propose a method that combines noise injection with knowledge distillation. However, from a conceptual point of view, their contribution (i.e., (10) in Section 5) is unclear to me. Specifically, the authors are not precise about how they merge the aforementioned previous ideas to arrive at the new loss function (10).
SP:cc73a630ce68477bde408cc08a92a4f98eb2c597
Neural ODEs for Image Segmentation with Level Sets
1 INTRODUCTION

Image segmentation is the task of delineating pixels belonging to semantic labels. The ability to automatically segment objects is important because accurate labeling is expensive and hard (Vittayakorn & Hays, 2011; Zhang et al., 2018). Automatic image segmentation can have large impact in many domains, e.g., obstacle avoidance in autonomous driving and treatment planning in medical imaging. Accurate classification of pixels in close proximity to inter-class boundaries remains a challenging task in image segmentation. Object boundaries can have high-curvature contours or weak pixel intensity that complicate separating the object from surrounding ones. In deep CNNs (Simonyan & Zisserman, 2014; Zeiler & Fergus, 2014; Szegedy et al., 2015; He et al., 2016; Chen et al., 2017), the object-of-interest and surrounding competing objects can provide equal context to the receptive field of a boundary pixel, which can make accurate classification difficult. Humans also find it difficult to accurately label pixels near object boundaries.

Level Set methods (Zhao et al., 1996; Brox et al., 2006) and Active Shapes (Paragios & Deriche, 2000; Chan & Vese, 2001) have been proposed to incorporate shape and image priors to mitigate boundary ambiguities (Tsai et al., 2003; Rousson & Paragios, 2002). The Level Set method for image segmentation evolves an initial contour of an object-of-interest along the normal direction with a forcing function. A contour is represented by an embedding function, typically a signed distance function, and its evolution amounts to solving a differential equation (Osher & Sethian, 1988). In this work, we extend the formulation of the level set method. Inspired by the recent progress in Neural Ordinary Differential Equations (NODEs) (Chen et al., 2018; Dupont et al., 2019; Gholami et al., 2019), we propose to use NODEs to solve the level set formulation of the contour evolution, thus learning the forcing function in an end-to-end, data-driven manner. Unlike earlier attempts at combining the level set method with CNNs, we benefit from the NODE parametrization of the derivative of the contour because it allows us to incorporate external constraints that guide the contour evolution, e.g., by adding a regularization penalty to the curvature of the front or exploiting images at the evolving front by extracting appearance constraints in an unsupervised way. Finally, similar to experiments in Chen et al. (2018), to alleviate the need for careful choice or design of contour embedding functions, we propose a NODE-based method that evolves an image embedding into a dense per-pixel semantic label space. To the best of our knowledge, this work is the first to apply Neural ODEs to real-world problems.

We validate our methods on two 2D segmentation tasks: kidney segmentation in transversal slices of CT scans and salient object segmentation. Given an initial estimate of the kidney from existing algorithms, our method effectively evolves the initial estimates and achieves improved kidney segmentation, as we show in Figure 1. On real-life salient objects, in addition to contour evolution, we use our method to directly evolve the embedding of an input image into a pixel-wise dense semantic label. Following Hu et al. (2017), we compare against the results in (Wang et al., 2017; Li et al., 2016; Li & Yu, 2015; Zhao et al., 2015; Lee et al., 2016; Wang et al., 2015; Hu et al., 2017) and achieve ω-Fβ scores (0.668 on PASCAL-S and 0.768 on ECSSD) that are higher than several state-of-the-art algorithms. Our results suggest the potential of utilizing NODEs for solving the contour evolution of level set methods or the direct evolution of image embeddings into segmentation maps. We hope our findings will inspire future research in using NODEs for semantic segmentation tasks. We foresee that our method would allow for intervention on intermediate states of the solution of the ODE, allowing for injection of shape priors or other regularizing constraints. In summary, our contributions are:

• We propose to use NODEs to solve the level set formulation of the contour evolution.
• We propose using NODEs to learn the forcing function in an end-to-end, data-driven way.
• We show NODEs can also evolve image embeddings directly into dense per-pixel semantic label spaces, which may alleviate the need for careful choice or design of contour embedding functions.

2 METHODS

Suppose I is a 2D image, S is the contour of an object we want to segment, and φ is a contour embedding function, defined as a distance map, such that S = {(x, y) | φ(x, y) = 0}. We assume an initial but rough contour of the object is given by a human operator or by an existing algorithm. A level set segmentation (Osher & Sethian, 1988) solves a differential equation to evolve a contour along its normal direction with a speed function F as:

dφ/dt = |∇φ| F  for t ∈ [0, 1],  (1)

where the initial value φ0(x, y) is defined as the signed Euclidean distance from (x, y) to the closest point on the initial contour S0. The speed function F is often modelled as a function of the target image I, the shape statistics of the object contour (derived from training shapes), or a regularizing curvature term ∇ · (∇φ/‖∇φ‖). In Neural ODEs, we parametrize the derivative of the hidden state h using a neural network fθ parametrized by θ:

dh/dt = fθ(h, t).  (2)

The relationship between Eq. 1 and Eq. 2 is immediate. In the next section, we propose two approaches that adapt NODEs to the level set method for image segmentation.

2.1 CONTOUR EVOLUTION WITH NODES

We propose to solve a more general form of Eq. 1 to evolve an initial contour estimate φ̂ for image segmentation. We define the state of the NODE to be φ̂ augmented with the input image's embedding, h. We then advance the augmented state, γ = (φ̂, h), using NODEs, which can be interpreted as estimating the speed function F described in Eq. 1. Mathematically,

γ = (φ̂, h),  dγ/dt = fθ(γ, t) for t ∈ [0, 1],  γ(0) = (φ̂(0), h(0)),  φ̃ = φ̂(1) + ψ(γ(1)),  (3)

where t is the time step in the evolution, γ is the augmented state of the NODE, f is a neural network parametrized by θ, φ̂(0) is the initial value of the distance map, h(0) is the initial value of the image embedding, ψ is a learned function, and φ̃ is the dense per-pixel distance map prediction. Figure 2a schematically illustrates our initial contour evolution approach. Throughout this paper, we will refer to this method as Contour Evolution.
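A minimal sketch of the Contour Evolution forward pass (Eq. 3) might look as follows, using the `torchdiffeq` package that accompanies Chen et al. (2018). The network sizes, the simple convolutional fθ, and the 1×1-convolution ψ head are our own placeholder choices, not the paper's architecture.

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint  # solver from Chen et al. (2018)

class ODEFunc(nn.Module):
    # f_theta: parametrizes d(gamma)/dt for the augmented state
    # gamma = (phi_hat, h), stacked along the channel dimension.
    def __init__(self, channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.Softplus(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, t, gamma):
        return self.net(gamma)

class ContourEvolution(nn.Module):
    def __init__(self, embed_channels=8):
        super().__init__()
        self.func = ODEFunc(1 + embed_channels)
        self.psi = nn.Conv2d(1 + embed_channels, 1, 1)  # learned readout psi

    def forward(self, phi_hat0, h0):
        gamma0 = torch.cat([phi_hat0, h0], dim=1)     # gamma(0)
        t = torch.tensor([0.0, 1.0])                  # integrate t in [0, 1]
        gamma1 = odeint(self.func, gamma0, t)[-1]     # gamma(1)
        phi_hat1 = gamma1[:, :1]                      # phi_hat(1)
        return phi_hat1 + self.psi(gamma1)            # phi_tilde, as in Eq. 3

model = ContourEvolution()
phi0 = torch.randn(2, 1, 64, 64)   # initial signed distance maps
h0 = torch.randn(2, 8, 64, 64)     # image embeddings
phi_tilde = model(phi0, h0)        # predicted distance maps
```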
2.2 IMAGE EVOLUTION WITH NODES

In our first approach, we obtain a final optimal contour by evolving an initial estimate. In our second approach, inspired by Chen et al. (2018), we evolve an image embedding h and project it into a dense per-pixel distance map φ̃, whose zero level set defines the final segmentation map, S(t) = {(x, y) | φ(t)(x, y) = 0}. Mathematically,

dh/dt = fθ(h, t) for t ∈ [0, 1],  h(0) = λ(I),  φ̃ = ψ(h(1)),  (4)

where t is the time step in the evolution, f is a neural network parametrized by θ, I is an image, λ is a learned image embedding function, and ψ is a learned function that maps an image embedding to a distance map. Figure 2b schematically illustrates our direct image evolution approach. Throughout this paper, we will refer to this method as Image Evolution.

3 IMPLEMENTATION

In the following subsections, we describe our design choices for the loss function and its regularization terms, the architectures, and strategies for emphasizing the evolution of the contour on a region of interest. We also detail our model initialization strategies to prevent drifting from the sub-optimal initial value, and our choices of error tolerances and activation normalization.

3.1 LOSS FUNCTION AND REGULARIZATION TERMS

We optimize the parameters of our NODE models, described in Figures 2a and 2b, to minimize the empirical risk computed as the Mean Squared Error (MSE) between the target (φ) and predicted (φ̃) distance maps. We remind the reader that although our techniques can access intermediate NODE states, which could allow injection of priors or other constraints, we do not explore this in our current experiments and leave it to future work.

3.2 NARROW BAND AND RE-INITIALIZATION

In the level set formulation, all levels that describe the propagating contour are tracked. Adalsteinsson & Sethian (1995) proposed limiting the evolution to the subset of levels within a narrow band of the zero level contour. In our approach, we obtain the equivalent of a narrow band by applying a hyperbolic tangent nonlinearity to the evolved distance map. It effectively attenuates the contribution of levels in the optimization process. This transformation is especially valuable in refinement setups because it weights the gradients of the loss according to the proximity to the contour.¹ Re-initialization of φ is another common practice in classical level set methods. It ensures the states in the trajectory of the numerical solution remain valid distance maps. Sussman et al. (1994) and Hartmann et al. (2010) propose to first extract the zero level set of an evolving state and re-calculate a distance map of that contour. In our experiments, we found that our optimization is not sensitive to invalid distance maps, and we did not find it necessary to re-initialize φ.

3.3 PARAMETER INITIALIZATION AND LEARNING RATE RAMP-UP

In tasks where the initial value is already close to the desired solution, not initializing the model parameters to represent the identity function and not using learning rate ramp-up can slow down the optimization process, as the model predictions can immediately drift away from the initial value. In addition to using learning rate ramp-up, we prevent this issue by setting the weights and biases of the last layer of the NODE and Postnet layers to zero. This approach has been used successfully in normalizing flow models (Kingma & Dhariwal, 2018; Prenger et al., 2019).

3.4 ADAPTIVE SOLVERS AND ERROR TOLERANCES

In ordinary differential equations, adaptive step solvers vary the step size according to the error estimate of the current step and the error tolerance. If the error estimate is larger than the threshold, the step is redone with a smaller size until the error is smaller than the error tolerance. The error tolerance e_tol^i given the current state i is the sum of the absolute error tolerance atol and the infinity norm of the current state h^i weighted by the relative error tolerance rtol:

e_tol^i = atol + rtol · ‖h^i‖∞.  (5)

Given that we do not know in advance the infinity norm of h^i, which in our case contains the image embedding as described in Equations 3 and 4, we set the contribution of the relative error tolerance term to zero and adjust the absolute error tolerance.

¹ For the hyperbolic tangent, the gradients decrease as the argument moves away from zero, which represents the contour at the zero level set.
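The two implementation details above (zero-initialized output layers from Section 3.3 and disabling the relative tolerance from Section 3.4) are easy to express with `torchdiffeq`; a hedged sketch follows, in which the toy network, layer shapes, and the atol value are illustrative assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint

def zero_init_(layer: nn.Module) -> None:
    # Zero weights and biases so the layer initially outputs zero: the
    # NODE then represents the identity map and early predictions stay
    # at the initial value instead of drifting away (Sec. 3.3; cf.
    # Kingma & Dhariwal, 2018).
    nn.init.zeros_(layer.weight)
    if layer.bias is not None:
        nn.init.zeros_(layer.bias)

class Func(nn.Module):
    def __init__(self):
        super().__init__()
        self.out = nn.Linear(4, 4)
        zero_init_(self.out)  # d(h)/dt = 0 at initialization

    def forward(self, t, h):
        return self.out(torch.tanh(h))

h0 = torch.randn(2, 4)
t = torch.tensor([0.0, 1.0])
# rtol = 0 removes the state-norm-dependent term of Eq. 5; step accuracy
# is then governed by atol alone (1e-3 here is an illustrative choice).
h1 = odeint(Func(), h0, t, rtol=0.0, atol=1e-3)[-1]
```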
This paper proposes to utilize Neural ODEs (NODEs) and the Level Set Method (LSM) for the task of image segmentation. The argument is that a NODE can be used to learn the forcing function in an LSM and solve the contour evolution process. The authors propose two architectures and demonstrate promising performance on a few image segmentation benchmarks.
This paper proposes to apply the Neural ODE framework (Chen et al., 2018) to image segmentation. The method relies on contour delineation through Level Sets. Since contour estimation requires solving an ODE, this naturally allows applying the work presented in Chen et al. (2018). The method is applied here to two segmentation tasks: kidney segmentation and salient object detection.
SP:16cb7d0da739f1e6a72efb9b18399d2d8b69f540
Unified Probabilistic Deep Continual Learning through Generative Replay and Open Set Recognition
1 INTRODUCTION

Most machine learning systems make the closed world assumption and are predominantly trained according to the isolated learning paradigm, where data is available at all times and is independently and identically distributed. However, in the context of continual learning, where tasks and data arrive in sequence, neither of these two principles is desirable. A neural network that is trained exclusively on a new task's data forgets past knowledge and suffers from an early-identified phenomenon commonly referred to as catastrophic forgetting (McCloskey & Cohen, 1989). Moreover, to overcome the closed world assumption, inclusion of a "background" class is veritably insufficient, as it is impossible to include all unseen concepts and classes explicitly in the loss function beforehand. Likewise, commonly applied thresholding of prediction values doesn't prevent large confidences for unseen classes if the data is far away from any known data (Matan et al., 1990).

Most of the existing continual learning literature concentrates efforts on either alleviating catastrophic forgetting, maximizing knowledge transfer, or addressing ways in which to efficiently store subsets of past data. These works have identified weight regularization (McCloskey & Cohen, 1989; Zenke et al., 2017; Kirkpatrick et al., 2017; Li & Hoiem, 2016; Nguyen et al., 2018) and rehearsal techniques (Ratcliff, 1990; Lopez-Paz & Ranzato, 2017; Rebuffi et al., 2017; Bachem et al., 2015), or have postulated methods based on complementary learning systems theory (O'Reilly & Norman, 2003) through dual-model approaches with generative memory (Gepperth & Karaoguz, 2016; Shin et al., 2017; Wu et al., 2018; Farquhar & Gal, 2018; Achille et al., 2018), as mechanisms against catastrophic interference. On the one hand, regularization techniques can work well in principle, but come with the caveat of relying on a new task's proximity to previous knowledge. On the other hand, training and storing separate models, including generative models for generative rehearsal, comes at increased memory cost and doesn't allow for full knowledge sharing, particularly to already stored models. Specifically, the transfer of already attained knowledge to benefit new tasks, known as forward transfer, as well as the potential positive impact of learning new concepts to aid in existing tasks, known as backward transfer, are crucial to any continual learning system.

Generally speaking, most current approaches include a set of simplifications, such as considering separate classifiers for each new task, referred to as multi-head classifiers. This scenario prevents "cross-talk" between classifier units by not sharing them, which would otherwise rapidly decay the accuracy (Zenke et al., 2017; Kirkpatrick et al., 2017; Rusu et al., 2016; Shin et al., 2017; Gepperth & Karaoguz, 2016; Rebuffi et al., 2017; Achille et al., 2018; Nguyen et al., 2018), as newly introduced classes directly impact and confuse existing concepts. In the multi-head scenario, task ids thus need to be encoded or are often assumed to be given by humans in order to know which classifier to use for prediction. Correspondingly, in generative replay, generative and discriminative models are taken to be separate models (Shin et al., 2017; Farquhar & Gal, 2018; Nguyen et al., 2018).
Similar to regularization of a classifier, a generative model can suffer from the learned approximate posterior distribution deviating further from the true posterior with each further task increment. In order to avoid catastrophic forgetting induced by learning to generate on previously generated data, previous works even store a separate generative model per task (Farquhar & Gal, 2018), in analogy to the multi-head classifier. An extended review of recent continual learning methods is provided by Parisi et al. (2019).

A parallel thread pursues a complementary component of identifying out-of-distribution and open set examples. While current continual learning approaches typically do not yet include this thread, it can be considered crucial to any system and a necessity in order to avoid encoding task labels and to distinguish seen from unknown data. Here, multiple methods rely on using confidence values as a means of rejection through calibration (Liang et al., 2018; Lee et al., 2018b;a). Arguably this also includes Bayesian approaches using variational methods (Farquhar & Gal, 2018; Achille et al., 2018) or Monte-Carlo dropout (Gal & Ghahramani, 2015) to estimate uncertainties. Since the closed world assumption also holds for Bayesian methods, as the approximated posterior probability cannot be computed for unknown classes, misclassification still occurs because the open space risk is unbounded (Boult et al., 2019). Recently, Thomas et al. (2014); Bendale & Boult (2016); Dhamija et al. (2018) have proposed extreme value theory (EVT) based open set recognition to bound the open-space risk and balance it with recognition errors in deep neural networks.

In this work we propose a probabilistic approach to unify open set recognition with continual learning in a single deep model, in order to remove or alleviate the above-mentioned common simplifications. Specifically, our contributions are:

• We introduce a single model for continual learning that combines a joint probabilistic encoder with a generative model and a linear classifier. Inspired by EVT-based open set recognition (Bendale & Boult, 2016) for Softmax prediction layers, this model architecture gives rise to a natural way of open set recognition with statistical outlier rejection on the basis of the approximate posterior in Bayesian inference. The latter requires no upfront knowledge of open set data or corresponding modifications to the loss or training procedure, and can successfully prevent nonsensical predictions for unseen unknown data, a robustness feature that is currently not present in closed world continual learning systems.
• We show how this EVT bound on the posterior can be used both for identification and rejection of statistically outlying unseen unknown data instances, and for exclusion of generated samples from areas of low probability density; a rough sketch of such EVT-based rejection follows this list. When used in generative replay, this leads to significantly reduced catastrophic forgetting without storing real data.
• We share our model across tasks and automatically expand a single linear classifier head with units for new classes, thus not requiring explicit task labels during inference.
• We demonstrate that our approach can incrementally learn the classes of two image and one audio dataset, as well as cross-dataset scenarios across modalities, while allowing for forward and backward transfer due to weight sharing. When presented with novel data, our model is able to distinguish between unseen data from various datasets and data belonging to known tasks. We further show that our approach readily profits from recent model advances such as variational lossy auto-encoders (Gulrajani et al., 2017; Chen et al., 2017).
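As a rough illustration of EVT-based outlier rejection of the kind the first two contributions describe, the sketch below fits a Weibull model to the largest distances between latent samples and their class mean, and flags inputs whose distance falls in the extreme tail. This is our own minimal rendition with SciPy; the tail fraction, the Euclidean distance, and the function names are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.stats import weibull_min

def fit_evt_tail(latent_means: np.ndarray, class_mean: np.ndarray,
                 tail_frac: float = 0.05):
    # Fit a Weibull distribution to the largest distances from the class
    # mean (the extreme tail), in the spirit of EVT-based open set
    # recognition (Bendale & Boult, 2016).
    dists = np.linalg.norm(latent_means - class_mean, axis=1)
    tail = np.sort(dists)[-max(1, int(tail_frac * len(dists))):]
    return weibull_min.fit(tail, floc=0.0)  # (shape, loc, scale)

def outlier_probability(z: np.ndarray, class_mean: np.ndarray, params):
    # CDF of the fitted tail distribution: values near 1 mean the sample
    # lies beyond essentially all training data and should be rejected
    # (or, in generative replay, a generated sample should be discarded).
    d = np.linalg.norm(z - class_mean)
    return weibull_min.cdf(d, *params)

rng = np.random.default_rng(0)
train_z = rng.normal(size=(1000, 32))   # latent samples of one class
mu = train_z.mean(axis=0)
params = fit_evt_tail(train_z, mu)
print(outlier_probability(rng.normal(size=32), mu, params))        # inlier
print(outlier_probability(rng.normal(size=32) + 8.0, mu, params))  # outlier
```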
When presented with novel data, our model is able to distinguish unseen data from various datasets from data belonging to known tasks. We further show that our approach readily profits from recent model advances such as variational lossy auto-encoders (Gulrajani et al., 2017; Chen et al., 2017).

2 UNIFYING CONTINUAL LEARNING WITH OPEN SET RECOGNITION. In isolated supervised machine learning, the core assumption is the presence of i.i.d. data at all times, and training is conducted using a dataset $D \equiv \{(x^{(n)}, y^{(n)})\}_{n=1}^{N}$, consisting of $N$ pairs of data instances $x^{(n)}$ and their corresponding labels $y^{(n)} \in \{1 \ldots C\}$ for $C$ classes. In contrast, in continual learning, task data $D_t \equiv \{(x_t^{(n)}, y_t^{(n)})\}_{n=1}^{N_t}$ with $t = 1, \ldots, T$ arrives sequentially for $T$ disjoint datasets, each with $C_t$ classes. In our work, we consider this class incremental scenario from the perspective of variational Bayesian inference in deep neural networks (Kingma & Welling, 2013), with a model that consists of a shared encoder with variational parameters $\theta$, and a decoder and linear classifier with respective parameters $\phi$ and $\xi$. The joint probabilistic encoder learns an encoding to a latent variable $z$, over which a prior is placed, say a unit Gaussian. Using variational inference, its purpose is to approximate the true posterior to both $p_\phi(x, z)$ and $p_\xi(y, z)$. The probabilistic decoder $p_\phi(x|z)$ and probabilistic linear classifier $p_\xi(y|z)$ then return the conditional probability density of the input $x$ and target $y$ under the respective generative model, given a sample $z$ from the approximate posterior $q_\theta(z|x)$. This yields a generative model $p(x, y, z)$, for which we assume a factorization and generative process of the form $p(x, y, z) = p(x|z)\,p(y|z)\,p(z)$. For variational inference with our model, the following continual learning loss function thus needs to be optimized:

$$\mathcal{L}^{UB}_t(\theta, \phi, \xi) = \sum_{\tau=1}^{t} \sum_{n=1}^{N_\tau} \Big[ \mathbb{E}_{q_{\theta,t}(z|x_\tau^{(n)})} \big[ \log p_{\phi,t}(x_\tau^{(n)}|z) + \log p_{\xi,t}(y_\tau^{(n)}|z) \big] - \beta \, \mathrm{KL}\big( q_{\theta,t}(z|x_\tau^{(n)}) \,\|\, p(z) \big) \Big] \quad (1)$$

Here, we have added a weight term $\beta$ to the KL divergence to balance the individual loss terms, similar to the work of Zhou et al. (2012) and Higgins et al. (2017). This factor regulates the trade-off between the additional constraint imposed by the classifier, which needs to be able to linearly separate the classes given $z$, and the quality of the approximation to the training data. To balance this trade-off irrespective of input and latent dimensionality and the number of classes, the losses are normalized according to dimensions. Note that in practice this changes the relative scale of the losses, and thus the interpretation of specific $\beta$ values with respect to the original authors Higgins et al. (2017). We provide a more detailed discussion with empirical examples of the role of $\beta$ in the supplementary section. However, equation 1 requires the presence of all data for all tasks and is thus generally not feasible for continual learning, where only the most recent task's data is assumed to be available. In the context of variational inference, two potential approaches offer solutions to this challenge: a prior-based approach using the former approximate posterior $q_{\theta,t-1}$ as the new task's prior (Nguyen et al., 2018), or estimating the likelihood of former data through generative replay or other forms of rehearsal (Farquhar & Gal, 2018; Achille et al., 2018).
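To make the objective concrete, the following is a minimal sketch of how a single-task term of equation 1 could be computed with the reparameterization trick. The `encoder`, `decoder`, and `classifier` interfaces, the Bernoulli reconstruction likelihood, and the exact per-dimension normalization are our own illustrative assumptions rather than the authors' verbatim implementation:

```python
import torch
import torch.nn.functional as F

def cdvae_loss(encoder, decoder, classifier, x, y, beta=0.1):
    """Single-task term of Eq. (1): reconstruction + classification + beta * KL.

    Assumes encoder(x) -> (mu, logvar) of shape (B, latent_dim),
    decoder(z) -> Bernoulli logits with the same shape as x, and
    classifier(z) -> class logits. Losses are normalized per dimension,
    following the paper's description.
    """
    mu, logvar = encoder(x)
    std = torch.exp(0.5 * logvar)
    z = mu + std * torch.randn_like(std)          # reparameterization trick

    # -E_q[log p(x|z)]: per-dimension Bernoulli negative log-likelihood
    recon = F.binary_cross_entropy_with_logits(
        decoder(z), x, reduction="sum") / x.numel()

    # -E_q[log p(y|z)]: linear classifier applied to the latent sample
    clf = F.cross_entropy(classifier(z), y, reduction="mean")

    # KL(q(z|x) || N(0, I)), normalized by latent dimensionality
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / mu.numel()

    return recon + clf + beta * kl
```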
For our proposed model, we follow the latter line of work and let the prior remain the same at all times. Making use of the generative nature of our model, the above upper bound to task incremental continual learning becomes:

$$\mathcal{L}_t(\theta, \phi, \xi) = \sum_{n=1}^{\tilde{N}_t} \Big[ \mathbb{E}_{q_{\theta,t}(z|\tilde{x}_t^{(n)})} \big[ \log p_{\phi,t}(\tilde{x}_t^{(n)}|z) + \log p_{\xi,t}(\tilde{y}_t^{(n)}|z) \big] - \beta \, \mathrm{KL}\big( q_{\theta,t}(z|\tilde{x}_t^{(n)}) \,\|\, p(z) \big) \Big] + \sum_{n=1}^{N_t} \Big[ \mathbb{E}_{q_{\theta,t}(z|x_t^{(n)})} \big[ \log p_{\phi,t}(x_t^{(n)}|z) + \log p_{\xi,t}(y_t^{(n)}|z) \big] - \beta \, \mathrm{KL}\big( q_{\theta,t}(z|x_t^{(n)}) \,\|\, p(z) \big) \Big] \quad (2)$$

Here, $\tilde{x}_t \sim p_{\phi,t-1}(x|z)$ and $\tilde{y}_t \sim p_{\xi,t-1}(y|z)$ with $z \sim p(z)$ is a sample from the generative model with the corresponding label obtained from the classifier. $\tilde{N}_t$ is the total number of data instances of all previously seen tasks, or alternatively a hyper-parameter. In this way the expectation of the log-likelihood for all previously seen tasks is estimated, and the dataset at any point in time, $\tilde{D}_t \equiv (x_t \cup \tilde{x}_t, y_t \cup \tilde{y}_t)$, is a combination of generations from previously seen data distributions and the current task's real data. For each newly arriving task with novel labels, the classifier is expanded with newly initialized units. We note that whereas the loss function with generative replay in equation 2 is used for continual training, equation 1, and thus real data, is always used for testing.

The model is further trained in a denoising fashion, where noise is added to each input $x$ to avoid over-fitting. This is preferable to weight regularization, as it does not entail unrecoverable units that are needed to encode later-stage concepts. We have accordingly coined our model the Classifying Denoising Variational Auto-Encoder (CDVAE). We optionally enhance the probabilistic decoder with an autoregressive variant, where the generation of a pixel's value is spatially conditioned on previous pixels (van den Oord et al., 2016; Gulrajani et al., 2017; Chen et al., 2017). Here, the denoising plays the additional crucial role of de-quantization. Nonetheless, similar to existing dual-model approaches (Shin et al., 2017; Wu et al., 2018; Farquhar & Gal, 2018), by themselves both the CDVAE and PixCDVAE models accumulate errors, as with each iteration of generative replay the deviations of the approximate from the true posterior get amplified. However, in our joint model, the linear classifier directly affects the partitioning of the latent space by influencing the joint probabilistic encoder's weights, resulting in class-specific areas of large probability density. This is particularly noticeable for lossy VAEs (Gulrajani et al., 2017; Chen et al., 2017), which leave the encoding of local structure to autoregressive layers and hence, in our case, attribute more influence over the latent space to the classifier. We note that such class-specific areas in latent space are not necessarily encouraged for deeper classifiers; however, we would argue that with a sufficiently expressive probabilistic encoder such a classifier is not necessary. For visualization purposes, we have trained a CDVAE following the details of section 3 with a two-dimensional latent space on the class-incremental MNIST (LeCun et al., 1998) upper bound, and show the latent space embedding of the validation dataset at the end of continual learning in figure 1b. Corresponding intermediate visualizations for each task increment, and for PixCDVAE, can be found in the supplementary material.
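As a sketch of how equation 2 translates into a training loop, the snippet below mixes real data from the current task with samples replayed from a frozen snapshot of the previous model, reusing the hypothetical `cdvae_loss` from the sketch above. The deepcopy-based snapshot and the one-replay-batch-per-real-batch scheme are our own illustrative choices:

```python
import copy
import torch

def train_task(model, loader_t, latent_dim, optimizer):
    """One task increment with generative replay (Eq. 2).

    `model` exposes `encoder`, `decoder` (Bernoulli logits), and a linear
    `classifier` that acts on z. A frozen snapshot of the model from task
    t-1 supplies the replayed pairs (x~, y~).
    """
    old_model = copy.deepcopy(model)
    old_model.eval()                                   # generator of tasks 1..t-1

    for x_real, y_real in loader_t:
        with torch.no_grad():
            z = torch.randn(x_real.size(0), latent_dim)    # z ~ p(z)
            x_gen = torch.sigmoid(old_model.decoder(z))    # x~ ~ p_{phi,t-1}(x|z)
            y_gen = old_model.classifier(z).argmax(dim=1)  # y~ from p_{xi,t-1}(y|z)

        # Eq. 2: identical loss terms on replayed and current real data
        loss = (cdvae_loss(model.encoder, model.decoder, model.classifier,
                           x_gen, y_gen)
                + cdvae_loss(model.encoder, model.decoder, model.classifier,
                             x_real, y_real))

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```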
We take advantage of the classifier's impact on the latent space as the foundation for posterior-based open set recognition and complementary generative replay with statistical outlier rejection. We refer to this extended model as the Open-set Classifying Denoising Variational Auto-Encoder (OCDVAE), and to its autoregressive variant as PixOCDVAE. An illustration of our joint probabilistic model is shown in figure 1a.
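The exact EVT procedure is deferred to later sections of the paper, but in the spirit of Bendale & Boult (2016), a posterior-based rejection step could look like the following sketch. The use of per-class latent means, a Weibull fit on tail distances, and the specific rejection rule are our assumptions for illustration, not the authors' precise formulation:

```python
import numpy as np
from scipy.stats import weibull_min

def fit_class_evt(z_train, y_train, tail_size=50):
    """Fit a per-class Weibull model on the largest latent distances of
    training samples to their class mean (the extreme-value tail)."""
    models = {}
    for c in np.unique(y_train):
        z_c = z_train[y_train == c]
        mean_c = z_c.mean(axis=0)
        dists = np.linalg.norm(z_c - mean_c, axis=1)
        tail = np.sort(dists)[-tail_size:]          # extreme values only
        shape, loc, scale = weibull_min.fit(tail)
        models[c] = (mean_c, (shape, loc, scale))
    return models

def is_outlier(z, models, threshold=0.95):
    """Reject z as unseen/unknown if, for every known class, the Weibull
    CDF of its distance to the class mean exceeds the threshold."""
    probs = []
    for mean_c, (shape, loc, scale) in models.values():
        d = np.linalg.norm(z - mean_c)
        probs.append(weibull_min.cdf(d, shape, loc=loc, scale=scale))
    return min(probs) > threshold
```

The same CDF could be applied in reverse during generative replay, discarding prior samples whose outlier probability is high, i.e., samples that fall into low-density regions of the aggregate posterior.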
This paper tackles the problem of catastrophic forgetting when data is organized in a large number of batches of data (tasks) that are sequentially made available. To avoid catastrophic forgetting, the authors learn a VAE that generates the training data (both inputs and labels) and retrain it using samples from the new task combined with samples generated from the VAE trained in the previous tasks (generative replay). In this way, there's no need to store all past data and even the first learned batch keeps being refreshed and should not be forgotten.
SP:34f3abe09b1ca5c5bbbf1a2e28b489fee010098e
Unified Probabilistic Deep Continual Learning through Generative Replay and Open Set Recognition
This paper combines generative replay and an OpenMax-style approach to aid continual learning. The results show robustness on different datasets, including image and audio, in the continual learning setting, where newly arriving data has a different distribution but the model is still able to maintain reasonable quality on both previously seen and newly arriving examples. To my understanding, this approach is not ground-breaking, but it seems a reasonable combination.
SP:34f3abe09b1ca5c5bbbf1a2e28b489fee010098e
Asymptotic learning curves of kernel methods: empirical data v.s. Teacher-Student paradigm
1 INTRODUCTION. In supervised learning, machines learn from a finite collection of $n$ training data, and their generalization error is then evaluated on unseen data drawn from the same distribution. How many data are needed to learn a task is characterized by the learning curve relating generalization error to $n$. In various cases, the generalization error decays as a power law $n^{-\beta}$, with an exponent $\beta$ that depends on both the data and the algorithm. In (Hestness et al., 2017), $\beta$ is reported for state-of-the-art (SOTA) deep neural networks on various tasks: for neural-machine translation, $\beta \approx 0.3$–$0.36$ (for fixed model size) or $\beta \approx 0.13$ (for best-fit models at any $n$); language modeling shows $\beta \approx 0.06$–$0.09$; in speech recognition, $\beta \approx 0.3$; SOTA models for image classification (on ImageNet) have exponents $\beta \approx 0.3$–$0.5$. Currently there is no available theory of deep learning that rationalizes these observations. Recently it was shown that, for a proper initialization of the weights, deep learning in the infinite-width limit (Jacot et al., 2018) converges to kernel learning. Moreover, it is nowadays part of the lore that there exist kernels whose performance is nearly comparable to deep networks (Bruna and Mallat, 2013; Arora et al., 2019), at least for some tasks. It is thus of great interest to understand the learning curves of kernels.

For regression, if the target function being learned is only assumed to be Lipschitz, then the best guarantee is $\beta = 1/d$ (Luxburg and Bousquet, 2004; Bach, 2017), where $d$ is the data dimension. Thus for large $d$, $\beta$ is very small: learning is completely inefficient, a phenomenon referred to as the curse of dimensionality. As a result, various works on kernel regression make the much stronger assumption that the training points are sampled from a target function that belongs to the reproducing kernel Hilbert space (RKHS) of the kernel (see for example (Smola et al., 1998)). With this assumption, $\beta$ does not depend on $d$ (for instance, in (Rudi and Rosasco, 2017) $\beta = 1/2$ is guaranteed). Yet RKHS membership is a very strong assumption, which requires the smoothness of the target function to increase with $d$ (Bach, 2017) (see more on this point below) and may not be realistic in large dimensions.

In this work we compute $\beta$ empirically for kernel methods applied to the MNIST and CIFAR10 datasets. We find $\beta_{\mathrm{MNIST}} \approx 0.4$ and $\beta_{\mathrm{CIFAR10}} \approx 0.1$, respectively. Quite remarkably, we observe essentially the same exponents for regression and classification tasks, using either a Gaussian or a Laplace kernel. Thus the exponents are not as small as $1/d$ ($d = 784$ for MNIST, $d = 3072$ for CIFAR10), but neither are they $1/2$, as one would expect under the RKHS assumption. These facts call for frameworks in which assumptions on the smoothness of the data can be intermediate between Lipschitz and RKHS. Here we propose such a framework for regression, in which the target function is assumed to be a Gaussian random field of zero mean with translation-invariant isotropic covariance $K_T(x)$. The data can equivalently be thought of as being synthesized by a "Teacher" kernel $K_T(x)$. Learning is performed with a "Student" kernel $K_S(x)$ that minimizes the mean-square error. In general, $K_T(x) \neq K_S(x)$. In this set-up, learning is very similar to a technique referred to as kriging, or Gaussian process regression, originally developed in the geostatistics community (Matheron, 1963; Stein, 1999b).
To quantify learning, we first perform numerical experiments for data points distributed uniformly at random on a hypersphere of varying dimension $d$, focusing on a Laplace kernel for the Student and considering a Laplace or Gaussian kernel for the Teacher. We observe that in both cases $\beta(d)$ is a decreasing function. To derive $\beta(d)$, we consider the simplified situation where the Gaussian random field is sampled at training points lying on a regular lattice. Building on the kriging literature (Stein, 1999b), we show that $\beta$ is controlled by the high-frequency scaling of both the Teacher and Student kernels: assuming that the Fourier transforms of the kernels decay as $\tilde{K}_T(w) = c_T \|w\|^{-\alpha_T} + o(\|w\|^{-\alpha_T})$ and $\tilde{K}_S(w) = c_S \|w\|^{-\alpha_S} + o(\|w\|^{-\alpha_S})$, we obtain

$$\beta = \frac{1}{d} \min(\alpha_T - d, \, 2\alpha_S). \quad (1)$$

Importantly, (i) Eq. (1) leads to a prediction for $\beta(d)$ that accurately matches our numerical study with random training data points, leading to the conjecture that Eq. (1) holds in that case as well. We offer the following interpretation: ultimately, kernel methods perform a local interpolation whose quality depends on the distance $\delta(n)$ between adjacent data points, and $\delta(n)$ is asymptotically similar for random data and for data sitting on a lattice. (ii) If the kernel $K_S$ is not too sensitive to high frequencies, then learning is optimal as far as scaling is concerned, and $\beta = (\alpha_T - d)/d$. We will argue that the smoothness index $s \equiv [(\alpha_T - d)/2]$ characterizes the number of continuous derivatives of the target function. We thus recover the curse of dimensionality: $s$ needs to be of order $d$ to obtain a non-vanishing $\beta$ in large dimensions.

Point (ii) leads to an apparent paradox: $\beta$ is significant for MNIST and CIFAR10, for which $d$ is a priori very large, implying a smoothness value $s$ in the hundreds in both cases, which appears unrealistic. The paradox is resolved by considering that real datasets actually live on lower-dimensional manifolds. As far as kernel learning is concerned, our findings support that the correct definition of dimension should be based on how the nearest-neighbor distance $\delta(n)$ scales with $n$: $\delta(n) \sim n^{-1/d_{\mathrm{eff}}}$. Direct measurements of $\delta(n)$ support that MNIST and CIFAR10 live on manifolds of lower dimensions $d_{\mathrm{eff}}^{\mathrm{MNIST}} \approx 15$ and $d_{\mathrm{eff}}^{\mathrm{CIFAR10}} \approx 35$. With the effective dimensions that we find, the observed values of $\beta$ would be obtained for Gaussian fields of smoothness $s_{\mathrm{MNIST}} \approx 3$ and $s_{\mathrm{CIFAR10}} \approx 1$, values that appear intuitively more reasonable. More generally, this analogy with Gaussian fields allows one to associate a smoothness index $s$ to any dataset once $\beta$ and $d_{\mathrm{eff}}$ are measured, which may turn out to be a useful characterization of data complexity in the future.

2 RELATED WORKS. Our set-up of Teacher-Student learning with kernels is also referred to as kriging, or Gaussian process regression, and was originally developed in the geostatistics community (Matheron, 1963). In Section 5 we present a theorem that gives the rate at which the test error decreases as the number of training points increases, assuming they lie on a high-dimensional regular lattice. Similar results have previously been derived in the kriging literature (Stein, 1999b) when sampling occurs on the regular lattice with the exception of the origin, where the inference is made. Here we propose an alternative derivation that some readers might find simpler.
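As a small worked example of Eq. (1), the helper below computes the predicted exponent, together with a nearest-neighbor estimate of the effective dimension as described above. This is our own illustrative sketch; the values $\alpha_T = \alpha_S = d + 1$ in the example reflect the standard Fourier-decay rate of the Laplace kernel, which we take here as an assumption:

```python
import numpy as np
from scipy.spatial import cKDTree

def beta_pred(alpha_T, alpha_S, d):
    """Predicted learning-curve exponent from Eq. (1)."""
    return min(alpha_T - d, 2.0 * alpha_S) / d

def effective_dimension(X, n_grid=10):
    """Estimate d_eff from delta(n) ~ n^{-1/d_eff}, where delta(n) is the
    mean nearest-neighbor distance among n randomly drawn points of X."""
    sizes = np.logspace(np.log10(100), np.log10(len(X)), n_grid).astype(int)
    deltas = []
    for n in sizes:
        sub = X[np.random.choice(len(X), n, replace=False)]
        d_nn, _ = cKDTree(sub).query(sub, k=2)   # k=2: first column is self
        deltas.append(d_nn[:, 1].mean())
    slope = np.polyfit(np.log(sizes), np.log(deltas), 1)[0]
    return -1.0 / slope

# Laplace Teacher and Student: alpha_T = alpha_S = d + 1, hence beta = 1/d
d = 10
print(beta_pred(d + 1, d + 1, d))                # -> 0.1
```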
We also study a slightly different problem: instead of computing the test error when the inference is carried out at the origin, we compute the average error for a test point lying at an arbitrary location, sampled uniformly at random and not necessarily on the lattice. In what follows we show, via extensive numerical simulations, that such predictions are accurate even when the training points do not lie on a regular lattice but are taken at random on a hypersphere. An exact proof of our result in such a general setting is difficult and cannot be found even in the kriging literature. To our knowledge the closest results are those discussed in (Stein, 1999a), where the author studies one-dimensional processes whose training data are not necessarily evenly spaced. In this work the effective dimension of the data plays an important role, as it controls how the distance between nearest neighbors scales with the dataset size. Of course, there exists a vast literature (Grassberger and Procaccia, 1983; Costa and Hero, 2004; Hein and Audibert, 2005; Levina and Bickel, 2005; Rozza et al., 2012; Facco et al., 2017; Allegra et al., 2019) devoted to the study of effective dimensions, where other definitions are analyzed. The effective dimensions that we find are compatible with those obtained with more refined methods.

3 LEARNING CURVE FOR KERNEL METHODS APPLIED TO REAL DATA. In what follows we apply kernel methods to the MNIST and CIFAR10 datasets, each consisting of a set of images $(x_\mu)_{\mu=1}^{n}$. We simplify the problem by considering only two classes, whose labels $Z(x_\mu) = \pm 1$ correspond to odd and even numbers for MNIST, and to two groups of 5 classes for CIFAR10. The goal is to infer the value of the label $\hat{Z}_S(x)$ of an image $x$ that does not belong to the dataset. The $S$ subscript reminds us that inference is performed using a positive definite kernel $K_S$. We perform inference in both a regression and a classification setting. The following algorithms and associated results can be found in (Scholkopf and Smola, 2001).

Regression. Learning corresponds to minimizing a mean-square error:

$$\min \sum_{\mu=1}^{n} \big[ \hat{Z}_S(x_\mu) - Z(x_\mu) \big]^2. \quad (2)$$

For algorithms seeking solutions of the form $\hat{Z}_S(x) = \sum_\mu a_\mu K_S(x_\mu, x) \equiv a \cdot k_S(x)$ by minimizing the mean-square loss over the vector $a$, one obtains

$$\hat{Z}_S(x) = k_S(x) \cdot K_S^{-1} Z, \quad (3)$$

where the vector $Z$ contains all the labels in the training set, $Z \equiv (Z(x_\mu))_{\mu=1}^{n}$, and $K_{S,\mu\nu} \equiv K_S(x_\mu, x_\nu)$ is the Gram matrix. The Gram matrix is always invertible if the kernel $K_S$ is positive definite. The generalization error is then evaluated as the expected mean-square error on unseen data, estimated by averaging over a test set composed of $n_{\mathrm{test}}$ unseen data points:

$$\mathrm{MSE} = \frac{1}{n_{\mathrm{test}}} \sum_{\mu=1}^{n_{\mathrm{test}}} \big[ \hat{Z}_S(x_\mu) - Z(x_\mu) \big]^2. \quad (4)$$

Classification. We perform kernel classification via the soft-margin SVM algorithm; the details can be found in Appendix A. After learning from the training data with a student kernel $K_S$, performance is evaluated via the generalization error, estimated as the fraction of correctly predicted labels for data points belonging to a test set with $n_{\mathrm{test}}$ elements. In Fig. 1 we present the learning curves for (binary) MNIST and CIFAR10, for regression and classification. Learning is performed both with a Gaussian kernel $K(x) \propto \exp(-\|x\|^2/(2\sigma^2))$ and a Laplace one $K(x) \propto \exp(-\|x\|/\sigma)$.
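A minimal sketch of the regression pipeline of Eqs. (2)–(4), including the log-log fit used to extract $\beta$, might look as follows. The Laplace bandwidth, the train sizes, and the small jitter added to the Gram matrix for numerical stability are our own choices, not the paper's exact settings:

```python
import numpy as np
from scipy.spatial.distance import cdist

def laplace_kernel(A, B, sigma=1000.0):
    return np.exp(-cdist(A, B) / sigma)

def kernel_regression_mse(X_tr, Z_tr, X_te, Z_te, sigma=1000.0):
    """Eq. (3): Z_hat(x) = k_S(x) . K_S^{-1} Z, scored as MSE (Eq. 4)."""
    K = laplace_kernel(X_tr, X_tr, sigma)
    K += 1e-10 * np.eye(len(X_tr))           # tiny jitter for conditioning
    a = np.linalg.solve(K, Z_tr)
    Z_hat = laplace_kernel(X_te, X_tr, sigma) @ a
    return np.mean((Z_hat - Z_te) ** 2)

def fit_beta(X, Z, X_te, Z_te, sizes=(256, 512, 1024, 2048, 4096)):
    """Estimate the learning-curve exponent beta from MSE ~ n^{-beta}."""
    errs = [kernel_regression_mse(X[:n], Z[:n], X_te, Z_te) for n in sizes]
    slope = np.polyfit(np.log(sizes), np.log(errs), 1)[0]
    return -slope
```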
Remarkably, the power laws for the two tasks are essentially identical (although the estimated exponent appears to be slightly larger, in absolute value, for classification). Moreover, the two kernels display very similar behavior, compatible with the same exponent: about $-0.4$ for MNIST and $-0.1$ for CIFAR10. The data presented are for $\sigma = 1000$; in Appendix B we show that the same behaviour is observed for other values.
This paper experimentally investigates how fast the generalization error decreases when specific kernel functions are used on real datasets. The paper conducts numerical experiments on several datasets to measure the decay rate of the generalization error, and determines this rate for those datasets. The rate is then analyzed theoretically using the approximation theory of RKHS in the teacher-student setting, and is shown to be determined by the smoothness and effective dimensionality of the input. The smoothness of the teacher function is also derived through this analysis.
SP:655be2d7f8ffe68416e0c3a5b4218ffe45a37bfc
Asymptotic learning curves of kernel methods: empirical data v.s. Teacher-Student paradigm
This paper studies, empirically and theoretically, the learning rates of (shift-invariant) kernel learners in a misspecified setting. In the well-specified setting, the rate of kernel learners is at least $n^{-1/2}$, and in a misspecified setting assuming only Lipschitz targets, the rate is $n^{-1/d}$. Neither seems to match the experimental rate on MNIST and CIFAR-10; this paper proposes a theoretical model that can more-or-less match the experimental rate with essentially-reasonable assumptions.
SP:655be2d7f8ffe68416e0c3a5b4218ffe45a37bfc
On the "steerability" of generative adversarial networks
1 INTRODUCTION. The quality of deep generative models has increased dramatically over the past few years. When introduced in 2014, Generative Adversarial Networks (GANs) could only synthesize MNIST digits and low-resolution grayscale faces (Goodfellow et al., 2014). The most recent models, however, produce diverse high-resolution images that are often indistinguishable from natural photos (Brock et al., 2018; Karras et al., 2018). Science fiction has long dreamed of virtual realities filled with synthetic content as rich as, or richer than, the real world (e.g., The Matrix, Ready Player One). How close are we to this dream? Traditional computer graphics can render photorealistic 3D scenes, but cannot automatically generate detailed content. Generative models like GANs, in contrast, can create content from scratch, but we do not currently have tools for navigating the generated scenes in the same way that you can walk through and interact with a 3D game engine.

In this paper, we explore the degree to which you can navigate the visual world of a GAN. Figure 1 illustrates the kinds of transformations we explore. Consider the dog at the top-left. By moving in some direction of GAN latent space, can we hallucinate walking toward this dog? As the figure indicates, and as we will show in this paper, the answer is yes. However, as we continue to zoom in, we quickly reach limits. Once the dog face fills the full frame, continuing to walk in this direction fails to increase the zoom. A similar effect occurs in the daisy example (row 2 of Fig. 1), where a direction in latent space moves the daisy up and down, but cannot move it out of frame. We hypothesize that these limits are due to biases in the distribution of images on which the GAN is trained. For example, if the training dataset consists of centered dogs and daisies, the same may be the case in GAN-generated images. Nonetheless, we find that some degree of transformation is possible. When and why can we achieve certain transformations but not others?

This paper seeks to quantify the degree to which we can achieve basic visual transformations by navigating in GAN latent space. In other words, are GANs "steerable" in latent space? (We use the term "steerable" in analogy to the classic steerable filters of Freeman & Adelson (1991).) We analyze the relationship between the data distribution on which the model is trained and the success in achieving these transformations. From our experiments, it is possible to shift the distribution of generated images to some degree, but we cannot extrapolate entirely out of the dataset's support. In particular, attributes can be shifted in proportion to the variability of that attribute in the training data. We further demonstrate an approach to increase model steerability by jointly optimizing the generator and latent direction, together with data augmentation on training images. One of the current criticisms of generative models is that they simply interpolate between datapoints and fail to generate anything truly new, but our results add nuance to this story. It is possible to achieve distributional shift, but the ability to create realistic images from a modified distribution relies on sufficient diversity in the dataset along the dimension that we vary. Our main findings are:

• A simple walk in the latent space of GANs achieves camera motion and color transformations in the output image space. These walks are learned in a self-supervised manner, without labeled attributes or distinct source and target images.
• The linear walk is as effective as more complex non-linear walks, suggesting that the models learn to roughly linearize these operations without being explicitly trained to do so.
• The extent of each transformation is limited, and we quantify a relationship between dataset variability and how much we can shift the model distribution.
• The transformations form a general-purpose framework that works with different model architectures, e.g. BigGAN, StyleGAN, and DCGAN, and illustrates the different disentanglement properties of their respective latent spaces.
• Data augmentation improves steerability, as does jointly training the walk trajectory and the generator weights, which allows us to achieve larger transformation effects.

2 RELATED WORK. Latent space manipulations can be seen from several perspectives: how we achieve them, what limits them, and what they enable us to do. Our work addresses these three aspects together, and we briefly refer to each one in related work.

Interpolations in latent space. Traditional approaches to image editing with GAN latent spaces find linear directions that correspond to changes in labeled attributes, such as smile-vectors and gender-vectors for faces (Radford et al., 2015; Karras et al., 2018). However, these manipulations are not exclusive to GANs; in flow-based generative models, linearly interpolating between two encoded images allows one to edit a source image toward attributes of the target (Kingma & Dhariwal, 2018). Möllenhoff & Cremers (2019) propose a modified GAN formulation by treating data as directional k-currents, where moving along tangent planes naturally corresponds to interpretable manipulations. Upchurch et al. (2017) remove the generative model entirely and instead interpolate in the intermediate feature space of a pretrained classifier, again using feature mappings of source and target sets to determine an edit direction. Unlike these approaches, we learn our latent-space trajectories in a self-supervised manner without labeled attributes or distinct source and target images. Instead, we learn to approximate editing operations on individual source images. We find that linear trajectories in latent space can capture simple image manipulations, e.g., zoom-vectors and shift-vectors, although we also obtain similar results using nonlinear trajectories.

Dataset bias. Biases from training data and network architecture both impact the generalization capacity of learned models (Torralba & Efros, 2011; Geirhos et al., 2018; Amini et al.). Dataset biases partly come from human preferences in taking photos: we tend to take pictures in specific "canonical" views that are not fully representative of the entire visual world (Mezuman & Weiss, 2012; Jahanian et al., 2015). Consequently, models trained on these datasets inherit their biases. This may result in models that misrepresent the given task, such as a tendency towards texture bias rather than shape bias in ImageNet classifiers (Geirhos et al., 2018), which in turn limits their generalization performance on similar objectives (Azulay & Weiss, 2018). Our latent space trajectories transform the output corresponding to various image editing operations, but ultimately we are constrained by biases in the data and cannot extrapolate arbitrarily far beyond the data's support.
Generative models for content creation. The recent progress in generative models has opened interesting avenues for content creation (Brock et al., 2018; Karras et al., 2018), including applications that enable users to fine-tune the generated output (Simon; Zhu et al., 2016; Bau et al., 2018). A by-product of the current work is to enable users to modify image properties by turning a single knob: the magnitude of the learned transformation in latent space. We further demonstrate that these image manipulations are not just a simple creativity tool; they also provide us with a window into the biases and generalization capacity of these models.

Applications of latent space editing. Image manipulations using generative models suggest several interesting downstream applications. For example, Denton et al. (2019) learn linear walks corresponding to various facial characteristics; they use these to measure biases in facial attribute detectors, whereas we study biases in the generative model that originate from the training data. Shen et al. (2019) also assume linear latent space trajectories and learn paths for face attribute editing according to semantic concepts such as age and expression, thus demonstrating disentanglement of the latent space. White (2016) suggests approaches to improve the learned manipulations, such as using spherical linear interpolations, resampling images to remove biases in attribute vectors, and using data augmentation as a synthetic attribute for variational autoencoders. Goetschalckx et al. (2019) apply a linear walk to achieve transformations corresponding to cognitive properties of an image such as memorability, aesthetics, and emotional valence. Unlike these works, we do not require an attribute detector or assessor function to learn the latent space trajectory, and our loss function is therefore based on image similarity between source and target images. In addition to linear walks, we explore using non-linear walks, parametrized by neural networks, for editing operations.

3 METHOD. Generative models such as GANs (Goodfellow et al., 2014) learn a mapping function $G$ such that $G: z \to x$. Here, $z$ is the latent code drawn from a Gaussian density and $x$ is an output, e.g., an image. Our goal is to achieve transformations in the output space by moving in latent space, as shown in Fig. 2. In general, this goal also captures the idea of equivariance, in which transformations in the input space result in equivalent transformations in the output space (cf. Hinton et al. (2011); Cohen et al. (2019); Lenc & Vedaldi (2015)).

Objective. We want to learn an $N$-dimensional vector representing the optimal path in latent space for a given transformation. The vector is multiplied by a continuous parameter $\alpha$ which signifies the step size: large $\alpha$ values correspond to a greater degree of transformation, while small $\alpha$ values correspond to a lesser degree. Formally, we learn the walk $w$ by minimizing the objective function

$$w^* = \arg\min_{w} \mathbb{E}_{z,\alpha} \big[ \mathcal{L}\big( G(z + \alpha w), \, \mathrm{edit}(G(z), \alpha) \big) \big]. \quad (1)$$

Here, $\mathcal{L}$ measures the distance between the generated image after taking an $\alpha$-step in the latent direction, $G(z + \alpha w)$, and the target $\mathrm{edit}(G(z), \alpha)$ derived from the source image $G(z)$. We use the L2 loss as our objective $\mathcal{L}$; however, we obtain similar results when using the LPIPS perceptual image similarity metric (Zhang et al., 2018) (see Appendix B.4.1).
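As an illustrative sketch of optimizing equation 1 with a pretrained, frozen generator, the training loop might look as follows. The zoom `edit` via center-cropping, the sampling range of $\alpha$, and the interface `G(z)` returning image batches are our assumptions rather than the authors' released code:

```python
import torch
import torch.nn.functional as F

def zoom_edit(img, alpha):
    """Target transform edit(G(z), alpha): zoom by center-cropping a
    1/alpha fraction of the image and resizing back (alpha > 1 zooms in)."""
    _, _, h, w = img.shape
    ch, cw = int(h / alpha), int(w / alpha)
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = img[:, :, top:top + ch, left:left + cw]
    return F.interpolate(crop, size=(h, w), mode="bilinear",
                         align_corners=False)

def learn_linear_walk(G, latent_dim, steps=10000, batch=16, lr=1e-3):
    """Optimize w* = argmin_w E_{z,alpha} L2(G(z + alpha*w), edit(G(z), alpha)),
    keeping the generator G frozen."""
    w = torch.zeros(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        z = torch.randn(batch, latent_dim)
        alpha = torch.empty(1).uniform_(1.0, 4.0).item()   # sampled step size
        with torch.no_grad():
            target = zoom_edit(G(z), alpha)                # self-supervised target
        loss = F.mse_loss(G(z + alpha * w), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()
```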
Note that we can learn this walk in a fully self-supervised manner: we perform the edit(·) operation on an arbitrary generated image and subsequently optimize the walk vector to minimize the objective. Let model($\alpha$) denote the output under the optimized transformation vector $w^*$ with step size $\alpha$, defined as $\mathrm{model}(\alpha) = G(z + \alpha w^*)$. The previous setup assumes linear latent space walks, but we can also learn non-linear trajectories in which the walk direction depends on the current latent space position. For the non-linear walk, we learn a function $f^*(z)$ which corresponds to a small $\epsilon$-step transformation $\mathrm{edit}(G(z), \epsilon)$. To achieve bigger transformations, we apply $f$ recursively, mimicking discrete Euler ODE approximations. Formally, for a fixed $\epsilon$, we minimize

$$\mathcal{L} = \mathbb{E}_{z,n} \big[ \| G(f^n(z)) - \mathrm{edit}(G(z), n\epsilon) \| \big], \quad (2)$$

where $f^n(\cdot)$ is an $n$th-order function composition $f(f(f(\ldots)))$, and $f(z)$ is parametrized by a neural network. We discuss further implementation details in Appendix A.4. We use this function composition approach rather than the simpler setup of $G(z + \alpha\,\mathrm{NN}(z))$, because the latter learns to ignore the input $z$ when $\alpha$ takes on continuous values, and is thus equivalent to the previous linear trajectory (see Appendix A.3 for further details).

Quantifying steerability. We further seek to quantify how well we can achieve desired image manipulations under each transformation. To this end, we compare the distribution of a given attribute, e.g., "luminance", in the dataset versus in images generated after walking in latent space. For color transformations, we consider the effect of increasing or decreasing the $\alpha$ coefficient corresponding to each color channel. To estimate the color distribution of model-generated images, we randomly sample $N = 100$ pixels per image, both before and after taking a step in latent space. Then, we compute the pixel value for each channel, or the mean RGB value for luminance, and normalize the range between 0 and 1. For zoom and shift transformations, we rely on an object detector which captures the central object in the image class. We use a MobileNet-SSD v1 (Liu et al., 2016) detector to estimate object bounding boxes, and average over the image classes recognizable by the detector. For each successful detection, we take the highest probability bounding box corresponding to the desired class and use it to quantify the amount of transformation. For the zoom operation, we use the area of the bounding box normalized by the area of the total image. For shift in the X and Y directions, we take the center X and Y coordinates of the bounding box, and normalize by image width or height. Truncation parameters in GANs (as used in Brock et al. (2018); Karras et al. (2018)) trade off between the diversity of the generated images and sample quality. When comparing generated images to the dataset distribution, we use the largest possible truncation for the model, and perform cropping and resizing of the dataset similar to that done during model training (see Brock et al. (2018)). When comparing the attributes of generated distributions under different $\alpha$ magnitudes to each other, but not to the dataset, we reduce truncation to 0.5 to ensure better performance of the object detector.

Reducing transformation limits. Equations 1 and 2 learn a latent space walk assuming a pretrained generative model, thus keeping the model weights fixed.
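The recursive composition of equation 2 can be sketched as follows, reusing the hypothetical `zoom_edit` from the previous snippet. The single-hidden-layer parametrization of $f$, the residual step, and the zoom parametrization $1 + n\epsilon$ are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WalkFn(nn.Module):
    """f(z): one small epsilon-step in latent space, parametrized as an MLP."""
    def __init__(self, latent_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, latent_dim))

    def forward(self, z):
        return z + self.net(z)        # residual step keeps f close to identity

def nonlinear_walk_loss(G, f, z, eps=0.1, max_n=8):
    """Eq. (2): || G(f^n(z)) - edit(G(z), n*eps) || for a random order n."""
    n = torch.randint(1, max_n + 1, (1,)).item()
    zn = z
    for _ in range(n):                # n-fold composition f(f(...f(z)))
        zn = f(zn)
    with torch.no_grad():
        target = zoom_edit(G(z), 1.0 + n * eps)   # assumed zoom parametrization
    return F.mse_loss(G(zn), target)
```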
The previous approach allows us to understand the latent space organization and the limitations in the model's transformation capacity. To overcome these limits, we explore adding data augmentation by editing the training images with each corresponding transformation, and train the generative model with this augmented dataset. We also introduce a modified objective function that jointly optimizes the generator weights and a linear walk vector: $$G^*, w^* = \arg\min_{G, w}\ (\mathcal{L}_{edit} + \mathcal{L}_{GAN}), \quad (3)$$ where the edit loss encourages low L2 error between the learned transformation and the target image: $$\mathcal{L}_{edit} = \big\|G(z+\alpha w) - \mathrm{edit}(G(z), \alpha)\big\|_2^2. \quad (4)$$ The GAN loss optimizes for discriminator error: $$\mathcal{L}_{GAN} = \max_D \Big( \mathbb{E}_{z,\alpha}\big[D(G(z+\alpha w))\big] - \mathbb{E}_{x,\alpha}\big[D(\mathrm{edit}(x, \alpha))\big] \Big), \quad (5)$$ where we draw images x from the training dataset and perform data augmentation by applying the edit operation on them. This optimization approach encourages the generator to organize its latent space so that the transformations lie along linear paths; when combined with data augmentation, it results in larger transformation ranges, which we demonstrate in Sec. 4.4.
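A training-loop sketch of the joint objective in Equations 3–5 is given below. The sign conventions follow Equation 5 as written, and the generator `G`, discriminator `D`, walk vector `w`, `edit` function, and data loader are all hypothetical placeholders for a standard GAN training setup.

```python
import torch

# Sketch of Equations 3-5: jointly train the generator weights and the walk
# vector w, with a critic D trained on the Eq. 5 objective. G, D, `edit`,
# `loader`, `w`, and `latent_dim` are hypothetical placeholders.
opt_gw = torch.optim.Adam(list(G.parameters()) + [w], lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

for x in loader:                                    # real training images
    z = torch.randn(x.size(0), latent_dim)
    alpha = torch.empty(x.size(0), 1).uniform_(-1.0, 1.0)

    # Critic step: ascend Eq. 5 on walked samples vs. edited real images.
    fake = G(z + alpha * w).detach()
    real = edit(x, alpha)                           # data augmentation on x
    d_loss = -(D(fake).mean() - D(real).mean())
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator + walk step: Eq. 4 edit loss plus the Eq. 5 GAN term.
    walked = G(z + alpha * w)
    edit_loss = ((walked - edit(G(z), alpha)) ** 2).mean()
    gan_loss = D(walked).mean()
    opt_gw.zero_grad(); (edit_loss + gan_loss).backward(); opt_gw.step()
```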
This work explores the extent to which the natural image manifold is captured by generative adversarial networks (GANs) by performing walks in the latent space of pretrained models. To perform these walks, a transformation vector is learned by minimizing the distance between transformed images and the corresponding images generated from transformed latent vectors. It is found that when traversing the latent space of the GAN along the direction of the transformation vector, the corresponding generated images initially exhibit the desired transform (such as zooming or changing X position), but soon reach a limit where further changes in the latent vector do not result in changes to the image. It is observed that this behaviour is likely due to bias in the dataset on which the GAN is trained, and that by exploring the limits of the generator, biases which exist in the original dataset can be revealed. To increase the extent to which images can be transformed, it is shown that GANs can be trained with an augmented dataset and a loss function that encourages transformations to lie along linear paths.
On the "steerability" of generative adversarial networks
1 INTRODUCTION. The quality of deep generative models has increased dramatically over the past few years. When introduced in 2014, Generative Adversarial Networks (GANs) could only synthesize MNIST digits and low-resolution grayscale faces (Goodfellow et al., 2014). The most recent models, however, produce diverse high-resolution images that are often indistinguishable from natural photos (Brock et al., 2018; Karras et al., 2018). Science fiction has long dreamed of virtual realities filled with synthetic content as rich as, or richer than, the real world (e.g., The Matrix, Ready Player One). How close are we to this dream? Traditional computer graphics can render photorealistic 3D scenes, but cannot automatically generate detailed content. Generative models like GANs, in contrast, can create content from scratch, but we do not currently have tools for navigating the generated scenes in the way you can walk through and interact with a 3D game engine. In this paper, we explore the degree to which you can navigate the visual world of a GAN. Figure 1 illustrates the kinds of transformations we explore. Consider the dog at the top-left. By moving in some direction of GAN latent space, can we hallucinate walking toward this dog? As the figure indicates, and as we will show in this paper, the answer is yes. However, as we continue to zoom in, we quickly reach limits. Once the dog face fills the full frame, continuing to walk in this direction fails to increase the zoom. A similar effect occurs in the daisy example (row 2 of Fig. 1), where a direction in latent space moves the daisy up and down, but cannot move it out of frame. We hypothesize that these limits are due to biases in the distribution of images on which the GAN is trained. For example, if the training dataset consists of centered dogs and daisies, the same may be the case in GAN-generated images. Nonetheless, we find that some degree of transformation is possible. When and why can we achieve certain transformations but not others? This paper seeks to quantify the degree to which we can achieve basic visual transformations by navigating in GAN latent space. In other words, are GANs "steerable" in latent space? (We use the term "steerable" in analogy to the classic steerable filters of Freeman & Adelson (1991).) We analyze the relationship between the data distribution on which the model is trained and the success in achieving these transformations. From our experiments, it is possible to shift the distribution of generated images to some degree, but we cannot extrapolate entirely out of the dataset's support. In particular, attributes can be shifted in proportion to the variability of that attribute in the training data. We further demonstrate an approach to increase model steerability by jointly optimizing the generator and latent direction, together with data augmentation on training images. One of the current criticisms of generative models is that they simply interpolate between datapoints and fail to generate anything truly new, but our results add nuance to this story. It is possible to achieve distributional shift, but the ability to create realistic images from a modified distribution relies on sufficient diversity in the dataset along the dimension that we vary. Our main findings are: • A simple walk in the latent space of GANs achieves camera motion and color transformations in the output image space. These walks are learned in a self-supervised manner without labeled attributes or distinct source and target images.
• The linear walk is as effective as more complex non-linear walks, suggesting that the models learn to roughly linearize these operations without being explicitly trained to do so. • The extent of each transformation is limited, and we quantify a relationship between dataset variability and how much we can shift the model distribution. • The transformations are a general-purpose framework that works with different model architectures, e.g., BigGAN, StyleGAN, and DCGAN, and illustrates different disentanglement properties in their respective latent spaces. • Data augmentation improves steerability, as does jointly training the walk trajectory and the generator weights, which allows us to achieve larger transformation effects. 2 RELATED WORK. Latent space manipulations can be seen from several perspectives – how we achieve them, what limits them, and what they enable us to do. Our work addresses these three aspects together, and we briefly refer to each one in related work. Interpolations in latent space Traditional approaches to image editing with GAN latent spaces find linear directions that correspond to changes in labeled attributes, such as smile-vectors and gender-vectors for faces (Radford et al., 2015; Karras et al., 2018). However, these manipulations are not exclusive to GANs; in flow-based generative models, linearly interpolating between two encoded images allows one to edit a source image toward attributes of the target (Kingma & Dhariwal, 2018). Möllenhoff & Cremers (2019) propose a modified GAN formulation by treating data as directional k-currents, where moving along tangent planes naturally corresponds to interpretable manipulations. Upchurch et al. (2017) remove the generative model entirely and instead interpolate in the intermediate feature space of a pretrained classifier, again using feature mappings of source and target sets to determine an edit direction. Unlike these approaches, we learn our latent-space trajectories in a self-supervised manner without labeled attributes or distinct source and target images. Instead, we learn to approximate editing operations on individual source images. We find that linear trajectories in latent space can capture simple image manipulations, e.g., zoom-vectors and shift-vectors, although we also obtain similar results using nonlinear trajectories. Dataset bias Biases from training data and network architecture both impact the generalization capacity of learned models (Torralba & Efros, 2011; Geirhos et al., 2018; Amini et al.). Dataset biases partly come from human preferences in taking photos: we tend to take pictures in specific "canonical" views that are not fully representative of the entire visual world (Mezuman & Weiss, 2012; Jahanian et al., 2015). Consequently, models trained with these datasets inherit their biases. This may result in models that misrepresent the given task – such as tendencies towards texture bias rather than shape bias in ImageNet classifiers (Geirhos et al., 2018) – and in turn limit their generalization performance on similar objectives (Azulay & Weiss, 2018). Our latent space trajectories transform the output corresponding to various image editing operations, but ultimately we are constrained by biases in the data and cannot extrapolate arbitrarily far beyond the data's support.
This paper proposes to study the generalization properties of GANs through interpolation. The authors first propose to learn a linear (and non-linear) interpolation in the latent space for a specific type of image transformation, for example zoom, translation, rotation, or luminance. They show that linear interpolation in GANs can produce realistic images along the path and enables control and transformation of generated images to some extent. They then propose to measure to what extent the generated images can be transformed without "breaking". Finally, they show that the quality of the interpolation can be improved by learning the interpolation and generator jointly.
On Mutual Information Maximization for Representation Learning
1 INTRODUCTION. Unsupervised representation learning is a fundamental problem in machine learning. Intuitively, one aims to learn a function g which maps the data into some, usually lower-dimensional, space where one can solve some (generally a priori unknown) target supervised tasks more efficiently, i.e., with fewer labels. In contrast to supervised and semi-supervised learning, the learner has access only to unlabeled data. Even though the task seems ill-posed, as there is no natural objective one should optimize, by leveraging domain knowledge this approach can be successfully applied to a variety of problem areas, including image (Kolesnikov et al., 2019; van den Oord et al., 2018; Hénaff et al., 2019; Tian et al., 2019; Hjelm et al., 2019; Bachman et al., 2019) and video classification (Wang and Gupta, 2015; Sun et al., 2019), and natural language understanding (van den Oord et al., 2018; Peters et al., 2018; Devlin et al., 2019). Recently, there has been a revival of approaches inspired by the InfoMax principle (Linsker, 1988): choose a representation g(x) maximizing the mutual information (MI) between the input and its representation, possibly subject to some structural constraints. MI measures the amount of information obtained about a random variable X by observing some other random variable Y. (We denote random variables using upper-case letters, e.g., X, Y, and their realizations by the corresponding lower-case letters, e.g., x, y.) Formally, the MI between X and Y, with joint density p(x, y) and marginal densities p(x) and p(y), is defined as the Kullback–Leibler (KL) divergence between the joint and the product of the marginals: $$I(X;Y) = D_{KL}\big(p(x,y)\,\|\,p(x)p(y)\big) = \mathbb{E}_{p(x,y)}\left[\log \frac{p(x,y)}{p(x)p(y)}\right]. \quad (1)$$ The fundamental properties of MI are well understood and have been extensively studied (see, e.g., Kraskov et al. (2004)). Firstly, MI is invariant under reparametrization of the variables – namely, if X′ = f1(X) and Y′ = f2(Y) are homeomorphisms (i.e., smooth invertible maps), then I(X;Y) = I(X′;Y′). Secondly, estimating MI in high-dimensional spaces is a notoriously difficult task, and in practice one often maximizes a tractable lower bound on this quantity (Poole et al., 2019). Nonetheless, any distribution-free high-confidence lower bound on entropy requires a sample size exponential in the size of the bound (McAllester and Statos, 2018). Despite these fundamental challenges, several recent works have demonstrated promising empirical results in representation learning using MI maximization (van den Oord et al., 2018; Hénaff et al., 2019; Tian et al., 2019; Hjelm et al., 2019; Bachman et al., 2019; Sun et al., 2019). In this work we argue, and provide empirical evidence, that the success of these methods cannot be attributed to the properties of MI alone. In fact, we show that maximizing tighter bounds on MI can result in worse representations. (∗Equal contribution. Correspondence to Michael Tschannen (tschannen@google.com), Josip Djolonga (josipd@google.com), and Mario Lucic (lucic@google.com). †PhD student at the University of Cambridge and the Max Planck Institute for Intelligent Systems, Tübingen.)
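As a concrete check of the definition in (1), MI for a small discrete joint distribution can be computed directly; this NumPy example is purely illustrative and uses made-up probabilities.

```python
import numpy as np

# I(X;Y) = KL( p(x,y) || p(x)p(y) ) for a toy discrete joint distribution
# (rows index x, columns index y); the probabilities are illustrative.
p_xy = np.array([[0.3, 0.1],
                 [0.1, 0.5]])
p_x = p_xy.sum(axis=1, keepdims=True)   # marginal p(x), shape (2, 1)
p_y = p_xy.sum(axis=0, keepdims=True)   # marginal p(y), shape (1, 2)
mi = np.sum(p_xy * np.log(p_xy / (p_x * p_y)))
print(mi)   # ~0.178 nats; 0 would mean X and Y are independent
```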
In addition, we establish a connection to deep metric learning and argue that this interpretation may be a plausible explanation of the success of the recently introduced methods. (The code for running the experiments and visualizing the results is available at https://github.com/google-research/google-research/tree/master/mutual_information_representation_learning.) 2 BACKGROUND AND RELATED WORK. Recent progress and the InfoMax principle While promising results in other domains have been presented in the literature, we will focus on unsupervised image representation learning techniques that have achieved state-of-the-art performance on image classification tasks (Hénaff et al., 2019; Tian et al., 2019; Bachman et al., 2019). The usual problem setup dates back at least to Becker and Hinton (1992) and can conceptually be described as follows: for a given image X, let $X^{(1)}$ and $X^{(2)}$ be different, possibly overlapping views of X, for instance the top and bottom halves of the image. These are encoded using encoders g1 and g2, respectively, and the MI between the two representations $g_1(X^{(1)})$ and $g_2(X^{(2)})$ is maximized: $$\max_{g_1 \in \mathcal{G}_1,\ g_2 \in \mathcal{G}_2} I_{EST}\big(g_1(X^{(1)});\ g_2(X^{(2)})\big), \quad (2)$$ where $I_{EST}(X;Y)$ is a sample-based estimator of the true MI I(X;Y) and the function classes $\mathcal{G}_1$ and $\mathcal{G}_2$ can be used to specify structural constraints on the encoders. While not explicitly reflected in (2), note that g1 and g2 can often share parameters. Furthermore, it can be shown that $I(g_1(X^{(1)}); g_2(X^{(2)})) \le I(X; g_1(X^{(1)}), g_2(X^{(2)}))$ (this follows from the data processing inequality; see Prop. 1 in Appendix A), hence the objective in (2) can be seen as a lower bound on the InfoMax objective $\max_{g \in \mathcal{G}} I(X; g(X))$ (Linsker, 1988). Practical advantages of multi-view formulations There are two main advantages in using (2) rather than the original InfoMax objective. First, the MI has to be estimated only between the learned representations of the two views, which typically lie in a much lower-dimensional space than the one where the original data X lives. Second, it gives us plenty of modeling flexibility, as the two views can be chosen to capture completely different aspects and modalities of the data, for example: 1. In the basic form of DeepInfoMax (Hjelm et al., 2019), g1 extracts global features from the entire image $X^{(1)}$ and g2 local features from image patches $X^{(2)}$, where g1 and g2 correspond to activations in different layers of the same convolutional network. Bachman et al. (2019) build on this and compute the two views from different augmentations of the same image. 2. Contrastive multiview coding (CMC) (Tian et al., 2019) generalizes the objective in (2) to consider multiple views $X^{(i)}$, where each $X^{(i)}$ corresponds to a different image modality (e.g., different color channels, or the image and its segmentation mask). 3. Contrastive predictive coding (CPC) (van den Oord et al., 2018; Hénaff et al., 2019) incorporates a sequential component of the data. Concretely, one extracts a sequence of patches from an image in some fixed order, maps each patch using an encoder, aggregates the resulting features of the first t patches into a context vector, and maximizes the MI between the context and features extracted from the patch at position t + k. In (2), $X^{(1)}$ would thus correspond to the first t patches and $X^{(2)}$ to the patch at location t + k. Other approaches, such as those presented by Sermanet et al. (2018), Hu et al. (2017), and Ji et al. (2019), can be similarly subsumed under the same objective. Lower bounds on MI As evident from (2), another critical choice is the MI estimator $I_{EST}$.
Given the fundamental limitations of MI estimation (McAllester and Statos, 2018), recent work has focused on deriving lower bounds on MI (Barber and Agakov, 2003; Belghazi et al., 2018; Poole et al., 2019). Intuitively, these bounds are based on the following idea: if a classifier can accurately distinguish between samples drawn from the joint p(x, y) and those drawn from the product of marginals p(x)p(y), then X and Y have a high MI. We will focus on two such estimators, which are most commonly used in the representation learning literature. The first of them, termed InfoNCE (van den Oord et al., 2018), is defined as $$I(X;Y) \ge \mathbb{E}\left[\frac{1}{K}\sum_{i=1}^{K} \log \frac{e^{f(x_i, y_i)}}{\frac{1}{K}\sum_{j=1}^{K} e^{f(x_i, y_j)}}\right] \triangleq I_{NCE}(X;Y), \quad (3)$$ where the expectation is over K independent samples $\{(x_i, y_i)\}_{i=1}^{K}$ from the joint distribution p(x, y) (Poole et al., 2019). In practice we estimate (3) using Monte Carlo estimation by averaging over multiple batches of samples. Intuitively, the critic function f tries to predict for each $x_i$ which of the K samples $y_1, \ldots, y_K$ it was jointly drawn with, by assigning high values to the jointly drawn pair and low values to all other pairs. The second estimator is based on the variational form of the KL divergence due to Nguyen, Wainwright, and Jordan (NWJ) (Nguyen et al., 2010) and takes the form $$I(X;Y) \ge \mathbb{E}_{p(x,y)}[f(x,y)] - e^{-1}\,\mathbb{E}_{p(x)}\big[\mathbb{E}_{p(y)}\,e^{f(x,y)}\big] \triangleq I_{NWJ}(X;Y). \quad (4)$$ For detailed derivations we refer the reader to Ruderman et al. (2012) and Poole et al. (2019). Note that these bounds hold for any critic f, and when used in (2) one in practice jointly maximizes over g1, g2, and f. Furthermore, it can be shown that (3) is maximized by $f^*(x, y) = \log p(y|x)$ and (4) by $f^*(x, y) = 1 + \log p(y|x)$ (Poole et al., 2019). Common choices for f include bilinear critics $f(x, y) = x^\top W y$ (van den Oord et al., 2018; Hénaff et al., 2019; Tian et al., 2019), separable critics $f(x, y) = \phi_1(x)^\top \phi_2(y)$ (Bachman et al., 2019), and concatenated critics $f(x, y) = \phi([x, y])$ (Hjelm et al., 2019) (here $\phi, \phi_1, \phi_2$ are typically shallow multi-layer perceptrons (MLPs)). When applying these estimators to solve (2), the line between the critic and the encoders g1, g2 can be blurry. For example, one can train with an inner product critic $f(x, y) = x^\top y$, but extract features from an intermediate layer of g1, g2, in which case the top layers of g1, g2 form a separable critic. Nevertheless, this boundary is crucial for the interplay between MI estimation and the interpretation of the learned representations.
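The InfoNCE bound in (3) reduces to a softmax cross-entropy over a score matrix plus a log K offset. The sketch below uses an inner product critic on randomly generated, correlated pairs; the shapes and data are purely for illustration.

```python
import math
import torch
import torch.nn.functional as F

def infonce(x, y):
    """I_NCE lower bound of Eq. (3) for a batch of K jointly drawn pairs.

    x, y: (K, d) representations of the two views; the critic is the inner
    product f(x, y) = x^T y (one common choice among the critics above).
    """
    K = x.size(0)
    scores = x @ y.t()               # (K, K) matrix of f(x_i, y_j)
    labels = torch.arange(K)         # y_i is the positive match for x_i
    # mean_i [ f(x_i, y_i) - log sum_j e^{f(x_i, y_j)} ] + log K
    return -F.cross_entropy(scores, labels) + math.log(K)

x = torch.randn(256, 64)             # hypothetical view-1 features
y = x + 0.1 * torch.randn(256, 64)   # correlated view-2 features
print(infonce(x, y))                 # the estimate is capped at log K
```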
The paper addresses the question of whether mutual information (MI) based models for representation learning succeed primarily thanks to MI maximization. The motivation of the work comes from the fact that although MI is known to be problematic in treatment, it has been successfully applied in a number of recent works in computer vision and natural language processing. The paper conducts a series of experiments that constitute convincing evidence for a weak connection between the InfoMax principle and these practical successes, by showing that maximizing established lower bounds on MI is not predictive of downstream performance, and that, contrary to the theory, higher-capacity instantiations of the critics of MI may result in worse downstream performance of learned representations. The paper concludes that there is a considerable inductive bias in the architectural choices inside MI models that is beneficial for downstream tasks, and notes that at least one of the lower bounds on MI can be interpreted as a triplet loss, connecting it with a metric learning approach.
This paper gives a nice interpretation of why recent works based on variational lower bounds of mutual information can demonstrate promising empirical results: the authors argue that the success depends on "the inductive bias in both the choice of feature extractor architectures and the parametrization of the employed MI estimators." To support this argument, they carefully design a series of convincing experiments, which are stated in full in Section 3. Moreover, they show a connection to metric learning.
Deep Multi-View Learning via Task-Optimal CCA
1 INTRODUCTION. Parallel modalities of data are increasingly common in a variety of applications, including images and text, audio and video, parallel texts of different languages, and a variety of medical imaging and omics modalities for each patient. Each view provides essential information for classification and, when used together, can form a more accurate model. This is especially important for difficult discriminative tasks such as those with a small training set size. Canonical Correlation Analysis (CCA) is the most common method for computing a shared representation from two views of data by computing a space in which they are maximally correlated (Hotelling, 1936; Bie et al., 2005). In this paper we will demonstrate that, by optimizing for both discriminative features and correlation between views, we can improve classification accuracy in three real-world scenarios. CCA is an unsupervised method but has been applied to many discriminative tasks (Kan et al., 2015; Sargin et al., 2007; Arora & Livescu, 2012). While some of the correlated CCA features are useful for discriminative tasks, many represent properties that are of no use for classification and obscure correlated information that is beneficial. This problem is magnified with recent non-linear extensions of CCA that use deep learning to make significant strides in improving correlation (Andrew et al., 2013; Wang et al., 2015a; 2016; Chang et al., 2018), but often at the expense of discriminative capability (cf. §5.1). Therefore, we present Task-Optimal CCA (TOCCA), a new deep learning technique to project the data from two views to a shared space that is also discriminative (Fig. 1). Implementing a task-optimal variant of CCA required a fundamental change in formulation. We show that the CCA objective can equivalently be expressed as an ℓ2 distance minimization in the shared space plus an orthogonality constraint. Orthogonality constraints help regularize neural networks (NNs) (Huang et al., 2018); we present three techniques to accomplish this. While our method is derived from CCA, by manipulating the orthogonality constraints we obtain deep CCA approaches that compute a shared latent space that is also discriminative. Our family of solutions for supervised CCA required a crucial and non-trivial change in formulation. We demonstrate the effectiveness and versatility of our model on three different tasks: 1) cross-view classification on a variation of MNIST (LeCun, 1998), 2) regularization when two views are available for training but only one at test time, on a cancer imaging and genomic data set with only 1,000 samples, and 3) semi-supervised representation learning to improve speech recognition. All experiments showed a significant improvement in accuracy over the previous state-of-the-art. In addition, our approach is more robust in the small-sample-size regime than alternative methods. Overall, our experiments on real data show the effectiveness of our method in learning a shared space that is more discriminative than previous methods for a variety of practical problems. 2 RELATED WORK. CCA was initially used for unsupervised data analysis to gain insights into components shared by two sources (Andrew et al., 2013; Wang et al., 2015a; 2016). CCA has also been used to compute a shared latent space for cross-view classification (Kan et al., 2015; Wang et al., 2015a; Chandar et al., 2016; Chang et al., 2018),
for representation learning on multiple views that are then joined for prediction (Sargin et al., 2007; Dorfer et al., 2016b), and for classification from a single view when a second view is available during training (Arora & Livescu, 2012). Recent non-linear extensions of CCA implemented via NNs make significant improvements in correlation (Andrew et al., 2013; Wang et al., 2015a; 2016; Chang et al., 2018), but with little focus on discriminative capability. Most prior work that boosts the discriminative capability of CCA is linear only (Lee et al., 2015; Singanamalli et al., 2014; Duan et al., 2016). More recent work using NNs still remains limited in that it optimizes discriminative capability for an intermediate representation rather than the final CCA projection (Dorfer et al., 2016b), or optimizes the CCA objective only during pre-training, not while training the task objective (Dorfer et al., 2018). We advocate jointly optimizing CCA and a discriminative objective by computing the CCA projection within a network layer while applying a task-driven operation such as classification. Experimental results show that our method significantly improves upon previous work (Dorfer et al., 2016b; 2018) due to its focus on both the shared latent space and a task-driven objective. The latter is particularly important for small training set sizes. While alternative approaches to multi-view learning via CCA exist, they typically focus on a reconstruction objective. That is, they transform the input into a shared space such that the input can be reconstructed – either individually or by reconstructing one view from the other. Variations of coupled dictionary learning (Shekhar et al., 2014; Xu et al., 2015; Cha et al., 2015; Bahrampour et al., 2015) and autoencoders (Wang et al., 2015a; Bhatt et al., 2017) have been used in this context. CCA-based objectives, such as the model used in this work, instead learn a transformation to a shared space without the need for reconstructing the input. This task may be easier and is sufficient for producing a representation for multi-view classification (Wang et al., 2015a). 3 BACKGROUND. We first introduce CCA here and present our task-driven approach in §4. Linear and non-linear CCA are unsupervised and find the shared signal between a pair of data sources by maximizing the sum correlation between corresponding projections. Let $X_1 \in \mathbb{R}^{d_1 \times n}$ and $X_2 \in \mathbb{R}^{d_2 \times n}$ be mean-centered input data from two different views with n samples and $d_1$, $d_2$ features, respectively. CCA. The objective is to maximize the correlation between $a_1 = w_1^\top X_1$ and $a_2 = w_2^\top X_2$, where $w_1$ and $w_2$ are projection vectors (Hotelling, 1936). The first canonical directions are found via $\arg\max_{w_1, w_2} \mathrm{corr}(w_1^\top X_1, w_2^\top X_2)$, and subsequent projections are found by maximizing the same correlation but in orthogonal directions. Combining the projection vectors into matrices $W_1 = [w_1^{(1)}, \ldots, w_1^{(k)}]$ and $W_2 = [w_2^{(1)}, \ldots, w_2^{(k)}]$ (with $k \le \min(d_1, d_2)$), CCA can be reformulated as a trace maximization under orthonormality constraints on the projections, i.e., $$\arg\max_{W_1, W_2} \mathrm{tr}(W_1^\top \Sigma_{12} W_2) \quad \text{s.t.} \quad W_1^\top \Sigma_1 W_1 = W_2^\top \Sigma_2 W_2 = I \quad (1)$$ for covariance matrices $\Sigma_1 = X_1 X_1^\top$, $\Sigma_2 = X_2 X_2^\top$, and cross-covariance matrix $\Sigma_{12} = X_1 X_2^\top$. Let $T = \Sigma_1^{-1/2} \Sigma_{12} \Sigma_2^{-1/2}$ and its singular value decomposition (SVD) be $T = U_1\,\mathrm{diag}(\sigma)\,U_2^\top$ with singular values
$\sigma = [\sigma_1, \ldots, \sigma_{\min(d_1, d_2)}]$ in descending order. $W_1$ and $W_2$ are computed from the top k singular vectors of T as $W_1 = \Sigma_1^{-1/2} U_1^{(1:k)}$ and $W_2 = \Sigma_2^{-1/2} U_2^{(1:k)}$, where $U^{(1:k)}$ denotes the first k columns of matrix U. The sum correlation in the projection space is equivalent to $$\sum_{i=1}^{k} \mathrm{corr}\big((w_1^{(i)})^\top X_1,\ (w_2^{(i)})^\top X_2\big) = \sum_{i=1}^{k} \sigma_i, \quad (2)$$ i.e., the sum of the top k singular values. A regularized variation of CCA (RCCA) ensures that the covariance matrices are positive definite by computing them as $\hat\Sigma_1 = \frac{1}{n-1} X_1 X_1^\top + rI$ and $\hat\Sigma_2 = \frac{1}{n-1} X_2 X_2^\top + rI$, for regularization parameter r > 0 and identity matrix I (Bilenko & Gallant, 2016). DCCA. Deep CCA adds non-linear projections to CCA by non-linearly mapping the input via a multilayer perceptron (MLP). In particular, inputs $X_1$ and $X_2$ are mapped via non-linear functions $f_1$ and $f_2$, parameterized by $\theta_1$ and $\theta_2$, resulting in activations $A_1 = f_1(X_1; \theta_1)$ and $A_2 = f_2(X_2; \theta_2)$ (assumed to be mean centered) (Andrew et al., 2013). When implemented by a NN, $A_1$ and $A_2$ are the output activations of the final layer with $d_o$ features. Fig. 2(a) shows the network structure. DCCA optimizes the same objective as CCA (equation 1) but using activations $A_1$ and $A_2$. Regularized covariance matrices are computed accordingly, and the solution for $W_1$ and $W_2$ can be computed using SVD just as with linear CCA. When $k = d_o$ (i.e., the number of CCA components is equal to the number of features in $A_1$ and $A_2$), optimizing the sum correlation in the projection space (equation 2) is equivalent to optimizing the following matrix trace norm objective (TNO): $\mathcal{L}_{TNO}(A_1, A_2) = \|T\|_{tr} = \mathrm{tr}(T^\top T)^{1/2}$, where $T = \Sigma_1^{-1/2} \Sigma_{12} \Sigma_2^{-1/2}$ as in CCA (Andrew et al., 2013). DCCA optimizes this objective directly, without a need to compute the CCA projection within the network. The TNO is optimized first, followed by a linear CCA operation, before downstream tasks like classification are performed. This formulation does not allow combining directly with a supervised term. SoftCCA. While DCCA enforces orthogonality constraints on the projections $W_1^\top A_1$ and $W_2^\top A_2$, SoftCCA relaxes them using regularization (Chang et al., 2018). The final projection matrices $W_1$ and $W_2$ are integrated into $f_1$ and $f_2$ as the top network layer. The trace objective for DCCA in equation 1 can be rewritten as minimizing the ℓ2 distance between the projections when each feature in $A_1$ and $A_2$ is normalized to unit variance (Li et al., 2003), leading to $\mathcal{L}_{\ell_2 dist}(A_1, A_2) = \|A_1 - A_2\|_F^2$ (we use this ℓ2 distance objective in our formulation). Regularization in SoftCCA penalizes the off-diagonal elements of the covariance matrix Σ, using a running average $\hat\Sigma$ computed over batches, with a loss of $\mathcal{L}_{Decorr}(A) = \sum_{i \ne j}^{d_o} |\hat\Sigma_{i,j}|$. Overall, the SoftCCA loss takes the form $\mathcal{L}_{\ell_2 dist}(A_1, A_2) + \lambda\,(\mathcal{L}_{Decorr}(A_1) + \mathcal{L}_{Decorr}(A_2))$. Supervised CCA methods. CCA, DCCA, and SoftCCA are all unsupervised methods for learning a projection to a shared space in which the data is maximally correlated. Although these methods have shown utility for discriminative tasks, a CCA decomposition may not be optimal for classification because features that are correlated may not be discriminative. Our experiments will show that maximizing the correlation objective too much can degrade performance on discriminative tasks.
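The closed-form linear CCA solution described above can be written compactly in NumPy; the regularization value and the 1/(n−1) scaling applied uniformly to all covariance terms are implementation choices for this illustration, not prescriptions from the paper.

```python
import numpy as np

def linear_cca(X1, X2, k, r=1e-4):
    """Regularized linear CCA. X1: (d1, n), X2: (d2, n), mean-centered."""
    n = X1.shape[1]
    S1 = X1 @ X1.T / (n - 1) + r * np.eye(X1.shape[0])   # regularized cov
    S2 = X2 @ X2.T / (n - 1) + r * np.eye(X2.shape[0])
    S12 = X1 @ X2.T / (n - 1)                            # cross-covariance

    def inv_sqrt(S):                 # inverse square root of an SPD matrix
        vals, vecs = np.linalg.eigh(S)
        return vecs @ np.diag(vals ** -0.5) @ vecs.T

    T = inv_sqrt(S1) @ S12 @ inv_sqrt(S2)
    U1, sig, U2t = np.linalg.svd(T)                      # T = U1 diag(sig) U2^T
    W1 = inv_sqrt(S1) @ U1[:, :k]                        # top-k projections
    W2 = inv_sqrt(S2) @ U2t.T[:, :k]
    return W1, W2, sig[:k]           # sum(sig[:k]) is the total correlation

rng = np.random.default_rng(0)
X1 = rng.standard_normal((10, 500)); X1 -= X1.mean(axis=1, keepdims=True)
X2 = X1[:5] + 0.1 * rng.standard_normal((5, 500)); X2 -= X2.mean(axis=1, keepdims=True)
W1, W2, sig = linear_cca(X1, X2, k=3)
print(sig)   # top canonical correlations, near 1 for the shared dimensions
```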
CCA has previously been extended to supervised settings in three ways: 1) with methods that are linear only (Singanamalli et al., 2014; Lee et al., 2015; Kan et al., 2015; Duan et al., 2016); 2) by maximizing the total correlation between each view and the training labels in addition to that between each pair of views (Lee et al., 2015; Singanamalli et al., 2014); and 3) with Linear Discriminant Analysis (LDA)-style approaches that encourage class separation (Kan et al., 2015; Dorfer et al., 2016b; Elmadany et al., 2016). LDA approaches to supervision are generative rather than discriminative. Importantly, we will show in §5.3 that encouraging class separation with an LDA-style objective performs significantly worse than a softmax. Further, Dorfer et al. (2016b) did not apply LDA to the shared space itself but to the NN layer below it, and Elmadany et al. (2016) did not validate the shared space created, only its use in multi-view classification using both views for training and test. Dorfer et al.'s CCA Layer (CCAL) is the closest to our method. It optimizes a task loss operating on a CCA projection; however, the CCA objective itself is only optimized during pre-training, not in an end-to-end manner (Dorfer et al., 2018). Further, their goal is retrieval with a pairwise rank loss, not classification. Instead of computing the CCA projection explicitly within the network, we optimize the non-linear mapping into the shared space together with the task objective, requiring a fundamental change in formulation. We optimize for the shared space with the ℓ2 distance between activations (similar to SoftCCA) and propose three different ways to apply the orthogonality constraints of CCA.
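A sketch of how an ℓ2 alignment term, a SoftCCA-style decorrelation penalty, and a softmax task loss might be combined is given below; the per-batch covariance (instead of a running average), the weighting coefficients, and the function names are all assumptions for illustration, not the paper's exact TOCCA formulation.

```python
import torch
import torch.nn.functional as F

def decorr_loss(A):
    """Sum of absolute off-diagonal covariance entries (SoftCCA-style),
    computed per batch rather than with a running average."""
    A = A - A.mean(dim=0, keepdim=True)
    cov = A.t() @ A / (A.size(0) - 1)
    off_diag = cov - torch.diag(torch.diag(cov))
    return off_diag.abs().sum()

def task_optimal_cca_loss(A1, A2, logits, labels, lam=0.1, mu=1.0):
    """Illustrative combination: l2 alignment in the shared space,
    decorrelation of each view's features, and a softmax task loss."""
    align = ((A1 - A2) ** 2).sum(dim=1).mean()   # l2 distance objective
    decorr = decorr_loss(A1) + decorr_loss(A2)   # soft orthogonality
    task = F.cross_entropy(logits, labels)       # task-driven objective
    return align + lam * decorr + mu * task
```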
CCA is a generative model that learns a shared subspace based on two (or more) views of the data. Being generative, it might not have strong discriminative power for some downstream classification tasks. Previous approaches to infuse discriminative power into the shared subspace estimated by CCA are linear. So, this paper proposes to learn 1) non-linear and 2) discriminative subspaces for CCA. The paper accomplishes this by adding a task-specific term to the optimization objective of DeepCCA (Andrew et al., 2013), which involves just adding a task-specific MLP on top and minimizing the associated loss function.
SP:1bc27efda9dd80ce3cfabd6a1e16a904c85f37fc
Deep Multi-View Learning via Task-Optimal CCA
1 INTRODUCTION . Parallel modalities of data are increasingly common in a variety of applications , including images and text , audio and video , parallel texts of different languages , and a variety of medical imaging and omics modalities for each patient . Each view provides essential information for classification and , when used together , can form a more accurate model . This is especially important for difficult discriminative tasks such as those with a small training set size . Canonical Correlation Analysis ( CCA ) is the most common method for computing a shared representation from two views of data by computing a space in which they are maximally correlated ( Hotelling , 1936 ; Bie et al. , 2005 ) . In this paper we will demonstrate that , through optimizing for both discriminative features and correlation between views , we can improve classification accuracy for three real world scenarios . CCA is an unsupervised method but has been applied to many discriminative tasks ( Kan et al. , 2015 ; Sargin et al. , 2007 ; Arora & Livescu , 2012 ) . While some of the correlated CCA features are useful for discriminative tasks , many represent properties that are of no use for classification and obscure correlated information that is beneficial . This problem is magnified with recent non-linear extensions of CCA that use deep learning to make significant strides in improving correlation ( Andrew et al. , 2013 ; Wang et al. , 2015a ; 2016 ; Chang et al. , 2018 ) but often at the expense of discriminative capability ( cf . §5.1 ) . Therefore , we present Task-Optimal CCA ( TOCCA ) , a new deep learning technique to project the data from two views to a shared space that is also discriminative ( Fig . 1 ) . Implementing a task-optimal variant of CCA required a fundamental change in formulation . We show that the CCA objective can equivalently be expressed as an ` 2 distance minimization in the shared space plus an orthogonality constraint . Orthogonality constraints help regularize neural networks ( NNs ) ( Huang et al. , 2018 ) ; we present three techniques to accomplish this . While our method is derived from CCA , by manipulating the orthogonality constraints , we obtain deep CCA approaches that compute a shared latent space that is also discriminative . Our family of solutions for supervised CCA required a crucial and non-trivial change in formulation . We demonstrate the effectiveness and versatility of our model for three different tasks : 1 ) crossview classification on a variation of MNIST ( LeCun , 1998 ) , 2 ) regularization when two views are available for training but only one at test time on a cancer imaging and genomic data set with only 1,000 samples , and 3 ) semi-supervised representation learning to improve speech recognition . All experiments showed a significant improvement in accuracy over previous state-of-the-art . In addition , our approach is more robust in the small sample size regime than alternative methods . Overall , our experiments on real data show the effectiveness of our method in learning a shared space that is more discriminative than previous methods for a variety of practical problems . 2 RELATED WORK . CCA was initially used for unsupervised data analysis to gain insights into components shared by two sources ( Andrew et al. , 2013 ; Wang et al. , 2015a ; 2016 ) . CCA has also been used to compute a shared latent space for cross-view classification ( Kan et al. , 2015 ; Wang et al. , 2015a ; Chandar et al. , 2016 ; Chang et al. 
, 2018 ) , for representation learning on multiple views that are then joined for prediction ( Sargin et al. , 2007 ; Dorfer et al. , 2016b ) , and for classification from a single view when a second view is available during training ( Arora & Livescu , 2012 ) . Recent non-linear extensions of CCA implemented via NNs make significant improvements in correlation ( Andrew et al. , 2013 ; Wang et al. , 2015a ; 2016 ; Chang et al. , 2018 ) but with little focus on discriminative capability . Most prior work that boosts the discriminative capability of CCA is linear only ( Lee et al. , 2015 ; Singanamalli et al. , 2014 ; Duan et al. , 2016 ) . More recent work using NNs still remains limited in that it optimizes discriminative capability for an intermediate representation rather than the final CCA projection ( Dorfer et al. , 2016b ) , or optimizes the CCA objective only during pre-training , not while training the task objective ( Dorfer et al. , 2018 ) . We advocate to jointly optimize CCA and a discriminative objective by computing the CCA projection within a network layer while applying a task-driven operation such as classification . Experimental results show that our method significantly improves upon previous work ( Dorfer et al. , 2016b ; 2018 ) due to its focus on both the shared latent space and a task-driven objective . The latter is particularly important for small training set sizes . While alternative approaches to multi-view learning via CCA exist , they typically focus on a reconstruction objective . That is , they transform the input into a shared space such that the input could be reconstructed , either individually or reconstructing one view from the other . Variations of coupled dictionary learning ( Shekhar et al. , 2014 ; Xu et al. , 2015 ; Cha et al. , 2015 ; Bahrampour et al. , 2015 ) and autoencoders ( Wang et al. , 2015a ; Bhatt et al. , 2017 ) have been used in this context . CCA-based objectives , such as the model used in this work , instead learn a transformation to a shared space without the need for reconstructing the input . This task may be easier and sufficient in producing a representation for multi-view classification ( Wang et al. , 2015a ) . 3 BACKGROUND . We first introduce CCA and present our task-driven approach in §4 . Linear and non-linear CCA are unsupervised and find the shared signal between a pair of data sources , by maximizing the sum correlation between corresponding projections . Let $X_1 \in \mathbb{R}^{d_1 \times n}$ and $X_2 \in \mathbb{R}^{d_2 \times n}$ be mean-centered input data from two different views with $n$ samples and $d_1 , d_2$ features , respectively . CCA . The objective is to maximize the correlation between $a_1 = w_1^\top X_1$ and $a_2 = w_2^\top X_2$ , where $w_1$ and $w_2$ are projection vectors ( Hotelling , 1936 ) . The first canonical directions are found via $\arg\max_{w_1 , w_2} \operatorname{corr} ( w_1^\top X_1 , w_2^\top X_2 )$ and subsequent projections are found by maximizing the same correlation but in orthogonal directions . Combining the projection vectors into matrices $W_1 = [ w_1^{(1)} , \ldots , w_1^{(k)} ]$ and $W_2 = [ w_2^{(1)} , \ldots , w_2^{(k)} ]$ ( $k \le \min ( d_1 , d_2 )$ ) , CCA can be reformulated as a trace maximization under orthonormality constraints on the projections , i.e. , $$\arg\max_{W_1 , W_2} \operatorname{tr} ( W_1^\top \Sigma_{12} W_2 ) \quad \text{s.t.} \quad W_1^\top \Sigma_1 W_1 = W_2^\top \Sigma_2 W_2 = I \qquad (1)$$ for covariance matrices $\Sigma_1 = X_1 X_1^\top$ , $\Sigma_2 = X_2 X_2^\top$ , and cross-covariance matrix $\Sigma_{12} = X_1 X_2^\top$ . Let $T = \Sigma_1^{-1/2} \Sigma_{12} \Sigma_2^{-1/2}$ and its singular value decomposition ( SVD ) be $T = U_1 \operatorname{diag} ( \sigma ) U_2^\top$ with singular values $\sigma = [ \sigma_1 , \ldots , \sigma_{\min ( d_1 , d_2 )} ]$ in descending order . $W_1$ and $W_2$ are computed from the top $k$ singular vectors of $T$ as $W_1 = \Sigma_1^{-1/2} U_1^{(1:k)}$ and $W_2 = \Sigma_2^{-1/2} U_2^{(1:k)}$ , where $U^{(1:k)}$ denotes the first $k$ columns of matrix $U$ . The sum correlation in the projection space is equivalent to $$\sum_{i=1}^{k} \operatorname{corr} \big( ( w_1^{(i)} )^\top X_1 , ( w_2^{(i)} )^\top X_2 \big) = \sum_{i=1}^{k} \sigma_i^2 , \qquad (2)$$ i.e. , the sum of the top $k$ squared singular values . A regularized variation of CCA ( RCCA ) ensures that the covariance matrices are positive definite by computing the covariance matrices as $\hat\Sigma_1 = \frac{1}{n-1} X_1 X_1^\top + rI$ and $\hat\Sigma_2 = \frac{1}{n-1} X_2 X_2^\top + rI$ , for regularization parameter $r > 0$ and identity matrix $I$ ( Bilenko & Gallant , 2016 ) . DCCA . Deep CCA adds non-linear projections to CCA by non-linearly mapping the input via a multilayer perceptron ( MLP ) . In particular , inputs $X_1$ and $X_2$ are mapped via non-linear functions $f_1$ and $f_2$ , parameterized by $\theta_1$ and $\theta_2$ , resulting in activations $A_1 = f_1 ( X_1 ; \theta_1 )$ and $A_2 = f_2 ( X_2 ; \theta_2 )$ ( assumed to be mean centered ) ( Andrew et al. , 2013 ) . When implemented by a NN , $A_1$ and $A_2$ are the output activations of the final layer with $d_o$ features . Fig . 2 ( a ) shows the network structure . DCCA optimizes the same objective as CCA ( equation 1 ) but using activations $A_1$ and $A_2$ . Regularized covariance matrices are computed accordingly and the solution for $W_1$ and $W_2$ can be computed using SVD just as with linear CCA . When $k = d_o$ ( i.e. , the number of CCA components is equal to the number of features in $A_1$ and $A_2$ ) , optimizing the sum correlation in the projection space ( equation 2 ) is equivalent to optimizing the following matrix trace norm objective ( TNO ) $$\mathcal{L}_{\mathrm{TNO}} ( A_1 , A_2 ) = \| T \|_{\mathrm{tr}} = \operatorname{tr} ( T^\top T )^{1/2} ,$$ where $T = \Sigma_1^{-1/2} \Sigma_{12} \Sigma_2^{-1/2}$ as in CCA ( Andrew et al. , 2013 ) . DCCA optimizes this objective directly , without a need to compute the CCA projection within the network . The TNO is optimized first , followed by a linear CCA operation before downstream tasks like classification are performed . This formulation does not allow for combining directly with a supervised term . SoftCCA . While DCCA enforces orthogonality constraints on projections $W_1^\top A_1$ and $W_2^\top A_2$ , SoftCCA relaxes them using regularization ( Chang et al. , 2018 ) . Final projection matrices $W_1$ and $W_2$ are integrated into $f_1$ and $f_2$ as the top network layer . The trace objective for DCCA in equation 1 can be rewritten as minimizing the $\ell_2$ distance between the projections when each feature in $A_1$ and $A_2$ is normalized to unit variance ( Li et al. , 2003 ) , leading to1 $$\mathcal{L}_{\ell_2 \mathrm{dist}} ( A_1 , A_2 ) = \| A_1 - A_2 \|_F^2 .$$ Regularization in SoftCCA penalizes the off-diagonal elements of the covariance matrix $\Sigma$ , using a running average computed over batches as $\hat\Sigma$ and a loss of $\mathcal{L}_{\mathrm{Decorr}} ( A ) = \sum_{i \neq j}^{d_o} | \hat\Sigma_{i , j} |$ . Overall , the SoftCCA loss takes the form $$\mathcal{L}_{\ell_2 \mathrm{dist}} ( A_1 , A_2 ) + \lambda \big( \mathcal{L}_{\mathrm{Decorr}} ( A_1 ) + \mathcal{L}_{\mathrm{Decorr}} ( A_2 ) \big) .$$ 1We use this $\ell_2$ distance objective in our formulation . Supervised CCA methods . CCA , DCCA , and SoftCCA are all unsupervised methods to learn a projection to a shared space in which the data is maximally correlated . Although these methods have shown utility for discriminative tasks , a CCA decomposition may not be optimal for classification because features that are correlated may not be discriminative . Our experiments will show that maximizing the correlation objective too much can degrade performance on discriminative tasks .
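For concreteness , the closed-form linear CCA solution described above can be sketched in a few lines of NumPy . This follows the regularized ( RCCA ) variant with ridge term $r$ ; the function and helper names are our own illustrative choices , not from the paper :

```python
import numpy as np

def linear_cca(X1, X2, k, r=1e-4):
    """Regularized linear CCA via the SVD of T = S1^{-1/2} S12 S2^{-1/2}.

    X1: (d1, n) and X2: (d2, n) are mean-centered views; k <= min(d1, d2).
    The ridge term r keeps the covariance estimates positive definite.
    """
    n = X1.shape[1]
    S1 = X1 @ X1.T / (n - 1) + r * np.eye(X1.shape[0])
    S2 = X2 @ X2.T / (n - 1) + r * np.eye(X2.shape[0])
    S12 = X1 @ X2.T / (n - 1)

    def inv_sqrt(S):
        # Symmetric inverse square root via eigendecomposition.
        w, V = np.linalg.eigh(S)
        return V @ np.diag(w ** -0.5) @ V.T

    S1_is, S2_is = inv_sqrt(S1), inv_sqrt(S2)
    U1, sigma, U2t = np.linalg.svd(S1_is @ S12 @ S2_is)
    W1 = S1_is @ U1[:, :k]        # top-k canonical directions, view 1
    W2 = S2_is @ U2t.T[:, :k]     # top-k canonical directions, view 2
    return W1, W2, sigma[:k]      # projections are W1.T @ X1 and W2.T @ X2
```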
CCA has previously been extended to supervised settings in three ways : 1 ) with methods that are linear only ( Singanamalli et al. , 2014 ; Lee et al. , 2015 ; Kan et al. , 2015 ; Duan et al. , 2016 ) , 2 ) by maximizing the total correlation between each view and the training labels in addition to each pair of views ( Lee et al. , 2015 ; Singanamalli et al. , 2014 ) , and 3 ) with Linear Discriminant Analysis ( LDA ) -style approaches to encourage class separation ( Kan et al. , 2015 ; Dorfer et al. , 2016b ; Elmadany et al. , 2016 ) . LDA approaches to supervision are generative rather than discriminative . Importantly , we will show in §5.3 that encouraging class separation with an LDA-style objective performs significantly worse than a softmax . Further , Dorfer et al . ( 2016b ) did not apply LDA to the shared space itself but to the NN layer below it , and Elmadany et al . ( 2016 ) did not validate the shared space created , only its use in multi-view classification using both views for training and test . Dorfer et al . ’ s CCA Layer ( CCAL ) is the closest to our method . It optimizes a task loss operating on a CCA projection ; however , the CCA objective itself is only optimized during pre-training , not in an end-to-end manner ( Dorfer et al. , 2018 ) . Further , their goal is retrieval with a pairwise rank loss , not classification . Instead of computing the CCA projection explicitly within the network , we optimize the non-linear mapping into the shared space together with the task objective , requiring a fundamental change in formulation . We optimize for the shared space with the $\ell_2$ distance between activations ( similar to SoftCCA ) and propose three different ways to apply the orthogonality constraints of CCA .
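To make the kind of joint objective described here concrete , the following is a minimal PyTorch sketch combining an $\ell_2$ view-matching term and a SoftCCA-style decorrelation penalty with a softmax task loss . The batch covariance ( rather than a running average ) and the trade-off weights `lam` and `mu` are simplifying assumptions for illustration , not the authors ' exact formulation :

```python
import torch
import torch.nn.functional as F

def decorr_loss(A):
    """Sum of absolute off-diagonal covariance entries of A with shape (n, d_o)."""
    A = A - A.mean(dim=0, keepdim=True)
    cov = A.T @ A / (A.shape[0] - 1)
    return cov.abs().sum() - cov.diagonal().abs().sum()

def joint_cca_task_loss(A1, A2, logits1, logits2, labels, lam=10.0, mu=1.0):
    """l2 view-matching + decorrelation + softmax task loss on both views.

    A1, A2: (n, d_o) activations of the two view networks; logits1/2 are
    class scores computed from them. lam and mu are illustrative weights.
    """
    match = ((A1 - A2) ** 2).sum(dim=1).mean()          # pull views together
    decorr = decorr_loss(A1) + decorr_loss(A2)          # soft orthogonality
    task = F.cross_entropy(logits1, labels) + F.cross_entropy(logits2, labels)
    return match + lam * decorr + mu * task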
This paper addresses the problem of jointly performing CCA and task classification. The problem is timely and important: it is challenging to perform CCA jointly with the task classification (see below), and hence previous work typically performs this in a pipeline - that is, first projecting the data using a pre-trained CCA and then training a task classifier on the projected representation. As the authors note, this may be problematic, as CCA may discard important information that is relevant for classification if training is not done jointly.
SP:1bc27efda9dd80ce3cfabd6a1e16a904c85f37fc
Coherent Gradients: An Approach to Understanding Generalization in Gradient Descent-based Optimization
1 INTRODUCTION AND OVERVIEW . Neural networks used in practice often have sufficient effective capacity to learn arbitrary maps from their inputs to their outputs . This is typically demonstrated by training a classification network that achieves good test accuracy on a real dataset $S$ on a modified version of $S$ ( call it $S'$ ) where the labels are randomized , and observing that the training accuracy on $S'$ is very high , though , of course , the test accuracy is no better than chance ( Zhang et al. , 2017 ) . This leads to an important open question in the Deep Learning community ( Zhang et al . ( 2017 ) ; Arpit et al . ( 2017 ) ; Bartlett et al . ( 2017 ) ; Kawaguchi et al . ( 2017 ) ; Neyshabur et al . ( 2018 ) ; Arora et al . ( 2018 ) ; Belkin et al . ( 2019 ) ; Rahaman et al . ( 2019 ) ; Nagarajan & Kolter ( 2019 ) , etc . ) : Among all maps that fit a real dataset , how does Gradient Descent ( GD ) find one that generalizes well ? This is the question we address in this paper . We start by observing that this phenomenon is not limited to neural networks trained with GD but also applies to Random Forests and Decision Trees . However , there is no mystery with trees : A typical tree construction algorithm splits the training set recursively into similar subsets based on input features . If no similarity is found , eventually , each example is put into its own leaf to achieve good training accuracy ( but , of course , at the cost of poor generalization ) . Thus , trees that achieve good accuracy on a randomized dataset are much larger than those on a real dataset ( e.g . Chatterjee & Mishchenko ( 2019 , Expt . 5 ) ) . Is it possible that something similar happens with GD ? We believe so . The type of randomized-label experiments described above show that if there are common patterns to be found , then GD finds them . If not , it fits each example on a case-by-case basis . The question then is , what is it about the dynamics of GD that makes it possible to extract common patterns from the data ? And what does it mean for a pattern to be common ? Since the only change to the network parameters in GD comes from the gradients , the mechanism to detect commonality amongst examples must be through the gradients . We propose that this commonality detection can be explained as follows : 1 . Gradients are coherent , i.e. , similar examples ( or parts of examples ) have similar gradients ( or similar components of gradients ) and dissimilar examples have dissimilar gradients . 2 . Since the overall gradient is the sum of the per-example gradients , it is stronger in directions where the per-example gradients are similar and reinforce each other and weaker in other directions where they are different and do not add up . 3 . Since network parameters are updated proportionally to gradients , they change faster in the direction of stronger gradients . 4 . Thus the changes to the network during training are biased towards those that simultaneously benefit many examples instead of a few ( or one example ) . For convenience , we refer to this as the Coherent Gradients hypothesis . It is instructive to work through the proposed mechanism in the context of a simple thought experiment . Consider a training set with two examples $a$ and $b$ . At some point in training , suppose the gradient of $a$ , $g_a$ , can be decomposed into two orthogonal components $g_{a1}$ and $g_{a2}$ of roughly equal magnitude , i.e.
, there are two , equally good , independent ways in which the network can better fit $a$ ( by using , say , two disjoint parts of the network ) . Likewise for $b$ . Now , further suppose that one of the two ways is common to both $a$ and $b$ , i.e. , say $g_{a2} = g_{b2} = g_{ab}$ , whereas the other two are example specific , i.e. , $\langle g_{a1} , g_{b1} \rangle = 0$ . Now , the overall gradient is $$g = g_a + g_b = g_{a1} + 2 g_{ab} + g_{b1} .$$ Observe that the gradient is stronger in the direction that simultaneously helps both examples , and thus the corresponding parameter changes are bigger than those that benefit only one example.1 It is important to emphasize that the notion of similarity used above ( i.e. , which examples are considered similar ) is not constant but changes in the course of training as network parameters change . It starts from a mostly task independent notion due to random initialization and is bootstrapped in the course of training to be task dependent . We say “ mostly ” because even with random initialization , examples that are syntactically close are treated similarly ( e.g. , two images differing in the intensities of some pixels as opposed to two images where one is a translated version of the other ) . The relationship between strong gradients and generalization can also be understood through the lens of algorithmic stability ( Bousquet & Elisseeff , 2002 ) : strong gradient directions are more stable since the presence or absence of a single example does not impact them as much , as opposed to weak gradient directions which may altogether disappear if a specific example is missing from the training set . With this observation , we can reason inductively about the stability of GD : since the initial values of the parameters do not depend on the training data , the initial function mapping examples to their gradients is stable . Now , if all parameter updates are due to strong gradient directions , then stability is preserved . However , if some parameter updates are due to weak gradient directions , then stability is diminished . Since stability ( suitably formalized ) is equivalent to generalization ( Shalev-Shwartz et al. , 2010 ) , this allows us to see how generalization may degrade as training progresses . Based on this insight , we shall see later how a simple modification to GD to suppress the weak gradient directions can dramatically reduce overfitting . In addition to providing insight into why GD generalizes in practice , we believe that the Coherent Gradients hypothesis can help explain several other empirical observations about deep learning in the literature : ( a ) Learning is slower with random labels than with real labels ( Zhang et al. , 2017 ; Arpit et al. , 2017 ) ( b ) Robustness to large amounts of label noise ( Rolnick et al. , 2017 ) ( c ) Early stopping leads to better generalization ( Caruana et al. , 2000 ) ( d ) Increasing capacity improves generalization ( Caruana et al. , 2000 ; Neyshabur et al. , 2018 ) ( e ) The existence of adversarial initialization schemes ( Liu et al. , 2019 ) ( f ) GD detects common patterns even when trained with random labels ( Chatterjee & Mishchenko , 2019 ) 1While the mechanism is easiest to see with full or large minibatches , we believe it holds even for small minibatches ( though there one has to consider the bias in updates over time ) .
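A toy numerical instantiation of the two-example thought experiment makes the reinforcement effect explicit ( the specific vectors are , of course , our own illustrative choice ) :

```python
import numpy as np

# Toy instantiation: g_a = g_a1 + g_ab and g_b = g_b1 + g_ab, where g_a1 and
# g_b1 are example-specific directions and g_ab is the component shared by
# both examples (all three chosen orthogonal with equal magnitude).
g_a1 = np.array([1.0, 0.0, 0.0])
g_b1 = np.array([0.0, 1.0, 0.0])
g_ab = np.array([0.0, 0.0, 1.0])

g = (g_a1 + g_ab) + (g_b1 + g_ab)   # overall gradient g = g_a + g_b
print(g)                            # [1. 1. 2.]: the shared direction is twice as strong

# A GD step -eta * g therefore moves the parameters twice as far along the
# direction that helps both examples as along either example-specific one.
```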
A direct experimental verification of the Coherent Gradients hypothesis is challenging since the notion of similarity between examples depends on the parameters of the network and thus changes during training . Our approach , therefore , is to design intervention experiments where we establish a baseline and compare it against variants designed to test some aspect or prediction of the theory . As part of these experiments , we replicate the observations ( a ) – ( c ) in the literature noted above , and analyze the corresponding explanations provided by Coherent Gradients ( §2 ) , and outline for future work how ( d ) – ( f ) may be accounted for ( §5 ) . In this paper , we limit our study to simple baselines : vanilla Stochastic Gradient Descent ( SGD ) on MNIST using fully connected networks . We believe that this is a good starting point , since even in this simple setting , with all frills eliminated ( e.g. , inductive bias from architecture or explicit regularization , or a more sophisticated optimization procedure ) , we are challenged to find a satisfactory explanation of why SGD generalizes well . Furthermore , our prior is that the difference between weak and strong directions is small at any one step of training , and therefore having a strong learning signal as in the case of MNIST makes a direct analysis of gradients easier . It also has the benefit of having a smaller carbon footprint and being easier to reproduce . Finally , based on preliminary experiments on other architectures and datasets we are optimistic that the insights we get from studying this simple setup apply more broadly . 2 EFFECT OF REDUCING SIMILARITY BETWEEN EXAMPLES . Our first test of the Coherent Gradients hypothesis is to see what happens when we reduce similarity between examples . Although , at any point during training , we do not know which examples are similar and which are different , we can ( with high probability ) reduce the similarity among training examples simply by injecting label noise . In other words , under any notion of similarity , adding label noise to a dataset that has clean labels is likely to make similar examples less similar . Note that this perturbation does not reduce coherence , since gradients still depend on the examples . ( To break coherence , we would have to make the gradients independent of the training examples , which would require perturbing SGD itself and not just the dataset . ) 2.1 SETUP . For our baseline , we use the standard MNIST dataset of 60,000 training examples and 10,000 test examples . Each example is a 28x28 pixel grayscale handwritten digit along with a label ( ‘ 0 ’ – ‘ 9 ’ ) . We train a fully connected network on this dataset . The network has one hidden layer with 2048 ReLUs and an output layer with a 10-way softmax . We initialize it with Xavier and train using vanilla SGD ( i.e. , no momentum ) and cross entropy loss with a constant learning rate of 0.1 and a minibatch size of 100 for $10^5$ steps ( i.e. , about 170 epochs ) . We do not use any explicit regularizers . We perturb the baseline by modifying only the dataset and keeping all other aspects of the architecture and learning algorithm fixed . The dataset is modified by adding various amounts of noise ( 25 % , 50 % , 75 % , and 100 % ) to the labels of the training set ( but not the test set ) . This noise is added by taking , say in the case of 25 % label noise , 25 % of the examples at random and randomly permuting their labels .
Thus , when we add 25 % label noise , we still expect about 75 % + 0.1 * 25 % , i.e. , 77.5 % of the examples to have unchanged ( i.e . “ correct ” ) labels which we call the proper accuracy of the modified dataset . In what follows , we call examples with unchanged labels , pristine , and the remaining , corrupt . Also , from this perspective , it is convenient to refer to the original MNIST dataset as having 0 % label noise . We use a fully connected architecture instead of a convolutional one to mitigate concerns that some of the difference in generalization between the original MNIST and the noisy variants could stem from architectural inductive bias . We restrict ourselves to only 1 hidden layer to have the gradients be as well-behaved as possible . Finally , the network width , learning rate , and the number of training steps are chosen to ensure that exactly the same procedure is usually able to fit all 5 variants to 100 % training accuracy .
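The label-noise injection described in §2.1 is straightforward to reproduce . The following NumPy sketch ( the function name and seed are our own illustrative choices , not from the paper ) permutes the labels of a randomly chosen subset and recovers the pristine/corrupt split :

```python
import numpy as np

def add_label_noise(labels, noise_frac, seed=0):
    """Randomly permute the labels of a noise_frac fraction of the examples.

    With 25% noise, roughly 10% of the permuted labels land back on their
    original value, so about 75% + 0.1 * 25% = 77.5% of the examples remain
    pristine -- the 'proper accuracy' of the modified dataset.
    """
    rng = np.random.default_rng(seed)
    noisy = labels.copy()
    idx = rng.choice(len(labels), size=int(noise_frac * len(labels)), replace=False)
    noisy[idx] = labels[rng.permutation(idx)]   # permute labels within the subset
    pristine = noisy == labels                  # True for unchanged labels
    return noisy, pristine
```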
This paper posits that similar input examples will have similar gradients, leading to a gradient "coherence" phenomenon. A simple argument then suggests that the loss should decrease much more rapidly when gradients cohere than when they do not. This hypothesis and analysis are supported with clever experiments that confirm some of the predictions of this theory. Furthermore, since, as the authors emphasize, their hypothesis is prescriptive, they are able to suggest a novel regularization technique and show that it is effective in a simple setting.
SP:d905f831fd48700ebbaf8286fa71f77b45aa685f
Coherent Gradients: An Approach to Understanding Generalization in Gradient Descent-based Optimization
1 INTRODUCTION AND OVERVIEW . Neural networks used in practice often have sufficient effective capacity to learn arbitrary maps from their inputs to their outputs . This is typically demonstrated by training a classification network that achieves good test accuracy on a real dataset $S$ on a modified version of $S$ ( call it $S'$ ) where the labels are randomized , and observing that the training accuracy on $S'$ is very high , though , of course , the test accuracy is no better than chance ( Zhang et al. , 2017 ) . This leads to an important open question in the Deep Learning community ( Zhang et al . ( 2017 ) ; Arpit et al . ( 2017 ) ; Bartlett et al . ( 2017 ) ; Kawaguchi et al . ( 2017 ) ; Neyshabur et al . ( 2018 ) ; Arora et al . ( 2018 ) ; Belkin et al . ( 2019 ) ; Rahaman et al . ( 2019 ) ; Nagarajan & Kolter ( 2019 ) , etc . ) : Among all maps that fit a real dataset , how does Gradient Descent ( GD ) find one that generalizes well ? This is the question we address in this paper . We start by observing that this phenomenon is not limited to neural networks trained with GD but also applies to Random Forests and Decision Trees . However , there is no mystery with trees : A typical tree construction algorithm splits the training set recursively into similar subsets based on input features . If no similarity is found , eventually , each example is put into its own leaf to achieve good training accuracy ( but , of course , at the cost of poor generalization ) . Thus , trees that achieve good accuracy on a randomized dataset are much larger than those on a real dataset ( e.g . Chatterjee & Mishchenko ( 2019 , Expt . 5 ) ) . Is it possible that something similar happens with GD ? We believe so . The type of randomized-label experiments described above show that if there are common patterns to be found , then GD finds them . If not , it fits each example on a case-by-case basis . The question then is , what is it about the dynamics of GD that makes it possible to extract common patterns from the data ? And what does it mean for a pattern to be common ? Since the only change to the network parameters in GD comes from the gradients , the mechanism to detect commonality amongst examples must be through the gradients . We propose that this commonality detection can be explained as follows : 1 . Gradients are coherent , i.e. , similar examples ( or parts of examples ) have similar gradients ( or similar components of gradients ) and dissimilar examples have dissimilar gradients . 2 . Since the overall gradient is the sum of the per-example gradients , it is stronger in directions where the per-example gradients are similar and reinforce each other and weaker in other directions where they are different and do not add up . 3 . Since network parameters are updated proportionally to gradients , they change faster in the direction of stronger gradients . 4 . Thus the changes to the network during training are biased towards those that simultaneously benefit many examples instead of a few ( or one example ) . For convenience , we refer to this as the Coherent Gradients hypothesis . It is instructive to work through the proposed mechanism in the context of a simple thought experiment . Consider a training set with two examples $a$ and $b$ . At some point in training , suppose the gradient of $a$ , $g_a$ , can be decomposed into two orthogonal components $g_{a1}$ and $g_{a2}$ of roughly equal magnitude , i.e.
, there are two , equally good , independent ways in which the network can better fit $a$ ( by using , say , two disjoint parts of the network ) . Likewise for $b$ . Now , further suppose that one of the two ways is common to both $a$ and $b$ , i.e. , say $g_{a2} = g_{b2} = g_{ab}$ , whereas the other two are example specific , i.e. , $\langle g_{a1} , g_{b1} \rangle = 0$ . Now , the overall gradient is $$g = g_a + g_b = g_{a1} + 2 g_{ab} + g_{b1} .$$ Observe that the gradient is stronger in the direction that simultaneously helps both examples , and thus the corresponding parameter changes are bigger than those that benefit only one example.1 It is important to emphasize that the notion of similarity used above ( i.e. , which examples are considered similar ) is not constant but changes in the course of training as network parameters change . It starts from a mostly task independent notion due to random initialization and is bootstrapped in the course of training to be task dependent . We say “ mostly ” because even with random initialization , examples that are syntactically close are treated similarly ( e.g. , two images differing in the intensities of some pixels as opposed to two images where one is a translated version of the other ) . The relationship between strong gradients and generalization can also be understood through the lens of algorithmic stability ( Bousquet & Elisseeff , 2002 ) : strong gradient directions are more stable since the presence or absence of a single example does not impact them as much , as opposed to weak gradient directions which may altogether disappear if a specific example is missing from the training set . With this observation , we can reason inductively about the stability of GD : since the initial values of the parameters do not depend on the training data , the initial function mapping examples to their gradients is stable . Now , if all parameter updates are due to strong gradient directions , then stability is preserved . However , if some parameter updates are due to weak gradient directions , then stability is diminished . Since stability ( suitably formalized ) is equivalent to generalization ( Shalev-Shwartz et al. , 2010 ) , this allows us to see how generalization may degrade as training progresses . Based on this insight , we shall see later how a simple modification to GD to suppress the weak gradient directions can dramatically reduce overfitting . In addition to providing insight into why GD generalizes in practice , we believe that the Coherent Gradients hypothesis can help explain several other empirical observations about deep learning in the literature : ( a ) Learning is slower with random labels than with real labels ( Zhang et al. , 2017 ; Arpit et al. , 2017 ) ( b ) Robustness to large amounts of label noise ( Rolnick et al. , 2017 ) ( c ) Early stopping leads to better generalization ( Caruana et al. , 2000 ) ( d ) Increasing capacity improves generalization ( Caruana et al. , 2000 ; Neyshabur et al. , 2018 ) ( e ) The existence of adversarial initialization schemes ( Liu et al. , 2019 ) ( f ) GD detects common patterns even when trained with random labels ( Chatterjee & Mishchenko , 2019 ) 1While the mechanism is easiest to see with full or large minibatches , we believe it holds even for small minibatches ( though there one has to consider the bias in updates over time ) .
A direct experimental verification of the Coherent Gradients hypothesis is challenging since the notion of similarity between examples depends on the parameters of the network and thus changes during training . Our approach , therefore , is to design intervention experiments where we establish a baseline and compare it against variants designed to test some aspect or prediction of the theory . As part of these experiments , we replicate the observations ( a ) – ( c ) in the literature noted above , and analyze the corresponding explanations provided by Coherent Gradients ( §2 ) , and outline for future work how ( d ) – ( f ) may be accounted for ( §5 ) . In this paper , we limit our study to simple baselines : vanilla Stochastic Gradient Descent ( SGD ) on MNIST using fully connected networks . We believe that this is a good starting point , since even in this simple setting , with all frills eliminated ( e.g. , inductive bias from architecture or explicit regularization , or a more sophisticated optimization procedure ) , we are challenged to find a satisfactory explanation of why SGD generalizes well . Furthermore , our prior is that the difference between weak and strong directions is small at any one step of training , and therefore having a strong learning signal as in the case of MNIST makes a direct analysis of gradients easier . It also has the benefit of having a smaller carbon footprint and being easier to reproduce . Finally , based on preliminary experiments on other architectures and datasets we are optimistic that the insights we get from studying this simple setup apply more broadly . 2 EFFECT OF REDUCING SIMILARITY BETWEEN EXAMPLES . Our first test of the Coherent Gradients hypothesis is to see what happens when we reduce similarity between examples . Although , at any point during training , we do not know which examples are similar and which are different , we can ( with high probability ) reduce the similarity among training examples simply by injecting label noise . In other words , under any notion of similarity , adding label noise to a dataset that has clean labels is likely to make similar examples less similar . Note that this perturbation does not reduce coherence , since gradients still depend on the examples . ( To break coherence , we would have to make the gradients independent of the training examples , which would require perturbing SGD itself and not just the dataset . ) 2.1 SETUP . For our baseline , we use the standard MNIST dataset of 60,000 training examples and 10,000 test examples . Each example is a 28x28 pixel grayscale handwritten digit along with a label ( ‘ 0 ’ – ‘ 9 ’ ) . We train a fully connected network on this dataset . The network has one hidden layer with 2048 ReLUs and an output layer with a 10-way softmax . We initialize it with Xavier and train using vanilla SGD ( i.e. , no momentum ) and cross entropy loss with a constant learning rate of 0.1 and a minibatch size of 100 for $10^5$ steps ( i.e. , about 170 epochs ) . We do not use any explicit regularizers . We perturb the baseline by modifying only the dataset and keeping all other aspects of the architecture and learning algorithm fixed . The dataset is modified by adding various amounts of noise ( 25 % , 50 % , 75 % , and 100 % ) to the labels of the training set ( but not the test set ) . This noise is added by taking , say in the case of 25 % label noise , 25 % of the examples at random and randomly permuting their labels .
Thus , when we add 25 % label noise , we still expect about 75 % + 0.1 * 25 % , i.e. , 77.5 % of the examples to have unchanged ( i.e . “ correct ” ) labels which we call the proper accuracy of the modified dataset . In what follows , we call examples with unchanged labels , pristine , and the remaining , corrupt . Also , from this perspective , it is convenient to refer to the original MNIST dataset as having 0 % label noise . We use a fully connected architecture instead of a convolutional one to mitigate concerns that some of the difference in generalization between the original MNIST and the noisy variants could stem from architectural inductive bias . We restrict ourselves to only 1 hidden layer to have the gradients be as well-behaved as possible . Finally , the network width , learning rate , and the number of training steps are chosen to ensure that exactly the same procedure is usually able to fit all 5 variants to 100 % training accuracy .
The surprising generalization properties of neural networks trained with stochastic gradient descent are still poorly understood. The present work suggests that they can be explained at least partly by the fact that patterns shared across many data points will lead to gradients pointing in similar directions, thus reinforcing each other. Artefacts specific to small numbers of data points, however, will not have this property and thus have a substantially smaller impact on learning. Numerical experiments on MNIST with label noise indeed show that even though the neural network is able to perfectly fit even the flipped labels, the "pristine" labels are fitted much earlier during training. The authors also experiment with explicitly clipping "outlier gradients" and show that the resulting algorithm drastically reduces overfitting, thus further supporting the coherent gradient hypothesis.
SP:d905f831fd48700ebbaf8286fa71f77b45aa685f
Walking the Tightrope: An Investigation of the Convolutional Autoencoder Bottleneck
In this paper , we present an in-depth investigation of the convolutional autoencoder ( CAE ) bottleneck . Autoencoders ( AE ) , and especially their convolutional variants , play a vital role in the current deep learning toolbox . Researchers and practitioners employ CAEs for a variety of tasks , ranging from outlier detection and compression to transfer and representation learning . Despite their widespread adoption , we have limited insight into how the bottleneck shape impacts the emergent properties of the CAE . We demonstrate that increased height and width of the bottleneck drastically improves generalization , which in turn leads to better performance of the latent codes in downstream transfer learning tasks . The number of channels in the bottleneck , on the other hand , is secondary in importance . Furthermore , we show empirically that , contrary to popular belief , CAEs do not learn to copy their input , even when the bottleneck has the same number of neurons as there are pixels in the input . Copying does not occur , despite training the CAE for 1,000 epochs on a tiny ( ≈ 600 images ) dataset . We believe that the findings in this paper are directly applicable and will lead to improvements in models that rely on CAEs . 1 INTRODUCTION . Autoencoders ( AE ) are an integral part of the neural network toolkit . They are a class of neural networks that consist of an encoder and decoder part and are trained by reconstructing datapoints after encoding them . Due to their conceptual simplicity , autoencoders often appear in teaching materials as introductory models to the field of deep unsupervised learning . Nevertheless , autoencoders have enabled major contributions in the application and research of the field . The main areas of application include outlier detection ( Xia et al. , 2015 ; Chen et al. , 2017 ; Zhou & Paffenroth , 2017 ; Baur et al. , 2019 ) , data compression ( Yildirim et al. , 2018 ; Cheng et al. , 2018 ; Dumas et al. , 2018 ) , and image enhancement ( Mao et al. , 2016 ; Lore et al. , 2017 ) . In the early days of deep learning , autoencoders were a crucial tool for the training of deep models . Training large ( by the standards of the time ) models was challenging , due to the lack of big datasets and computational resources . One way around this problem was to pre-train some or all layers of the network greedily by treating them as autoencoders with one hidden layer ( Bengio et al. , 2007 ) . Subsequently , Erhan et al . ( 2009 ) demonstrated that autoencoder pre-training also benefits generalization . Currently , researchers in the field of representation learning frequently rely on autoencoders for learning nuanced and high-level representations of data ( Kingma & Welling , 2013 ; Tretschk et al. , 2019 ; Shu et al. , 2018 ; Makhzani et al. , 2015 ; Berthelot et al. , 2018 ) . However , despite its widespread use , we propose that the ( deep ) autoencoder model is not well understood . Many papers have aimed to deepen our understanding of the autoencoder through theoretical analysis ( Nguyen et al. , 2018 ; Arora et al. , 2013 ; Baldi , 2012 ; Alain & Bengio , 2012 ) . While such analyses provide valuable theoretical insight , there is a significant discrepancy between the theoretical frameworks and actual behavior of autoencoders in practice , mainly due to the assumptions made ( e.g. , weight tying , infinite depth ) or the simplicity of the models under study . Others have approached this issue from a more experimental angle ( Arpit et al. 
, 2015 ; Bengio et al. , 2013 ; Le , 2013 ; Vincent et al. , 2008 ; Berthelot et al. , 2019 ) . Such investigations are part of an ongoing effort to understand the behavior of autoencoders in a variety of settings . The focus of most such investigations so far has been the traditional autoencoder setting with fully connected layers . When working with image data , however , the default choice is to use convolutions , as they provide a prior that is well suited to this type of data ( Ulyanov et al. , 2018 ) . For this reason , Masci et al . ( 2011 ) introduced the convolutional autoencoder ( CAE ) by replacing the fully connected layers in the classical AE with convolutions . In an autoencoder , the layer with the least amount of neurons is referred to as a bottleneck . In the regular AE , this bottleneck is simply a vector ( rank-1 tensor ) . In CAEs , however , the bottleneck assumes the shape of a multichannel image ( rank-3 tensor , height × width × channels ) instead of a vector . This bottleneck shape prompts the question : What is the relative importance of the number of channels versus the height and width ( hereafter referred to as size ) in determining the tightness of the CAE bottleneck ? Intuitively , we might expect that only the total number of neurons should matter since convolutions with one-hot filters can distribute values across channels . Generally , the study of CAE properties appears to be underrepresented in literature , despite their widespread adoption . In this paper , we share new insights into the properties of convolutional autoencoders , which we gained through extensive experimentation . We address the following questions : • How does the number of channels and the feature map size in the bottleneck layer impact – reconstruction quality ? – generalization ability ? – the structure of the latent code ? – knowledge transfer to downstream tasks ? • How and when do CAEs overfit ? • How does the complexity of the data distribution affect all of the above ? • Are CAEs capable of learning a “ copy function ” if the CAE is complete ( i. e. , when the number of pixels in input equals the number of neurons in bottleneck ) ? This “ copying CAE ” hypothesis is a commonly held belief that was carried over from regular AEs ( see Sections 4 and 5 in Masci et al . ( 2011 ) . We begin the following section by formally introducing convolutional autoencoders and explaining the convolutional autoencoder model we used in our experiments . Additionally , we introduce our three datasets and the motivation for choosing them . In Section 3 , we outline the experiments and their respective aims . Afterward , we present and discuss our findings in Section 4 . All of our code , as well as the trained models and datasets , will be published at https : //github.com/YmouslyAnon/WalkingTheTightrope . This repository will also include an interactive Jupyter Notebook for investigating the trained models . We invite interested readers to take a look and experiment with our models . 2 MATERIALS AND METHODS . 2.1 AUTOENCODERS AND CONVOLUTIONAL AUTOENCODERS . The regular autoencoder , as introduced by Rumelhart et al . ( 1985 ) , is a neural network that learns a mapping from data points in the input space x ∈ Rd to a code vector in latent space h ∈ Rm and back . Typically , unless we introduce some other constraint , m is set to be smaller than d to force the autoencoder to learn higher-level abstractions by having to compress the data . 
In this context , the encoder is the mapping $f ( x ) : \mathbb{R}^d \to \mathbb{R}^m$ and the decoder is the mapping $g ( h ) : \mathbb{R}^m \to \mathbb{R}^d$ . The layers in both the encoder and decoder are fully connected : $$l_{i+1} = \sigma ( W^i l_i + b^i ) . \qquad ( 1 )$$ Here , $l_i$ is the activation vector in the $i$-th layer , $W^i$ and $b^i$ are the trainable weights , and $\sigma$ is an element-wise non-linear activation function . If necessary , we can tie weights in the encoder to the ones in the decoder such that $W^i = ( W^{n-i} )^\top$ , where $n$ is the total number of layers . Literature refers to autoencoders with this type of encoder-decoder relation as weight-tied . The convolutional autoencoder keeps the overall structure of the traditional autoencoder but replaces the fully connected layers with convolutions : $$L_{i+1} = \sigma ( W_i * L_i + b_i ) , \qquad ( 2 )$$ where $*$ denotes the convolution operation and the bias $b_i$ is broadcast to match the shape of $L_i$ such that the $j$-th entry in $b_i$ is added to the $j$-th channel in $L_i$ . Whereas before the hidden code was an $m$-dimensional vector , it is now a tensor with a rank equal to the rank of the input tensor . In the case of images , that rank is three ( height , width , and the number of channels ) . CAEs generally include pooling layers or convolutions with strides > 1 or dilation > 1 in the encoder to reduce the size of the input . In the decoder , unpooling or transposed convolution layers ( Dumoulin & Visin , 2016 ) inflate the latent code to the size of the input . 2.2 OUR MODEL . Our model consists of five strided convolution layers in the encoder and five up-sampling convolution layers ( bilinear up-sampling followed by padded convolution ) ( Odena et al. , 2016 ) in the decoder . We chose to use five layers so that the size of the latent code , after the strided convolutions , would be 4x4 or 3x3 depending on the dataset . To increase the level of abstraction in the latent code , we increased the depth of the network by placing two residual blocks ( He et al. , 2016 ) with two convolutions each after every strided / up-sampling convolution layer . We applied instance normalization ( Ulyanov et al. , 2016 ) and ReLU activation ( Nair & Hinton , 2010 ) following every convolution in the architecture . One of our goals was to understand the effect latent code shape has on different aspects of the network . Therefore , we wanted to be able to change the shape of the bottleneck from one experiment to another , while keeping the rest of the network constant . To this end , we quadrupled the number of channels with every strided convolution $s_i$ and reduced it by a factor of four with every up-sampling convolution $u_i$ . In effect , this means that the volume ( i.e. , height $\times$ width $\times$ channels ) of the feature maps is identical to the input in all layers up to the bottleneck : $$s_i ( L_i ) \in \mathbb{R}^{h_i/2 \times w_i/2 \times 4 n_c^i} , \quad \text{for } L_i \in \mathbb{R}^{h_i \times w_i \times n_c^i} \qquad ( 3 )$$ $$u_i ( L_i ) \in \mathbb{R}^{2 h_i \times 2 w_i \times n_c^i/4} , \quad \text{for } L_i \in \mathbb{R}^{h_i \times w_i \times n_c^i} \qquad ( 4 )$$ In this regard , our model differs from CAEs commonly found in literature , where it is customary to double/halve the number of channels with every down-/up-sampling layer . However , our scheme allows us to test architectures with different bottleneck shapes while ensuring that the volume of the feature maps stays the same as the input until the bottleneck . In this sense , the bottleneck is the only moving part in our experiments . The resulting models range from having ∼ 50M to 90M parameters . 2.3 DATASETS . To increase the robustness of our study , we conducted experiments on three different datasets .
Additionally , the three datasets allowed us to address the question of how the difficulty of the data ( i.e. , the complexity of the data distribution ) affects learning in the CAE . To study this effect , we ran our experiments on three datasets of varying difficulty , which we determined based on intuitive heuristics . In the following , we present the datasets in order of increasing difficulty and give our reasoning for the difficulty grading .
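As a rough illustration of the channel scheme in §2.2 ( equations 3 and 4 ) , the following PyTorch sketch builds volume-preserving down- and up-sampling blocks . It uses only two levels and omits the residual blocks for brevity , so it is a simplified stand-in rather than the authors ' full architecture :

```python
import torch.nn as nn

def down_block(c_in):
    # Stride-2 conv halves height and width and quadruples the channels,
    # so the feature-map volume h x w x c is preserved (cf. equation 3).
    return nn.Sequential(nn.Conv2d(c_in, 4 * c_in, 3, stride=2, padding=1),
                         nn.InstanceNorm2d(4 * c_in), nn.ReLU())

def up_block(c_in):
    # Bilinear up-sampling + padded conv doubles height and width and
    # divides the channels by four (cf. equation 4).
    return nn.Sequential(nn.Upsample(scale_factor=2, mode="bilinear"),
                         nn.Conv2d(c_in, c_in // 4, 3, padding=1),
                         nn.InstanceNorm2d(c_in // 4), nn.ReLU())

# Two-level toy encoder/decoder for 3-channel images (the paper uses five
# levels plus residual blocks after every down-/up-sampling layer):
encoder = nn.Sequential(down_block(3), down_block(12))    # bottleneck: 48 channels
decoder = nn.Sequential(up_block(48), up_block(12))       # back to 3 channels
```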
The authors evaluate convolutional autoencoders (CAE) by varying the size (width & height) and depth of the bottleneck layer on three datasets and compare test and training performance. They furthermore evaluate the quality of the bottleneck activations for linear classification. The authors also investigate the belief that a bottleneck layer of size equal to the input image will copy the image.
SP:761207caf0d1b23f060e3957a6309bc6d76819a6
Walking the Tightrope: An Investigation of the Convolutional Autoencoder Bottleneck
In this paper , we present an in-depth investigation of the convolutional autoencoder ( CAE ) bottleneck . Autoencoders ( AE ) , and especially their convolutional variants , play a vital role in the current deep learning toolbox . Researchers and practitioners employ CAEs for a variety of tasks , ranging from outlier detection and compression to transfer and representation learning . Despite their widespread adoption , we have limited insight into how the bottleneck shape impacts the emergent properties of the CAE . We demonstrate that increased height and width of the bottleneck drastically improves generalization , which in turn leads to better performance of the latent codes in downstream transfer learning tasks . The number of channels in the bottleneck , on the other hand , is secondary in importance . Furthermore , we show empirically that , contrary to popular belief , CAEs do not learn to copy their input , even when the bottleneck has the same number of neurons as there are pixels in the input . Copying does not occur , despite training the CAE for 1,000 epochs on a tiny ( ≈ 600 images ) dataset . We believe that the findings in this paper are directly applicable and will lead to improvements in models that rely on CAEs . 1 INTRODUCTION . Autoencoders ( AE ) are an integral part of the neural network toolkit . They are a class of neural networks that consist of an encoder and decoder part and are trained by reconstructing datapoints after encoding them . Due to their conceptual simplicity , autoencoders often appear in teaching materials as introductory models to the field of deep unsupervised learning . Nevertheless , autoencoders have enabled major contributions in the application and research of the field . The main areas of application include outlier detection ( Xia et al. , 2015 ; Chen et al. , 2017 ; Zhou & Paffenroth , 2017 ; Baur et al. , 2019 ) , data compression ( Yildirim et al. , 2018 ; Cheng et al. , 2018 ; Dumas et al. , 2018 ) , and image enhancement ( Mao et al. , 2016 ; Lore et al. , 2017 ) . In the early days of deep learning , autoencoders were a crucial tool for the training of deep models . Training large ( by the standards of the time ) models was challenging , due to the lack of big datasets and computational resources . One way around this problem was to pre-train some or all layers of the network greedily by treating them as autoencoders with one hidden layer ( Bengio et al. , 2007 ) . Subsequently , Erhan et al . ( 2009 ) demonstrated that autoencoder pre-training also benefits generalization . Currently , researchers in the field of representation learning frequently rely on autoencoders for learning nuanced and high-level representations of data ( Kingma & Welling , 2013 ; Tretschk et al. , 2019 ; Shu et al. , 2018 ; Makhzani et al. , 2015 ; Berthelot et al. , 2018 ) . However , despite its widespread use , we propose that the ( deep ) autoencoder model is not well understood . Many papers have aimed to deepen our understanding of the autoencoder through theoretical analysis ( Nguyen et al. , 2018 ; Arora et al. , 2013 ; Baldi , 2012 ; Alain & Bengio , 2012 ) . While such analyses provide valuable theoretical insight , there is a significant discrepancy between the theoretical frameworks and actual behavior of autoencoders in practice , mainly due to the assumptions made ( e.g. , weight tying , infinite depth ) or the simplicity of the models under study . Others have approached this issue from a more experimental angle ( Arpit et al. 
, 2015 ; Bengio et al. , 2013 ; Le , 2013 ; Vincent et al. , 2008 ; Berthelot et al. , 2019 ) . Such investigations are part of an ongoing effort to understand the behavior of autoencoders in a variety of settings . The focus of most such investigations so far has been the traditional autoencoder setting with fully connected layers . When working with image data , however , the default choice is to use convolutions , as they provide a prior that is well suited to this type of data ( Ulyanov et al. , 2018 ) . For this reason , Masci et al . ( 2011 ) introduced the convolutional autoencoder ( CAE ) by replacing the fully connected layers in the classical AE with convolutions . In an autoencoder , the layer with the least amount of neurons is referred to as a bottleneck . In the regular AE , this bottleneck is simply a vector ( rank-1 tensor ) . In CAEs , however , the bottleneck assumes the shape of a multichannel image ( rank-3 tensor , height × width × channels ) instead of a vector . This bottleneck shape prompts the question : What is the relative importance of the number of channels versus the height and width ( hereafter referred to as size ) in determining the tightness of the CAE bottleneck ? Intuitively , we might expect that only the total number of neurons should matter since convolutions with one-hot filters can distribute values across channels . Generally , the study of CAE properties appears to be underrepresented in literature , despite their widespread adoption . In this paper , we share new insights into the properties of convolutional autoencoders , which we gained through extensive experimentation . We address the following questions : • How does the number of channels and the feature map size in the bottleneck layer impact – reconstruction quality ? – generalization ability ? – the structure of the latent code ? – knowledge transfer to downstream tasks ? • How and when do CAEs overfit ? • How does the complexity of the data distribution affect all of the above ? • Are CAEs capable of learning a “ copy function ” if the CAE is complete ( i. e. , when the number of pixels in input equals the number of neurons in bottleneck ) ? This “ copying CAE ” hypothesis is a commonly held belief that was carried over from regular AEs ( see Sections 4 and 5 in Masci et al . ( 2011 ) . We begin the following section by formally introducing convolutional autoencoders and explaining the convolutional autoencoder model we used in our experiments . Additionally , we introduce our three datasets and the motivation for choosing them . In Section 3 , we outline the experiments and their respective aims . Afterward , we present and discuss our findings in Section 4 . All of our code , as well as the trained models and datasets , will be published at https : //github.com/YmouslyAnon/WalkingTheTightrope . This repository will also include an interactive Jupyter Notebook for investigating the trained models . We invite interested readers to take a look and experiment with our models . 2 MATERIALS AND METHODS . 2.1 AUTOENCODERS AND CONVOLUTIONAL AUTOENCODERS . The regular autoencoder , as introduced by Rumelhart et al . ( 1985 ) , is a neural network that learns a mapping from data points in the input space x ∈ Rd to a code vector in latent space h ∈ Rm and back . Typically , unless we introduce some other constraint , m is set to be smaller than d to force the autoencoder to learn higher-level abstractions by having to compress the data . 
In this context , the encoder is the mapping $f ( x ) : \mathbb{R}^d \to \mathbb{R}^m$ and the decoder is the mapping $g ( h ) : \mathbb{R}^m \to \mathbb{R}^d$ . The layers in both the encoder and decoder are fully connected : $$l_{i+1} = \sigma ( W^i l_i + b^i ) . \qquad ( 1 )$$ Here , $l_i$ is the activation vector in the $i$-th layer , $W^i$ and $b^i$ are the trainable weights , and $\sigma$ is an element-wise non-linear activation function . If necessary , we can tie weights in the encoder to the ones in the decoder such that $W^i = ( W^{n-i} )^\top$ , where $n$ is the total number of layers . Literature refers to autoencoders with this type of encoder-decoder relation as weight-tied . The convolutional autoencoder keeps the overall structure of the traditional autoencoder but replaces the fully connected layers with convolutions : $$L_{i+1} = \sigma ( W_i * L_i + b_i ) , \qquad ( 2 )$$ where $*$ denotes the convolution operation and the bias $b_i$ is broadcast to match the shape of $L_i$ such that the $j$-th entry in $b_i$ is added to the $j$-th channel in $L_i$ . Whereas before the hidden code was an $m$-dimensional vector , it is now a tensor with a rank equal to the rank of the input tensor . In the case of images , that rank is three ( height , width , and the number of channels ) . CAEs generally include pooling layers or convolutions with strides > 1 or dilation > 1 in the encoder to reduce the size of the input . In the decoder , unpooling or transposed convolution layers ( Dumoulin & Visin , 2016 ) inflate the latent code to the size of the input . 2.2 OUR MODEL . Our model consists of five strided convolution layers in the encoder and five up-sampling convolution layers ( bilinear up-sampling followed by padded convolution ) ( Odena et al. , 2016 ) in the decoder . We chose to use five layers so that the size of the latent code , after the strided convolutions , would be 4x4 or 3x3 depending on the dataset . To increase the level of abstraction in the latent code , we increased the depth of the network by placing two residual blocks ( He et al. , 2016 ) with two convolutions each after every strided / up-sampling convolution layer . We applied instance normalization ( Ulyanov et al. , 2016 ) and ReLU activation ( Nair & Hinton , 2010 ) following every convolution in the architecture . One of our goals was to understand the effect latent code shape has on different aspects of the network . Therefore , we wanted to be able to change the shape of the bottleneck from one experiment to another , while keeping the rest of the network constant . To this end , we quadrupled the number of channels with every strided convolution $s_i$ and reduced it by a factor of four with every up-sampling convolution $u_i$ . In effect , this means that the volume ( i.e. , height $\times$ width $\times$ channels ) of the feature maps is identical to the input in all layers up to the bottleneck : $$s_i ( L_i ) \in \mathbb{R}^{h_i/2 \times w_i/2 \times 4 n_c^i} , \quad \text{for } L_i \in \mathbb{R}^{h_i \times w_i \times n_c^i} \qquad ( 3 )$$ $$u_i ( L_i ) \in \mathbb{R}^{2 h_i \times 2 w_i \times n_c^i/4} , \quad \text{for } L_i \in \mathbb{R}^{h_i \times w_i \times n_c^i} \qquad ( 4 )$$ In this regard , our model differs from CAEs commonly found in literature , where it is customary to double/halve the number of channels with every down-/up-sampling layer . However , our scheme allows us to test architectures with different bottleneck shapes while ensuring that the volume of the feature maps stays the same as the input until the bottleneck . In this sense , the bottleneck is the only moving part in our experiments . The resulting models range from having ∼ 50M to 90M parameters . 2.3 DATASETS . To increase the robustness of our study , we conducted experiments on three different datasets .
Additionally , the three datasets allowed us to address the question of how the difficulty of the data ( i.e. , the complexity of the data distribution ) affects learning in the CAE . To study this effect , we ran our experiments on three datasets of varying difficulty , which we determined based on intuitive heuristics . In the following , we present the datasets in order of increasing difficulty and give our reasoning for the difficulty grading .
This paper studies some of the properties of fully convolutional autoencoders (CAE) as a function of the shape and total size of the bottleneck. They train and test CAEs with bottlenecks consisting of different ratios of spatial resolution versus number of channels, as well as different total number of neurons. The authors investigate which type of change in the bottleneck is most influential on training behavior, generalization to test set, and linear separability for classification/regression. Their first main finding is that the spatial resolution of the bottleneck is a stronger influencer of generalization to the test set than the number of channels and the total number of neurons in the bottleneck. The second main finding is that even when the total number of neurons in the bottleneck is equal to the data input size, the neural network does not appear to simply learn to copy the input image into the bottleneck.
SP:761207caf0d1b23f060e3957a6309bc6d76819a6
On the Global Convergence of Training Deep Linear ResNets
1 INTRODUCTION . Despite the remarkable power of deep neural networks ( DNNs ) trained using stochastic gradient descent ( SGD ) in many machine learning applications , theoretical understanding of the properties of this algorithm , or even plain gradient descent ( GD ) , remains limited . Many key properties of the learning process for such systems are also present in the idealized case of deep linear networks . For example , ( a ) the objective function is not convex ; ( b ) errors back-propagate ; and ( c ) there is potential for exploding and vanishing gradients . In addition to enabling study of systems with these properties in a relatively simple setting , analysis of deep linear networks also facilitates the scientific understanding of deep learning because using linear networks can control for the effect of architecture choices on the expressiveness of networks ( Arora et al. , 2018 ; Du & Hu , 2019 ) . For these reasons , deep linear networks have received extensive attention in recent years . One important line of theoretical investigation of deep linear networks concerns optimization landscape analysis ( Kawaguchi , 2016 ; Hardt & Ma , 2016 ; Freeman & Bruna , 2016 ; Lu & Kawaguchi , 2017 ; Yun et al. , 2018 ; Zhou & Liang , 2018 ) , where major findings include that any critical point of a deep linear network with square loss function is either a global minimum or a saddle point , and identifying conditions on the weight matrices that exclude saddle points . Beyond landscape analysis , another research direction aims to establish convergence guarantees for optimization algorithms ( e.g . GD , SGD ) for training deep linear networks . Arora et al . ( 2018 ) studied the trajectory of gradient flow and showed that depth can help accelerate the optimization of deep linear networks . Ji & Telgarsky ( 2019 ) ; Gunasekar et al . ( 2018 ) investigated the implicit bias of GD for training deep linear networks and deep linear convolutional networks respectively . More recently , Bartlett et al . ( 2019 ) ; Arora et al . ( 2019a ) ; Shamir ( 2018 ) ; Du & Hu ( 2019 ) analyzed the optimization trajectory of GD for training deep linear networks and proved global convergence rates under certain assumptions on the training data , initialization , and neural network structure . Inspired by the great empirical success of residual networks ( ResNets ) , Hardt & Ma ( 2016 ) considered identity parameterizations in deep linear networks , i.e. , parameterizing each layer ’ s weight matrix as $I + W$ , which leads to the so-called deep linear ResNets . In particular , Hardt & Ma ( 2016 ) established the existence of small norm solutions for deep residual networks with sufficiently large depth $L$ , and proved that there are no critical points other than the global minimum when the maximum spectral norm among all weight matrices is smaller than $O ( 1/L )$ . Motivated by this intriguing finding , Bartlett et al . ( 2019 ) studied the convergence rate of GD for training deep linear networks with identity initialization , which is equivalent to zero initialization in deep linear ResNets . They assumed whitened data and showed that GD can converge to the global minimum if ( i ) the training loss at the initialization is very close to optimal or ( ii ) the regression matrix $\Phi$ is symmetric and positive definite . ( In fact , they proved that , when $\Phi$ is symmetric and has negative eigenvalues , GD for linear ResNets with zero initialization does not converge . ) Arora et al .
( 2019a ) showed that GD converges under substantially weaker conditions , which can be satisfied by random initialization schemes . The convergence theory of stochastic gradient descent for training deep linear ResNets is largely missing ; it remains unclear under which conditions SGD can be guaranteed to find the global minimum . In this paper , we establish the global convergence of both GD and SGD for training deep linear ResNets without any condition on the training data . More specifically , we consider the training of L-hidden-layer deep linear ResNets with fixed linear transformations at input and output layers . We prove that under certain conditions on the input and output linear transformations , GD and SGD can converge to the global minimum of the training loss function . Moreover , when specializing to appropriate Gaussian random linear transformations , we show that , as long as the neural network is wide enough , both GD and SGD with zero initialization on all hidden weights can find the global minimum . There are two main ingredients of our proof : ( i ) establishing restricted gradient bounds and a smoothness property ; and ( ii ) proving that these properties hold along the optimization trajectory and further lead to global convergence . We point out that the second aspect is challenging , especially for SGD , due to the uncertainty of its optimization trajectory caused by stochastic gradients . We summarize our main contributions as follows : • We prove the global convergence of GD and SGD for training deep linear ResNets . Specifically , we derive a generic condition on the input and output linear transformations , under which both GD and SGD with zero initialization on all hidden weights can find global minima . Based on this condition , one can design a variety of input and output transformations for training deep linear ResNets . • When applying appropriate Gaussian random linear transformations , we show that as long as the neural network width satisfies m = Ω(krκ²) , with high probability , GD can converge to the global minimum up to an ε-error within O(κ log(1/ε)) iterations , where k and r are the output dimension and the rank of the training data matrix X respectively , and κ = ‖X‖₂²/σ_r(X)² denotes the condition number of the covariance matrix of the training data . Compared with previous convergence results for training deep linear networks from Du & Hu ( 2019 ) , our condition on the neural network width is independent of the neural network depth L , and is strictly better by a factor of O(Lκ) . • Using the same Gaussian random linear transformations , we also establish the convergence guarantee of SGD for training deep linear ResNets . We show that if the neural network width satisfies m = Ω̃(krκ² log²(1/ε) · n²/B²) , with constant probability , SGD can converge to the global minimum up to an ε-error within Õ(κ² ε⁻¹ log(1/ε) · n/B) iterations , where n is the training sample size and B is the minibatch size of the stochastic gradient . This is the first global convergence rate of SGD for training deep linear networks . Moreover , when the global minimum of the training loss is 0 , we prove that SGD can further achieve a linear rate of global convergence , and the condition on the neural network width does not depend on the target error . As alluded to above , we analyze networks with d inputs , k outputs , and m ≥ max{d, k} nodes in each hidden layer .
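To make this setting concrete, here is a minimal numerical sketch (an illustrative toy, not the authors' code) of the scheme just described: fixed Gaussian input and output transformations A and B, residual hidden layers parameterized as I + W_l with zero-initialized W_l, and full-batch GD on the squared loss. The dimensions, 1/√m scaling, step size, and iteration count are arbitrary choices:

```python
import numpy as np

d, k, m, L, n, lr = 5, 3, 64, 10, 100, 1e-2
rng = np.random.default_rng(0)
X = rng.normal(size=(d, n))                 # training inputs
Y = rng.normal(size=(k, n))                 # training targets
A = rng.normal(size=(m, d)) / np.sqrt(m)    # fixed input transformation
B = rng.normal(size=(k, m)) / np.sqrt(m)    # fixed output transformation
Ws = [np.zeros((m, m)) for _ in range(L)]   # zero init on all hidden weights

for step in range(500):
    Hs = [A @ X]                            # forward pass, caching states
    for W in Ws:
        Hs.append(Hs[-1] + W @ Hs[-1])      # residual layer: (I + W_l) h
    R = B @ Hs[-1] - Y                      # residual of the squared loss
    if step % 100 == 0:
        print(step, 0.5 * np.sum(R ** 2) / n)
    G = B.T @ R / n                         # dLoss/dH_L (loss averaged over n)
    for l in range(L - 1, -1, -1):
        grad_W = G @ Hs[l].T                # dLoss/dW_l
        G = G + Ws[l].T @ G                 # back through (I + W_l)
        Ws[l] -= lr * grad_W                # GD step on hidden weights only
```

Only the hidden weights Ws are updated; the maps A and B stay fixed throughout training, as described next.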
Linear transformations that are fixed throughout training map the inputs to the first hidden layer , and the last hidden layer to the outputs . We prove that our bounds hold with high probability when these input and output transformations are randomly generated by Gaussian distributions . If , instead , the input transformation simply copies the inputs onto the first d components of the first hidden layer , and the output transformation takes the first k components of the last hidden layer , then our analysis does not provide a guarantee . There is a good reason for this : a slight modification of a lower bound argument from Bartlett et al . ( 2019 ) demonstrates that GD may fail to converge in this case . However , we describe a similarly simple , deterministic choice of input and output transformations such that wide enough networks always converge . The resulting condition on the network width is weaker than that for Gaussian random transformations , and thus improves on the corresponding convergence guarantee for linear networks , which , in addition to requiring wider networks , only holds with high probability for random transformations . 1.1 ADDITIONAL RELATED WORK . In addition to what we discussed above , a large body of work focusing on the optimization of neural networks with nonlinear activation functions has emerged ; we briefly review it in this subsection . It is widely believed that the training loss landscape of nonlinear neural networks is highly nonconvex and nonsmooth ( e.g. , neural networks with ReLU/LeakyReLU activation ) , thus it is fundamentally difficult to characterize the optimization trajectory and convergence performance of GD and SGD . Some early work ( Andoni et al. , 2014 ; Daniely , 2017 ) showed that wide enough ( polynomial in sample size n ) neural networks trained by GD/SGD can learn a class of continuous functions ( e.g. , polynomial functions ) in polynomial time . However , those works only consider training some of the neural network weights rather than all of them ( e.g. , the input and output layers ) 1 . In addition , a series of papers investigated the convergence of gradient descent for training shallow networks ( typically 2-layer networks ) under certain assumptions on the training data and initialization scheme ( Tian , 2017 ; Du et al. , 2018b ; Brutzkus et al. , 2018 ; Zhong et al. , 2017 ; Li & Yuan , 2017 ; Zhang et al. , 2018 ) . However , the assumptions made in these works are rather strong and not consistent with practice . For example , Tian ( 2017 ) ; Du et al . ( 2018b ) ; Zhong et al . ( 2017 ) ; Li & Yuan ( 2017 ) ; Zhang et al . ( 2018 ) assumed that the label of each training example is generated by a teacher network , which has the same architecture as the learned network . Brutzkus et al . ( 2018 ) assumed that the training data is linearly separable . Li & Liang ( 2018 ) addressed this drawback ; they proved that for a two-layer ReLU network with cross-entropy loss , as long as the neural network is sufficiently wide , under mild assumptions on the training data , SGD with commonly-used Gaussian random initialization can achieve nearly zero expected error . Du et al . ( 2018c ) proved similar results for GD training two-layer ReLU networks with square loss . Beyond shallow neural networks , Allen-Zhu et al . ( 2019 ) ; Du et al . ( 2019 ) ; Zou et al . ( 2019 ) generalized the global convergence results to multi-layer over-parameterized ReLU networks . Chizat et al .
( 2019 ) showed that training over-parameterized neural networks actually belongs to a so-called “ lazy training ” regime , in which the model behaves like its linearization around the initialization . Furthermore , the parameter scaling is more essential than over-parameterization for keeping the model within the “ lazy training ” regime . Along this line of research , several follow-up works have been conducted . Oymak & Soltanolkotabi ( 2019 ) ; Zou & Gu ( 2019 ) ; Su & Yang ( 2019 ) ; Kawaguchi & Huang ( 2019 ) improved the convergence rate and over-parameterization condition for both shallow and deep networks . Arora et al . ( 2019b ) showed that training a sufficiently wide deep neural network is almost equivalent to kernel regression using the neural tangent kernel ( NTK ) proposed in Jacot et al . ( 2018 ) . Allen-Zhu et al . ( 2019 ) ; Du et al . ( 2019 ) ; Zhang et al . ( 2019 ) proved global convergence for training deep ReLU ResNets . Frei et al . ( 2019 ) proved the convergence of GD for training deep ReLU ResNets under an over-parameterization condition that is only logarithmic in the depth of the network , which partially explains why deep residual networks are preferable to fully connected ones . However , all the results in Allen-Zhu et al . ( 2019 ) ; Du et al . ( 2019 ) ; Zhang et al . ( 2019 ) ; Frei et al . ( 2019 ) require a very stringent condition on the network width , which typically has a high-degree polynomial dependence on the training sample size n. Besides , the results in Allen-Zhu et al . ( 2019 ) ; Zhang et al . ( 2019 ) also require that all data points are separated by a positive distance and have unit norm . As shown in Du & Hu ( 2019 ) and as will be proved in this paper , for deep linear ( residual ) networks , there is no assumption on the training data , and the condition on the network width is significantly milder , being independent of the sample size n. While achieving a stronger result for linear networks than for nonlinear ones is not surprising , we believe that our analysis , conducted in the idealized deep linear case , can provide useful insights for understanding optimization in the nonlinear case . 1In Daniely ( 2017 ) , the weight changes in all hidden layers make a negligible contribution to the final output , and thus can be approximately treated as only training the output layer . Two concurrent works analyze gradient descent applied to deep linear ( residual ) networks ( Hu et al. , 2020 ; Wu et al. , 2019 ) . Hu et al . ( 2020 ) consider deep linear networks with orthogonal initialization , and Wu et al . ( 2019 ) consider zero initialization on the last layer and identity initialization for the rest of the layers , which is similar to our setting . However , there are several differences between their work and ours . One major difference is that Hu et al . ( 2020 ) and Wu et al . ( 2019 ) only prove global convergence for GD , while our results cover both GD and SGD . In addition , Hu et al . ( 2020 ) focus on proving the global convergence of GD for sufficiently wide networks , while we provide a generic condition on the input and output linear transformations for ensuring global convergence . Wu et al . ( 2019 ) assume whitened data and prove an O(L³ log(1/ε)) bound on the number of iterations required for GD to converge , whereas we establish an O(log(1/ε)) bound .
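As a quick worked example of the quantities appearing in the width conditions above, the following snippet (a hypothetical illustration, not from the paper) computes the rank r of the data matrix, the condition number κ = ‖X‖₂²/σ_r(X)², and the implied Ω(krκ²) width bound up to its hidden constant; the tolerance used for the numerical rank is an arbitrary choice:

```python
import numpy as np

def width_condition(X, k):
    """X: d x n data matrix; k: output dimension.
    Returns (r, kappa, m_lower), with m_lower = k * r * kappa**2."""
    s = np.linalg.svd(X, compute_uv=False)   # singular values, descending
    r = int(np.sum(s > 1e-10 * s[0]))        # numerical rank of X
    kappa = (s[0] / s[r - 1]) ** 2           # ||X||_2^2 / sigma_r(X)^2
    return r, kappa, k * r * kappa ** 2

X = np.random.randn(20, 500)   # toy data: d = 20 features, n = 500 samples
print(width_condition(X, k=10))
```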
This paper deals with the global convergence of deep linear ResNets. The authors show that under some initialization conditions for the first and the last layer (which are not optimized!) GD and SGD converge to a global minimum of the squared-error loss. The most closely related work appears to be Bartlett et al. 2019, which studies the convergence of GD in the case of linear networks.
SP:4363825dfbd8c5b5a616ea5b0f67a751dcbe7eaf
In this paper, the authors study the convergence of (stochastic) gradient descent for training deep linear residual networks, where the linear transformations at the input and output layers are fixed and the matrices in the other layers are trained. They first establish global convergence of GD/SGD under some conditions on the fixed linear transformations. They then show that for Gaussian random input and output transformations, global convergence still holds under conditions on the network width that are strictly milder than those in the literature. Linear convergence rates of GD/SGD are also established.
SP:4363825dfbd8c5b5a616ea5b0f67a751dcbe7eaf
HiLLoC: lossless image compression with hierarchical latent variable models
1 INTRODUCTION . Bits back coding ( Wallace , 1990 ; Hinton & van Camp , 1993 ) is a method for performing lossless compression using a latent variable model . In an ideal implementation , the method can achieve an expected message length equal to the variational free energy , often referred to as the negative evidence lower bound ( ELBO ) of the model . Bits back was first introduced to form a theoretical argument for using the ELBO as an objective function for machine learning ( Hinton & van Camp , 1993 ) . The first implementation of bits back coding ( Frey , 1997 ; Frey & Hinton , 1996 ) made use of first-in-first-out ( FIFO ) arithmetic coding ( AC ) ( Witten et al. , 1987 ) . However , the implementation did not achieve optimal compression , due to an incompatibility between a FIFO coder and bits back coding , and its use was only demonstrated on a small dataset of 8×8 binary images . Recently , zero-overhead bits back compression with a significantly simpler implementation has been developed by Townsend et al . ( 2019 ) . This implementation makes use of asymmetric numeral systems ( ANS ) , a last-in-first-out ( LIFO ) entropy coding scheme ( Duda , 2009 ) . The method , known as ‘ Bits Back with Asymmetric Numeral Systems ’ ( BB-ANS ) , was demonstrated by compressing the MNIST test set using a variational auto-encoder ( VAE ) model ( Kingma & Welling , 2013 ; Rezende et al. , 2014 ) , achieving a compression rate within 1 % of the model ELBO . More recently , Hoogeboom et al . ( 2019 ) and Ho et al . ( 2019 ) have proposed flow-based methods for lossless compression , and Kingma et al . ( 2019 ) have presented ‘ Bit-Swap ’ , extending BB-ANS to hierarchical models . In this work we present an alternative method for extending BB-ANS to hierarchical VAEs . This entails the following novel techniques : 1 . Direct coding of arbitrary sized images using a fully convolutional model . 2 . A vectorized ANS implementation supporting dynamic shape . 3 . Dynamic discretization to avoid having to calibrate a static discretization . 4 . Initializing the bits back chain using a different codec . We discuss each of these contributions in detail in Section 3 . We call the combination of BB-ANS using a hierarchical latent variable model and the above techniques ‘ Hierarchical Latent Lossless Compression ’ ( HiLLoC ) . In our experiments ( Section 4 ) , we demonstrate that HiLLoC can be used to compress color images from the ImageNet test set at rates close to the ELBO , outperforming all of the other codecs which we benchmark . We also demonstrate the speedup , of nearly three orders of magnitude , resulting from vectorization . We release an open source implementation , available at https://github.com/hilloc-submission/hilloc , based on ‘ Craystack ’ , a Python package which we have written for general prototyping of lossless compression with ANS . 2 BACKGROUND . In this section we briefly describe the BB-ANS algorithm first introduced by Townsend et al . ( 2019 ) . We begin by giving a high-level description of the ANS LIFO entropy coder ( Duda , 2009 ) , along with a new notation for describing the basic ANS operations . Throughout the rest of the paper we use log to mean the base two logarithm , usually denoted log2 , and we measure message lengths in bits . 2.1 ASYMMETRIC NUMERAL SYSTEMS . As an entropy coder , ANS was designed for compressing sequences of discretely distributed symbols .
It achieves a compressed message length equal to the negative log-probability ( information content ) of the sequence plus an implementation dependent constant , which is usually less than 32 bits . For long sequences , the constant overhead has a negligible contribution to the overall compression rate . Thus , by Shannon ’ s source coding theorem ( Shannon , 1948 ) , ANS coding is guaranteed to be near-optimal for long sequences . There are two basic operations defined by ANS , which we will refer to as ‘ push ’ and ‘ pop ’ . Push encodes a symbol by adding it to an existing message . It has the signature push : ( message , symbol ) ↦ message′ . ( 1 ) Pop is the inverse of push , and may be used to decode a symbol and recover a message identical to that before pushing . pop : message′ ↦ ( message , symbol ) . ( 2 ) When multiple symbols are pushed in sequence , they must be popped using the precise inverse procedure , which means popping the symbols in the opposite order . This is why ANS is referred to as a last-in-first-out coder , or a stack . The push and pop operations require access to a probabilistic model of symbols , summarized by a probability mass function p over the alphabet of possible symbols . The way that symbols are encoded depends on the model , and pushing a symbol s according to p results in an increase in message length of log ( 1 / p ( s ) ) . Popping s results in an equal reduction in message length . For details on how the ANS operations are implemented , see Duda ( 2009 ) . Note that any model/mass function can be used for the pop operation , i.e . there is no hard restriction to use the distribution that was used to encode the message . In this way , rather than decoding the same data that was encoded , pop can actually be used to sample a symbol from a different distribution . The pop method itself is deterministic , so the source of randomness for the sample comes from the data contained within the message . This sampling operation , which can be inverted by pushing the sample back onto the stack , is essential for bits back coding . For convenience , we introduce the shorthand notation s → p ( · ) for encoding ( pushing ) a symbol s according to p , and s ← p ( · ) for decoding ( popping ) . 2.2 BITS BACK WITH ANS . Suppose we have a model for data x which involves a latent variable z . A sender and receiver wish to communicate a sample x . They have access to a prior on z , denoted p ( z ) , a likelihood p ( x | z ) and a ( possibly approximate ) posterior q ( z | x ) , but not the marginal distribution p ( x ) . Without access to p ( x ) , sender and receiver can not directly code x using ANS . However , BB-ANS specifies an indirect way to push and pop x . It does not require access to the marginal p ( x ) , but rather uses the prior , conditional , and posterior from the latent variable model . Table 1 ( a ) shows , in order from the top , the three steps of the BB-ANS pushing procedure which the sender can perform to encode x . The ‘ Variables ’ column shows the variables known to the sender before each step . Table 1 ( b ) shows the inverse steps which the receiver can use to pop x , with the ‘ Variables ’ column showing what is known to the receiver after each step . After decoding x , the third step of popping , z → q ( · | x ) , is necessary to ensure that BB-ANS pop is a precise inverse of push . The change in message length from BB-ANS can easily be derived by adding up the quantities in the ∆L column of Table 1 .
For encoding we get

∆L_BB-ANS = − log ( 1 / q ( z | x ) ) + log ( 1 / p ( x | z ) ) + log ( 1 / p ( z ) )   ( 3 )
          = − log ( p ( x , z ) / q ( z | x ) ) .   ( 4 )

Taking the expectation over z gives the expected message length for a datum x :

L ( x ) = − E_{q ( z | x )} [ log ( p ( x , z ) / q ( z | x ) ) ] ,   ( 5 )

which is the negative evidence lower bound ( ELBO ) , also known as the free energy . This is a commonly used training objective for latent variable models . The above equation implies that latent variable models trained using the ELBO are implicitly being trained to minimize the expected message length of lossless compression using BB-ANS . Note that , as Table 1 shows , the first step of encoding a data point x using BB-ANS is to , counterintuitively , decode ( and thereby sample ) a latent z ← q ( · | x ) . This requires that there is already a buffer of random data pushed to the ANS coder , which can be popped . The data used to start the encoding process is recovered after the final stage of decoding , hence the name ‘ bits back ’ . If we have multiple samples to compress , then we can use ‘ chaining ’ , which is essentially repeated application of the procedure in Table 1 ( Townsend et al. , 2019 ) . In Section 3.4 we describe how we build up an initial buffer of compressed data by using a different codec to code the first images in a sequence . 3 SCALING UP BITS BACK WITH ANS . We now discuss the techniques we introduce to scale up BB-ANS . 3.1 FULLY CONVOLUTIONAL MODELS . When all of the layers in the generative and recognition networks of a VAE are either convolutional or elementwise functions ( i.e . the VAE has no densely connected layers ) , then it is possible to evaluate the recognition network on images of any height and width , and similarly to pass latents of any height and width through the generative network to generate an image . Thus , such a VAE can be used as a ( probabilistic ) model for images of any size . We exploit this fact , and show empirically in Section 4 that , surprisingly , a fully convolutional VAE trained on 32 × 32 images can perform well ( in the sense of having a high ELBO ) as a model for 64 × 64 images as well as far larger images . This in turn corresponds to a good compression rate , and we implement lossless compression of arbitrary sized images by using a VAE in this way . 3.2 VECTORIZED LOSSLESS COMPRESSION . The primary computational bottlenecks in the original BB-ANS implementation ( Townsend et al. , 2019 ) were loops over data and latent variables occurring in the Python interpreter . We have been able to vectorize these , achieving an implementation which can scale to large ImageNet images . The effect of vectorization on runtime is shown in Figure 4 . A vectorized implementation of ANS was described in Giesen ( 2014 ) using SIMD instructions . This works by expanding the size of the ANS stack head , from a scalar to a vector , and interleaving the output/input bit stream . We implement this in our lossless compression library , Craystack , using Numpy . Please refer to the Craystack code and to Giesen ( 2014 ) for more detail . We ensure that the compression rate overhead of vectorization is low by using the BitKnit technique described in Giesen ( 2015 ) ; see Appendix D for more detail . Having vectorized , we found that most of the compute time for our compression was spent in neural net inference , whether running on CPU or GPU , which we know to already be reasonably well optimized .
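To ground the push/pop interface of Section 2.1 (and the scalar stack head that the vectorized implementation above generalizes), here is a toy rANS coder. It is a sketch for exposition only, not Craystack's implementation: the message is a single arbitrary-precision Python integer, probabilities are quantized so that p(s) ≈ freq[s]/2^PREC, and the symbol alphabet is assumed to be {0, ..., n−1} with cumulative counts increasing in the symbol:

```python
# Toy rANS: push grows the message by ~log2(2**PREC / freq[s]) bits,
# pop is its exact inverse (real coders use fixed-width states + renormalization).
PREC = 12

def push(msg, s, freq, cum):
    return ((msg // freq[s]) << PREC) | (cum[s] + msg % freq[s])

def pop(msg, freq, cum):
    r = msg & ((1 << PREC) - 1)
    s = max(t for t in cum if cum[t] <= r)   # slot whose [cum, cum+freq) contains r
    return freq[s] * (msg >> PREC) + (r - cum[s]), s

freq = {0: 2048, 1: 1024, 2: 1024}           # p = (1/2, 1/4, 1/4); sums to 2**PREC
cum = {0: 0, 1: 2048, 2: 3072}               # cumulative counts
msg0 = 1 << 16                               # arbitrary initial message
msg, data = msg0, [0, 2, 1, 1, 0]
for s in data:
    msg = push(msg, s, freq, cum)
decoded = []
for _ in data:
    msg, s = pop(msg, freq, cum)
    decoded.append(s)
assert decoded == data[::-1] and msg == msg0  # LIFO order; message fully recovered
```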
In Craystack , we further generalize the ANS coder using Numpy ’ s n-dimensional array view interface , allowing the stack head to be ‘ shaped ’ like an n-dimensional array , or a nested Python data-structure containing arrays . We can then use a shape which fits that of the data that we wish to encode or decode . When coding data according to a VAE we use an ANS stack head shaped into a pair of arrays , matching the shapes of the observation x and the latent z . This allows for a straightforward implementation and clarifies the lack of data dependence between certain operations , such as the x→ p ( · | z ) and z → p ( · ) during encoding , which can theoretically be performed concurrently . This vectorized encoding process is visualized in Figure 2 .
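Putting the pieces of Table 1 together, the BB-ANS push and pop procedures can be sketched against an abstract codec interface. Here `Codec`, `prior`, `lik`, and `post` are hypothetical stand-ins rather than HiLLoC's actual API: each codec wraps an ANS push/pop pair for one distribution, with `post(x)` playing the role of q(·|x) and `lik(z)` the role of p(·|z):

```python
from typing import Any, Callable, NamedTuple, Tuple

class Codec(NamedTuple):
    """ANS push/pop pair for one distribution (hypothetical interface)."""
    push: Callable[[int, Any], int]          # (message, symbol) -> message'
    pop: Callable[[int], Tuple[int, Any]]    # message' -> (message, symbol)

def bb_ans_push(msg: int, x: Any, prior: Codec,
                lik: Callable[[Any], Codec],
                post: Callable[[Any], Codec]) -> int:
    msg, z = post(x).pop(msg)    # 1. z <- q(.|x): borrows bits, sampling z
    msg = lik(z).push(msg, x)    # 2. x -> p(.|z)
    msg = prior.push(msg, z)     # 3. z -> p(.)
    return msg                   # net: +log 1/p(x|z) + log 1/p(z) - log 1/q(z|x)

def bb_ans_pop(msg: int, prior: Codec,
               lik: Callable[[Any], Codec],
               post: Callable[[Any], Codec]) -> Tuple[int, Any]:
    msg, z = prior.pop(msg)      # 1. z <- p(.)
    msg, x = lik(z).pop(msg)     # 2. x <- p(.|z)
    msg = post(x).push(msg, z)   # 3. z -> q(.|x): pays the borrowed bits back
    return msg, x
```

Because each pop is the exact inverse of the corresponding push, applying bb_ans_pop after bb_ans_push recovers both x and the original message, and chaining across many images amounts to calling bb_ans_push repeatedly on the same message.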
This paper focuses on lossless source compression with bits back coding for hierarchical fully convolutional VAEs. The focus/contribution is three-fold: 1. Improving the compression rate by adapting the discretization of the latent space required for the entropy coder ANS. The newly proposed discretization scheme allows for a dependency structure that is not restricted to a Markov chain in the encoder model q(z|x) and in the generative part of the model p(x,z). This is in contrast with Bit-Swap [1], which requires a Markov chain structure. The dependency structure allowed by the proposed method is widely known to perform better than a Markov chain structure, which can explain why it improves significantly over Bit-Swap [1] (another hierarchical VAE compression algorithm that uses bits back coding). 2. Increasing compression speed by implementing a vectorized version of ANS, and having an ANS head in the shape of a pair of arrays matching that of the latent variable and the observed variable. The latter allows for simultaneous encoding of the latent with the prior distribution and the image with the decoder distribution. 3. Showing that a model trained on the low-resolution ImageNet 32 dataset can generalize its compression capabilities to higher-resolution datasets with convincing results.
SP:cbff5688c7be72a90c6e2ff6e3629c6feac3717c
This paper proposes a method for lossless image compression consisting of a VAE and a bits-back version of ANS. The results are very impressive on ImageNet (but maybe not so impressive on the other benchmarks). The authors also discuss how to speed up inference and present some frightening runtime numbers for the serial method, and some better numbers for the vectorized version, though they're nowhere close to being practical.
SP:cbff5688c7be72a90c6e2ff6e3629c6feac3717c
Adversarially Robust Neural Networks via Optimal Control: Bridging Robustness with Lyapunov Stability
Deep neural networks are known to be vulnerable to adversarial perturbations . In this paper , we bridge the adversarial robustness of neural nets with the Lyapunov stability of dynamical systems . From this viewpoint , training neural nets is equivalent to finding an optimal control of the discrete dynamical system , which allows one to utilize the method of successive approximations , an optimal control algorithm based on Pontryagin ’ s maximum principle , to train neural nets . This decoupled training method allows us to add constraints to the optimization , which makes the deep model more robust . The constrained optimization problem can be formulated as a semi-definite programming problem and hence can be solved efficiently . Experiments show that our method effectively improves the adversarial robustness of deep models . 1 INTRODUCTION . Deep neural networks achieve state-of-the-art performance on a variety of tasks ( LeCun et al. , 2015 ) . However , neural nets are known to be vulnerable to adversarial examples . Imperceptibly perturbed inputs can induce erroneous outputs in neural nets ( Szegedy et al. , 2013 ) . For image classification problems in computer vision , previous work has proposed various methods to attack deep models and induce low accuracy ( Goodfellow et al. , 2015 ; Madry et al. , 2017 ; Papernot et al. , 2016a ; Carlini & Wagner , 2017a ) . While multiple defenses against adversarial attacks have been developed , they do not ensure safety when faced with strong attacking methods . There are also theories that explain the existence of adversarial examples ( Ilyas et al. , 2019 ; Shamir et al. , 2019 ) , but they often fail to fully explain the features and behaviors of this phenomenon . This makes the study of adversarial attacks important , as they are a threat to real-life machine learning systems ( Kurakin et al. , 2016 ) . In this paper , we propose a dynamical systems view of the adversarial robustness of models , as well as a new method that significantly improves robustness to adversarial attacks . Recent works have shown the connection between deep neural networks and dynamical systems ( E , 2017 ; Li et al. , 2017 ; Haber & Ruthotto , 2017 ; Lu et al. , 2017 ) . If we regard the neural net as a discretization of an ordinary differential equation ( ODE ) , then training neural nets becomes finding an optimal control of the corresponding discrete dynamical system . Traditionally , we treat training neural networks as an unconstrained non-convex optimization problem min_{θ∈Θ} J ( θ ) + R ( θ ) , where θ denotes the parameters of the model , J denotes the loss function and R denotes the regularizer term , and we solve the problem with ( stochastic ) gradient-descent based methods ( Bottou , 2010 ; Ruder , 2016 ) . In the training process , we feed the network with a batch of training data , and compute the gradient with forward and backward propagation ( E. Rumelhart et al. , 1986 ) . The propagation process resembles solving optimal control problems that tune the parameters to make the output close to target states . This viewpoint motivates us to bridge adversarial robustness with the Lyapunov stability of a dynamical system , and to train robust networks with algorithms that find stable optimal control . We formulate this discussion in later sections . 2 RELATED WORK . 2.1 ADVERSARIAL DEFENSE . Many defense methods have been proposed to improve models ’ adversarial robustness . The defenses mainly fall into three types : adversarial training ( Szegedy et al. , 2013 ; Zhang et al.
, 2019 ) , modifying the networks ( Gu & Rigazio , 2015 ; Lyu et al. , 2015 ; Papernot et al. , 2016b ; Nayebi & Ganguli , 2017 ; Ross & Doshi-Velez , 2017 ) , and adding external models ( Lee et al. , 2017 ; Akhtar et al. , 2017 ; Gebhart & Schrater , 2017 ; Xu et al. , 2018 ; Sun et al. , 2019 ) . Although various defense methods have been developed , a defended deep model is often successfully attacked by newly developed attacks or specific countermeasures ( Carlini & Wagner , 2017b ) . Therefore , it is hoped that defenses against general attacks can be devised to make deep learning models ( adversarially ) robust to real-life threats . 2.2 NEURAL ODES AND OPTIMAL CONTROL . Recent works have bridged deep neural networks with ODEs and dynamical systems . On the one hand , deep residual networks ( He et al. , 2015 ) can be interpreted as a forward Euler scheme approximating an ODE ( E , 2017 ) , which motivates us to design effective network structures ( Lu et al. , 2017 ) . On the other hand , regarding the network as a dynamical system allows us to set up an optimal control viewpoint of neural nets . Pontryagin ’ s Maximum Principle ( Boltyanskii et al. , 1960 ) has been applied to train neural nets ( Li et al. , 2017 ; Li & Hao , 2018 ) . 3 ADVERSARIAL ROBUSTNESS AND LYAPUNOV STABILITY . 3.1 DYNAMICS OF DEEP NEURAL NETS . Given a T-layer neural net , we let the dynamical system { f_t ( x_t , θ_t ) : t = 0 , . . . , T } represent the network , where x_t is the input of the t-th layer , θ_t is the parameter , and f_t : R^{d_t} × Θ_t → R^{d_{t+1}} denotes the t-th layer ’ s transformation , which is usually a non-linear function σ ( θ_t x_t + b_t ) for fully-connected layers , convolution layers and batch normalization layers , etc . Therefore , training the neural net can be regarded as controlling the parameters to let the dynamics fit the training data . Specifically , the training optimization problem can be formulated as a typical optimal control problem as follows :

min_θ Σ_{i=1}^{B} J ( x_T^i ) + Σ_{t=0}^{T−1} L ( θ_t ) , subject to x_{t+1}^i = f_t ( x_t^i , θ_t ) , t = 0 , . . . , T − 1 ,

where we use x^i to denote the i-th input in the batch and B to denote the batch size . J and L are the loss function and the regularizer , respectively . In particular , if the model is a deep residual network with structure x_{t+1} = x_t + f_t ( x_t , θ_t ) , we can regard the problem as the forward Euler discretization of the following continuous optimal control problem :

min_θ J ( x ( T ) ) + ∫_0^T L ( θ ( t ) ) dt , subject to ẋ = f ( t , x ( t ) , θ ( t ) ) , x ( 0 ) = x , 0 ≤ t ≤ T ,

where x ( t ) is a continuous trajectory from the input to the output logits . 3.2 LYAPUNOV STABILITY . Adversarial examples are usually clean images with a small calculated perturbation η added . The model predicts correct labels when fed with clean inputs x_0 , while the output is completely different when it is fed with the perturbed input x_0 + η . The dynamical system view of neural nets motivates us to characterize this sensitivity via the Lyapunov stability of a system ( Hirsch et al. , 2004 ) . Definition 1 ( Lyapunov Stability ) . For a given dynamical system ẋ = f ( x ) , x ( 0 ) = x_0 , with equilibrium x_e :
• The system is Lyapunov stable if , ∀ ε > 0 , ∃ δ > 0 such that , if ‖x ( 0 ) − x_e‖ < δ , then for every t ≥ 0 , ‖x ( t ) − x_e‖ < ε .
• The system is asymptotically stable if it is Lyapunov stable and ∃ δ > 0 such that if ‖x ( 0 ) − x_e‖ < δ , then lim_{t→∞} ‖x ( t ) − x_e‖ = 0 .
• The system is exponentially stable if it is asymptotically stable and ∃ α > 0 , β > 0 , δ > 0 such that if ‖x ( 0 ) − x_e‖ < δ , then ‖x ( t ) − x_e‖ ≤ α ‖x ( 0 ) − x_e‖ e^{−βt} for all t ≥ 0 .
The definitions can easily be extended to discrete-time systems . Intuitively , Lyapunov stability states that for any small perturbation η , the trajectory remains “ close enough ” to the original one . If we regard a neural net as a dynamical system and ensure that the network is Lyapunov stable , then the model is robust to all ( adversarial ) perturbations . 3.3 ADVERSARIALLY ROBUST NEURAL NETS . Due to the connection between numerical ODEs and residual networks , we first consider the robustness ( i.e . Lyapunov stability ) of continuous ODEs . Theorem 1 ( Stable ODEs ) . For a given ODE ẋ = f ( t , x , θ ) = σ ( Ax + b ) , where σ is the activation function , e.g. , the Sigmoid or ReLU function , the system is stable if Re ( λ_i ( A ) ) ≤ 0 , ∀ i , where Re denotes the real part and λ_i denotes the i-th eigenvalue . One can see , e.g. , Hirsch et al . ( 2004 ) for the proof of this theorem . Theorem 1 provides a set of conditions for stable ODEs . However , a deep residual network is only a forward Euler discretization scheme of a continuous ODE . To ensure numerical stability , we require |1 − λ_i ( A ) h| ≤ 1 ( Ascher & Petzold , 1998 ) , where the step size h = 1 in residual networks . Combined with the identity mapping in residual networks , we obtain stability conditions for the discrete dynamics . Theorem 2 ( Stable Discrete Networks ) . For a discrete neural network , i.e. , discrete dynamics { f_t ( x_t , θ_t ) : t = 0 , . . . , T } , where f_t ( x_t , θ_t ) = σ ( θ_t x_t ) ( we omit the bias term for simplicity ) , the network is stable if ρ ( θ_t ) ≤ 1 , where ρ ( A ) = max_i |λ_i ( A )| is the spectral radius . If these conditions are added to the unconstrained optimization problem of training , we can greatly improve the adversarial robustness of neural nets . The methods will be discussed in the following section . 4 TRAINING ROBUST NEURAL NETS . 4.1 PMP AND MSA . For deterministic systems , Pontryagin ’ s Maximum Principle ( PMP ) ( Boltyanskii et al. , 1960 ) provides a set of necessary conditions for optimal control of the system . Various algorithms have been proposed to solve the deterministic optimal control problem based on the PMP . Among them , the Method of Successive Approximations ( MSA ) ( Krylov & Chernous ’ ko , 1963 ) is one of the simplest . In the field of deep learning , previous work has utilized MSA to train neural networks ( Li et al. , 2017 ; Li & Hao , 2018 ) . Formally , consider the optimal control problem for training neural nets in Section 3 . For dynamics { f_t ( x_t , θ_t ) : t = 0 , . . . , T } , assume θ* = { θ*_0 , . . . , θ*_{T−1} } is a solution to the optimal control problem . Also , we define the Hamiltonian function H : R^{d_t} × R^{d_{t+1}} × Θ_t × [ T ] → R by H ( x , p , θ , t ) = p · f_t ( x , θ ) − L ( θ_t ) , where the dot denotes the inner product . We have the following necessary conditions for θ* . Theorem 3 ( Pontryagin ’ s Maximum Principle for Discrete Systems ) . Assume f_t and J are sufficiently smooth . There exist co-states p* = { p*_0 , . . . , p*_T } such that the following conditions hold :

x*_{t+1} = ∇_p H ( x*_t , p*_{t+1} , θ*_t , t ) , x*_0 = x_0 ,
p*_t = ∇_x H ( x*_t , p*_{t+1} , θ*_t , t ) , p*_T = −∇_x J ( x*_T ) ,
θ*_t = argmax_θ H ( x*_t , p*_{t+1} , θ , t ) .

For simplicity of notation , here we assume the batch size is 1 .
4 TRAINING ROBUST NEURAL NETS.

4.1 PMP AND MSA. For deterministic systems, Pontryagin's Maximum Principle (PMP) (Boltyanskii et al., 1960) provides a set of necessary conditions for optimal control of the system. Various algorithms have been proposed to solve the deterministic optimal control problem based on PMP. Among them, the Method of Successive Approximations (MSA) (Krylov & Chernous'ko, 1963) is one of the simplest. In the field of deep learning, previous work has used MSA to train neural networks (Li et al., 2017; Li & Hao, 2018).

Formally, consider the optimal control problem for training neural nets in Section 3. For dynamics $\{f_t(x_t, \theta_t) : t = 0, \dots, T\}$, assume $\theta^* = \{\theta^*_0, \dots, \theta^*_{T-1}\}$ is a solution to the optimal control problem. We define the Hamiltonian $H : \mathbb{R}^{d_t} \times \mathbb{R}^{d_{t+1}} \times \Theta_t \times [T] \to \mathbb{R}$ by $H(x, p, \theta, t) = p \cdot f_t(x, \theta) - L(\theta_t)$, where the dot denotes the inner product. We have the following necessary conditions for $\theta^*$.

Theorem 3 (Pontryagin's Maximum Principle for Discrete Systems). Assume $f_t$ and $J$ are sufficiently smooth. There exist co-states $p^* = \{p^*_0, \dots, p^*_T\}$ such that the following conditions hold:

$$x^*_{t+1} = \nabla_p H(x^*_t, p^*_{t+1}, \theta^*_t, t), \quad x^*_0 = x_0,$$
$$p^*_t = \nabla_x H(x^*_t, p^*_{t+1}, \theta^*_t, t), \quad p^*_T = -\nabla_x J(x^*_T),$$
$$\theta^*_t = \arg\max_\theta H(x^*_t, p^*_{t+1}, \theta, t).$$

For simplicity of notation, here we assume the batch size is 1; the theorem extends easily to the minibatch case by summing over the batch. The theorem can be proved via the KKT conditions (Boyd & Vandenberghe, 2004), where the co-states can be seen as the Lagrangian dual variables. Examining the conditions in PMP, one finds that the $x$ equations are exactly the forward propagation of a neural net, and the $p$ equations resemble the backward propagation process. The third condition states that the model parameters must maximize the Hamiltonian. This motivates us to iteratively compute forward and backward propagation and solve the Hamiltonian maximization to find the optimal control, which is exactly the Method of Successive Approximations (Algorithm 1). In practice, we usually add regularizer terms that penalize large changes in the maximization step, to prevent drastic steps that cause divergence. For the connection between MSA and back-propagation-based gradient descent algorithms, see the appendix of Li & Hao (2018).

Algorithm 1: The Method of Successive Approximations
Initialize $\theta^0 = \{\theta^0_0, \dots, \theta^0_{T-1}\}$; set $k = 0$.
repeat
  Compute the states (forward propagation): $x_{t+1} = \nabla_p H(x_t, p_{t+1}, \theta^k_t, t)$ for $t = 0, \dots, T-1$;
  Compute the co-states (backward propagation): $p_t = \nabla_x H(x_t, p_{t+1}, \theta^k_t, t)$ for $t = T-1, \dots, 0$, with initial $p_T = -\nabla_x J(x_T)$;
  For each $t = 0, \dots, T-1$, solve the maximization $\theta^{k+1}_t = \arg\max_\theta H(x_t, p_{t+1}, \theta, t)$;
  Set $k = k + 1$;
until converged.

The advantages of training by MSA compared with gradient descent algorithms have been discussed in Li et al. (2017); the most significant feature is that the optimization steps on different layers are decoupled. Concretely, after computing the states $x$ and co-states $p$, the optimization step on layer $t$ searches only over the parameters $\theta_t$. This not only suggests that the optimization process can be accelerated by parallelization, but also allows us to exploit the structure of the problem: the parameter space is greatly reduced compared with the original intractable optimization problem, and hence the optimization is much easier. This is what allows us to add constraints that ensure robustness of the model.
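The following toy sketch (our illustration, with an arbitrary one-dimensional quadratic model rather than a full network) shows the shape of the MSA loop: a forward pass for the states, a backward pass for the co-states, then a decoupled per-layer Hamiltonian maximization, here damped by a proximal penalty as the text above suggests and solvable in closed form.

```python
import numpy as np

# Toy 1-D dynamics x_{t+1} = theta_t * x_t with terminal loss J(x_T) = (x_T - target)^2
# and regularizer L(theta) = lam * theta^2, so H(x, p, theta, t) = p*theta*x - lam*theta^2.
# All constants are arbitrary choices for illustration.
T, lam, rho, target, x0 = 5, 0.5, 25.0, 2.0, 1.0
theta = np.ones(T)  # initialization theta^0

for k in range(300):
    # Forward propagation: x_{t+1} = dH/dp = theta_t * x_t.
    x = np.empty(T + 1); x[0] = x0
    for t in range(T):
        x[t + 1] = theta[t] * x[t]
    # Backward propagation: p_t = dH/dx = theta_t * p_{t+1}, with p_T = -dJ/dx.
    p = np.empty(T + 1); p[T] = -2.0 * (x[T] - target)
    for t in reversed(range(T)):
        p[t] = theta[t] * p[t + 1]
    # Decoupled per-layer maximization with a proximal damping term rho*(theta - theta_k)^2,
    # as the text notes is used in practice to avoid divergence:
    # argmax_theta p*theta*x - lam*theta^2 - rho*(theta - theta_old)^2 has a closed form.
    theta = (p[1:] * x[:-1] + 2.0 * rho * theta) / (2.0 * lam + 2.0 * rho)

print(x[T])  # moves toward the target over iterations (the regularizer keeps it short of 2.0)
```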
The goal of this paper is to train neural networks (NNs) so that they are robust to adversarial attacks. The authors formulate training a NN as finding an optimal controller for a discrete dynamical system. This formulation allows them to use an optimal control algorithm, the method of successive approximations (MSA), to train a NN. The authors then show how constraints can be added to this optimization problem in order to make the trained NN more robust. They show that the resulting constrained optimization problem can be formulated as a semi-definite program, and provide some experimental results.
SP:3dd9ae7b88b3e6848ee1fbd11c274d7a395d3167
Adversarially Robust Neural Networks via Optimal Control: Bridging Robustness with Lyapunov Stability
Neural networks are vulnerable to adversarial perturbations. This paper proposes a method, based on optimal control theory, that uses semidefinite programming. This is quite a popular topic in adversarial training recently; there have been a few works in that line. There are almost no experiments in this paper. The writing requires more work and there are several typos, for example STOA should be SOTA (in Section 6). In its current state, this paper looks very rushed.
Imitation Learning via Off-Policy Distribution Matching
1 INTRODUCTION. Reinforcement learning (RL) is typically framed as learning a behavior policy based on reward feedback from trial-and-error experience. Accordingly, many successful demonstrations of RL often rely on carefully handcrafted rewards with various bonuses and penalties designed to encourage the intended behavior (Nachum et al., 2019a; Andrychowicz et al., 2018). In contrast, many real-world behaviors are easier to demonstrate than to specify through explicit rewards. This realization is at the heart of imitation learning (Ho & Ermon, 2016; Ng et al.; Pomerleau, 1989), in which one aims to learn a behavior policy from a set of expert demonstrations – logged experience data of a near-optimal policy interacting with the environment – without explicit knowledge of rewards.

Distribution matching via adversarial learning, or Adversarial Imitation Learning (AIL), has recently become a popular approach for imitation learning (Ho & Ermon, 2016; Fu et al., 2017; Ke et al., 2019; Kostrikov et al., 2019). These methods interpret the states and actions provided in the expert demonstrations as a finite sample from a target distribution. Imitation learning can then be framed as learning a behavior policy which minimizes a divergence between this target distribution and the state-action distribution induced by the behavior policy interacting with the environment. As derived by Ho & Ermon (2016), this divergence minimization may be achieved by iteratively performing two alternating steps, reminiscent of GAN algorithms (Goodfellow et al., 2014). First, one estimates the density ratio of states and actions between the target distribution and the behavior policy. Then, these density ratios are used as rewards for a standard RL algorithm, and the behavior policy is updated to maximize these cumulative rewards (data distribution ratios).

The main limitation of current distribution matching approaches is that estimating distribution density ratios (the first step of every iteration) typically requires samples from the behavior policy distribution. This means that every iteration – every update to the behavior policy – requires new interactions with the environment, precluding the use of these algorithms in settings where interactions with the environment are expensive and limited. Several papers attempt to relax this on-policy requirement and resolve the sample inefficiency problem by designing off-policy imitation learning algorithms, which may take advantage of past logged data, usually in the form of a replay buffer (Kostrikov et al., 2019; Sasaki et al., 2019). However, these methods do so by altering the original divergence minimization objective to measure a divergence between the target expert distribution and the replay buffer distribution. Accordingly, there is no guarantee that the learned policy will recover the desired target distribution. In this work, we introduce an algorithm for imitation learning that, on the one hand, performs divergence minimization as in the original AIL methods, while on the other hand, is completely off-policy. We begin by providing a new formulation of the minimum divergence objective that avoids the use of any explicit on-policy expectations. (Code to reproduce our results is available at https://github.com/google-research/google-research/tree/master/value_dice.)
While this objective may be used in the traditional way to estimate data distribution ratios that are then fed to an RL algorithm, we go further and show how the specific form of the derived objective renders the use of a separate RL optimization unnecessary. Rather, gradients of the minimum divergence objective with respect to the behavior policy may be computed directly. This way, an imitating behavior policy may be learned to minimize the divergence without the use of explicit rewards. We call this streamlined imitation learning algorithm ValueDICE. In addition to being simpler than standard imitation learning methods, we show that our proposed algorithm achieves state-of-the-art performance on a suite of imitation learning benchmarks.

2 BACKGROUND. We consider environments represented as a Markov Decision Process (MDP) (Puterman, 2014), defined by the tuple $(S, A, p_0(s), p(s'|s,a), r(s,a), \gamma)$, where $S$ and $A$ are the state and action space, respectively, $p_0(s)$ is an initial state distribution, $p(s'|s,a)$ defines the environment dynamics represented as a conditional state distribution, $r(s,a)$ is a reward function, and $\gamma$ is a return discount factor. A behavior policy $\pi(\cdot|\cdot)$ interacts with the environment to yield experience $(s_t, a_t, r_t, s_{t+1})$, for $t = 0, 1, \dots$, where $s_0 \sim p_0(\cdot)$, $a_t \sim \pi(\cdot|s_t)$, $s_{t+1} \sim p(\cdot|s_t, a_t)$, $r_t = r(s_t, a_t)$. Without loss of generality, we consider infinite-horizon, non-terminating environments. In standard RL, one aims to learn a behavior policy $\pi(\cdot|s)$ that maximizes cumulative rewards, based on experience gained from interacting with the environment.

In imitation learning (Pomerleau, 1989; Abbeel & Ng, 2004; Ho & Ermon, 2016), the environment reward is not observed. Rather, one has access to a set of expert demonstrations $D := \{(s_k, a_k, s'_k)\}_{k=1}^N$ given by state-action-next-state transitions in the environment induced by an unknown expert policy $\pi^{\exp}$, and the goal is to learn a behavior policy $\pi$ which recovers $\pi^{\exp}$. During the learning process, in addition to the finite set of expert demonstrations $D$, one may also optionally interact with the environment (in these interactions, no rewards are observed). This setting describes a number of real-world applications where rewards are unknown, such as Pomerleau (1989); Muller et al. (2006); Bojarski et al. (2016).

2.1 BEHAVIORAL CLONING (BC). Supervised behavioral cloning (BC) is a popular approach for imitation learning. Given a set of expert demonstrations, a mapping from state observations to actions is fit using regression or density estimation. In the simplest case, one simply trains the behavior policy $\pi$ to minimize the negative log-likelihood of the observed expert actions:

$$\min_\pi J_{BC}(\pi) := -\frac{1}{N} \sum_{k=1}^{N} \log \pi(a_k | s_k). \quad (1)$$

Unlike Inverse Reinforcement Learning (IRL) algorithms (e.g., GAIL (Ho & Ermon, 2016)), BC does not perform any additional policy interactions with the learning environment and hence does not suffer from the same issue of policy sample complexity. However, behavioral cloning suffers from distributional drift (Ross et al., 2011); i.e., there is no way for $\pi$ to learn how to recover if it deviates from the expert behavior to a state $\tilde{s}$ not seen in the expert demonstrations.
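To make Eq. (1) concrete, here is a minimal behavioral cloning step for a discrete-action policy. This is an illustrative sketch we add here (network size, optimizer, and fake data shapes are all arbitrary), not the paper's implementation.

```python
import torch
import torch.nn as nn

# A small categorical policy pi(a|s); sizes are arbitrary for illustration.
state_dim, n_actions = 4, 3
policy = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

# A fake batch of expert demonstrations (s_k, a_k); in practice these come from D.
states = torch.randn(128, state_dim)
actions = torch.randint(0, n_actions, (128,))

for _ in range(100):
    logits = policy(states)
    log_probs = torch.log_softmax(logits, dim=-1)
    # Eq. (1): negative log-likelihood of the expert actions under pi.
    nll = -log_probs.gather(1, actions.unsqueeze(1)).mean()
    optimizer.zero_grad()
    nll.backward()
    optimizer.step()
```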
2.2 DISTRIBUTION MATCHING. The distribution matching approach provides a family of methods that are robust to distributional shift. Rather than considering the policy directly as a conditional distribution $\pi(\cdot|s)$ over actions, this approach considers the state-action distribution induced by a policy. In particular, under certain conditions (Puterman, 2014), there is a one-to-one correspondence between a policy and its state-action distribution $d^\pi$, defined as

$$d^\pi(s,a) = (1-\gamma) \cdot \sum_{t=0}^{\infty} \gamma^t\, p\big(s_t = s, a_t = a \,\big|\, s_0 \sim p_0(\cdot),\, s_t \sim p(\cdot|s_{t-1}, a_{t-1}),\, a_t \sim \pi(\cdot|s_t)\big). \quad (2)$$

By the same token, the unknown expert policy $\pi^{\exp}$ also possesses a state-action distribution $d^{\exp}$, and one may usually assume that the expert demonstrations $D := \{(s_k, a_k, s'_k)\}_{k=1}^N$ are sampled as $(s_k, a_k) \sim d^{\exp}$, $s'_k \sim p(\cdot|s_k, a_k)$. Accordingly, the distribution matching approach proposes to learn $\pi$ to minimize the divergence between $d^\pi$ and $d^{\exp}$. The KL-divergence is typically used to measure the discrepancy between $d^\pi$ and $d^{\exp}$ (Ho & Ermon, 2016; Ke et al., 2019):

$$-D_{KL}(d^\pi \,\|\, d^{\exp}) = \mathbb{E}_{(s,a) \sim d^\pi}\left[\log \frac{d^{\exp}(s,a)}{d^\pi(s,a)}\right]. \quad (3)$$

The use of the KL-divergence is convenient, as it may be expressed as an RL problem where rewards are given by log distribution ratios:

$$-D_{KL}(d^\pi \,\|\, d^{\exp}) = (1-\gamma) \cdot \mathbb{E}_{\substack{s_0 \sim p_0(\cdot),\, a_t \sim \pi(\cdot|s_t) \\ s_{t+1} \sim p(\cdot|s_t, a_t)}}\left[\sum_{t=0}^{\infty} \gamma^t \log \frac{d^{\exp}(s_t, a_t)}{d^\pi(s_t, a_t)}\right]. \quad (4)$$

In other words, if one has access to estimates of the distribution ratios of the two policies, then the minimum divergence problem reduces to a max-return RL problem with rewards $\tilde{r}(s,a) = \log \frac{d^{\exp}(s,a)}{d^\pi(s,a)}$. Any on-policy or off-policy RL algorithm can be used to maximize the corresponding expected returns in Equation 4. Capitalizing on this observation, Ho & Ermon (2016) and Ke et al. (2019) propose algorithms (e.g., GAIL) in which the distribution ratio is estimated using a GAN-like objective:

$$\max_{h : S \times A \to (0,1)} J_{GAIL}(h) := \mathbb{E}_{(s,a) \sim d^{\exp}}[\log h(s,a)] + \mathbb{E}_{(s,a) \sim d^\pi}[\log(1 - h(s,a))]. \quad (5)$$

In this objective, the function $h$ acts as a discriminator, distinguishing samples $(s,a)$ drawn from $d^{\exp}$ versus $d^\pi$. The optimal discriminator satisfies

$$\log h^*(s,a) - \log(1 - h^*(s,a)) = \log \frac{d^{\exp}(s,a)}{d^\pi(s,a)}, \quad (6)$$

and so the distribution matching rewards may be computed as $\tilde{r}(s,a) = \log h^*(s,a) - \log(1 - h^*(s,a))$. In practice, the discriminator is not fully optimized; instead, gradient updates to the discriminator and policy are alternated. These prior distribution matching approaches possess two limitations, which we will resolve with our proposed ValueDICE algorithm (a short sketch of the discriminator objective appears after this list):

• On-policy. Arguably the main limitation of these prior approaches is that they require access to on-policy samples from $d^\pi$. While off-policy RL can be used for learning $\pi$, optimizing the discriminator $h$ necessitates having on-policy samples (the second expectation in Equation 5). Thus, in practice, GAIL requires a prohibitively large number of environment interactions, making it infeasible for many real-world applications. Attempts to remedy this, such as Discriminator-Actor-Critic (DAC) (Kostrikov et al., 2019), often do so via ad-hoc methods; for example, changing the on-policy expectation $\mathbb{E}_{(s,a) \sim d^\pi}[\log(1 - h(s,a))]$ in Equation 5 to an expectation over the replay buffer $\mathbb{E}_{(s,a) \sim d^{RB}}[\log(1 - h(s,a))]$. While DAC achieves good empirical results, it does not guarantee distribution matching of $\pi$ to $\pi^{\exp}$, especially when $d^{RB}$ is far from $d^\pi$.

• Separate RL optimization.
Prior approaches require iteratively alternating two steps: first estimate the data distribution ratios using the GAN-like objective, then feed these into an RL optimization, and repeat. The use of a separate RL algorithm introduces complexity to any implementation of these approaches, with many additional design choices to be made and more function approximators to learn (e.g., value functions). Our ValueDICE will be shown not to need a separate RL optimization.
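As a concrete illustration of the GAN-like objective in Eq. (5) and the induced reward in Eq. (6), the sketch below trains a discriminator with the standard binary cross-entropy loss on expert versus policy samples. It is our own minimal example (fake data, arbitrary network), not the paper's code; note the policy-side batch would have to be on-policy draws from $d^\pi$, which is exactly the limitation discussed above.

```python
import torch
import torch.nn as nn

sa_dim = 6  # dimension of a concatenated (s, a) pair; arbitrary for illustration
disc = nn.Sequential(nn.Linear(sa_dim, 64), nn.Tanh(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

# Stand-ins for samples (s, a) ~ d^exp and (s, a) ~ d^pi.
expert_sa = torch.randn(256, sa_dim)
policy_sa = torch.randn(256, sa_dim) + 0.5

for _ in range(200):
    logits_exp = disc(expert_sa)
    logits_pol = disc(policy_sa)
    # Eq. (5): maximize E_exp[log h] + E_pi[log(1 - h)], i.e. minimize this BCE.
    loss = bce(logits_exp, torch.ones_like(logits_exp)) + \
           bce(logits_pol, torch.zeros_like(logits_pol))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Eq. (6): with h = sigmoid(logit), the reward log h - log(1 - h) is the raw logit.
reward = disc(policy_sa).detach()
```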
This paper presents an algorithm for adversarial imitation that uses off-policy data in a principled manner, unlike prior work. The core idea is to express the KL-divergence between the policy's state-action marginal and the expert's state-action marginal using the Donsker-Varadhan representation, and then apply a change of variables similar to DualDICE to avoid computing the marginal of the current policy, thus getting rid of the on-policy sampling requirement. The paper then shows how the auxiliary variable (critic) added to the optimization is a value function that maximizes the corresponding induced reward in AIL methods, thus unifying the objectives for policy optimization and reward learning. The authors then present the practical considerations needed to get this formulation to work, including sampling from a replay buffer, biased sampling for the exponentiated term, and avoiding the double-sampling issue. Finally, the paper presents results showing that ValueDICE is comparable to most other imitation learning methods.
SP:b63d45fa7937d0efe9d4d471ca75e52114393ea7
This paper provides a novel off-policy objective for imitation learning. It resolves a limitation of the well-known GAIL algorithm, namely its need for on-policy samples from interaction with the environment. The new algorithm is simple but efficient, and can handle off-policy settings. The derivation of equation (12) is nice and intuitive, and suggests a path toward creating new imitation learning algorithms. Empirical results show that the new algorithm performs as well as the state-of-the-art baseline in the on-policy setting.
Contextual Inverse Reinforcement Learning
1 INTRODUCTION. We study sequential decision-making in a Contextual Markov Decision Process (CMDP, Hallak et al. (2015)), where the reward, while unknown to the agent, depends on a static parameter referred to as the context. For a concrete example, consider the dynamic treatment regime (Chakraborty & Murphy, 2014). Here, there is a patient and a clinician who acts to improve the patient's health. The context is composed of static information about the patient (such as age and weight); the state is composed of the patient's dynamic measurements (such as heart rate and blood pressure); and the clinician's actions are a set of intervention categories (e.g., infusion). The reward is different for each patient (context), and there is a mapping from the context to the reward. Recent trends in personalized medicine motivate this model – instead of treating the "average patient", patients are separated into different groups for which the medical decisions are tailored (Fig. 1b). For example, in Wesselink et al. (2018), the authors study organ injury, which may occur when a specific measurement (mean arterial pressure) decreases below a certain threshold. They found that this threshold varies across different patient groups (contexts). In other examples, clinicians set treatment goals for the patients, i.e., they take actions to make the patient's measurements reach some pre-determined values. For instance, in acute respiratory distress syndrome (ARDS), clinicians argue that these treatment goals should depend on the static patient information (the context) (Berngard et al., 2016).

There are serious issues with trying to manually define a reward signal in real-world tasks. When treating patients with sepsis, for example, the only available signal is the mortality of the patient at the end of the treatment (Komorowski et al., 2018). This signal is sparse, and it is unclear how to manually tweak the reward to maximize the patient's health condition (Leike et al., 2017; Raghu et al., 2017; Lee et al., 2019). To address these issues, we propose the Contextual Inverse Reinforcement Learning (COIRL) framework. As in Inverse Reinforcement Learning (Ng & Russell, 2000, IRL), we focus on inferring the mapping from contexts to rewards by observing experts. The main challenge in our problem is that each context induces a different reward and hence a different optimal policy. Therefore, Apprenticeship Learning algorithms (Abbeel & Ng, 2004; Syed & Schapire, 2008) that try to mimic the expert cannot be used, and we instead focus on directly learning the mapping. In particular, our main contributions are:
1. We formulate COIRL with a linear mapping as a convex optimization problem.
2. We propose and analyze the sample complexity of three algorithms for COIRL: the mirror descent algorithm (MDA), evolution strategies (ES), and the ellipsoid method.
3. For nonlinear mappings, we implement deep learning versions of MDA and ES (without theoretical guarantees).
4. We compare these methods empirically in two frameworks: an autonomous driving simulator (Abbeel & Ng, 2004) and a dynamic treatment regime (Komorowski et al., 2018).

2 PRELIMINARIES.
Contextual MDPs: A Markov Decision Process (Puterman, 1994, MDP) is defined by the tuple $(S, A, P, \xi, R, \gamma)$, where $S$ is a finite state space, $A$ a finite action space, $P : S \times S \times A \to [0,1]$ the transition kernel, $\xi$ the initial state distribution, $R : S \to \mathbb{R}$ the reward function, and $\gamma \in [0,1)$ the discount factor. A Contextual MDP (Hallak et al., 2015, CMDP) is an extension of an MDP, defined by $(C, S, A, M, \gamma)$, where $C$ is the context space and $M$ is a mapping from contexts $c \in C$ to MDPs: $M(c) = (S, A, P, R_c, \xi, \gamma)$. In addition, each state is associated with a feature vector $\phi : S \to [0,1]^k$. Note that $P$ and $\xi$ are not context dependent. We consider a setting in which the reward for context $c$ is a linear combination of the state features: $R^*_c(s) = f^*(c)^T \phi(s)$. The goal is to approximate $f^*(c)$ using a function $f_W(c)$ with parameters $W$. This notation allows us to present our algorithms for any function approximator $f_W(c)$, and in particular a deep neural network (DNN). For the theoretical analysis, we further assume a linear setting, where $f^*(c) = c^T W^*$, $f_W(c) = c^T W$, and $W^*$ lies in some convex set $\mathcal{W}$. We assume that $c \in C = \Delta^{d-1}$, the standard $(d-1)$-dimensional simplex. This assumption makes the contexts bounded (which we use in our proofs), and it also allows a straightforward extension to a model in which the transitions are also a linear mapping of the context (Modi et al., 2018). One way of viewing this model is that each row of the mapping $W^*$ is a base reward coefficient vector, and the reward for a specific context is a convex combination of these base rewards.

We consider deterministic policies $\pi : S \to A$, which dictate the agent's behaviour at each state. The value of a policy $\pi$ for a reward coefficient vector $r$ is

$$V^\pi_r = \mathbb{E}_{\xi, P, \pi}\left[\sum_{t=0}^{\infty} \gamma^t R(s_t)\right] = r^T \mu(\pi), \quad \text{where } \mu(\pi) := \mathbb{E}_{\xi, P, \pi}\left[\sum_{t=0}^{\infty} \gamma^t \phi(s_t)\right] \in \mathbb{R}^k$$

is called the feature expectations of $\pi$. For the optimal policy with respect to (w.r.t.) a reward coefficient vector $r$, we denote the value by $V^*_r$. For any context $c$, $\pi^*_c$ denotes the optimal policy w.r.t. the reward $R^*_c(s) = f^*(c)^T \phi(s)$, and $\hat{\pi}_c(W)$ denotes the optimal policy w.r.t. the reward $\hat{R}_c(s) = f_W(c)^T \phi(s)$.

Inverse Reinforcement Learning in CMDPs: In standard IRL, the goal is to learn a reward which best explains the behavior of an observed expert. The model describing this scenario is the MDP\R – an MDP without a reward function (also commonly called a controlled Markov chain). Similarly, we denote a CMDP without a mapping from context to reward by CMDP\M. The goal in Contextual IRL is to approximate the mapping $f^*(c)$ by observing an expert. The expert knows $f^*(c)$ and, for each context $c$, can provide a demonstration from $\pi^*_c$.

Contextual dynamics: Learning a transition kernel and an initial state distribution parametrized by the context is a problem orthogonal to COIRL. We therefore focus only on a contextual reward, which simplifies our analysis. Existing methods, such as that of Modi et al. (2018), can be used to learn the mappings for the transition kernel and initial distribution in a contextual model. In conjunction with the simulation lemma (Kearns & Singh, 2002), these methods can extend our results to the more general CMDP setting.
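To illustrate the quantities above, the sketch below builds a made-up tabular CMDP (the transition kernel, features, mapping $W^*$, and context are all arbitrary) and computes the feature expectations $\mu(\pi)$ of a policy exactly by policy evaluation, i.e., solving the linear system $(I - \gamma P_\pi^T)\,d = \xi$ and setting $\mu = \Phi^T d$. This is our own illustration, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(2)
nS, nA, k, d, gamma = 5, 3, 4, 2, 0.9   # sizes are arbitrary for illustration

P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a] is a distribution over s'
xi = np.ones(nS) / nS                            # initial state distribution
Phi = rng.uniform(size=(nS, k))                  # state features phi(s) in [0, 1]^k
W_star = rng.normal(size=(d, k))                 # true linear mapping f*(c) = c^T W*
c = np.array([0.3, 0.7])                         # a context on the simplex

def feature_expectations(policy, P, xi, Phi, gamma):
    """mu(pi) = Phi^T (I - gamma * P_pi^T)^{-1} xi (policy evaluation as linear equations)."""
    P_pi = P[np.arange(len(policy)), policy]     # |S| x |S| transition matrix under pi
    occupancy = np.linalg.solve(np.eye(len(xi)) - gamma * P_pi.T, xi)
    return Phi.T @ occupancy

def optimal_policy(r, P, gamma, iters=200):
    """Greedy policy for state reward r via value iteration."""
    V = np.zeros(P.shape[0])
    for _ in range(iters):
        Q = r[:, None] + gamma * np.einsum('san,n->sa', P, V)
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

r_c = Phi @ (c @ W_star)                         # R*_c(s) = f*(c)^T phi(s)
pi_c = optimal_policy(r_c, P, gamma)
mu_c = feature_expectations(pi_c, P, xi, Phi, gamma)
print(r_c @ mu_c)                                # V of pi_c, via V = r^T mu(pi)
```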
3 OPTIMIZATION METHODS FOR COIRL. In this section, we propose and analyze optimization algorithms for minimizing the following loss function; Lemma 1 below justifies its use for COIRL:

$$\mathrm{Loss}(W) = \mathbb{E}_c \max_\pi \left[f_W(c) \cdot (\mu(\pi) - \mu(\pi^*_c))\right] = \mathbb{E}_c\left[f_W(c) \cdot (\mu(\hat{\pi}_c(W)) - \mu(\pi^*_c))\right]. \quad (1)$$

Lemma 1. $\mathrm{Loss}(W)$ satisfies the following properties: (1) $\forall W$, $\mathrm{Loss}(W) \ge 0$, and $\mathrm{Loss}(W^*) = 0$. (2) If $\mathrm{Loss}(W) = 0$, then $\forall c \in C$, the expert policy $\pi^*_c$ is optimal w.r.t. the reward $c^T W$.

To evaluate the loss, the optimal policy $\hat{\pi}_c(W)$ and its feature expectations $\mu(\hat{\pi}_c(W))$ must be computed for all contexts. For a specific context, finding $\hat{\pi}_c(W)$ can be done with standard RL methods such as Value or Policy Iteration. Computing $\mu(\hat{\pi}_c(W))$ is equivalent to policy evaluation (solving linear equations). The challenge is that Eq. (1) is not differentiable in $W$. We tackle this problem using two methods for computing descent directions that do not involve differentiation: (i) computing subgradients, and (ii) randomly perturbing the loss function. In addition, as the loss is defined in expectation over the contexts, computing it requires calculating the optimal policy for all contexts. We deal with this issue at the end of Section 3.1.

In the special case where $f_W(c)$ is a linear function, Eq. (1) is convex. The following lemma characterizes Eq. (1) in this case.

Lemma 2. Let $L_{lin}(W) = \mathbb{E}_c[c^T W \cdot (\mu(\hat{\pi}_c(W)) - \mu(\pi^*_c))]$. We have that: (1) $L_{lin}(W)$ is a convex function. (2) $g(W) = \mathbb{E}_c[c\,(\mu(\hat{\pi}_c(W)) - \mu(\pi^*_c))^T]$ is a subgradient of $L_{lin}(W)$. (3) $L_{lin}$ is a Lipschitz continuous function, with Lipschitz constant $L = \frac{2}{1-\gamma}$ w.r.t. $\|\cdot\|_\infty$ and $L = \frac{2\sqrt{dk}}{1-\gamma}$ w.r.t. $\|\cdot\|_2$.

A technical proof (by definition) is provided in the supplementary material. Note that $g(W) \in \mathbb{R}^{d \times k}$; we will sometimes refer to it as a matrix and sometimes as a flattened vector, and no confusion will arise.

Remark 1. The Lipschitz continuity of $L_{lin}(W)$ is related to the simulation lemma (Kearns & Singh, 2002): a small change in the reward results in a small change in the optimal value.

Remark 2. As $g(W)$ is a subgradient of $\mathrm{Loss}(W)$, it can be used to back-propagate DNNs. Clearly, we cannot guarantee convexity (hence no theoretical guarantees), but we can design $\mathrm{Loss}(W)$ to be Lipschitz continuous in $W$ using the methods presented in Cisse et al. (2017); Arjovsky et al. (2017).

Remark 3. The subgradient $g(W)$ is given in expectation over contexts and in expectation over trajectories (feature expectations). We will later see how to replace it with an unbiased estimate, which can be computed by observing a single expert trajectory for a single context.
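Building on the tabular sketch above, a stochastic subgradient step on Eq. (1) can look as follows. This is our own illustration using plain subgradient descent rather than the paper's mirror descent, ellipsoid, or ES variants, and it reuses the hypothetical `feature_expectations` and `optimal_policy` helpers (and the `P`, `xi`, `Phi`, `W_star` stand-ins) defined in the previous sketch.

```python
# Assumes the definitions from the previous sketch are in scope.
W = np.zeros((d, k))   # current estimate of W*
lr = 0.1

for step in range(200):
    c = rng.dirichlet(np.ones(d))                          # sample a context
    # Expert side: here computed from W* for illustration; in the paper,
    # mu(pi*_c) would be estimated from expert demonstrations instead.
    pi_star = optimal_policy(Phi @ (c @ W_star), P, gamma)
    mu_star = feature_expectations(pi_star, P, xi, Phi, gamma)
    # Learner side: optimal policy under the current estimated reward c^T W.
    pi_hat = optimal_policy(Phi @ (c @ W), P, gamma)
    mu_hat = feature_expectations(pi_hat, P, xi, Phi, gamma)
    # Sampled subgradient g(W) = c (mu_hat - mu_star)^T, an outer product in R^{d x k}.
    g = np.outer(c, mu_hat - mu_star)
    W -= lr * g
```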
This paper introduces a formulation of the contextual inverse reinforcement learning (COIRL) problem and proposes three algorithms for solving it. Theoretical analyses of scalability and sample complexity are conducted for the case where both the feature function and the context-to-reward mapping are linear. Experiments were conducted in both a simulated driving domain and a medical treatment domain to compare the three proposed algorithms empirically. Empirical results for using a deep network as the contextual mapping function are also provided.
SP:caa7fcf551f4ee75b6c06f05581bc5ef298fedbe
Contextual Inverse Reinforcement Learning
1 INTRODUCTION . We study sequential decision-making in a Contextual Markov Decision Process ( CMDP , Hallak et al . ( 2015 ) ) , where the reward , while unknown to the agent , depends on a static parameter referred to as the context . For a concrete example , consider the dynamic treatment regime ( Chakraborty & Murphy , 2014 ) . Here , there is a patient and a clinician which acts to improve the patient ’ s health . The context is composed of static information of the patient ( such as age and weight ) ; the state is composed of the patient ’ s dynamic measurements ( such as heart rate and blood pressure ) ; and the clinician ’ s actions are a set of intervention categories ( e.g. , infusion ) . The reward is different for each patient ( context ) , and there is a mapping from the context to the reward . Recent trends in personalized medicine motivate this model – instead of treating the ” average patient ” , patients are separated into different groups for which the medical decisions are tailored ( Fig . 1b ) . For example , in Wesselink et al . ( 2018 ) , the authors study organ injury , which may occur when a specific measurement ( mean arterial pressure ) decreases below a certain threshold . They found that this threshold varies across different patient groups ( context ) . In other examples , clinicians set treatment goals for the patients , i.e. , they take actions to make the patient measurements reach some pre-determined values . For instance , in acute respiratory distress syndrome ( ARDS ) , clinicians argue that these treatment goals should depend on the static patient information ( the context ) ( Berngard et al. , 2016 ) . There are serious issues when trying to manually define a reward signal in real-world tasks . When treating patients with sepsis , for example , the only available signal is the mortality of the patient at the end of the treatment ( Komorowski et al. , 2018 ) . This signal is sparse , and it is unclear how to manually tweak the reward to maximize the patient ’ s health condition ( Leike et al. , 2017 ; Raghu et al. , 2017 ; Lee et al. , 2019 ) . To address these issues , we propose the Contextual Inverse Reinforcement Learning ( COIRL ) framework . Similarly to Inverse Reinforcement Learning ( Ng & Russell , 2000 , IRL ) , we focus on trying to infer the mapping from contexts to rewards by observing experts . The main challenge in our problem is that for each context there is a different reward , hence , a different optimal policy for each context . Therefore , Apprenticeship Learning algorithms ( Abbeel & Ng , 2004 ; Syed & Schapire , 2008 ) that try to mimic the expert can not be used and , instead , we focus on directly learning the mapping . In particular , our main contributions are : 1 . We formulate COIRL with a linear mapping as a convex optimization problem . 2 . We propose and analyze the sample complexity of three algorithms for COIRL : the mirrored descent alg . ( MDA ) , evolution strategies ( ES ) , and the ellipsoid method . 3 . For nonlinear mappings , we implement a deep learning version for MDA and ES ( without theoretical guarantees ) . 4 . We compare these methods empirically on two frameworks : an autonomous driving simulator ( Abbeel & Ng , 2004 ) and a dynamic treatment regime ( Komorowski et al. , 2018 ) . 2 PRELIMINARIES . 
Contextual MDPs : A Markov Decision Process ( Puterman , 1994 , MDP ) is defined by the tuple ( S , A , P , ξ , R , γ ) where S is a finite state space , A a finite action space , P : S × S ×A→ [ 0 , 1 ] the transition kernel , ξ the initial state distribution , R : S → R the reward function and γ ∈ [ 0 , 1 ) is the discount factor . A Contextual MDP ( Hallak et al. , 2015 , CMDP ) is an extension of an MDP , and is defined by ( C , S , A , M , γ ) where C is the context space , andM is a mapping from contexts c ∈ C to MDPs : M ( c ) = ( S , A , P , Rc , ξ , γ ) . In addition , each state is associated with a feature vector φ : S → [ 0 , 1 ] k. Note that P and ξ are not context dependent . We consider a setting in which the reward for context c is a linear combination of the state features : R∗c ( s ) = f ∗ ( c ) Tφ ( s ) . The goal is to approximate f∗ ( c ) using a function fW ( c ) , with parameters W . This notation allows us to present our algorithms for any function approximator fW ( c ) , and in particular , a deep neural network ( DNN ) . For the theoretical analysis , we will further assume a linear setting , where f∗ ( c ) = cTW ∗ , fW ( c ) = cTW and that W ∗ is in some convex setW . We assume that c ∈ C = ∆d−1 , the standard d− 1 dimensional simplex . This assumption makes the contexts bounded ( which we use in our proofs ) , and it also allows a straight-forward expansion to a model in which the transitions are also a linear mapping of the context ( Modi et al. , 2018 ) . One way of viewing this model is that each row in the mapping W ∗ is a base rewards coefficient vector , and the reward for a specific context is a convex combination of these base rewards . We consider deterministic policies π : S → A which dictate the agent ’ s behaviour at each state . The value of a policy π for reward coefficients vector r is : V πr = Eξ , P , π [ ∑∞ t=0 γ tR ( st ) ] = r Tµ ( π ) where µ ( π ) : = Eξ , P , π [ ∑∞ t=0 γ tφ ( st ) ] ∈ Rk is called the feature expectations of π . For the optimal policy with respect to ( w.r.t . ) a reward coefficients vector r , we denote the value by V ∗r . For any context c , π∗c denotes the optimal policy w.r.t . reward R ∗ c ( s ) = f ∗ ( c ) Tφ ( s ) and π̂c ( W ) denotes the optimal policy w.r.t . reward R̂c ( s ) = fW ( c ) Tφ ( s ) . Inverse Reinforcement Learning in CMDPs : In standard IRL , the goal is to learn a reward which best explains the behavior of an observed expert . The model describing this scenario is the MDP\R - an MDP without a reward function ( also commonly called a controlled Markov chain ) . Similarly , we denote a CMDP without a mapping of context to reward by CMDP\M . The goal in Contextual IRL is to approximate the mapping f∗ ( c ) by observing an expert . The expert knows f∗ ( c ) , and for each context c , can provide a demonstration from π∗c . Contextual dynamics : Learning a transition kernel and an initial state distribution that is parametrized by the context is an orthogonal problem to COIRL . Therefore , we focus only on a contextual reward which simplifies our analysis . Existing methods , such as in Modi et al . ( 2018 ) , can be used to learn the mappings for the transition kernel and initial distribution in a contextual model . In conjunction with the simulation lemma ( Kearns & Singh , 2002 ) , these methods can extend our results to the more general CMDP setting . 3 OPTIMIZATION METHODS FOR COIRL . 
3 OPTIMIZATION METHODS FOR COIRL.

In this section, we propose and analyze optimization algorithms for minimizing the following loss function; Lemma 1 below justifies its use for COIRL:
$$\mathrm{Loss}(W) = \mathbb{E}_c \max_{\pi} \left[ f_W(c) \cdot \left(\mu(\pi) - \mu(\pi^*_c)\right) \right] = \mathbb{E}_c \left[ f_W(c) \cdot \left(\mu(\hat{\pi}_c(W)) - \mu(\pi^*_c)\right) \right]. \quad (1)$$
Lemma 1. $\mathrm{Loss}(W)$ satisfies the following properties: (1) $\forall W$, $\mathrm{Loss}(W) \ge 0$, and $\mathrm{Loss}(W^*) = 0$. (2) If $\mathrm{Loss}(W) = 0$, then $\forall c \in \mathcal{C}$, the expert policy $\pi^*_c$ is the optimal policy w.r.t. the reward $c^T W$. To evaluate the loss, the optimal policy $\hat{\pi}_c(W)$ and its feature expectations $\mu(\hat{\pi}_c(W))$ must be computed for all contexts. For a specific context, finding $\hat{\pi}_c(W)$ can be done with standard RL methods such as Value or Policy Iteration. Computing $\mu(\hat{\pi}_c(W))$ is equivalent to policy evaluation (solving linear equations). The challenge is that Eq. (1) is not differentiable in $W$. We tackle this problem using two methods for computing descent directions that do not involve differentiation: (i) computing subgradients and (ii) randomly perturbing the loss function. In addition, as the loss is defined in expectation over the contexts, computing it requires calculating the optimal policy for all contexts. We deal with this issue at the end of Section 3.1. In the special case that $f_W(c)$ is a linear function, Eq. (1) is convex. The following lemma characterizes Eq. (1) in this case. Lemma 2. Let $L_{lin}(W) = \mathbb{E}_c\left[c^T W \cdot \left(\mu(\hat{\pi}_c(W)) - \mu(\pi^*_c)\right)\right]$. We have that: (1) $L_{lin}(W)$ is a convex function. (2) $g(W) = \mathbb{E}_c\left[c\,\left(\mu(\hat{\pi}_c(W)) - \mu(\pi^*_c)\right)^T\right]$ is a subgradient of $L_{lin}(W)$. (3) $L_{lin}$ is a Lipschitz continuous function, with Lipschitz constant $L = \frac{2}{1-\gamma}$ w.r.t. $\|\cdot\|_\infty$ and $L = \frac{2\sqrt{dk}}{1-\gamma}$ w.r.t. $\|\cdot\|_2$. A technical proof (by definition) is provided in the supplementary material. Note that $g(W) \in \mathbb{R}^{d \times k}$; we will sometimes refer to it as a matrix and sometimes as a flattened vector, and no confusion will arise. Remark 1. The Lipschitz continuity of $L_{lin}(W)$ is related to the simulation lemma (Kearns & Singh, 2002): a small change in the reward results in a small change in the optimal value. Remark 2. As $g(W)$ is a subgradient of $\mathrm{Loss}(W)$, it can be used to back-propagate through DNNs. Clearly, we cannot guarantee convexity (hence no theoretical guarantees), but we can design $\mathrm{Loss}(W)$ to be Lipschitz continuous in $W$ using the methods presented in Cisse et al. (2017); Arjovsky et al. (2017). Remark 3. The subgradient $g(W)$ is given in expectation over contexts, and in expectation over trajectories (feature expectations). We will later see how to replace it with an unbiased estimate, which can be computed by observing a single expert trajectory for a single context.
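To make Lemma 2 concrete, the sketch below performs one subgradient step on $L_{lin}(W)$ using $g(W) = \mathbb{E}_c\left[c\,(\mu(\hat{\pi}_c(W)) - \mu(\pi^*_c))^T\right]$. The helpers `solve_policy` and `feature_expectations` (e.g., the routine sketched above) are assumed to exist, and the projection step is left as a problem-specific placeholder; this illustrates the descent direction, not the paper's full MDA/ES/ellipsoid algorithms.

```python
import numpy as np

def coirl_subgradient_step(W, contexts, expert_mu, solve_policy, feature_expectations, lr):
    """One subgradient step on L_lin(W), following Lemma 2.

    contexts     : list of context vectors c in R^d
    expert_mu[i] : feature expectations mu(pi*_c) of the expert for contexts[i]
    """
    g = np.zeros_like(W)
    for c, mu_star in zip(contexts, expert_mu):
        r = c @ W                              # reward coefficients for context c
        pi_hat = solve_policy(r)               # e.g., value iteration
        mu_hat = feature_expectations(pi_hat)  # policy evaluation
        g += np.outer(c, mu_hat - mu_star)     # c (mu_hat - mu_star)^T
    g /= len(contexts)
    W_next = W - lr * g  # plain subgradient step; MDA uses a mirror map here instead
    # A projection onto the convex set W (problem-specific) would be applied here.
    return W_next
```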
This work focuses on the problem of 'contextual' inverse reinforcement learning, where the reward is a function of the current state of the MDP, and a set of context features, which remain constant within each episode. The primary contribution of this work is the formulation of inverse reinforcement learning (for restricted spaces of context-dependent reward functions) as a convex optimization problem. Based on this formulation, the paper describes several IRL algorithms based on approaches to solving convex and non-convex optimization problems, including variations of mirror descent, and evolution strategies (in principle allowing for the optimization of reward functions with arbitrary parametric representations). The algorithms presented in this work all assume that computing an optimal policy for a specific reward function is a relatively inexpensive subroutine, which limits their applicability to domains where such planning is straightforward. Experimental results are presented for a simple highway driving domain, as well as a simulated patient treatment domain constructed from real-world clinical data.
SP:caa7fcf551f4ee75b6c06f05581bc5ef298fedbe
MULTI-STAGE INFLUENCE FUNCTION
Multi-stage training and knowledge transfer from a large-scale pretraining task to various finetuning tasks have revolutionized natural language processing (NLP) and computer vision (CV), with state-of-the-art performances constantly being improved. In this paper, we develop a multi-stage influence function score to track predictions from a finetuned model all the way back to the pretraining data. With this score, we can identify the pretraining examples in the pretraining task that contribute most to a prediction in the finetuning task. The proposed multi-stage influence function generalizes the original influence function for a single model in Koh & Liang (2017), thereby enabling influence computation through both pretrained and finetuned models. We study two different scenarios with the pretrained embeddings fixed or updated in the finetuning tasks. We test our proposed method in various experiments to show its effectiveness and potential applications.

1 INTRODUCTION.

Multi-stage training has become increasingly important and has achieved state-of-the-art results in many tasks. In NLP applications, it is now a common practice to first learn word embeddings (e.g., word2vec (Mikolov et al., 2013), GloVe (Pennington et al., 2014)) or contextual representations (e.g., ELMo (Peters et al., 2018), BERT (Devlin et al., 2018)) from a large unsupervised corpus, and then refine or finetune the model on supervised end tasks. In computer vision applications, it is common to use a pretrained CNN as a feature extractor and only finetune top-layer networks through training on the end task. Also, it has been demonstrated that pretraining ResNet (He et al., 2016) with large hashtag data can greatly benefit many end tasks (Mahajan et al., 2018). Intuitively, the successes of these multi-stage learning paradigms are due to knowledge transfer from pretraining tasks to the end task. However, current approaches using multi-stage learning are usually based on trial-and-error, and many fundamental questions remain unanswered. For example, which part of the pretraining data/task contributes most to the end task? How can one detect "false transfer", where some pretraining data/task could be harmful for the end task? If a testing point is wrongly predicted by the finetuned model, can we trace back to the problematic examples in the pretraining data? Answering these questions requires a quantitative measurement of how the data and loss function in the pretraining stage influence the end model, which has not been studied in the past and will be the main focus of this paper. To find the most influential training data responsible for a model's prediction, the influence function was first introduced by Cook & Weisberg (1980) from a robust statistics point of view. Recently, as large-scale applications become more challenging for influence function computation, Koh & Liang (2017) proposed to use a first-order approximation to measure the effect of removing one training point on the model's prediction, to overcome computational challenges. More broadly, there are many works using influence functions to investigate the impact of training data on models in various machine learning applications, such as tracing back the origins of bias in the word embeddings generated by GloVe (Brunet et al., 2019), and understanding and mitigating disparate impact to improve model fairness (Wang et al., 2019).
However, all of the existing influence score computation algorithms study the case of single-stage training, where there is only one model with one set of training/prediction data in the training process. To the best of our knowledge, the influence of pretraining data on a subsequent finetuning task and model has not been studied, and it is nontrivial to apply the original influence function in (Koh & Liang, 2017) to this scenario. In this work, we derive the influence function from pretraining data to the end task in multi-stage training. Since the computation involves several expensive Hessian-vector products, we also show how to compute the influence function efficiently in large-scale problems. Based on this technique, we show that:
• In real datasets and experiments across various vision and NLP tasks, predictions using the technique and the actual influence of the pretraining data on the finetuned model are highly correlated (Pearson's r around 0.6). This shows the effectiveness of our proposed technique for computing influence scores in multi-stage models.
• The influence of the pretraining data on the finetuned model can be split into two parts: the influence of the pretraining data on the pretrained model, and the influence of the pretraining data on the finetuned model. Therefore, the testing data from the finetuning task will be impacted by changes in the pretraining data, which can be quantified using our proposed technique.
• The influence of the pretraining data on the finetuning task is highly dependent on 1) the similarity of the two tasks or stages, and 2) the number of training data in the finetuning task. Thus, our proposed technique provides a novel way to measure how the pretraining data helps or benefits the finetuning task.

2 RELATED WORK.

Multi-stage model training, which trains models in many stages on different tasks to improve the end task, has been used widely in many machine learning areas, such as transfer learning (Ando & Zhang, 2005) and zero-shot learning (Larochelle et al., 2008). Recently, multi-stage model training has achieved state-of-the-art performance by learning large embeddings or data representations as the pretraining step on a very large pretraining dataset, which is then followed by a finetuning step with further training on the end task. Examples include the recently proposed BERT (Devlin et al., 2018), which learns contextual embeddings on a large corpus with the pretraining tasks chosen to be predicting masked words in a sentence and predicting whether one sentence follows another. This contextual embedding is then used in finetuning tasks, such as question answering. ELMo (Peters et al., 2018) is widely used in multi-stage model training as a sentence feature extractor to benefit the end task. Similarly, there are works in computer vision that train an image representation model on a large number of images as the pretraining step, and then use the resulting features to finetune another task, such as a particular image classification task. For example, Mahajan et al. (2018) use ResNet in the pretraining step, where the pretraining task is based on hashtag data. The rationale for this multi-stage model is that the pretraining task can learn some common or latent representation which could benefit the end task. Another related line of research is on understanding machine learning models.
One category of research is to explain predictions with respect to model variables, and trace back the contribution of variables to the prediction. For example, Oramas et al. (2019) automatically detect internal features in the set of classes in the pretrained model that are relevant to interpreting the prediction, and show for various vision tasks that the proposed scheme can produce detailed explanations based on the features that are relevant to the targeted classes. Guo et al. (2019) aim to interpret variable-wise hidden states in an LSTM model to quantify variable importance and variable-wise temporal importance in the model. Closely related research has sought to connect model predictions and training data, and trace back the most influential training data that are most responsible for the model prediction. Among them, the influence function (Cook & Weisberg, 1980; Koh & Liang, 2017), which aims to model the prediction changes when training data is added/removed, has been shown to be effective in many applications. There is a series of works on influence functions, including investigating the influence of a group of data on the prediction (Koh et al., 2019), using influence functions to detect bias in word embeddings (Brunet et al., 2019), and using them to prevent data poisoning attacks (Steinhardt et al., 2017), etc. All of these works only consider a single-stage training procedure, and it is not straightforward to apply the existing influence functions to multi-stage models. In this paper, we propose to analyze the influence of pretraining data on predictions in the subsequent finetuned model and end task.

3 ALGORITHMS.

In this section, we detail the procedure of multi-stage training, show how to compute the influence score for multi-stage training, and then discuss how to scale up the computation.

3.1 MULTI-STAGE MODEL TRAINING.

Multi-stage models, which train different models in consecutive stages, have been widely used in various ML tasks. Mathematically, let $\mathcal{Z}$ be the training set for the pretraining task with data size $|\mathcal{Z}| = m$, and $\mathcal{X}$ be the training data for the finetuning task with data size $|\mathcal{X}| = n$. In the pretraining stage, we assume the parameters of the pretrained network have two parts: the parameters $W$ that are shared with the end task, and the task-specific parameters $U$ that will only be used in the pretraining stage. Note that $W$ could be a word embedding matrix (e.g., in word2vec) or a representation extraction network (e.g., ELMo, BERT, ResNet), while $U$ is usually the last few layers that correspond to the pretraining task. After training on the pretraining task, we obtain the optimal parameters $W^*, U^*$. The pretraining stage can be formulated as
$$\text{Pretrain Stage:} \quad W^*, U^* = \arg\min_{W,U} \frac{1}{m} \sum_{z \in \mathcal{Z}} g(z, W, U) := \arg\min_{W,U} G(W, U), \quad (1)$$
where $g(\cdot)$ is the loss function for the pretraining task. In the finetuning stage, the network parameters are $W, \Theta$, where $W$ is shared with the pretraining task and $\Theta$ is the rest of the parameters specifically associated with the finetuning task. We initialize the $W$ part with $W^*$. Letting $f(\cdot)$ denote the finetuning loss, there are two cases when finetuning the end task (a training sketch covering both cases follows below):
• Finetuning Case 1: Fix the embedding parameters $W = W^*$, and only finetune $\Theta$:
$$\Theta^* = \arg\min_{\Theta} \frac{1}{n} \sum_{x \in \mathcal{X}} f(x, W^*, \Theta) := \arg\min_{\Theta} F(W^*, \Theta). \quad (2)$$
• Finetuning Case 2: Finetune both the embedding parameters $W$ (initialized from $W^*$) and $\Theta$.
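As a concrete illustration of Eq. (1), Eq. (2), and the two finetuning cases, here is a minimal PyTorch-style sketch of the two training stages. The module, loader, and loss names are hypothetical; the paper itself does not prescribe this code.

```python
import torch

def pretrain(encoder, head_u, pretrain_loader, loss_g, epochs=1, lr=1e-3):
    """Stage 1: minimize G(W, U) = (1/m) sum_z g(z, W, U) over shared W and task head U."""
    opt = torch.optim.Adam(list(encoder.parameters()) + list(head_u.parameters()), lr=lr)
    for _ in range(epochs):
        for z, y in pretrain_loader:
            opt.zero_grad()
            loss = loss_g(head_u(encoder(z)), y)
            loss.backward()
            opt.step()

def finetune(encoder, head_theta, finetune_loader, loss_f, freeze_w=True, epochs=1, lr=1e-3):
    """Stage 2: Case 1 freezes W (= W*); Case 2 (freeze_w=False) updates both W and Theta."""
    for p in encoder.parameters():
        p.requires_grad_(not freeze_w)
    params = list(head_theta.parameters())
    if not freeze_w:
        params += list(encoder.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        for x, y in finetune_loader:
            opt.zero_grad()
            loss = loss_f(head_theta(encoder(x)), y)
            loss.backward()
            opt.step()
```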
Sometimes updating the embedding parameters $W$ in the finetuning stage is necessary, as the embedding parameters from the pretrained model may not be good enough for the finetuning task. This corresponds to the following formulation:
$$W^{**}, \Theta^* = \arg\min_{W,\Theta} \frac{1}{n} \sum_{x \in \mathcal{X}} f(x, W, \Theta) := \arg\min_{W,\Theta} F(W, \Theta). \quad (3)$$

3.2 INFLUENCE FUNCTION FOR MULTI-STAGE MODELS.

We derive the influence function for the multi-stage model to trace the influence of pretraining data on the finetuned model. In Figure 1 we show the task we are interested in solving in this paper. Note that we use the same definition of the influence function as (Koh & Liang, 2017) and discuss how to compute it in the multi-stage training scenario. As discussed at the end of Section 3.1, depending on whether or not we update the shared parameters $W$ in the finetuning stage, we will derive the influence functions under two different scenarios.
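For reference, the following sketches the single-model influence computation of Koh & Liang (2017) that the multi-stage score builds on: the score $-\nabla_\theta L(x_{test})^T H^{-1} \nabla_\theta L(z)$, with the inverse-Hessian-vector product approximated by conjugate gradient and Hessian-vector products computed by double backpropagation. This is a background sketch under standard assumptions (e.g., a positive-definite Hessian), not the paper's multi-stage derivation.

```python
import torch

def hvp(loss, params, vec):
    """Hessian-vector product via double backprop (Pearlmutter trick)."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat = torch.cat([g.reshape(-1) for g in grads])
    hv = torch.autograd.grad((flat * vec).sum(), params, retain_graph=True)
    return torch.cat([h.reshape(-1) for h in hv])

def influence_single_stage(train_loss, test_loss, z_loss, params, cg_steps=50):
    """I(z, x_test) ~= -grad L(x_test)^T H^{-1} grad L(z), solving H s = v by CG."""
    v = torch.cat([g.reshape(-1) for g in
                   torch.autograd.grad(test_loss, params, retain_graph=True)])
    s, r, p = torch.zeros_like(v), v.clone(), v.clone()
    for _ in range(cg_steps):  # conjugate gradient on the (assumed PD) Hessian
        Hp = hvp(train_loss, params, p)
        alpha = (r @ r) / (p @ Hp + 1e-12)
        s = s + alpha * p
        r_new = r - alpha * Hp
        p = r_new + ((r_new @ r_new) / (r @ r + 1e-12)) * p
        r = r_new
    g_z = torch.cat([g.reshape(-1) for g in
                     torch.autograd.grad(z_loss, params, retain_graph=True)])
    return -(s @ g_z)
```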
The authors derive the influence function of models that are first pre-trained and then fine-tuned. This extends influence functions beyond the standard supervised setting that they have been primarily considered in. To do so, the authors make two methodological contributions: 1) working through the calculus for the pre-training setting and deriving a corresponding efficient algorithm, and 2) adding $L_2$ regularization to approximate the effect of fine-tuning for a limited number of gradient steps.
This is an analysis paper on pretraining using the tool of "influence functions". First, the authors calculate the influence score for models with/without pretraining, and then propose some implementation details (i.e., using CG to estimate the inverse Hessian). To calculate the influence function of a model with pretraining, the authors use an approximation f(w)+||w-w*||, where w* is pretrained.
SP:5406872be7f8a36576284f9a18ecb76d658bf25c
Difference-Seeking Generative Adversarial Network--Unseen Sample Generation
1 INTRODUCTION.

Unseen data¹ are not samples from the distribution of the training data and are difficult to collect. It has been demonstrated that unseen samples can be applied in several applications. Dai et al. (2017) proposed how to create complement data, and theoretically showed that complement data, considered as unseen data, could improve semi-supervised learning. In novelty detection, Yu et al. (2017) proposed a method to generate unseen data and used them to train an anomaly detector. Another related area is adversarial training (Goodfellow et al., 2015), where classifiers are trained to resist adversarial examples, which are unseen during the training phase. However, the aforementioned methods only focus on producing specific types of unseen data, instead of enabling the generation of general types of unseen data. In this paper, we propose a general framework, called the difference-seeking generative adversarial network (DSGAN), to generate a variety of unseen data. The DSGAN is a generative approach. Traditionally, generative approaches, which are usually conducted in an unsupervised learning manner, are developed for learning the data distribution from its samples, from which they subsequently produce novel and high-dimensional samples, such as synthesized images (Saito et al., 2018). A state-of-the-art approach is the so-called generative adversarial network (GAN) (Goodfellow et al., 2014a). GAN produces sharp images based on a game-theoretic framework, but it can be difficult and unstable to train owing to multiple interacting losses. Specifically, GAN consists of two functions: a generator and a discriminator. Both functions are represented as parameterized neural networks. The discriminator network is trained to determine whether its inputs belong to the real dataset or the fake dataset created by the generator. The generator learns to map a sample from a latent space to some distribution so as to increase the classification errors of the discriminator.
Footnote 1: In traditional machine learning scenarios, "unseen" data corresponds to data that is not used or seen during the training stage but rather the testing stage. The distribution of "unseen" data could be the same as or different from the "seen" data, according to the application. In this paper, we focus on the scenario where the two distributions are different.
Nevertheless, if a generator is to learn to create unseen data, a traditional GAN requires numerous training samples of unseen classes for training, leading to a contradiction with the definition of unseen data. This fact motivates us to present the DSGAN, which can generate unseen data by adopting seen data as training samples (see Fig. 9, which illustrates the difference between GAN and the DSGAN, in Appendix A). The key concept is to consider the distribution of the unseen data as the difference between two distributions that are relatively easy to obtain. For example, the out-of-distribution examples of the MNIST dataset, from another perspective, belong to the difference between the set of examples in MNIST and the universal set. It should be noted that in traditional GAN, the target distribution is identical to the training data distribution; in the DSGAN, however, these two distributions are considered to be different. This paper makes the following contributions:
(1) We propose the DSGAN to generate any unseen data, provided the density of the target (unseen data) distribution is the difference between those of any two distributions, $p_{\bar{d}}$ and $p_d$.
(2) We show that the DSGAN possesses the flexibility to learn different target (unseen data) distributions in two key applications: semi-supervised learning and novelty detection. Specifically, for novelty detection, the DSGAN can produce boundary points around the seen data, because this type of unseen data is easily misclassified. For semi-supervised learning, the unseen data are linear combinations of any labeled data and unlabeled data, excluding the labeled and unlabeled data themselves².
Footnote 2: The linear combination of any labeled data and unlabeled data probably belongs to the set of seen data (labeled data and unlabeled data), which contradicts the definition of unseen data. Thus, the samples generated by the DSGAN should not include the seen data themselves.
(3) The DSGAN yields results comparable to those of existing semi-supervised learning methods, but with a short training time and low memory consumption. In novelty detection, combining the DSGAN and variational auto-encoder (VAE, Kingma & Welling (2014b)) methods achieves state-of-the-art results.

2 PROPOSED METHOD: DSGAN.

2.1 FORMULATION.

We denote the generator distribution as $p_g$ and the training data distribution as $p_d$, both in an $N$-dimensional space. Let $p_{\bar{d}}$ be a distribution decided by the user. For example, $p_{\bar{d}}$ can be the convolution of $p_d$ and a normal distribution. Let $p_t$ be the target distribution that the user is interested in; it can be expressed via
$$(1-\alpha)\, p_t(x) + \alpha\, p_d(x) = p_{\bar{d}}(x), \quad (1)$$
where $\alpha \in [0,1]$. Our method, the DSGAN, aims to learn $p_g$ such that $p_g = p_t$. Note that if the support set of $p_d$ belongs to that of $p_{\bar{d}}$, then there exists at least one $\alpha$ such that the equality in (1) holds. However, even if the equality does not hold, intuitively, the DSGAN attempts to learn $p_g$ such that
$$p_g(x) \sim \frac{p_{\bar{d}}(x) - \alpha\, p_d(x)}{1-\alpha}$$
under the constraint $p_g(x) \ge 0$. Specifically, the generator will output samples located in the high-density areas of $p_{\bar{d}} - \alpha p_d$. Furthermore, we show in Theorem 1 that the DSGAN can learn a $p_g$ whose support set is the difference between those of $p_{\bar{d}}$ and $p_d$. First, we formulate the generator and discriminator in GANs. The inputs $z$ of the generator are drawn from $p_z(z)$ in an $M$-dimensional space. The generator function $G(z; \theta_g) : \mathbb{R}^M \to \mathbb{R}^N$ represents a mapping to the data space, where $G$ is a differentiable function with parameters $\theta_g$. The discriminator is defined as $D(x; \theta_d) : \mathbb{R}^N \to [0,1]$, which outputs a single scalar. $D(x)$ can be considered the probability that $x$ belongs to the class of real data. Similar to traditional GAN, we train $D$ to distinguish the real data from the fake data sampled from $G$. Concurrently, $G$ is trained to produce realistic data that can mislead $D$. However, in the DSGAN, the definitions of "real data" and "fake data" are different from those in traditional GAN. The samples from $p_{\bar{d}}$ are considered real, but those from the mixture distribution between $p_d$ and $p_g$ are considered fake. The objective function is defined as follows:
$$V(G, D) := \mathbb{E}_{x \sim p_{\bar{d}}(x)}[\log D(x)] + (1-\alpha)\, \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))] + \alpha\, \mathbb{E}_{x \sim p_d(x)}[\log(1 - D(x))]. \quad (2)$$
We optimize (2) by a min-max game between $G$ and $D$, i.e., $\min_G \max_D V(G, D)$.
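A minimal sketch of the objective in Eq. (2) follows, written as discriminator/generator losses in PyTorch. Treating $D$ as outputting logits and using the common non-saturating surrogate for the generator are implementation assumptions, not choices stated in the text.

```python
import torch
import torch.nn.functional as F

def dsgan_losses(D, G, x_dbar, x_d, z, alpha):
    """DSGAN losses for Eq. (2): samples from p_dbar are 'real'; samples from
    p_d and from the generator are 'fake', weighted by alpha and (1 - alpha).
    D is assumed to return raw logits."""
    bce = F.binary_cross_entropy_with_logits
    fake = G(z)

    # Discriminator ascends V(G, D); fake is detached so d_loss does not update G.
    d_real = D(x_dbar)
    d_fake_g = D(fake.detach())
    d_fake_d = D(x_d)
    d_loss = (bce(d_real, torch.ones_like(d_real))
              + (1.0 - alpha) * bce(d_fake_g, torch.zeros_like(d_fake_g))
              + alpha * bce(d_fake_d, torch.zeros_like(d_fake_d)))

    # Generator: non-saturating surrogate for minimizing log(1 - D(G(z))).
    g_logits = D(fake)
    g_loss = bce(g_logits, torch.ones_like(g_logits))
    return d_loss, g_loss
```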
During the training procedure, an iterative approach, as in traditional GAN, is to alternate between $k$ steps of training $D$ and one step of training $G$. In practice, minibatch stochastic gradient descent via backpropagation is used to update $\theta_d$ and $\theta_g$. Thus, for each of $p_g$, $p_d$, and $p_{\bar{d}}$, $m$ samples are required for computing the gradients, where $m$ is the number of samples in a minibatch. The training procedure is illustrated in Algorithm 1 in Appendix A. The DSGAN suffers from the same drawbacks as traditional GAN (e.g., mode collapse, overfitting, and a strong discriminator causing the generator gradient to vanish). There is a body of literature (Salimans et al., 2016; Arjovsky & Bottou, 2017; Miyato et al., 2018) focusing on these problems, and such techniques can readily be combined with the DSGAN. Li et al. (2017) and Reed et al. (2016) proposed objective functions similar to (2). Their goal was to learn the conditional distribution of the training data. However, we aim to learn the target distribution $p_t$ in Eq. (1), and not the training data distribution.

2.2 CASE STUDY ON VARIOUS UNSEEN DATA GENERATION.

To achieve a more intuitive understanding of the DSGAN, we conduct several case studies on two-dimensional (2D) synthetic datasets and MNIST. In Eq. (1), $\alpha = 0.8$ is used.

Figure 4: Illustration of the difference-set seeking in MNIST.
Figure 5: DSGAN learns the difference between two sets.

Complement samples generation: Fig. 1 illustrates that the DSGAN can generate complement samples between two circles. Denoting the density function of the two circles as $p_d$, we assign the samples drawn from $p_{\bar{d}}$ to be linear combinations of the two circles. Then, by applying the DSGAN, we achieve our goal of generating complement samples. In fact, this type of unseen data is used in semi-supervised learning. Boundary samples generation: Fig. 2 illustrates that the DSGAN generates boundary points between four circles. This type of unseen data is used in novelty detection. In this case, we assign $p_d$ and $p_{\bar{d}}$ to be "the density function of four circles" and "the convolution of $p_d$ and a normal distribution", respectively. The basis of our concept is also illustrated by a one-dimensional (1D) example in Fig. 3. Difference-set generation: We also validate the DSGAN on a high-dimensional dataset, MNIST. In this example, we define $p_d$ as the distribution of digit "1" and $p_{\bar{d}}$ as the distribution containing the two digits "1" and "7". Because the density $p_d(x)$ is high when $x$ is digit "1", the generator is prone to output digit "7" with high probability. More samples generated by the DSGAN on CelebA can be found in Appendix G. From the above results, we can observe two properties of the generator distribution $p_g$: i) the higher the density of $p_d(x)$, the lower the density of $p_g(x)$; ii) $p_g$ prefers to output samples from the high-density areas of $p_{\bar{d}}(x) - \alpha p_d(x)$.

2.3 DESIGNING $p_{\bar{d}}$.

Thus far, we have demonstrated how the DSGAN can produce various types of unseen data by choosing a specific $p_{\bar{d}}$. In this section, we introduce a standard procedure for designing $p_{\bar{d}}$, and illustrate each step with pictures. Step 1. First, the training data, $p_d$, are collected (Fig. 6(a)). Step 2. Second, based on the application, the desired unseen data distribution is defined (e.g., complement samples for semi-supervised learning) (Fig. 6(b)).
In the above procedure, the most important step is to determine which types of unseen data are suitable for a specific problem (Step 2). In this paper, we show two types of unseen data, which are useful in semi-supervised learning and novelty detection. However, determining all types of unseen data for all applications is beyond the scope of this study, and we leave this for future work. Furthermore, we provide a method (see Appendix B in the supplementary material) that reformulates the objective function (2) so that the DSGAN is more stable to train.
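Tying the pieces together, here is a sketch of the alternating training scheme described in Section 2.1 (Algorithm 1 in Appendix A of the paper), reusing the hypothetical `dsgan_losses` helper from the earlier sketch; the samplers and hyperparameters are illustrative.

```python
import torch

def train_dsgan(D, G, sample_dbar, sample_d, sample_z, alpha, iters=10000, k=1, lr=2e-4):
    """Alternate k discriminator steps with one generator step.

    sample_dbar / sample_d / sample_z are assumed callables returning
    minibatches from p_dbar, p_d, and p_z respectively.
    """
    opt_d = torch.optim.Adam(D.parameters(), lr=lr, betas=(0.5, 0.999))
    opt_g = torch.optim.Adam(G.parameters(), lr=lr, betas=(0.5, 0.999))
    for _ in range(iters):
        for _ in range(k):  # k discriminator updates
            d_loss, _ = dsgan_losses(D, G, sample_dbar(), sample_d(), sample_z(), alpha)
            opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        # one generator update
        _, g_loss = dsgan_losses(D, G, sample_dbar(), sample_d(), sample_z(), alpha)
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```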
This paper proposes DSGAN, which learns to generate unseen data from the seen data distribution p_d and a somewhat "broadened" version p_{\hat d} (e.g., p_d convolved with a Gaussian). The "unseen data" are those that appear in p_{\hat d} but not in p_d. DSGAN is trained to generate such data. In particular, it uses samples from p_d as fake data and samples from p_{\hat d} as real data.
This paper provides an interesting application of GANs: generating the outlier distribution of the training data, which forces the generator to learn the distribution of the low-probability-density area of the given data. To show the effectiveness of the method, the authors intuitively show how it works on 2D point data as well as on the reconstructed MNIST dataset. Additionally, the approach reaches comparable performance on semi-supervised learning and novelty detection tasks.
SP:1879f9692f92bb2249772e14f27839d9e426f9b3
Generative Cleaning Networks with Quantized Nonlinear Transform for Deep Neural Network Defense
1 INTRODUCTION.

Recent research has shown that deep neural networks are sensitive to adversarial attacks (Szegedy et al., 2013). Very small changes of the input image can fool a state-of-the-art classifier with very high success probability. During the past few years, a number of methods have been proposed to construct adversarial samples to attack deep neural networks, including the fast gradient sign (FGS) method (Goodfellow et al., 2014b), the Jacobian-based saliency map attack (J-BSMA) (Papernot et al., 2016a), and the projected gradient descent (PGD) attack (Kurakin et al., 2016; Madry et al., 2018). It has also been demonstrated that different classifiers can be attacked by the same adversarial perturbation (Szegedy et al., 2013). The fragility of deep neural networks and the availability of these powerful attack methods present an urgent need for effective defense methods. During the past few years, a number of deep neural network defense methods have been developed, including adversarial training (Kurakin et al., 2016; Szegedy et al., 2013), defensive distillation (Papernot et al., 2016b; Carlini & Wagner, 2016; Papernot & McDaniel, 2016), MagNet (Meng & Chen, 2017), and feature squeezing (He et al., 2017; Xu et al., 2017). It has been recognized that these methods suffer from significant performance degradation under strong attacks, especially white-box attacks with large magnitudes and many iterations (Samangouei et al., 2018). In this work, we explore a new approach to defend against various attacks by developing a generative cleaning network with a quantized nonlinear transform. We recognize that the attack noise is not random and has sophisticated patterns. The attackers often generate noise patterns by exploring the specific network architecture or classification behavior of the target deep neural network, so that the small noise at the input layer can accumulate along the network inference layers, finally exceed the decision threshold at the output layer, and result in a false decision. On the other hand, we know that well-trained deep neural networks are robust to random noise (Arjovsky et al., 2017), such as Gaussian noise. Therefore, the key issue in network defense is to randomize or destroy the sophisticated pattern of the attack noise while preserving the original image content. Motivated by this observation, we design a new generative cleaning network with a quantized nonlinear transform to first destroy the sophisticated noise patterns of adversarial attacks and then recover the original image content damaged during this nonlinear transform. We also construct a detector network which serves as the dual network for the target classifier to be defended. The generative cleaning network and the detector network are jointly trained using adversarial learning, so that the detector network cannot detect the existence of attack noise patterns in the images recovered by the generative cleaning network. Our extensive experimental results demonstrate that our approach outperforms the state-of-the-art methods by large margins in both white-box and black-box attacks. It significantly improves the classification accuracy under white-box PGD attacks upon the second-best method by more than 40% on the SVHN dataset, from 46.90% to 93.80%, and by more than 20% on the challenging CIFAR-10 dataset, from 60.15% to 86.05%. The major contributions of this work can be summarized as follows.
(1) We have proposed a new approach for deep neural network defense by developing a unique generative cleaning network with a quantized nonlinear transform.
(2) We have formulated the problem of destroying the noise patterns of adversarial attacks and reconstructing the original image content as a generative adversarial network design and training problem, which considers both perceptual loss and adversarial loss.
(3) Our new method has significantly improved upon the performance of the state-of-the-art methods in the literature under a wide variety of attacks.
The rest of this paper is organized as follows. Section 2 reviews related work. The proposed method is presented in Section 3. Experimental results and performance comparisons with existing methods are provided in Section 4. Section 5 concludes the paper.

2 RELATED WORK.

In this section, we review related work on adversarial attack and network defense methods.
(A) Attack methods. Attack methods can be divided into two threat models: white-box attacks and black-box attacks. The white-box attacker has full access to the classifier network parameters, network architecture, and weights. The black-box attacker has no knowledge of or access to the target network. For white-box attacks, a simple and fast approach called the Fast Gradient Sign (FGS) method was developed by Goodfellow et al. (2014b), using error back-propagation to directly modify the original image. The Basic Iterative Method (BIM) is an improved version of the FGS method. Carlini & Wagner (2016) designed an optimization-based attack method, called the Carlini-Wagner (C&W) attack, which is able to fool the target network with the smallest perturbation. Xiao et al. (2018) trained a generative adversarial network (GAN) (Goodfellow et al., 2014a) to generate perturbations. Kannan et al. (2018) found that Projected Gradient Descent (PGD) is the strongest among all attack methods. It can be viewed as a multi-step variant of FGS (Madry et al., 2018). Athalye et al. (2018) introduced a method, called Backward Pass Differentiable Approximation (BPDA), to attack networks where gradients are not available. It is able to successfully attack all existing state-of-the-art defense methods. For black-box attacks, the attacker has no knowledge about the target classifier. Papernot et al. (2017) introduced the first approach for black-box attack using a substitute model. Dong et al. (2018) proposed momentum-based iterative algorithms to improve the transferability of adversarial examples. Xie et al. (2018c) boosted the transferability of adversarial examples by creating diverse input patterns.
(B) Defense methods. Several approaches have recently been proposed for defending against both white-box attacks and black-box attacks. Adversarial training defends against various attacks by training the target model with adversarial examples (Szegedy et al., 2013; Goodfellow et al., 2014b). Madry et al. (2018) suggested that training with adversarial examples generated by PGD improves robustness. Meng & Chen (2017) proposed a method, called MagNet, which detects perturbations and then reshapes them according to the difference between clean and adversarial examples. Recently, several defense methods based on GANs have been developed.
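As a concrete reference for the gradient-based attacks discussed above, the following is a minimal sketch of the single-step FGS attack; the valid-pixel clamping range of [0, 1] is an assumption.

```python
import torch

def fgs_attack(model, loss_fn, x, y, eps):
    """Fast Gradient Sign attack: x_adv = x + eps * sign(grad_x L(model(x), y))."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + eps * x_adv.grad.sign()
        x_adv = x_adv.clamp(0.0, 1.0)  # keep a valid image range (assumed [0, 1])
    return x_adv.detach()
```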
(2018) projected adversarial examples onto a trained generative adversarial network (GAN), approximating the input with a generated clean image over multiple iterations. Recently, some defense methods have been developed based on input transformations. Guo et al. (2018) proposed several input transformations to defend against adversarial examples, including image cropping and re-scaling, bit-depth reduction, and JPEG compression. Xie et al. (2018a) proposed to defend against adversarial attacks by adding a randomization layer, which randomly re-scales the image and then randomly zero-pads it. Jia et al. (2019) proposed an image compression framework, called ComDefend, to defend against adversarial examples. Xie et al. (2018b) introduced a feature denoising method for defending against PGD white-box attacks. Our proposed defense method is also related to GANs and image transformations but, compared to existing methods, is unique in the following aspects: (1) We introduce a special layer, called the quantized nonlinear transform, into the generative cleaning network to destroy the sophisticated noise pattern of adversarial attacks. (2) Unlike the GAN-based methods in (Wang & Yu, 2019; Xiao et al., 2018), which aim to approximate the noisy input image using images generated by the GAN over multiple iterations, our generative cleaning network aims to reconstruct the image content damaged by the quantized nonlinear transform. (3) Our method does not need to modify the target network being protected. 3 THE PROPOSED DEFENSE METHOD. In this section, we present our proposed generative cleaning network method for effective deep neural network defense. For convenience, we refer to our proposed method as GCLN. 3.1 METHOD OVERVIEW. Figure 1 provides an overview of the proposed method. The attacked image x* is fed into the generative cleaning network Gθ. The network has a special layer, called the quantized nonlinear transform, that destroys the noise pattern of the adversarial attack in the input image. The generative cleaning network aims to recover the original image content and produce a recovered image x̄, which is then passed to the target classifier Cα for image classification or recognition. To successfully learn the generative cleaning network Gθ, we construct a detector network Dφ, which serves as the dual network for the target classifier network Cα. The task of Dφ is to determine whether its input image is clean or attacked. In our proposed method, the generative cleaning network Gθ and the detector network Dφ are jointly trained through adversarial learning: the Gθ network tries to recover the image x̄ so that Dφ cannot detect any attack noise in it. In the following sections, we explain the proposed method in more detail. 3.2 QUANTIZED NONLINEAR TRANSFORM LAYER IN THE GENERATIVE CLEANING NETWORK. In the generative cleaning network design, we incorporate one special layer, called the quantized nonlinear transform, which aims to disturb and partially destroy the sophisticated pattern of the attack noise. In this work, we construct such a transform using a linear transform T, followed by a quantizer Q and an inverse transform T⁻¹. For the linear transform, we can use the discrete cosine transform (DCT) (Ahmed et al., 1974), which has been used in JPEG image compression (Wallace, 1992). Specifically, we partition the input image into blocks of size M × M.
The original image block is denoted by $X^*_B = [x^*_{nk}]_{1 \le n,k \le M}$. The output block $\hat{X}_B = [\hat{x}_{ij}]_{1 \le i,j \le M}$ after the DCT is given by
$$\hat{x}_{ij} = \frac{1}{4} C_i C_j \sum_{n=0}^{M-1} \sum_{k=0}^{M-1} x^*_{nk} \cos\!\left(\frac{(2n+1)\,i\pi}{2M}\right) \cos\!\left(\frac{(2k+1)\,j\pi}{2M}\right), \qquad (1)$$
with $C_i = 1/\sqrt{2}$ for $i = 0$ and $C_i = 1$ for $i \neq 0$. After the transform, we quantize each coefficient $\hat{x}_{ij}$ as
$$R_Q(\hat{x}_{ij}) = \mathrm{Round}\!\left(\frac{\hat{x}_{ij}}{q}\right) \times q, \qquad (2)$$
where $q$ is the quantization parameter. Of course, the DCT can be replaced with another invertible transform, such as the discrete wavelet transform (Daubechies, 1990). During network training, this special quantized nonlinear transform layer is implemented in the same way as the pooling layers of existing deep neural networks and is included in the training process of the whole generative cleaning network.
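As a concrete illustration, the forward pass of this layer can be sketched in a few lines of NumPy/SciPy. This is a minimal sketch under assumed settings: the block size M, the quantization parameter q, and the function name are illustrative choices, and the straight-through handling of the non-differentiable rounding step that would be needed during backpropagation is omitted.

```python
import numpy as np
from scipy.fftpack import dct, idct

def quantized_nonlinear_transform(image, M=8, q=16):
    """Blockwise DCT -> uniform quantization -> inverse DCT, i.e. T^{-1}(Q(T(x)))
    from Eqs. (1)-(2). `image` is a 2-D array; M and q are illustrative values."""
    out = np.zeros_like(image, dtype=np.float64)
    for i in range(0, image.shape[0], M):
        for j in range(0, image.shape[1], M):
            block = image[i:i + M, j:j + M]
            # 2-D type-II DCT (Eq. 1, up to the normalization convention)
            coeffs = dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')
            # Uniform quantizer R_Q of Eq. (2): snap to the nearest multiple of q
            coeffs = np.round(coeffs / q) * q
            # Inverse transform T^{-1}
            out[i:i + M, j:j + M] = idct(idct(coeffs, axis=0, norm='ortho'),
                                         axis=1, norm='ortho')
    return out
```

A larger q destroys more of the fine-grained (high-frequency) attack pattern but also damages more image content, which is exactly the damage the generative cleaning network is trained to repair.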
This paper proposes a new method to defend a neural network against adversarial attacks (both white-box and black-box). By jointly training a generative cleaning network with a quantized nonlinear transform and a detector network, the proposed method cleans the incoming attacked image so that its true label is correctly classified. The authors evaluate state-of-the-art attack methods on various models, and the proposed model consistently outperforms all baselines, dramatically so for some specific attack methods.
SP:b386db96a13357984b61761342ae1cc876fe6d3a
Generative Cleaning Networks with Quantized Nonlinear Transform for Deep Neural Network Defense
This paper develops a method for defending deep neural networks against adversarial attacks, based on a generative cleaning network with a quantized nonlinear transform. The network is claimed to recover the original image while cleaning up the residual attack noise. The authors develop a detector network, which serves as the dual network of the target classifier to be defended, to detect whether an image is clean or attacked. The detector network and the generative cleaning network are jointly trained with adversarial learning, so that the detector network cannot find any attack noise in the output image of the generative cleaning network. The experimental results demonstrate that the proposed approach outperforms state-of-the-art methods by large margins in both white-box and black-box attacks.
SP:b386db96a13357984b61761342ae1cc876fe6d3a
Optimistic Adaptive Acceleration for Optimization
1 INTRODUCTION. Deep learning has been very successful in numerous applications, from robotics (e.g., Levine et al. (2017)) and computer vision (e.g., He et al. (2016); Goodfellow et al. (2014)) to reinforcement learning (e.g., Mnih et al. (2013)) and natural language processing (e.g., Graves et al. (2013)). A common goal in these applications is learning quickly, a goal made pressing by the presence of big data and/or the use of large neural nets. To accelerate training, a variety of algorithms have been proposed in recent years, such as AMSGRAD (Reddi et al. (2018)), ADAM (Kingma & Ba (2015)), RMSPROP (Tieleman & Hinton (2012)), ADADELTA (Zeiler (2012)), and NADAM (Dozat (2016)). All the prevalent algorithms for training deep nets mentioned above combine two ideas: the idea of adaptivity from ADAGRAD (Duchi et al. (2011); McMahan & Streeter (2010)) and the idea of momentum from NESTEROV'S METHOD (Nesterov (2004)) or the HEAVY BALL method (Polyak (1964)). ADAGRAD is an online learning algorithm that works well compared to standard online gradient descent when the gradient is sparse. Its update has a notable feature: the effective learning rate differs for each dimension, depending on the magnitude of the gradient in that dimension, which may help exploit the geometry of the data and lead to a better update. On the other hand, NESTEROV'S METHOD and the HEAVY BALL method (Polyak (1964)) are accelerated optimization algorithms whose updates depend not only on the current iterate and current gradient but also on past gradients (i.e., momentum). State-of-the-art algorithms like AMSGRAD (Reddi et al. (2018)) and ADAM (Kingma & Ba (2015)) leverage these ideas to accelerate the training of neural nets. In this paper, we propose an algorithm that goes beyond the hybrid of the adaptivity and momentum approaches. Our algorithm is inspired by OPTIMISTIC ONLINE LEARNING (see, e.g., Chiang et al. (2012); Rakhlin & Sridharan (2013a;b); Syrgkanis et al. (2015); Abernethy et al. (2018)). OPTIMISTIC ONLINE LEARNING assumes that a good guess of the loss function in each round is available and plays an action by utilizing the guess. By exploiting the guess, optimistic algorithms can enjoy smaller regret than algorithms that do not exploit it. We combine the OPTIMISTIC ONLINE LEARNING idea with the adaptivity and momentum ideas to design a new algorithm, OPTIMISTIC-AMSGRAD, and we provide a theoretical analysis of it. The proposed algorithm not only adapts to the informative dimensions and exhibits momentum, but also exploits a good guess of the next gradient to facilitate acceleration. We conduct experiments showing that OPTIMISTIC-AMSGRAD improves upon AMSGRAD in terms of various measures: training loss, testing loss, and classification accuracy on training/testing data over epochs. 2 PRELIMINARIES. We begin by providing some background on online learning, as it will be the main tool used to design and analyze our proposed algorithm. We follow the notation of the adaptive optimization literature (Kingma & Ba (2015); Reddi et al. (2018)). For any vectors $u, v \in \mathbb{R}^d$, $u/v$ represents element-wise division, $u^2$ represents element-wise square, and $\sqrt{u}$ represents element-wise square root. We denote by $g_{1:T}[i]$ the sum of the $i$-th elements of the $T$ vectors $g_1, g_2, \dots, g_T \in \mathbb{R}^d$.
2.1 A BRIEF REVIEW OF ONLINE LEARNING AND OPTIMISTIC ONLINE LEARNING. The standard setup of online learning is that, in each round t, an online learner selects an action $w_t \in K \subseteq \mathbb{R}^d$, then observes $\ell_t(\cdot)$ and suffers loss $\ell_t(w_t)$ after committing to the action. The goal of the learner is to minimize the regret,
$$\mathrm{Regret}_T(\{w_t\}) := \sum_{t=1}^{T} \ell_t(w_t) - \sum_{t=1}^{T} \ell_t(w^*),$$
which is the cumulative loss of the learner minus the cumulative loss of some benchmark $w^* \in K$. The idea of OPTIMISTIC ONLINE LEARNING (e.g., Chiang et al. (2012); Rakhlin & Sridharan (2013a;b); Syrgkanis et al. (2015); Abernethy et al. (2018)) is as follows. Suppose that, in each round t, the learner has a good guess $m_t(\cdot)$ of the loss function $\ell_t(\cdot)$ before playing an action $w_t$. Then the learner should exploit the guess $m_t(\cdot)$ to choose the action $w_t$, since $m_t(\cdot)$ is close to the true loss function $\ell_t(\cdot)$. (Indeed, had the learner known $\ell_t(\cdot)$ before committing to its action, it could have exploited that knowledge to determine its action and thereby minimize the regret.) For example, Syrgkanis et al. (2015) propose an optimistic variant of FOLLOW-THE-REGULARIZED-LEADER (FTRL). FTRL (see, e.g., Hazan (2016)) is an online learning algorithm with update
$$w_t = \arg\min_{w \in K} \langle w, L_{t-1} \rangle + \tfrac{1}{\eta} R(w),$$
where $\eta$ is a parameter, $R(\cdot)$ is a 1-strongly convex function with respect to a norm $\|\cdot\|$ on the constraint set $K$, and $L_{t-1} := \sum_{s=1}^{t-1} g_s$ is the cumulative sum of the gradient vectors of the loss functions (i.e., $g_s := \nabla \ell_s(w_s)$) up to, but not including, round t. FTRL has regret at most $O\big(\sqrt{\sum_{t=1}^{T} \|g_t\|_*^2}\big)$. On the other hand, OPTIMISTIC-FTRL (Syrgkanis et al. (2015)) has the update
$$w_t = \arg\min_{w \in K} \langle w, L_{t-1} + m_t \rangle + \tfrac{1}{\eta} R(w),$$
where $m_t$ is the learner's guess of the gradient vector $g_t := \nabla \ell_t(w_t)$. Under the assumption that the loss functions are convex, the regret of OPTIMISTIC-FTRL is at most $O\big(\sqrt{\sum_{t=1}^{T} \|g_t - m_t\|_*^2}\big)$, which can be much smaller than the regret of FTRL if $m_t$ is close to $g_t$. Consequently, OPTIMISTIC-FTRL can achieve better performance than FTRL. On the other hand, if $m_t$ is far from $g_t$, the regret of OPTIMISTIC-FTRL is only a constant factor worse than that of its non-optimistic counterpart. In Section 4, we provide a strategy for obtaining $m_t$. For the moment, we simply use this FTRL example to emphasize the importance of leveraging a good guess $m_t$ when updating $w_t$, in order to achieve a faster convergence rate (or, equivalently, smaller regret). We make a similar argument when comparing OPTIMISTIC-AMSGRAD and AMSGRAD. 2.2 ADAM AND AMSGRAD. ADAM (Kingma & Ba (2015)) is a popular algorithm for training deep nets. It combines the momentum idea (Polyak (1964)) with the idea of ADAGRAD (Duchi et al. (2011)), which uses a different effective learning rate for each dimension. The effective learning rate of ADAGRAD in iteration t for dimension j is proportional to the inverse of $\sqrt{\sum_{s=1}^{t} g_s[j]^2}$, where $g_s[j]$ is the j-th element of the gradient vector $g_s$ at time s. This adaptive learning rate can help accelerate convergence when the gradient vector is sparse (Duchi et al. (2011)). However, when applying ADAGRAD to train deep nets, it has been observed that the learning rate may decay too fast (Kingma & Ba (2015)). Therefore, Kingma & Ba (2015) propose using a moving average of the gradients, divided element-wise by the square root of a moving average of the second moment, to update the model parameters w (i.e., lines 5, 6, and 8 of Algorithm 1).
Yet, ADAM (Kingma & Ba (2015)) fails on some online convex optimization problems; AMSGRAD (Reddi et al. (2018)) fixes this issue and is shown in Algorithm 1.

Algorithm 1 AMSGRAD (Reddi et al. (2018))
1: Required: parameters $\beta_1$, $\beta_2$, and $\eta_t$.
2: Init: $w_1 \in K \subseteq \mathbb{R}^d$ and $\hat{v}_0 = v_0 = \mathbf{1} \in \mathbb{R}^d$.
3: for t = 1 to T do
4:   Get mini-batch stochastic gradient vector $g_t$ at $w_t$.
5:   $\theta_t = \beta_1 \theta_{t-1} + (1-\beta_1)\, g_t$.
6:   $v_t = \beta_2 v_{t-1} + (1-\beta_2)\, g_t^2$.
7:   $\hat{v}_t = \max(\hat{v}_{t-1}, v_t)$.
8:   $w_{t+1} = w_t - \eta_t\, \theta_t / \sqrt{\hat{v}_t}$ (element-wise division).
9: end for

The difference between ADAM and AMSGRAD lies in line 7 of Algorithm 1. ADAM does not have the max operation on line 7 (i.e., $\hat{v}_t = v_t$ for ADAM), while AMSGRAD adds the operation to guarantee a non-increasing learning rate $\eta_t / \sqrt{\hat{v}_t}$, which helps convergence (i.e., the average regret $\mathrm{Regret}_T / T \to 0$). For the parameters of AMSGRAD, it is suggested that $\beta_1 = 0.9$ and $\beta_2 = 0.99$. 3 OPTIMISTIC-AMSGRAD.

Algorithm 2 OPTIMISTIC-AMSGRAD
1: Required: parameters $\beta_1$, $\beta_2$, and $\eta_t$.
2: Init: $w_1 = w_{-1/2} \in K \subseteq \mathbb{R}^d$ and $\hat{v}_0 = v_0 = \mathbf{1} \in \mathbb{R}^d$.
3: for t = 1 to T do
4:   Get mini-batch stochastic gradient vector $g_t$ at $w_t$.
5:   $\theta_t = \beta_1 \theta_{t-1} + (1-\beta_1)\, g_t$.
6:   $v_t = \beta_2 v_{t-1} + (1-\beta_2)(g_t - m_t)^2$.
7:   $\hat{v}_t = \max(\hat{v}_{t-1}, v_t)$.
8:   $w_{t+1/2} = \Pi_K\big[\, w_{t-1/2} - \eta_t\, \theta_t / \sqrt{\hat{v}_t} \,\big]$.
9:   $w_{t+1} = \Pi_K\big[\, w_{t+1/2} - \eta_{t+1}\, h_{t+1} / \sqrt{\hat{v}_t} \,\big]$, where $h_{t+1} := \beta_1 \theta_{t-1} + (1-\beta_1)\, m_{t+1}$ and $m_{t+1}$ is the guess of $g_{t+1}$.
10: end for

We propose a new optimization algorithm, OPTIMISTIC-AMSGRAD, shown in Algorithm 2. In each iteration, the learner computes a gradient vector $g_t := \nabla \ell_t(w_t)$ at $w_t$ (line 4), then maintains exponential moving averages $\theta_t \in \mathbb{R}^d$ (line 5) and $v_t \in \mathbb{R}^d$ (line 6), followed by the max operation to obtain $\hat{v}_t \in \mathbb{R}^d$ (line 7). The learner also updates an auxiliary variable $w_{t+1/2} \in K$ (line 8) and uses it to update and commit $w_{t+1}$ (line 9), which exploits the guess $m_{t+1}$ of $g_{t+1}$. As the learner's action set is $K \subseteq \mathbb{R}^d$, we adopt the notation $\Pi_K[\cdot]$ for the projection onto K where needed. OPTIMISTIC-AMSGRAD has three properties:
• An adaptive learning rate for each dimension, as in ADAGRAD (Duchi et al. (2011)) (lines 6, 8, and 9).
• An exponential moving average of past gradients, as in NESTEROV'S METHOD (Nesterov (2004)) and the HEAVY-BALL method (Polyak (1964)) (line 5).
• An optimistic update that exploits a good guess of the next gradient vector, as in optimistic online learning algorithms (e.g., Chiang et al. (2012); Rakhlin & Sridharan (2013a;b); Syrgkanis et al. (2015)) (line 9).
The first property helps acceleration when the gradient has a sparse structure. The second comes from the well-recognized idea of momentum, which can also help acceleration. The last one, perhaps less known outside the ONLINE LEARNING community, can actually lead to acceleration when the prediction of the next gradient is good. This property is elaborated in the following subsection, where we provide the theoretical analysis of OPTIMISTIC-AMSGRAD. Observe that the proposed algorithm does not reduce to AMSGRAD when $m_t = 0$.
Furthermore, if $K = \mathbb{R}^d$ (the unconstrained case), one might want to combine lines 8 and 9 into the single update
$$w_{t+1} = w_{t-1/2} - \eta_t \frac{\theta_t}{\sqrt{\hat{v}_t}} - \eta_{t+1} \frac{h_{t+1}}{\sqrt{\hat{v}_t}}.$$
From this expression, we see that $w_{t+1}$ is updated from $w_{t-1/2}$ rather than from $w_t$. Therefore, while OPTIMISTIC-AMSGRAD may look like AMSGRAD with just one additional update, the difference between the updates is subtle. In the following analysis, we show that this interleaving actually leads to a certain cancellation in the regret bound.
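To make the update pattern concrete, here is a minimal NumPy sketch of Algorithm 2 for the unconstrained case $K = \mathbb{R}^d$ (so the projections $\Pi_K$ are omitted) with a constant step size. The names `grad_fn` and `guess_fn` are placeholders for the mini-batch gradient oracle and the gradient-guessing strategy (e.g., reusing the most recent gradient as the guess); they are not part of the original algorithm.

```python
import numpy as np

def optimistic_amsgrad(grad_fn, guess_fn, w0, eta=0.001, beta1=0.9,
                       beta2=0.99, T=1000):
    """Sketch of Algorithm 2 with K = R^d and constant step size eta_t = eta."""
    w = w0.astype(float).copy()           # w_1
    w_half = w.copy()                     # w_{t-1/2}, with w_{-1/2} = w_1
    theta = np.zeros_like(w)              # theta_0
    v = np.ones_like(w)                   # v_0 = 1
    v_hat = np.ones_like(w)               # v_hat_0 = 1
    m = np.zeros_like(w)                  # guess m_t of g_t (m_1 = 0)
    for t in range(1, T + 1):
        g = grad_fn(w, t)                              # line 4
        theta_prev = theta
        theta = beta1 * theta + (1 - beta1) * g        # line 5
        v = beta2 * v + (1 - beta2) * (g - m) ** 2     # line 6
        v_hat = np.maximum(v_hat, v)                   # line 7
        w_half = w_half - eta * theta / np.sqrt(v_hat) # line 8 (no projection)
        m = guess_fn(g)                                # guess m_{t+1} of g_{t+1}
        h = beta1 * theta_prev + (1 - beta1) * m       # h_{t+1}, theta_{t-1} as printed
        w = w_half - eta * h / np.sqrt(v_hat)          # line 9
    return w

# Example usage: quadratic loss, "last gradient" as the optimistic guess.
# w = optimistic_amsgrad(lambda w, t: w - np.ones(5), lambda g: g, np.zeros(5))
```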
This paper studies an optimistic variant of the AMSGrad algorithm, in which an estimate of the future gradient is incorporated into the optimization. The main claim is that when we have a good enough estimate of the unknown gradient (i.e., one whose distance from the true gradient is small), the proposed algorithm enjoys lower regret. Theoretical results are provided, and experiments compare the proposed algorithm with baselines. The idea is not especially novel, since the optimistic optimization techniques are borrowed directly from the online optimization field, but it is still interesting to see this kind of work and its experimental comparison with existing algorithms. However, the comparison does not seem entirely fair, in either the theory or the experiments.
SP:35b6bf3da512cae6ad93e1422b1e272474f9a8cb
Optimistic Adaptive Acceleration for Optimization
This paper proposes an online optimization method called Optimistic-AMSGrad, which combines two existing methods: (i) AMSGrad (Reddi et al., 2018) and (ii) optimistic online learning, where the prediction step is done with the extrapolation algorithm of Scieur et al. (2016). The authors do a good job of presenting the method (introducing the background in the proper order); the paper seems self-contained and cites the relevant literature. A regret analysis of the proposed algorithm is provided, where the obtained regret can be smaller than that of AMSGrad depending on whether the guess of the gradient and the true gradient are close.
SP:35b6bf3da512cae6ad93e1422b1e272474f9a8cb
Making the Shoe Fit: Architectures, Initializations, and Tuning for Learning with Privacy
1 INTRODUCTION. Machine learning (ML) can be usefully applied to the analysis of sensitive data, e.g., in the domain of healthcare (Kononenko, 2001). However, ML models may unintentionally reveal sensitive aspects of their training data, e.g., due to overfitting (Shokri et al., 2017; Song & Shmatikov, 2019). To counter this, ML techniques that offer strong privacy guarantees have been developed. Notably, differentially private stochastic gradient descent, or DP-SGD (Abadi et al., 2016), is an easy-to-use, generally applicable modification of stochastic gradient descent. In addition to its rigorous privacy guarantees, it has been empirically shown to stop the leaking of secrets (Carlini et al., 2019). To strictly bound the impact of any training example, DP-SGD makes two changes to every gradient step: first, each example's gradient contribution is limited to a fixed bound (in practice, by clipping all per-example gradients to a maximum ℓ2 norm); second, random (Gaussian) noise on the scale of the clipping norm is added to each batch's combined gradient before it is backpropagated to update the model parameters. Together, these changes create a new, artificial noise floor at each step of gradient descent, such that the unique signal of any individual example lies below this noise floor; this allows differential privacy to be guaranteed for all training examples (Dwork & Roth, 2014). Training with DP-SGD is eminently practical and, in addition to privacy, offers advantages such as strong generalization and the promise of reusable holdouts (Google, 2019; Dwork et al., 2015). Unfortunately, these advantages have not come without cost: empirically, the test accuracy of differentially private ML is consistently lower than that of non-private learning (e.g., see Papernot et al. (2018)). Such accuracy loss may sometimes be inevitable: for example, the task may involve heavy-tailed distributions, and adding noise will certainly hinder visibility of examples in the tails (Feldman, 2019; Bagdasaryan & Shmatikov, 2019). However, this does not explain the accuracy loss of differentially private learning on standard benchmark tasks that are known to be relatively simple: MNIST (Yann et al., 1998), FashionMNIST (Xiao et al., 2017), CIFAR10 (Krizhevsky et al., 2009), etc. This paper presents several new results for privacy-preserving learning that improve the state of the art in terms of both privacy and accuracy. Significantly, these new results stem from a single, simple observation: differentially private learning with DP-SGD is different enough that all aspects of learning (model architecture, parameter initialization, and optimization strategy, as well as hyperparameter tuning) must be reconsidered. To achieve the best privacy/accuracy tradeoffs, we must tune our learning strategies to the specifics of privacy-preserving learning; i.e., we must "learn to learn" with privacy. Conversely, we concretely demonstrate how the architecture, initialization, and optimization strategy that give the best accuracy for non-private learning can be a poor fit for learning with privacy. Instead, by revisiting our choices, we can reduce the information loss induced by clipping, limit the impact of added noise, and improve the utility of each gradient step when learning with privacy.
Our contributions facilitate DP-SGD learning as follows:
• We show how simple architecture changes, such as the use of tanh instead of ReLU activations, can improve a model's private-learning suitability and achievable privacy/accuracy tradeoffs, by eliminating the negative effects of clipping and noising large gradients.
• We explain how high-capacity models can be disadvantageous, describe the advantages of models with a final, fully-connected layer that can be independently fine-tuned, and show how both help address the curse of dimensionality and high-dimensional noise.
• We demonstrate the importance of finding good initializations, and show how this can be done with privacy using either transfer learning or weight scaling (Raghu et al., 2019).
• We show that better tradeoffs and faster wall-clock learning can be achieved by tuning hyperparameters and choosing optimizers directly for DP-SGD learning.
By applying the above, we advance the state of the art for MNIST, FashionMNIST, and CIFAR10, significantly improving upon the privacy/accuracy tradeoffs of prior work. On MNIST, we achieve 98.1% test accuracy for a privacy guarantee of (ε, δ) = (2.93, 10⁻⁵), whereas the previous state of the art reported in the TensorFlow Privacy library (Google, 2019) was 96.6%. On CIFAR10, we achieve 72% test accuracy at (ε, δ) = (2.1, 10⁻⁵) in a setup for which, to the best of our knowledge, the previous state of the art was achieved by Abadi et al. (2016) at 67% accuracy. 2 TRAINING-DATA MEMORIZATION, DIFFERENTIAL PRIVACY, AND DP-SGD. Machine-learning models will easily memorize whatever sensitive, personal, or private data was used in their training, and models may in practice disclose this data, as demonstrated by the attacks of Shokri et al. (2017), Song & Shmatikov (2019), and Carlini et al. (2019). For reasoning about the privacy guarantees of algorithms such as training by stochastic gradient descent, differential privacy has become the established gold standard (Dwork & Roth, 2014). Informally, an algorithm is differentially private if it always produces effectively the same output (in a mathematically precise sense) when applied to two input datasets that differ by only one record. Formally, a learning algorithm A that trains models from the set S is (ε, δ)-differentially private if the following holds for all training datasets d and d′ that differ by exactly one record:
$$\Pr[A(d) \in S] \le e^{\varepsilon} \Pr[A(d') \in S] + \delta \qquad (1)$$
Here, ε gives the formal privacy guarantee, placing a strong upper bound on any privacy loss, even in the worst possible case. A lower ε indicates a stronger privacy guarantee, or a tighter upper bound (Erlingsson et al., 2019). The factor δ allows for some probability that the property may not hold (in practice, this δ is required to be very small, e.g., inversely proportional to the dataset size). A very attractive property of differential-privacy guarantees is that they hold true for all attackers, whatever they are probing and whatever their prior knowledge, and that they remain true under various forms of composition. In particular, the output of a differentially private algorithm can be arbitrarily post-processed without any weakening of the guarantees.
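As a small illustration of how (ε, δ) relate to noise in practice, the classical Gaussian-mechanism calibration from Dwork & Roth (2014) can be computed in a few lines. Note this covers a single release for ε < 1, whereas DP-SGD relies on a tighter, composition-aware analysis (the moments accountant), so the function below is only a back-of-the-envelope sketch, not how TensorFlow Privacy computes its guarantees.

```python
import math

def gaussian_mechanism_sigma(epsilon, delta, sensitivity):
    """Noise std for one (epsilon, delta)-DP Gaussian-mechanism release:
    sigma >= sqrt(2 * ln(1.25 / delta)) * sensitivity / epsilon, epsilon < 1
    (Dwork & Roth, 2014). Not the composition-aware DP-SGD accounting."""
    return math.sqrt(2.0 * math.log(1.25 / delta)) * sensitivity / epsilon

# e.g., clipping norm C = 1.0 as the sensitivity, (epsilon, delta) = (0.5, 1e-5):
print(gaussian_mechanism_sigma(0.5, 1e-5, 1.0))  # ~9.7
```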
Also, if the sensitive training data contains multiple examples from the same person (or, more generally, the same sensitive group), ε-differentially-private training on this data results in a model with a kε-differential-privacy guarantee for each person, as long as at most k training-data records are present per person. Abadi et al. (2016) introduced DP-SGD as a method for training deep neural networks with differential-privacy guarantees that achieved better privacy and utility than previous efforts (Chaudhuri et al., 2011; Song et al., 2013; Bassily et al., 2014). DP-SGD bounds the sensitivity of the learning process to each individual training example by computing per-example gradients $\{g_i\}_{i \in 0..n-1}$ with respect to the loss for the n model parameters $\{\theta_i\}_{i \in 0..n-1}$, and clipping each per-example gradient to a maximum fixed ℓ2 norm C. Subsequently, DP-SGD adds to the average of these per-example gradients (Gaussian) noise whose standard deviation σ is proportional to this sensitivity. In this work, we use the canonical implementation of DP-SGD, and its associated analysis, made available through the TensorFlow Privacy library (Google, 2019). 3 MODEL ARCHITECTURES BETTER SUITED TO LEARNING WITH PRIVACY. We show here that learning with differential privacy imposes additional constraints that need to be taken into account when designing neural network architectures. These constraints help us control the sensitivity of learning to training examples before the clipping operation is performed in DP-SGD, thus reducing the potential negative impact of clipping on the estimated gradient direction. 3.1 MODEL CAPACITY. The success of neural networks is in part explained by their ability to scale to complex tasks through an increase in model capacity; ResNets are an illustrative recent example (He et al., 2016). Here, we explain how additional capacity may not be beneficial when learning with privacy. One of the major challenges in training models with differential privacy is the curse of dimensionality (Bassily et al., 2014). The accuracy of privately trained models typically degrades as the number of dimensions increases. Unfortunately, strong lower bounds suggest that this dependence on dimensionality is necessary (Bassily et al., 2014). Consider the convolutional architecture described to the right. With all other architectural details fixed, we can control the model's capacity by varying the number of filters k in its two convolutional layers. While the relationship between generalization performance and the number of parameters is not always monotonic (Neyshabur et al., 2017), we leave as future work a study of how different measures of capacity can inform the design of model architectures for private learning. We report the model's accuracy when trained with SGD and DP-SGD in Figure 1, both on MNIST (left) and FashionMNIST (right). The test accuracy of models trained without privacy increases monotonically with the number of filters in the convolutional layers. In contrast, we observe an inflection point at about 15 filters, at which models trained with privacy achieve their highest test accuracy; afterwards, the model's generalization suffers as more filters are added. There are two competing explanations for this behavior, both compatible with the lower bound stated in Bassily et al. (2014).
First, recall that DP-SGD clips each per-example gradient before the average gradient is used to update the model parameters; i.e., each gradient is subject to the transformation
$$g_i \leftarrow g_i \cdot \min\left(1, \frac{C}{\sqrt{\sum_{i=0}^{n-1} g_i^2}}\right) \qquad (2)$$
where $g_i$ is the gradient corresponding to model parameter i. For a fixed clipping norm C (corresponding to a certain, fixed privacy guarantee), the quantity $C / \sqrt{\sum_{i=0}^{n-1} g_i^2}$ by which individual parameters are multiplied decreases as the number n of parameters in a model increases. That is, the more parameters we have, the more likely DP-SGD is to clip the gradient (or signal) at each parameter. This can explain the presence of an inflection point in Figure 1, after which learning with privacy becomes increasingly difficult as capacity is increased. Second, as the number of parameters (i.e., $g_i$'s) increases, the norm of the noise vector that DP-SGD must add to the gradient average to ensure privacy also increases. This noise norm grows as $\sqrt{\#\,\text{parameters}}$, introducing another source of accuracy degradation as the number of parameters increases. Our observations may seem to contradict some of the findings in Abadi et al. (2016); however, their limited experimental setup offers few general lessons. First, they reduced data dimensionality using PCA to inputs of only 60 dimensions; second, they explored only model architectures based on a single-layer perceptron with between 200 and 2,000 units. In contrast, our experiments involve a realistic setting where the full input is passed to a convolutional neural network with a total of 3 hidden layers and over 26,000 parameters.
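To make the clipping and noising concrete, here is a minimal NumPy sketch of one DP-SGD gradient-aggregation step as described above. The function name and arguments are illustrative, and this is not the TensorFlow Privacy implementation (which also handles privacy accounting, microbatching, and so on).

```python
import numpy as np

def dp_sgd_aggregate(per_example_grads, clip_norm, noise_multiplier, rng):
    """Clip each per-example gradient to l2 norm at most C (Eq. 2),
    then add Gaussian noise scaled to the sensitivity C and average.

    per_example_grads: array of shape (batch_size, n_params)."""
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    factors = np.minimum(1.0, clip_norm / (norms + 1e-12))  # Eq. (2)
    clipped = per_example_grads * factors
    summed = clipped.sum(axis=0)
    # Gaussian noise whose std is proportional to the sensitivity C
    noise = rng.normal(scale=noise_multiplier * clip_norm, size=summed.shape)
    return (summed + noise) / per_example_grads.shape[0]

# Example: a batch of 32 per-example gradients over 1000 parameters.
# rng = np.random.default_rng(0)
# g = dp_sgd_aggregate(rng.normal(size=(32, 1000)), clip_norm=1.0,
#                      noise_multiplier=1.1, rng=rng)
```

Note how, for fixed C, a longer parameter vector makes the clipping factor more likely to bite, and the expected noise norm grows with the square root of the number of parameters: exactly the two effects discussed above.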
This paper presents experimental evidence that learning with privacy requires approaches that differ from those used when learning without privacy. These approaches include reconsidering the model itself (its structure and activation functions), its initialization, and its optimization procedure. With these changes, the authors show that it is possible to obtain state-of-the-art results on some canonical learning tasks.
SP:8ff1115adfd50e2c1512534ec8b90f91e0c0c331
Making the Shoe Fit: Architectures, Initializations, and Tuning for Learning with Privacy
1 INTRODUCTION . Machine learning ( ML ) can be usefully applied to the analysis of sensitive data , e.g. , in the domain of healthcare ( Kononenko , 2001 ) . However , ML models may unintentionally reveal sensitive aspects of their training data , e.g. , due to overfitting ( Shokri et al. , 2017 ; Song & Shmatikov , 2019 ) . To counter this , ML techniques that offer strong privacy guarantees have been developed . Notably , the differentially private stochastic gradient descent , or DP-SGD , of Abadi et al . ( 2016 ) is an easyto-use , generally-applicable modification of stochastic gradient descent . In addition to its rigorous privacy guarantees , it has been empirically shown to stop the leaking of secrets ( Carlini et al. , 2019 ) . To strictly bound the impact of any training example , DP-SGD makes two changes to every gradient step : first , each example ’ s gradient contribution is limited to a fixed bound ( in practice , by clipping all per-example gradients to a maximum ` 2 norm ) ; second , random ( Gaussian ) noise of the scale of the clipping norm is added to each batch ’ s combined gradient , before it is backpropagated to update model parameters . Together , these changes create a new , artificial noise floor at each step of gradient descent , such that the unique signal of any individual example is below this new noise floor ; this allows differential privacy to be guaranteed for all training examples ( Dwork & Roth , 2014 ) . Training using DP-SGD is eminently practical and in addition to privacy offers advantages such as strong generalization and the promise of reusable holdouts ( Google , 2019 ; Dwork et al. , 2015 ) . Unfortunately , its advantages have not been without cost : empirically , the test accuracy of differentially private ML is consistently lower than that of non-private learning ( e.g. , see Papernot et al . ( 2018 ) ) . Such accuracy loss may sometimes be inevitable : for example , the task may involve heavy-tailed distributions and adding noise will definitely hinder visibility of examples in the tails ( Feldman , 2019 ; Bagdasaryan & Shmatikov , 2019 ) . However , this does not explain the accuracy loss of differentially private learning on standard benchmark tasks that are known to be relatively simple : MNIST ( Yann et al. , 1998 ) , FashionMNIST ( Xiao et al. , 2017 ) , CIFAR10 ( Krizhevsky et al. , 2009 ) , etc . This paper presents several new results for privacy-preserving learning that improve the state-ofthe-art in terms of both privacy and accuracy . Significantly , these new results stem from a single , simple observation : differentially-private learning with DP-SGD is different enough that all aspects of learning—model architecture , parameter initialization , and optimization strategy , as well as hyperparameter tuning—must be reconsidered . To achieve the best privacy/accuracy tradeoffs , we must tune our learning strategies to the specifics of privacy-preserving learning ; i.e. , we must “ learn to learn ” with privacy . Conversely , we concretely demonstrate how the architecture , initialization , and optimization strategy that gives the best accuracy for non-private learning can be a poor fit for learning with privacy . Instead , by revisiting our choices , we can reduce the information loss induced by clipping , limit the impact of added noise , and improve the utility of each gradient step when learning with privacy . 
Our contributions facilitate DP-SGD learning as follows : • We show how simple architecture changes , such as the use of tanh instead of ReLU activations , can improve a model ’ s private-learning suitability and achievable privacy/accuracy tradeoffs , by eliminating the negative effects of clipping and noising large gradients . • We explain how high-capacity models can be disadvantageous , as well as the advantages of models with a final , fully-connected layer that can be independently fine tuned , and how both help address the curse of dimensionality and high-dimensional noise . • We demonstrate the importance of finding good initializations , and show how this can be done with privacy using either transfer learning or weight scaling ( Raghu et al. , 2019 ) . • We show that better tradeoffs and increased wall-clock learning speeds can be achieved by tuning hyperparameters and choosing optimizers directly for DP-SGD learning . By applying the above , we advance the state of the art for MNIST , FashionMNIST , and CIFAR10 , significantly improving upon the privacy/accuracy tradoffs from prior work . On MNIST , we achieve 98.1 % test accuracy for a privacy guarantee of ( ε , δ ) = ( 2.93 , 10−5 ) , whereas the previous stateof-the-art reported in the TensorFlow Privacy library ( Google , 2019 ) was 96.6 % . On CIFAR10 , we achieve 72 % test accuracy at ( ε , δ ) = ( 2.1 , 10−5 ) in a setup for which to the best of our knowledge the previous state-of-the-art was achieved by Abadi et al . ( 2016 ) at 67 % accuracy . 2 TRAINING-DATA MEMORIZATION , DIFFERENTIAL PRIVACY , AND DP-SGD . Machine-learning models will easily memorize whatever sensitive , personal , or private data that was used in their training , and models may in practice disclose this data—as demonstrated by the attacks of Shokri et al . ( 2017 ) , Song & Shmatikov ( 2019 ) , and Carlini et al . ( 2019 ) . For reasoning about the privacy guarantees of algorithms such as training by stochastic gradient descent , differential privacy has become the established gold standard ( Dwork & Roth , 2014 ) . Informally , an algorithm can be differentially private if it will always produce effectively the same output ( in a mathematically precise sense ) , when applied to two input datasets that differ by only one record . Formally , a learning algorithmA that trains models from the set S is ( ε , δ ) -differentiallyprivate , if the following holds for all training datasets d and d′ that differ by exactly one record : Pr [ A ( d ) ∈ S ] ≤ eεPr [ A ( d′ ) ∈ S ] + δ ( 1 ) Here , ε gives the formal privacy guarantee , by placing a strong upper bound on any privacy loss , even in the worst possible case . A lower ε indicates a stronger privacy guarantee or a tighter upper bound ( Erlingsson et al. , 2019 ) . The factor δ allows for some probability that the property may not hold ( in practice , this δ is required to be very small , e.g. , in inverse proportion to the dataset size ) . A very attractive property of differential-privacy guarantees is that they hold true for all attackers— whatever they are probing and whatever their prior knowledge—and that they remain true under various forms of composition . In particular , the output of a differentially-private algorithm can be arbitrarily post processed , without any weakening of the guarantees . 
Also, if sensitive training data contains multiple examples from the same person (or, more generally, the same sensitive group), ε-differentially-private training on this data will result in a model with a kε-differential-privacy guarantee for each person, as long as at most k training-data records are present per person. Abadi et al. (2016) introduced DP-SGD as a method for training deep neural networks with differential-privacy guarantees that was able to achieve better privacy and utility than previous efforts (Chaudhuri et al., 2011; Song et al., 2013; Bassily et al., 2014). DP-SGD bounds the sensitivity of the learning process to each individual training example by computing per-example gradients {g_i}_{i∈0..n−1} with respect to the loss, for the n model parameters {θ_i}_{i∈0..n−1}, and clipping each per-example gradient to a maximum fixed ℓ2 norm C. Subsequently, to the average of these per-example gradients, DP-SGD adds (Gaussian) noise whose standard deviation σ is proportional to this sensitivity. In this work, we use the canonical implementation of DP-SGD and its associated analysis that has been made available through the TensorFlow Privacy library (Google, 2019).

3 MODEL ARCHITECTURES BETTER SUITED TO LEARNING WITH PRIVACY . We show here that learning with differential privacy imposes additional constraints that need to be taken into account when designing neural network architectures. These constraints help us control the sensitivity of learning to training examples before the clipping operation is performed in DP-SGD, thus reducing the potential negative impact of clipping on the estimated gradient direction.

3.1 MODEL CAPACITY . The success of neural networks is in part explained by their ability to scale to complex tasks through an increase in model capacity; ResNets are an illustrative recent example (He et al., 2016). Here, we explain how additional capacity may not be beneficial when learning with privacy. One of the major challenges in training models with differential privacy is the curse of dimensionality (Bassily et al., 2014). The accuracy of privately trained models typically degrades with an increase in the number of dimensions. Unfortunately, strong lower bounds suggest that this dependence on dimensionality is necessary (Bassily et al., 2014). Consider the convolutional architecture described to the right. With all other architectural details fixed, we can control the model's capacity by varying the number of filters k in its two convolutional layers. While the relationship between generalization performance and the number of parameters is not always monotonic (Neyshabur et al., 2017), we leave as future work a study of how different measures of capacity can inform the design of model architectures for private learning. We report the model's accuracy when trained with SGD and DP-SGD in Figure 1, both on MNIST (left) and FashionMNIST (right). The test accuracy of models trained without privacy monotonically increases with the number of filters in their convolutional layers. In contrast, we observe an inflection point at about 15 filters at which models trained with privacy achieve their highest test accuracy. Afterwards, the model's generalization suffers as more filters are added. There are two competing explanations of this behavior, both compatible with the lower bound stated in Bassily et al. (2014).
First, recall that DP-SGD performs a clipping operation on each per-example gradient before the averaged gradient is used to update model parameters; i.e., each gradient is subject to the following transformation:

g_i ← g_i · min(1, C / √(∑_{i=0}^{n−1} g_i²))    (2)

where g_i is the gradient corresponding to model parameter i. For a fixed clipping norm C (corresponding to a certain, fixed privacy guarantee), the quantity C / √(∑_{i=0}^{n−1} g_i²) by which individual parameters are multiplied decreases as the number n of parameters in a model increases. That is, the more parameters we have, the more likely DP-SGD is to clip the gradient (or signal) at each parameter. This can explain the presence of an inflection point in Figure 1, after which learning with privacy becomes increasingly difficult as capacity is increased. Second, as the number of parameters (i.e., g_i's) increases, the norm of the noise vector that DP-SGD must add to the gradient average to ensure privacy also increases. This noise norm increases as √(number of parameters), and introduces another source of accuracy degradation as the number of parameters grows. Our observations may seem to contradict some of the findings in Abadi et al. (2016). However, their limited experimental setup offers few general lessons. First, they reduced data dimensionality using PCA to inputs of only 60 dimensions; second, they explored only model architectures using a single-layer perceptron with between 200 and 2,000 units. Instead, our experiments involve a realistic setting where the full input is passed to a convolutional neural network with a total of 3 hidden layers and over 26,000 parameters.
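Both effects can be checked numerically. The following is a small sketch of our own (an illustrative assumption: i.i.d. unit-variance gradient coordinates); it shows the clipping factor shrinking like 1/√n while the noise norm grows like √n.

import numpy as np

rng = np.random.default_rng(0)
C, sigma = 1.0, 1.0
for n in [10**2, 10**4, 10**6]:
    g = rng.normal(size=n)                                  # a gradient with O(1) coordinates
    clip_factor = min(1.0, C / np.linalg.norm(g))           # ~ C / sqrt(n)
    noise_norm = np.linalg.norm(rng.normal(0.0, sigma * C, size=n))  # ~ sigma * C * sqrt(n)
    print(f"n={n:>9,}  clip factor={clip_factor:.5f}  noise norm={noise_norm:,.1f}")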
The paper methodically analyses the settings and choices used when training neural networks (specifically CNNs) via the DP-SGD algorithm and suggests changes to the standard procedures that empirically lead to higher accuracies despite the added noise. The main statement of the paper is quite simple: optimize hyperparameters for the training procedure you are actually using (DP-SGD) rather than for the non-private procedure it is inspired by. Yet the findings and recommendations may be useful for practitioners.
SP:8ff1115adfd50e2c1512534ec8b90f91e0c0c331
A Closer Look at the Optimization Landscapes of Generative Adversarial Networks
1 INTRODUCTION . Deep neural networks have exhibited remarkable success in many applications (Krizhevsky et al., 2012). This success has motivated many studies of their non-convex loss landscape (Choromanska et al., 2015; Kawaguchi, 2016; Li et al., 2018b), which, in turn, has led to many improvements, such as better initialization and optimization methods (Glorot and Bengio, 2010; Kingma and Ba, 2015). While most of the work on studying non-convex loss landscapes has focused on single-objective minimization, some recent classes of models require the joint minimization of several objectives, making their optimization landscape intrinsically different. Among these models is the generative adversarial network (GAN) (Goodfellow et al., 2014), which is based on a two-player game formulation and has achieved state-of-the-art performance on some generative modeling tasks such as image generation (Brock et al., 2019). On the theoretical side, many papers studying multi-player games have argued that one main optimization issue that arises in this case is rotation due to the adversarial component of the game (Mescheder et al., 2018; Balduzzi et al., 2018; Gidel et al., 2019b). This has been extensively studied on toy examples, in particular on the so-called bilinear example (Goodfellow, 2016) (a.k.a. Dirac GAN (Mescheder et al., 2018)). However, those toy examples are very far from the standard realistic setting of image generation involving deep networks and challenging datasets. To our knowledge, it remains an open question whether this rotation phenomenon actually occurs when training GANs in more practical settings. In this paper, we aim at closing this gap between theory and practice. Following Mescheder et al. (2017) and Balduzzi et al. (2018), we argue that instead of studying the loss surface, we should study the game vector field (i.e., the concatenation of each player's gradient), which can provide better insights into the problem. (Code available at https://bit.ly/2kwTu87.) To this end, we propose a new visualization technique that we call Path-angle, which helps us observe the nature of the game vector field close to a stationary point for high-dimensional models, and we carry out an empirical investigation of the properties of the optimization landscape of GANs. The core question we want to address may be summarized as follows: Is rotation a phenomenon that occurs when training GANs on real-world datasets, and do existing training methods find local Nash equilibria? To answer this question we conducted extensive experiments by training different GAN formulations (NSGAN and WGAN-GP) with different optimizers (Adam and ExtraAdam) on three datasets (MoG, MNIST and CIFAR10). Based on our experiments and using our visualization techniques, we observe that the landscape of GANs is fundamentally different from the standard loss surfaces of deep networks. Furthermore, we provide evidence that existing GAN training methods do not converge to a local Nash equilibrium.
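To make the Path-angle idea concrete before its formal treatment, here is a minimal sketch of our own reconstruction (stated assumptions: the game vector field is available as a callable on the flattened joint parameters, and the coefficient α is swept slightly past the trained solution).

import numpy as np

def path_angle(v, w_init, w_final, alphas):
    """Cosine between the game vector field v and the path direction,
    along w(a) = (1 - a) * w_init + a * w_final."""
    d = w_final - w_init
    d = d / np.linalg.norm(d)
    cosines = []
    for a in alphas:
        g = v((1 - a) * w_init + a * w_final)
        cosines.append(np.dot(g, d) / (np.linalg.norm(g) + 1e-12))
    return np.array(cosines)

# e.g. alphas = np.linspace(0.0, 1.2, 121); near alpha = 1, a sharp sign
# switch of the cosine suggests rotation, while a bump suggests attraction.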
Contributions. More precisely, our contributions are the following:
(i) We propose studying empirically the game vector field (as opposed to studying the loss surfaces of each player) to understand training dynamics in GANs, using a novel visualization tool, which we call Path-angle, that captures the rotational and attractive behaviors near local stationary points (ref. §4.2).
(ii) We observe experimentally on mixture-of-Gaussians, MNIST, and CIFAR10 datasets that a variety of GAN formulations exhibit significant rotational behavior around their locally stable stationary points (ref. §5.1).
(iii) We provide empirical evidence that existing training procedures find stable stationary points that are saddle points, not minima, for the loss function of the generator (ref. §5.2).

2 RELATED WORK . Improving the training of GANs has been an active research area in the past few years. Most efforts in stabilizing GAN training have focused on formulating new objectives (Arjovsky et al., 2017), or adding regularization terms (Gulrajani et al., 2017; Mescheder et al., 2017; 2018). In this work, we try to characterize the difference in the landscapes induced by different GAN formulations and how it relates to improving the training of GANs. Recently, Nagarajan and Kolter (2017) and Mescheder et al. (2018) showed that a local analysis of the eigenvalues of the Jacobian of the game can provide guarantees on local stability properties. However, their theoretical analysis is based on some unrealistic assumptions, such as the generator's ability to fully capture the real distribution. In this work, we assess experimentally to what extent these theoretical stability results apply in practice. Rotations in differentiable games have been mentioned and interpreted by Mescheder et al. (2018), Balduzzi et al. (2018), and Gidel et al. (2019b). While these papers address rotations in games from a theoretical perspective, it was never shown that GANs, which are games with highly non-convex losses, suffer from these rotations in practice. To our knowledge, quantifying that GANs actually suffer from this rotational component in practice on real-world datasets is novel. The stable points of the gradient dynamics in general games have been studied independently by Mazumdar and Ratliff (2018) and Adolphs et al. (2018). They notice that the locally stable stationary points of some games are not local Nash equilibria. In order to reach a local Nash equilibrium, Adolphs et al. (2018) and Mazumdar et al. (2019) develop techniques based on second-order information. In this work, we argue that reaching local Nash equilibria may not be as important as one may expect and that we do achieve good performance at a locally stable stationary point. Several works have studied the loss landscape of deep neural networks. Goodfellow et al. (2015) proposed to look at the linear path between two points in parameter space and showed that neural networks behave similarly to a convex loss function along this path. Draxler et al. (2018) proposed an extension where they look at nonlinear paths between two points and showed that local minima are connected in deep neural networks. Another extension was proposed by Li et al. (2018a), where they use contour plots to look at the 2D loss surface defined by two appropriately chosen directions.
In this paper, we use a similar approach of following the linear path between two points to gain insight about GAN optimization landscapes. However, in this context, looking at the loss of both players along that path may be uninformative. We propose instead to look, along a linear path from initialization to best solution, at the game vector field, particularly at its angle with respect to the linear path: the Path-angle. Another way to gain insight into the landscape of deep neural networks is by looking at the Hessian of the loss; this was done in the context of single-objective minimization by Dauphin et al. (2014), Sagun et al. (2016; 2017), and Alain et al. (2019). Compared to linear-path visualizations, which can give global information (but only along one direction), the Hessian provides information about the loss landscape in several directions, but only locally. The full Hessian is expensive to compute, and one often has to resort to approximations such as computing only the top-k eigenvalues. While the Hessian is symmetric and thus has real eigenvalues, the Jacobian of a game vector field is significantly different since it is in general not symmetric, which means that its eigenvalues belong to the complex plane. In the context of GANs, Mescheder et al. (2017) introduced a gradient penalty and used the eigenvalues of the Jacobian of the game vector field to show its benefits in terms of stability. In our work, we compute these eigenvalues to assess that, on different GAN formulations and datasets, existing training procedures find a locally stable stationary point that is a saddle point for the loss function of the generator.

3 FORMULATIONS FOR GAN OPTIMIZATION AND THEIR PRACTICAL IMPLICATIONS .

3.1 THE STANDARD GAME THEORY FORMULATION . From a game theory point of view, GAN training may be seen as a game between two players: the discriminator D_ϕ and the generator G_θ, each of which is trying to minimize its loss, L_D and L_G respectively. Using the same formulation as Mescheder et al. (2017), the GAN objective takes the following form (for simplicity of presentation, we focus on the unconstrained formulation):

θ∗ ∈ arg min_{θ∈ℝ^p} L_G(θ, ϕ∗)  and  ϕ∗ ∈ arg min_{ϕ∈ℝ^d} L_D(θ∗, ϕ).    (1)

The solution (θ∗, ϕ∗) is called a Nash equilibrium (NE). In practice, the considered objectives are non-convex and we typically cannot expect better than a local Nash equilibrium (LNE), i.e., a point at which (1) is only locally true (see e.g. Adolphs et al. (2018) for a formal definition). Ratliff et al. (2016) derived some derivative-based necessary and sufficient conditions for being an LNE. They show that, for being a local NE, it is sufficient to be a differential Nash equilibrium:

Definition 1 (Differential NE). A point (θ∗, ϕ∗) is a differential Nash equilibrium (DNE) iff ‖∇_θL_G(θ∗, ϕ∗)‖ = ‖∇_ϕL_D(θ∗, ϕ∗)‖ = 0, ∇²_θL_G(θ∗, ϕ∗) ≻ 0 and ∇²_ϕL_D(θ∗, ϕ∗) ≻ 0,    (2) where S ≻ 0 if and only if S is positive definite.

Being a DNE is not necessary for being an LNE because a local Nash equilibrium may have Hessians that are only semi-definite. NE are commonly used in GANs to describe the goal of the learning procedure (Goodfellow et al., 2014): in this definition, θ∗ (resp. ϕ∗) is seen as a local minimizer of L_G(·, ϕ∗) (resp. L_D(θ∗, ·)). Under this view, however, the interaction between the two networks is not taken into account.
This is an important aspect of game stability that is missed in the definition of DNE (and Nash equilibrium in general). We illustrate this point in the following section, where we develop an example of a game for which gradient methods converge to a point that is a saddle point for the generator's loss and thus not a DNE for the game.

3.2 AN ALTERNATIVE FORMULATION BASED ON THE GAME VECTOR FIELD . In practice, GANs are trained using first-order methods that compute the gradients of the losses of each player. Following Gidel et al. (2019a), an alternative point of view on optimizing GANs is to jointly consider the players' parameters θ and ϕ as a joint state ω := (θ, ϕ), and to study the vector field associated with these gradients, which we call the game vector field:

v(ω) := [∇_θL_G(ω)^⊤  ∇_ϕL_D(ω)^⊤]^⊤  where  ω := (θ, ϕ).    (3)

(Note that, in practice, the joint vector field (3) is not a gradient vector field, i.e., it cannot be rewritten as the gradient of a single function.) With this perspective, the notion of DNE is replaced by the notion of locally stable stationary point (LSSP). Verhulst (1989, Theorem 7.1) defines an LSSP ω∗ using the eigenvalues of the Jacobian of the game vector field ∇v(ω∗) at that point.

Definition 2 (LSSP). A point ω∗ is a locally stable stationary point (LSSP) iff v(ω∗) = 0 and ℜ(λ) > 0, ∀λ ∈ Sp(∇v(ω∗)),    (4) where ℜ(λ) denotes the real part of the eigenvalue λ belonging to the spectrum of ∇v(ω∗).

This definition is not easy to interpret, but one can intuitively understand an LSSP as a stationary point (a point ω∗ where v(ω∗) = 0) to which all neighbouring points are attracted. We will formalize this intuition of attraction in Proposition 1. In our two-player game setting, the Jacobian of the game vector field around the LSSP has the following block-matrix form:

∇v(ω∗) = [ ∇²_θL_G(ω∗)   ∇_ϕ∇_θL_G(ω∗) ; ∇_θ∇_ϕL_D(ω∗)   ∇²_ϕL_D(ω∗) ] = [ S_1  B ; A  S_2 ].    (5)

When B = −A^⊤, being a DNE is a sufficient condition for being an LSSP (Mazumdar and Ratliff, 2018). However, some LSSPs may not be DNEs (Adolphs et al., 2018), meaning that the optimal generator θ∗ could be a saddle point of L_G(·, ϕ∗) while the optimal joint state (θ∗, ϕ∗) is an LSSP of the game. We summarize these properties in Table 1. In order to illustrate the intuition behind this counter-intuitive fact, we study a simple example where the generator is 2D and the discriminator is 1D.

Example 1. Let us consider L_G as a hyperbolic paraboloid (a.k.a. a saddle-point function) centered at (1, 1), where (1, ϕ) is the principal descent direction and (−ϕ, 1) is the principal ascent direction, while L_D is a simple bilinear objective:

L_G(θ_1, θ_2, ϕ) = (θ_2 − ϕθ_1 − 1)² − ½(θ_1 + ϕθ_2 − 1)²,  L_D(θ_1, θ_2, ϕ) = ϕ(5θ_1 + 4θ_2 − 9).

We plot L_G in Fig. 1b. Note that the discriminator ϕ controls the principal descent direction of L_G. We show (see §A.2) that (θ_1∗, θ_2∗, ϕ∗) = (1, 1, 0) is a locally stable stationary point but is not a DNE: the generator loss at the optimum, (θ_1, θ_2) ↦ L_G(θ_1, θ_2, ϕ∗) = θ_2² − ½θ_1² (in coordinates centered at the optimum), is not at a DNE because it has a clear descent direction, (1, 0). However, if the generator follows this descent direction, the dynamics will remain stable because the discriminator will update its parameter, rotating the saddle and making (1, 0) an ascent direction.
We call this phenomenon dynamic stability: the loss L_G(·, ϕ∗) is unstable for a fixed ϕ∗ but becomes stable when ϕ dynamically interacts with the generator around ϕ∗. A mechanical analogy for this dynamic stability phenomenon is a ball in a rotating saddle: even though gravity pushes the ball to escape the saddle, a quick enough rotation of the saddle traps the ball at the center (see Thompson et al. (2002) for more details). This analogy has been used to explain Paul's trap (Paul, 1990): a counter-intuitive way to trap ions using a dynamic electric field. In Example 1, the parameter ϕ explicitly controls the rotation of the saddle. This example illustrates the fact that the DNE corresponds to a notion of static stability: the stability of one player's loss given that the other player is fixed. Conversely, the LSSP captures a notion of dynamic stability that considers both players jointly. By looking at the game vector field we capture these interactions. Fig. 1b only captures a snapshot of the generator's loss surface for a fixed ϕ and indicates static instability (the generator is at a saddle point of its loss). In Fig. 1a, however, one can see that, starting from any point, we rotate around the stationary point (ϕ∗, θ_1∗) = (0, 1) and eventually converge to it. The visualization of the game vector field reveals an interesting behavior that does not occur in single-objective minimization: close to an LSSP, the parameters rotate around it. Understanding this phenomenon is key to grasping the optimization difficulties arising in games. In the next section, we formally characterize the notion of rotation around an LSSP, and in §4 we develop tools to visualize it in high dimensions. Note that gradient methods may converge to saddle points in single-objective minimization, but these are not stable stationary points, unlike in our game example.
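Example 1 is small enough to verify numerically. The sketch below is our own check, not from the paper; the finite-difference step, the learning rate, the iteration count, and the starting point are arbitrary choices. It confirms that at (1, 1, 0) all Jacobian eigenvalues have positive real part (an LSSP in the sense of Definition 2), that the generator block S_1 is indefinite (so the point is not a DNE), and that simultaneous gradient descent spirals into the point.

import numpy as np

def v_example1(w):
    """Game vector field of Example 1 (gradients of L_G and L_D)."""
    t1, t2, phi = w
    a = t2 - phi * t1 - 1.0
    b = t1 + phi * t2 - 1.0
    return np.array([-2.0 * phi * a - b,           # dL_G / dtheta_1
                     2.0 * a - phi * b,            # dL_G / dtheta_2
                     5.0 * t1 + 4.0 * t2 - 9.0])   # dL_D / dphi

# Jacobian of v at the stationary point, by central finite differences
w_star, eps = np.array([1.0, 1.0, 0.0]), 1e-6
J = np.column_stack([(v_example1(w_star + eps * e) - v_example1(w_star - eps * e)) / (2 * eps)
                     for e in np.eye(3)])
print(np.linalg.eigvals(J))          # all real parts > 0: an LSSP (Definition 2)
print(np.linalg.eigvals(J[:2, :2]))  # generator block {-1, 2}: a saddle, so not a DNE

# Simultaneous gradient descent rotates around the LSSP and converges
w = np.array([1.1, 0.9, 0.1])        # start near the LSSP (stability is local)
for _ in range(10000):
    w = w - 0.02 * v_example1(w)
print(w)                             # approximately [1, 1, 0] for a small enough step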
This paper proposes visualization techniques for the optimization landscape of GANs. The primary tool is a quantity called the path-angle, which measures the angle between the game vector field and the linear path from a point away from a stationary point to a point near a stationary point. The paper presents examples of the visualization for dynamics with pure attraction, pure rotation, and a mix of attraction and rotation. Along with this, the authors propose to look at the eigenvalues of the game Jacobian and the individual players' Hessians to evaluate convergence in GANs. The paper presents applications of the tools to GANs trained with NSGAN and WGAN-GP objectives on a mixture of Gaussians, MNIST, and CIFAR10. The primary observation is that generator performance is good, but the algorithms converge to stable attractors that are not Nash equilibria. Moreover, the path-angle plots show that GANs exhibit rotational behavior around stable points.
This paper aims to provide a deeper understanding of the training dynamics of GANs in practice by characterizing and visualizing the rotation and attraction phenomena near a locally stable stationary point (LSSP), and it questions the necessity of reaching a differential/local Nash equilibrium (LNE). In particular, the paper first discusses the difference between an LSSP and an LNE and formalizes the notions of rotation and attraction around an LSSP in games. It then proposes the path-angle to visualize rotation and attraction near an LSSP: points are sampled uniformly along the line between an initial parameter set and a well-trained parameter set, and the path-angle records the angle between that line and the game vector field at each point. Rotation and attraction appear in the path-angle plot as a quick sign switch and as a bump near 1, respectively. The experiments empirically demonstrate that: 1. rotation exists in the training dynamics of practical GANs; 2. GANs often converge to an LSSP rather than an LNE, but still achieve good results.
SP:276b6721d8cad9d323908d8677706c8ad1668c95
Adversarially Robust Representations with Smooth Encoders
1 INTRODUCTION . Representation learning is a fundamental problem in machine learning and holds the promise of enabling data-efficient learning and transfer to new tasks. Researchers working in domains like Computer Vision (Krizhevsky et al., 2012) and Natural Language Processing (Devlin et al., 2018) have already demonstrated the effectiveness of representations and features computed by deep architectures for the solution of other tasks. A case in point is the example of the FC7 features from the AlexNet image classification architecture, which have been used for many other vision problems (Krizhevsky et al., 2012). The effectiveness of learned representations has given new impetus to research in representation learning, leading to a lot of work on techniques for inducing representations from data that have desirable properties like disentanglement and compactness (Burgess et al., 2018; Achille & Soatto, 2017; Bengio, 2013; Locatello et al., 2019). Many popular techniques for generating representations are based on the Variational AutoEncoder (VAE) model (Kingma & Welling, 2013; Rezende et al., 2014). The use of deep networks as universal function approximators has facilitated very rapid advancements, with samples generated from these models often being indistinguishable from natural data. While the quality of generated examples can provide convincing evidence that a generative model is flexible enough to capture the variability in the data distribution, it is far from a formal guarantee that the representation is fit for other purposes. In fact, if the actual goal is learning good latent representations, evaluating generative models only based on reconstruction fidelity and the subjective quality of typical samples is neither sufficient nor entirely necessary, and can even be misleading. In this paper, we uncover a problematic failure mode in which representations learned by VAEs exhibit over-sensitivity to semantically-irrelevant changes in data. One example of such problematic behaviour can be seen in Figure 1. We identify a cause for this shortcoming in the classical Variational Auto-encoder (VAE) objective, the evidence lower bound (ELBO), which fails to control the behaviour of the encoder outside the support of the empirical data distribution. We show that this behaviour of the VAE can lead to extreme errors in the representation recovered by the encoder and is a key hurdle in the effective use of representations for data-efficient learning and transfer. To address this problem, we propose to augment the data with properties that enforce insensitivity of the representation with respect to families of transformations. To incorporate these specifications, we propose a regularization method based on a selection mechanism that creates a fictive data point by explicitly perturbing an observed true data point. For certain choices of parameters, our formulation naturally leads to the minimization of the entropy-regularized Wasserstein distance between representations. We illustrate our approach on standard datasets and experimentally show that significant improvements in downstream adversarial accuracy can be achieved by learning robust representations completely in an unsupervised manner, without reference to a particular downstream task and without a costly supervised adversarial training procedure.
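The regularization scheme just described can be sketched as follows. This is our own illustration in PyTorch, not the authors' implementation: the encoder API returning a posterior mean and scale, the PGD-style inner loop (adapted from Madry et al., 2017), the hyperparameter values, and the use of the squared distance between posterior means as a stand-in for the entropy-regularized Wasserstein term are all assumptions.

import torch

def latent_attack(encoder, x, eps=0.1, steps=5, step_size=0.02):
    # select a fictive data point: a small perturbation of x that
    # maximally moves the posterior mean in latent space
    mu0, _ = encoder(x)
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        mu, _ = encoder(x + delta)
        ((mu - mu0.detach()) ** 2).sum().backward()
        with torch.no_grad():
            delta += step_size * delta.grad.sign()  # PGD step
            delta.clamp_(-eps, eps)                 # stay in the l-infinity ball
            delta.grad.zero_()
    return (x + delta).detach()

def robustness_penalty(encoder, x, lam=1.0):
    # penalize the movement of the posterior under the selected perturbation;
    # for Gaussian posteriors with a shared covariance, this squared distance
    # between means coincides with the squared 2-Wasserstein distance
    x_adv = latent_attack(encoder, x)
    mu, _ = encoder(x)
    mu_adv, _ = encoder(x_adv)
    return lam * ((mu - mu_adv) ** 2).sum(dim=-1).mean()

# per-batch training objective (sketch): -ELBO(x) + robustness_penalty(encoder, x)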
It is clear that if learned representations are overly sensitive to irrelevant changes in the input (for example, small changes in the pixels of an image or video, or inaudible frequencies added to an audio signal), models that rely on these representations are naturally susceptible to making incorrect predictions when inputs are changed. We argue that such specifications about the robustness properties of learned representations can be one of the tractable guiding features in the search for good representations. Based on these observations, we make the following contributions:
1. We introduce a method for learning robust latent representations by explicitly targeting a structured model that admits the original VAE model as a marginal. We also show that in the case where the target is chosen to be a pairwise conditional random field with attractive potentials, this choice leads naturally to the Wasserstein divergence between posterior distributions over the latent space. This insight provides a flexible class of robustness metrics for controlling representations learned by VAEs.
2. We develop a modification to training algorithms for VAEs to improve the robustness of learned representations, using an external selection mechanism for obtaining transformed examples and enforcing the corresponding representations to be close. As a particular selection mechanism, we adapt attacks from adversarial supervised learning (Madry et al., 2017) into attacks on the latent representation. Using this novel unsupervised training procedure we learn encoders with adjustable robustness properties and show that these are effective at learning representations that perform well across a variety of downstream tasks.
3. We show that alternative models proposed in the literature, in particular the β-VAE model used for explicitly controlling the learned representations, or Wasserstein Generative Adversarial Networks (GANs), can also be interpreted in our framework as variational lower bound maximization.
4. We show empirically, using simulation studies on the MNIST, color MNIST, and CelebA datasets, that models trained using our method learn representations that provide a higher degree of adversarial robustness even without supervised adversarial training.

2 GENERATIVE MODELS . Modern generative models are samplers p(X|θ) for generating realizations from an ideal target distribution π(X), also known as the data distribution. In practice π(X) is unknown in the sense that it is hard to formally specify. Instead, we have a representative dataset X, samples that are assumed to be conditionally independently drawn from the data distribution π(X) of interest. We will refer to the empirical distribution as π̂(X) = (1/|X|) ∑_{ξ∈X} δ(x − ξ). The goal is learning a parameter θ∗ such that p(X|θ = θ∗) = ∫ dZ p(X|Z, θ = θ∗) p(Z) ≈ π̂(X), thereby also learning a generator.

2.1 FROM VAE TO SMOOTH ENCODERS . The VAE corresponds to the latent variable model p(X|Z, θ) p(Z) with latent variable Z and observation X. The forward model p(X|Z = z, θ) (the decoder) is represented using a neural network g with parameters θ, usually as the mean of a Gaussian N(X; g(z; θ), vI_x), where v is a scalar observation noise variance and I_x is an identity matrix. The prior is usually a standard Gaussian p(Z = z) = N(z; 0, I_z). The exact posterior over latent variables p(Z|X = x, θ) is approximated by a probability model q(Z|X = x, η) with parameters η.
A popular choice here is a multivariate Gaussian N(Z; µ(x; η), Σ(x; η)), where the mapping f such that (µ, Σ) = f(x, η) is chosen to be a neural network (with parameters η to be learned from data). We will refer to the pair f, g as an encoder-decoder pair. Under the above assumptions, VAEs are trained by maximizing the following form of the ELBO using stochastic gradient descent (SGD):

log p(X = x|θ) ≥ E_{q(Z|X=x, η)}[log p(X = x|Z, θ)] − D_KL(q(Z|X = x, η) || p(Z)) ≡ B_x(η, θ)

The gradient of the Kullback-Leibler (KL) divergence term above (see A.1) is available in closed form. An unbiased estimate of the gradient of the first term can be obtained by sampling z from q using the reparametrization trick (Kingma & Welling, 2013), aided by automatic differentiation.

2.2 A PROBLEM WITH THE VAE OBJECTIVE . Under the i.i.d. assumption, where each data point x^(n), for n = 1 ... N, is independently drawn from the model, an equivalent batch ELBO objective can be defined (see also E.1) as

B(η, θ) ≡ (1/N) ∑_{n=1}^{N} B_{x^(n)}(η, θ) = −D_KL(π̂(X) q(Z|X, η) || p(X|Z, θ) p(Z)) + const    (1)

where the empirical distribution of observed data is denoted as π̂. This form makes it clearer that the variational lower bound only measures the distance between the encoder and decoder under the support of the empirical distribution. To see how this locality leads to a fragile representation, we construct a VAE with discrete latents and observations. We let X ∈ {1, ..., N_x} and Z ∈ {1, ..., N_z} and define the following system of conditional distributions as the decoder and encoder models:

p(X = i|Z = j) ∝ exp((1/v) ω(m_j − i/N_x)),  q(Z = j|X = i) ∝ exp((1/σ_i) ω(µ_i − j/N_z)),

where ω(u) = cos(2πu). These distributions can be visualized as heatmaps of probability tables where i and j are row and column indices, respectively (Figure 2). This particular von-Mises-like parametrization is chosen to avoid boundary effects due to the finite latent and observable spaces.

Figure 2: Example VAE model. (left) Heatmap of the encoder distribution (darker colors referring to higher probability) q(Z = j|X = i; µ_i, σ_i), where each row i is a probability distribution over latents with a mode around µ_i and spread σ_i. (middle) Heatmap of the decoder distribution p(X = i|Z = j; m_j, v), where each column j is a probability distribution with mode at m_j and spread v. (right) The marginal model p(X = i|m, v) = ∑_{j=1}^{N_z} p(Z = j) p(X = i|Z = j, m_j, v), depicted as a histogram. The prior p(Z) is taken to be uniform and is not shown.

Note that this parametrization emulates a high-capacity network that can model any functional relationship between latent states and observations, while being qualitatively similar to a standard VAE model with conditionally Gaussian decoder and encoder functions. In reality, the true target density is not available, but we would have a representative sample. To simulate this scenario, we sample a 'dataset' from a discrete target distribution π(X): this is merely a draw from a multinomial distribution, yielding a multinomial vector s with entries s_i that give the count of how many times we observe x = i. The results of such an experiment are depicted in Figure 3(a) (see caption for details).
This picture reveals several important properties of a VAE approximation.
1. After training, we observe that when j and j′ are close, the corresponding conditionals p(X|Z = j) and p(X|Z = j′) are close (hence the corresponding decoder mean parameters m_j and m_j′ are close; see the middle panel of Fig. 3(a), showing the decoder). This smoothness is perhaps surprising at first sight: in this example, we could arbitrarily permute the columns of the decoder and still get the same marginal distribution. Technically speaking, given a uniform prior p(Z), the marginal likelihood p(X|θ) is entirely invariant with respect to permutations of the latent state. In fact, if the encoder distribution weren't constrained, we could also permute the columns of the encoder to keep the ELBO invariant. In Appendix E.2, we provide an argument for why the choice of a unimodal encoder model and optimization of the variational objective lead naturally to smooth decoder functions.
2. The encoders found by the VAE, on the other hand, are not smooth at all, despite the fact that the model shows a relatively good fit. This behaviour warns us against judging generative models only by the quality of their samples, e.g., by traversing the latent space and generating conditional samples from the decoder. The quality of the decoder is not a proxy for the robustness of the representation.
The fragility of representations is inherent to the ELBO objective. For the entire dataset, a batch ELBO that involves the counts s_i can be written as

ELBO = −∑_i ∑_j s_i q(Z = j|X = i) log [ s_i q(Z = j|X = i) / (p(X = i|Z = j) p(Z = j)) ]    (2)

The last expression is proportional to the negative KL divergence between two tabular distributions: s_i q(Z = j|X = i)/L (with L = ∑_i s_i) and p(X = i|Z = j) p(Z = j). As such, whenever s_i is zero, the contribution of row i of the encoder distribution vanishes, and the corresponding parameters µ_i and σ_i do not affect the lower bound. In a sense, the objective does not enforce any structure on the encoder outside of the positions of the data points in the training set. The figure shows that the out-of-sample behaviour of the encoder (i.e., for i where π̂(X) = 0) is entirely initialization dependent, hence no learning takes place there. We would also expect the resulting representations to be fragile, in the sense that a small perturbation of an observation can result in a large change in the encoder output.
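The discrete example is easy to reproduce. The sketch below is our own reconstruction (the table sizes, the shared spread σ, and the random initialization are arbitrary choices; the paper uses a per-observation σ_i); it builds the encoder/decoder tables, evaluates the batch ELBO of Eq. (2), and makes visible that rows with s_i = 0 contribute nothing.

import numpy as np

def omega(u):
    return np.cos(2.0 * np.pi * u)

def toy_vae(Nx=64, Nz=32, v=0.05, sigma=0.05, seed=0):
    rng = np.random.default_rng(seed)
    i, j = np.arange(Nx)[:, None], np.arange(Nz)[None, :]
    m, mu = rng.uniform(size=Nz), rng.uniform(size=Nx)      # decoder/encoder modes
    dec = np.exp(omega(m[None, :] - i / Nx) / v)            # p(X=i | Z=j)
    dec /= dec.sum(axis=0, keepdims=True)                   # each column is a distribution
    enc = np.exp(omega(mu[:, None] - j / Nz) / sigma)       # q(Z=j | X=i)
    enc /= enc.sum(axis=1, keepdims=True)                   # each row is a distribution
    return enc, dec

def batch_elbo(enc, dec, counts):
    # Eq. (2): rows with counts[i] == 0 drop out of the sum entirely
    joint_q = counts[:, None] * enc                         # s_i * q(Z=j | X=i)
    joint_p = dec / dec.shape[1]                            # p(X=i | Z=j) p(Z=j), uniform prior
    mask = joint_q > 0
    return -np.sum(joint_q[mask] * np.log(joint_q[mask] / joint_p[mask]))

enc, dec = toy_vae()
p = dec.mean(axis=1)
counts = np.random.default_rng(1).multinomial(200, p / p.sum())  # a sampled 'dataset'

enc2 = enc.copy()
enc2[counts == 0] = 1.0 / enc.shape[1]   # overwrite unseen rows arbitrarily
print(batch_elbo(enc, dec, counts), batch_elbo(enc2, dec, counts))  # identical values

The two printed values coincide: the encoder's behaviour on unobserved symbols is completely unconstrained by the objective, which is exactly the fragility discussed above.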
This paper studies the vulnerability of representations learned by variational auto-encoders (VAEs). It first shows that the learned representation of a VAE is susceptible to small input changes, similar to adversarial examples in the supervised learning setting. It then proposes a regularization method, called the smooth encoder, to improve the robustness of the representation. Experiments are conducted on several benchmark datasets to show the effectiveness of the method.
SP:881f632bbadf0cac11ec1e466f02b26762f67073
Adversarially Robust Representations with Smooth Encoders
1 INTRODUCTION . Representation learning is a fundamental problem in Machine learning and holds the promise to enable data-efficient learning and transfer to new tasks . Researchers working in domains like Computer Vision ( Krizhevsky et al. , 2012 ) and Natural Language Processing ( Devlin et al. , 2018 ) have already demonstrated the effectiveness of representations and features computed by deep architectures for the solution of other tasks . A case in point is the example of the FC7 features from the AlexNet image classification architecture that have been used for many other vision problems ( Krizhevsky et al. , 2012 ) . The effectiveness of learned representations has given new impetus to research in representation learning , leading to a lot of work being done on the development of techniques for inducing representations from data having desirable properties like disentanglement and compactness ( Burgess et al. , 2018 ; Achille & Soatto , 2017 ; Bengio , 2013 ; Locatello et al. , 2019 ) . Many popular techniques for generating representation are based on the Variational AutoEncoders ( VAE ) model ( Kingma & Welling , 2013 ; Rezende et al. , 2014 ) . The use of deep networks as universal function approximators has facilitated very rapid advancements which samples generated from these models often being indistinguishable from natural data . While the quality of generated examples can provide significant convincing evidence that a generative model is flexible enough to capture the variability in the data distribution , it is far from a formal guarantee that the representation is fit for other purposes . In fact , if the actual goal is learning good latent representations , evaluating generative models only based on reconstruction fidelity and subjective quality of typical samples is neither sufficient nor entirely necessary , and can be even misleading . In this paper , we uncover the problematic failure mode where representations learned by VAEs exhibit over-sensitivity to semantically-irrelevant changes in data . One example of such problematic behaviour can be seen in Figure 1 . We identify a cause for this shortcoming in the classical Variational Auto-encoder ( VAE ) objective , the evidence lower bound ( ELBO ) , that fails to control the behaviour of the encoder out of the support of the empirical data distribution . We show this behaviour of the VAE can lead to extreme errors in the recovered representation by the encoder and is a key hurdle in the effective use of representations for data-efficient learning and transfer . To address this problem , we propose to augment the data with properties that enforce insensitivity of the representation with respect to families of transformations . To incorporate these specifications , we propose a regularization method that is based on a selection mechanism that creates a fictive data point by explicitly perturbing an observed true data point . For certain choices of parameters , our formulation naturally leads to the minimization of the entropy regularized Wasserstein distance between representations . We illustrate our approach on standard datasets and experimentally show that significant improvements in the downstream adversarial accuracy can be achieved by learning robust representations completely in an unsupervised manner , without a reference to a particular downstream task and without a costly supervised adversarial training procedure . 
It is clear that if learned representations are overly sensitive to irrelevant changes in the input ( for example , small changes in the pixels of an image or video , or inaudible frequencies added to an audio signal ) , models that rely on these representations are naturally susceptible to make incorrect predictions when inputs are changed . We argue that such specifications about the robustness properties of learned representations can be one of the tractable guiding features in the search for good representations . Based on these observations , we make the following contributions : 1 . We introduce a method for learning robust latent representations by explicitly targeting a structured model that admits the original VAE model as a marginal . We also show that in the case the target is chosen a pairwise conditional random field with attractive potentials , this choice leads naturally to the Wasserstein divergence between posterior distributions over the latent space . This insight provides us a flexible class of robustness metrics for controlling representations learned by VAEs . 2 . We develop a modification to training algorithms for VAEs to improve robustness of learned representations , using an external selection mechanism for obtaining transformed examples and by enforcing the corresponding representations to be close . As a particular selection mechanism , we adopt attacks in adversarial supervised learning ( Madry et al. , 2017 ) to attacks to the latent representation . Using this novel unsupervised training procedure we learn encoders with adjustable robustness properties and show that these are effective at learning representations that perform well across a variety of downstream tasks . 3 . We show that alternative models proposed in the literature , in particular β-VAE model used for explicitly controlling the learned representations , or Wasserstein Generative Adversarial Networks ( GANs ) can also be interpreted in our framework as variational lower bound maximization . 4 . We show empirically using simulation studies on MNIST , color MNIST and CelebA datasets , that models trained using our method learn representations that provide a higher degree of adversarial robustness even without supervised adversarial training . 2 GENERATIVE MODELS . Modern generative models are samplers p ( X|θ ) for generating realizations from an ideal target distribution π ( X ) , also known as the data distribution . In practice π ( X ) is unknown in the sense that it is hard to formally specify . Instead , we have a representative data set X , samples that are assumed to be conditionally independently drawn from the data distribution π ( X ) of interest . We will refer to the empirical distribution as π̂ ( X ) = 1|X | ∑ ξ∈X δ ( x − ξ ) . The goal is learning a parameter θ∗ such that p ( X|θ = θ∗ ) = ∫ dZp ( X|Z , θ = θ∗ ) p ( Z ) ≈ π̂ ( X ) , thereby also learning a generator . 2.1 FROM VAE TO SMOOTH ENCODERS . The VAE corresponds to the latent variable model p ( X|Z , θ ) p ( Z ) with latent variable Z and observation X . The forward model p ( X|Z = z , θ ) ( the decoder ) is represented using a neural network g with parameters θ , usually the mean of a Gaussian N ( X ; g ( z ; θ ) , vIx ) where v is a scalar observation noise variance and Ix is an identity matrix . The prior is usually a standard Gaussian p ( Z = z ) = N ( z ; 0 , Iz ) . The exact posterior over latent variables p ( Z|X = x , θ ) is approximated by a probability model q ( Z|X = x , η ) with parameters η . 
A popular choice here is a multivariate Gaussian $\mathcal{N}(Z ; \mu(x ; \eta) , \Sigma(x ; \eta))$ , where the mapping $f$ such that $(\mu , \Sigma) = f(x , \eta)$ is chosen to be a neural network ( with parameters $\eta$ to be learned from data ) . We will refer to the pair $f , g$ as an encoder-decoder pair . Under the above assumptions , VAEs are trained by maximizing the following form of the ELBO using stochastic gradient descent ( SGD ) : $$\log p(X = x|\theta) \geq \mathbb{E}_{q(Z|X=x , \eta)}\left[ \log p(X = x|Z , \theta) \right] - D_{KL}\left( q(Z|X = x , \eta) \,\|\, p(Z) \right) \equiv \mathcal{B}_x(\eta , \theta)$$ The gradient of the Kullback-Leibler ( KL ) divergence term above ( see A.1 ) is available in closed form . An unbiased estimate of the gradient of the first term can be obtained by sampling $z$ from $q$ using the reparametrization trick ( Kingma & Welling , 2013 ) , aided by automatic differentiation . 2.2 A PROBLEM WITH THE VAE OBJECTIVE . Under the i.i.d. assumption , where each data point $x^{(n)}$ , for $n = 1 , \dots , N$ , is independently drawn from the model , an equivalent batch ELBO objective can be defined ( see also E.1 ) as $$\mathcal{B}(\eta , \theta) \equiv \frac{1}{N} \sum_{n=1}^{N} \mathcal{B}_{x^{(n)}}(\eta , \theta) = -D_{KL}\left( \hat{\pi}(X)\, q(Z|X , \eta) \,\|\, p(X|Z , \theta)\, p(Z) \right) + \text{const} \quad (1)$$ where the empirical distribution of observed data is denoted as $\hat{\pi}$ . This form makes it clearer that the variational lower bound only measures the distance between the encoder and decoder on the support of the empirical distribution . To see how this locality leads to a fragile representation , we construct a VAE with discrete latents and observations . We let $X \in \{1 , \dots , N_x\}$ and $Z \in \{1 , \dots , N_z\}$ and define the following system of conditional distributions as the decoder and encoder models : $$p(X = i|Z = j) \propto \exp\left( \frac{1}{v}\, \omega(m_j - i/N_x) \right) \qquad q(Z = j|X = i) \propto \exp\left( \frac{1}{\sigma_i}\, \omega(\mu_i - j/N_z) \right)$$ where $\omega(u) = \cos(2\pi u)$ . These distributions can be visualized as heatmaps of probability tables , where $i$ and $j$ are row and column indices , respectively ( Figure 2 ) . This particular von Mises-like parametrization is chosen to avoid boundary effects due to the finite latent and observable spaces . Figure 2 : Example VAE model . ( left ) Heatmap of the encoder distribution ( darker colors indicating higher probability ) $q(Z = j|X = i ; \mu_i , \sigma_i)$ , where each row $i$ is a probability distribution over latents with a mode around $\mu_i$ and spread $\sigma_i$ . ( middle ) Heatmap of the decoder distribution $p(X = i|Z = j ; m_j , v)$ , where each column $j$ is a probability distribution with mode at $m_j$ and spread $v$ . ( right ) The marginal model $p(X = i|m , v) = \sum_{j=1}^{N_z} p(Z = j)\, p(X = i|Z = j , m_j , v)$ depicted as a histogram . The prior $p(Z)$ is taken to be uniform and is not shown . Note that this parametrization emulates a high-capacity network that can model any functional relationship between latent states and observations , while being qualitatively similar to a standard VAE model with conditionally Gaussian decoder and encoder functions . In reality , the true target density is not available ; instead we have a representative sample . To simulate this scenario , we sample a 'dataset' from a discrete target distribution $\pi(X)$ : this is merely a draw from a multinomial distribution , yielding a count vector $s$ with entries $s_i$ giving how many times we observe $x = i$ . The results of such an experiment are depicted in Figure 3 ( a ) ( see caption for details ) .
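The discrete construction above is easy to reproduce numerically . Below is a minimal NumPy sketch that builds the encoder and decoder probability tables exactly as parametrized above ; the table sizes and spread values are arbitrary illustrative choices .

import numpy as np

Nx, Nz = 64, 32
rng = np.random.default_rng(0)
m = rng.uniform(0, 1, Nz)        # decoder modes, one per latent state j
mu = rng.uniform(0, 1, Nx)       # encoder modes, one per observation i
sigma = 0.05 * np.ones(Nx)       # encoder spreads
v = 0.05                         # decoder spread

def omega(u):
    return np.cos(2 * np.pi * u)

i = np.arange(Nx)[:, None]       # rows: observations
j = np.arange(Nz)[None, :]       # columns: latents

# p(X=i|Z=j) ∝ exp(ω(m_j − i/Nx)/v): normalize each column over i
P = np.exp(omega(m[None, :] - i / Nx) / v)
P /= P.sum(axis=0, keepdims=True)

# q(Z=j|X=i) ∝ exp(ω(µ_i − j/Nz)/σ_i): normalize each row over j
Q = np.exp(omega(mu[:, None] - j / Nz) / sigma[:, None])
Q /= Q.sum(axis=1, keepdims=True)

# Marginal model p(X=i) under a uniform prior p(Z=j) = 1/Nz
p_marginal = P.mean(axis=1)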
This picture reveals several important properties of a VAE approximation . 1 . After training , we observe that when $j$ and $j'$ are close , the corresponding conditionals $p(X|Z = j)$ and $p(X|Z = j')$ are close , and hence the corresponding decoder mean parameters $m_j$ and $m_{j'}$ are close ( see the middle panel of Fig. 3 ( a ) , titled P , showing the decoder ) . This smoothness is perhaps surprising at first sight : in this example , we could arbitrarily permute the columns of the decoder and still get the same marginal distribution . Technically speaking , given a uniform prior $p(Z)$ , the marginal likelihood $p(X|\theta)$ is entirely invariant with respect to permutations of the latent state . In fact , if the encoder distribution were not constrained , we could also permute the columns of the encoder to keep the ELBO invariant . In appendix E.2 , we provide an argument for why the choice of a unimodal encoder model and optimization of the variational objective naturally lead to smooth decoder functions . 2 . The encoders found by the VAE , on the other hand , are not smooth at all , despite the fact that the model shows a relatively good fit . This behaviour warns us against judging generative models only by the quality of their samples , obtained by traversing the latent space and generating conditional samples from the decoder . The quality of the decoder is evidently not a proxy for the robustness of the representation . The fragility of representations is inherent in the ELBO objective . For the entire dataset , a batch ELBO that involves the counts $s_i$ can be written as $$\mathrm{ELBO} = -\sum_i \sum_j s_i\, q(Z = j|X = i) \log \frac{s_i\, q(Z = j|X = i)}{p(X = i|Z = j)\, p(Z = j)} \quad (2)$$ The last expression is proportional to the negative KL divergence between two tabular distributions : $s_i\, q(Z = j|X = i)/L$ and $p(X = i|Z = j)\, p(Z = j)$ , where $L = \sum_i s_i$ is the total number of samples . As such , whenever $s_i$ is zero , the contribution of row $i$ of the encoder distribution vanishes and the corresponding parameters $\mu_i$ and $\sigma_i$ do not affect the lower bound . In a sense , the objective does not enforce any structure on the encoder outside the positions of the data points in the training set . The figure shows that the out-of-sample behaviour of the encoder ( i.e. , for $i$ where $\hat{\pi}(X = i) = 0$ ) is entirely initialization dependent ; no learning takes place there . We would therefore also expect the resulting representations to be fragile , in the sense that a small perturbation of an observation can result in a large change in the encoder output .
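The zero-count argument can be verified directly on the tabular model . The following NumPy sketch computes the batch ELBO of equation ( 2 ) , up to the 1/L normalization , and checks that rewriting encoder rows with $s_i = 0$ leaves the objective unchanged ; the random tables and the sparse target below are illustrative stand-ins for the trained model .

import numpy as np

rng = np.random.default_rng(0)
Nx, Nz, L = 64, 32, 500

# Any valid tabular decoder/encoder pair works here; random tables for brevity.
P = rng.dirichlet(np.ones(Nx), size=Nz).T        # p(X=i|Z=j): columns sum to 1
Q = rng.dirichlet(np.ones(Nz), size=Nx)          # q(Z=j|X=i): rows sum to 1

pi = np.zeros(Nx)
pi[: Nx // 4] = 4.0 / Nx                         # target supported on a quarter of the states
s = rng.multinomial(L, pi)                       # observation counts s_i

def batch_elbo(s, Q, P):
    # Eq. (2) up to normalization: -KL( (s_i/L) q(j|i) || p(i|j) p(j) ), uniform p(j).
    joint_q = (s[:, None] / s.sum()) * Q
    joint_p = P / P.shape[1]
    m = joint_q > 0                              # use the 0 log 0 = 0 convention
    return -(joint_q[m] * np.log(joint_q[m] / joint_p[m])).sum()

e1 = batch_elbo(s, Q, P)
Q2 = Q.copy()
Q2[s == 0] = rng.dirichlet(np.ones(Nz), size=int((s == 0).sum()))  # rewrite unseen rows
e2 = batch_elbo(s, Q2, P)
assert np.isclose(e1, e2)   # rows with s_i = 0 do not affect the objective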
This paper analyzes a shortcoming of the VAE objective and proposes a regularization method based on a selection mechanism that creates a fictive data point by explicitly perturbing an observed true data point . This leads to a Wasserstein distance between representations . Experiments on three datasets ( ColorMNIST , MNIST , and CelebA ) show superior adversarial accuracy while maintaining nominal accuracy similar to the VAE .
SP:881f632bbadf0cac11ec1e466f02b26762f67073
On Robustness of Neural Ordinary Differential Equations
1 INTRODUCTION Neural ordinary differential equations ( Chen et al. , 2018 ) form a family of models that approximate nonlinear mappings by using continuous-time ODEs . Due to their desirable properties , such as invertibility and parameter efficiency , neural ODEs have attracted increasing attention recently ( Dupont et al. , 2019 ; Liu et al. , 2019 ) . For example , Grathwohl et al . ( 2018 ) proposed a neural ODE-based generative model—the FFJORD—to solve inverse problems ; Quaglino et al . ( 2019 ) used a higher-order approximation of the states in a neural ODE , and proposed the SNet to accelerate computation . Along with the wider deployment of neural ODEs , robustness issues come to the fore . However , the robustness of neural ODEs is still unclear . In particular , it is unclear how robust neural ODEs are in comparison to the widely-used CNNs , whose robustness properties have been studied extensively . In this work , we present the first systematic study exploring the robustness properties of neural ODEs . To do so , we consider the task of image classification ; we expect that results would be similar for other machine learning tasks such as regression . Neural ODEs are dimension-preserving mappings , but a classification model transforms a high-dimensional input—such as an image—into an output whose dimension is equal to the number of classes . Thus , we consider the neural ODE-based classification network ( ODENet ) whose architecture is shown in Figure 1 . An ODENet consists of three components : a feature extractor ( FE ) composed of convolutional layers that maps an input datum to a multi-channel feature map , a neural ODE that serves as the nonlinear representation mapping ( RM ) , and a fully-connected classifier ( FCC ) that generates a prediction vector from the output of the RM . The robustness of a classification model can be evaluated through the lens of its performance on perturbed images . To comprehensively investigate the robustness of neural ODEs , we perturb original images with commonly-used perturbations , namely , random Gaussian noise ( Szegedy et al. , 2013 ) and harmful adversarial examples ( Goodfellow et al. , 2014 ; Madry et al. , 2017 ) . We conduct experiments in two common settings—training the model only on authentic non-perturbed images and training the model on authentic images as well as the Gaussian perturbed ones . We observe that ODENets are more robust compared to CNN models against all types of perturbations in both settings . We then provide an insightful understanding of such intriguing robustness of neural ODEs by exploiting a certain property of the flow ( Dupont et al. , 2019 ) , namely that integral curves that start at distinct initial states are non-intersecting . The flow of a continuous-time ODE is defined as the family of solutions/paths traversed by the state , starting from different initial points , and an integral curve is a specific solution for a given initial point . The non-intersecting property indicates that an integral curve starting from some point is constrained by the integral curves starting from that point ’ s neighborhood . Thus , in an ODENet , if a correctly classified datum is slightly perturbed , the integral curve associated with its perturbed version would not change too much from the original one . Consequently , the perturbed datum could still be correctly classified . Thus , there exists an intrinsic robustness regularization in ODENets , which is absent from CNNs .
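For concreteness , the following is a minimal PyTorch sketch of this three-part architecture . The layer widths and the ReLU placement inside the dynamics are illustrative choices only ( the exact architectures are listed in the paper's appendix ) ; the fixed-step Euler integration with step size 0.1 on [ 0 , 1 ] and the extra channel carrying the time t follow the experimental setup described in Section 3 .

import torch
import torch.nn as nn

class ConcatConv2d(nn.Module):
    # Convolution whose input is augmented with a constant channel holding t,
    # mirroring the time-concatenation used in the ODE-based RM.
    def __init__(self, ch):
        super().__init__()
        self.conv = nn.Conv2d(ch + 1, ch, kernel_size=3, padding=1)

    def forward(self, t, z):
        tt = torch.full_like(z[:, :1], float(t))
        return self.conv(torch.cat([z, tt], dim=1))

class ODEBlock(nn.Module):
    # Fixed-step Euler integration of dz/dt = f_theta(z, t) on [0, 1].
    def __init__(self, ch, step=0.1):
        super().__init__()
        self.dyn = ConcatConv2d(ch)
        self.step = step

    def forward(self, z):
        n = int(round(1.0 / self.step))
        for k in range(n):
            z = z + self.step * torch.relu(self.dyn(k * self.step, z))
        return z

class ODENet(nn.Module):
    # FE (convolutions) -> RM (neural ODE) -> FCC (linear classifier).
    def __init__(self, in_ch=1, ch=16, n_classes=10):
        super().__init__()
        self.fe = nn.Sequential(
            nn.Conv2d(in_ch, ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU())
        self.rm = ODEBlock(ch)
        self.fcc = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(ch, n_classes))

    def forward(self, x):
        return self.fcc(self.rm(self.fe(x)))

logits = ODENet()(torch.randn(8, 1, 28, 28))   # e.g., a batch of MNIST-sized inputs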
Motivated by this property of the neural ODE flow , we attempt to explore a more robust neural ODE architecture by introducing stronger regularization on the flow . We thus propose the Time-Invariant Steady neural ODE ( TisODE ) . The TisODE removes the time dependence of the dynamics in an ODE and imposes a steady-state constraint on the integral curves . Removing the time dependence of the derivative results in the time-invariant property of the ODE . To wit , given a solution z1 ( t ) , another solution z̃1 ( t ) with an initial state z̃1 ( 0 ) = z1 ( T ′ ) for some T ′ > 0 can be regarded as the −T ′-shifted version of z1 ( t ) . Such a time-invariant property makes bounding the difference between output states convenient . To elaborate , let the output of a neural ODE correspond to the state at time T > 0 . By the time-invariant property , the difference between outputs , ‖z̃1 ( T ) − z1 ( T ) ‖ , equals ‖z1 ( T + T ′ ) − z1 ( T ) ‖ . To control this distance , a steady-state regularization term is introduced into the overall objective to constrain the change of a state after time exceeds T . With the time-invariant property and the steady-state term , we show that the TisODE is even more robust . We do so by evaluating the robustness of TisODE-based classifiers against various types of perturbations and observe that such models are more robust than vanilla ODE-based models . In addition , some other effective architectural solutions have also been proposed recently to improve the robustness of CNNs . For example , Xie et al . ( 2017 ) randomly resize or pad zeros into test images to destroy the specific structure of adversarial perturbations . Besides , the model proposed by Xie et al . ( 2019 ) contains feature denoising filters to remove the feature-level patterns of adversarial examples . We conduct experiments to show that our proposed TisODE can work seamlessly in conjunction with these methods to further boost the robustness of deep models . Thus , the proposed TisODE can be used as a generally applicable and effective component for improving the robustness of deep models . In summary , our contributions are as follows . Firstly , we provide the first systematic empirical study on the robustness of neural ODEs and find that neural ODE-based models are more robust than conventional CNN models . This finding inspires new applications of neural ODEs in improving the robustness of deep models , a problem that concerns many deep learning theorists and practitioners alike . Secondly , we propose the TisODE method , which is simple yet effective in significantly boosting the robustness of neural ODEs . Moreover , the proposed TisODE can also be used in conjunction with other state-of-the-art robust architectures . Thus , TisODE can serve as a drop-in module to improve the robustness of deep models effectively . 2 PRELIMINARIES ON NEURAL ODE It has been shown that a residual block ( He et al. , 2016 ) can be interpreted as a discrete approximation of an ODE by setting the discretization step to one . When the discretization step approaches zero , it yields a family of neural networks , which are called neural ODEs ( Chen et al. , 2018 ) .
Formally , in a neural ODE , the relation between input and output is characterized by the following set of equations : $$\frac{dz(t)}{dt} = f_\theta(z(t) , t) , \quad z(0) = z_{in} , \quad z_{out} = z(T) , \quad (1)$$ where $f_\theta : \mathbb{R}^d \times [0 , \infty) \to \mathbb{R}^d$ denotes the trainable layers that are parameterized by weights $\theta$ and $z : [0 , \infty) \to \mathbb{R}^d$ represents the $d$-dimensional state of the neural ODE . We assume that $f_\theta$ is continuous in $t$ and globally Lipschitz continuous in $z$ . In this case , the input $z_{in}$ of the neural ODE corresponds to the state at $t = 0$ , and the output $z_{out}$ is associated with the state at some $T \in (0 , \infty)$ . Because $f_\theta$ governs how the state changes with respect to time $t$ , we also use $f_\theta$ to denote the dynamics of the neural ODE . Given input $z_{in}$ , the output $z_{out}$ can be computed by solving the ODE in ( 1 ) . If $T$ is fixed , the output $z_{out}$ depends only on the input $z_{in}$ and the dynamics $f_\theta$ , which corresponds to the weighted layers in the neural ODE . Therefore , the neural ODE can be represented as the $d$-dimensional function $\phi_T(\cdot , \cdot)$ of the input $z_{in}$ and the dynamics $f_\theta$ , i.e. , $$z_{out} = z(T) = z(0) + \int_0^T f_\theta(z(t) , t)\, dt = \phi_T(z_{in} , f_\theta) .$$ The terminal time $T$ of the output state $z(T)$ is set to 1 in practice . Several methods have been proposed for training neural ODEs , such as the adjoint sensitivity method ( Chen et al. , 2018 ) , the SNet ( Quaglino et al. , 2019 ) , and the auto-differentiation technique ( Paszke et al. , 2017 ) . In this work , we use the most straightforward technique , i.e. , updating the weights $\theta$ with auto-differentiation in the PyTorch framework . 3 AN EMPIRICAL STUDY ON THE ROBUSTNESS OF ODENETS Robustness of deep models has gained increased attention , as it is imperative that deep models employed in critical applications , such as healthcare , are robust . The robustness of a model is measured by the sensitivity of its predictions to small perturbations of the inputs . In this study , we consider three commonly-used perturbation schemes , namely random Gaussian perturbations , FGSM ( Goodfellow et al. , 2014 ) adversarial examples , and PGD ( Madry et al. , 2017 ) adversarial examples . These perturbation schemes reflect the noise and adversarial robustness properties of the investigated models , respectively . We evaluate robustness via the classification accuracy on perturbed images whose original non-perturbed versions are all correctly classified . For a fair comparison with conventional CNN models , we made sure that the number of parameters of an ODENet is close to that of its counterpart CNN model . Specifically , the ODENet shares the same network architecture with the CNN model for the FE and FCC parts . The only difference is that , for the RM part , the input of the ODE-based RM is concatenated with one more channel which represents the time $t$ , while the RM in a CNN model has a skip connection and serves as a residual block . During the training phase , all the hyperparameters are kept the same , including training epochs , learning rate schedules , and weight decay coefficients . Each model is trained three times with different random seeds , and we report the average performance ( classification accuracy ) together with the standard deviation . 3.1 EXPERIMENTAL SETTINGS Dataset : We conduct experiments to compare the robustness of ODENets with CNN models on three datasets , i.e. , the MNIST ( LeCun et al. , 1998 ) , the SVHN ( Netzer et al.
, 2011 ) , and a subset of the ImageNet dataset ( Deng et al. , 2009 ) . We call the subset ImgNet10 since it is collected from 10 synsets of ImageNet : dog , bird , car , fish , monkey , turtle , lizard , bridge , cow , and crab . We selected 3,000 training images and 300 test images from each synset and resized all images to 128 × 128 . Architectures : On the MNIST dataset , both the ODENet and the CNN model consist of four convolutional layers and one fully-connected layer . The total number of parameters of the two models is around 140k . On the SVHN dataset , the networks are similar to those for MNIST ; we only changed the input channels of the first convolutional layer to three . On the ImgNet10 dataset , there are nine convolutional layers and one fully-connected layer for both the ODENet and the CNN model . The number of parameters is approximately 280k . In practice , the neural ODE can be solved with different numerical solvers such as the Euler method and the Runge-Kutta methods ( Chen et al. , 2018 ) . Here , we use the easily-implemented Euler method in the experiments . To balance the computation and the continuity of the flow , we solve the ODE initial value problem in equation ( 1 ) by the Euler method with step size 0.1 . Our implementation builds on the open-source neural ODE codes ( https://github.com/rtqichen/torchdiffeq ) . Details on the network architectures are included in the Appendix . Training : The experiments are conducted using two settings on each dataset—training models only with original non-perturbed images and training models on original images together with their perturbed versions . In both settings , we added a weight decay term into the training objective to regularize the norm of the weights , since this can help control the model ’ s representation capacity and improve the robustness of a neural network ( Sokolić et al. , 2017 ) . In the second setting , images perturbed with random Gaussian noise are used to fine-tune the models , because augmenting the dataset with small perturbations can possibly improve the robustness of models and synthesizing Gaussian noise does not incur excessive computation time . 3.2 ROBUSTNESS OF ODENETS TRAINED ONLY ON NON-PERTURBED IMAGES The first question we are interested in is how robust ODENets are against perturbations if the model is trained only on original non-perturbed images . We train CNNs and ODENets to perform classification on the three datasets and set the weight decay parameter for all models to 0.0005 . We make sure that both the well-trained ODENets and CNN models have satisfactory performance on original non-perturbed images , i.e. , around 99.5 % for MNIST , 95.0 % for SVHN , and 80.0 % for ImgNet10 . Since Gaussian noise is ubiquitous in modeling image degradation , we first evaluated the robustness of the models in the presence of zero-mean random Gaussian perturbations . It has also been shown that a deep model is vulnerable to harmful adversarial examples , such as the FGSM ( Goodfellow et al. , 2014 ) . We are therefore also interested in how robust ODENets are in the presence of adversarial examples . The standard deviation σ of the Gaussian noise and the l∞-norm ε of the FGSM attack for each dataset are shown in Table 1 . From the results in Table 1 , we observe that the ODENets demonstrate superior robustness compared to CNNs for all types of perturbations . On the MNIST dataset , in the presence of Gaussian
perturbations with a large σ of 100 , the ODENet produces much higher accuracy on perturbed images compared to the CNN model ( 73.2 % vs. 56.4 % ) . For the FGSM-0.3 adversarial examples , the accuracy of the ODENet is around twice as high as that of the CNN model . On the SVHN dataset , ODENets significantly outperform CNN models , e.g. , for the FGSM-5/255 examples , the accuracy of the ODENet is 43.0 % , which is much higher than that of the CNN model ( 13.7 % ) . On ImgNet10 , for both cases of σ = 25 and FGSM-8/255 , the ODENet outperforms CNNs by a large margin of around 9 % . 3.3 ROBUSTNESS OF ODENETS TRAINED ON ORIGINAL IMAGES TOGETHER WITH GAUSSIAN PERTURBATIONS Training a model on original images together with their perturbed versions can improve the robustness of the model . As mentioned previously , Gaussian noise is commonly assumed to be present in real-world images . Synthesizing Gaussian noise is also fast and easy . Thus , we add random Gaussian noise to the original images to generate their perturbed versions . ODENets and CNN models are both trained on original images together with their perturbed versions . The standard deviation of the added Gaussian noise is randomly chosen from { 50 , 75 , 100 } on the MNIST dataset , { 15 , 25 , 35 } on the SVHN dataset , and { 10 , 15 , 25 } on ImgNet10 . All other hyperparameters are kept the same as above . The robustness of the models is evaluated under Gaussian perturbations , FGSM adversarial examples , and PGD ( Madry et al. , 2017 ) adversarial examples ; the latter is a stronger attack than the FGSM . The l∞-norm ε of the PGD attack for each dataset is shown in Table 2 . Based on the results , we observe that ODENets consistently outperform CNN models on all three datasets . On the MNIST dataset , the ODENet outperforms the CNN against all types of perturbations . In particular , for the PGD-0.2 adversarial examples , the accuracy of the ODENet ( 64.7 % ) is much higher than that of the CNN ( 32.9 % ) . Besides , for the PGD-0.3 attack , the CNN is completely misled by the adversarial examples , but the ODENet can still classify perturbed images with an accuracy of 13.0 % . On the SVHN dataset , ODENets also show superior robustness in comparison to CNN models : for all the adversarial examples , ODENets outperform CNN models by a margin of at least 10 percentage points . On the ImgNet10 dataset , the ODENet also performs better than CNN models against all forms of adversarial examples . 3.4 INSIGHTS ON THE ROBUSTNESS OF ODENETS From the results in Sections 3.2 and 3.3 , we find that ODENets are more robust than CNN models . Here , we attempt to provide an intuitive understanding of the robustness of the neural ODE . In an ODENet , given some datum , the FE extracts an informative feature map from the datum . The neural ODE , serving as the RM , takes the feature map as input and performs a nonlinear mapping . In practice , we use weight decay during training , which regularizes the norm of the weights in the FE part , so that the change of the feature map due to a small perturbation of the input can be controlled . We aim to show that , in the neural ODE , a small change in the feature map will not lead to a large deviation from the original output associated with the feature map . Theorem 1 ( ODE integral curves do not intersect ( Coddington & Levinson , 1955 ; Younes , 2010 ; Dupont et al. , 2019 ) ) .
Let z1 ( t ) and z2 ( t ) be two solutions of the ODE in ( 1 ) with different initial conditions , i.e. , z1 ( 0 ) ≠ z2 ( 0 ) . In ( 1 ) , fθ is continuous in t and globally Lipschitz continuous in z . Then , it holds that z1 ( t ) ≠ z2 ( t ) for all t ∈ [ 0 , ∞ ) . To illustrate this theorem , consider a simple 1-dimensional system in which the state is a scalar . As shown in Figure 2 , equation ( 1 ) has a solution z1 ( t ) starting from A1 = ( 0 , z1 ( 0 ) ) , where z1 ( 0 ) is the feature of some datum . Equation ( 1 ) also has another two solutions z2 ( t ) and z3 ( t ) , whose starting points A2 = ( 0 , z2 ( 0 ) ) and A3 = ( 0 , z3 ( 0 ) ) are both close to A1 . Suppose A1 is between A2 and A3 . By Theorem 1 , we know that the integral curve z1 ( t ) is always sandwiched between the integral curves z2 ( t ) and z3 ( t ) . Now , let ε < min { |z2 ( 0 ) − z1 ( 0 ) | , |z3 ( 0 ) − z1 ( 0 ) | } . Consider a solution z̃1 ( t ) of equation ( 1 ) whose integral curve starts from a point Ã1 = ( 0 , z̃1 ( 0 ) ) in the ε-neighborhood of A1 , with |z̃1 ( 0 ) − z1 ( 0 ) | < ε . By Theorem 1 , we know that |z̃1 ( T ) − z1 ( T ) | ≤ |z3 ( T ) − z2 ( T ) | . In other words , if any perturbation smaller than ε is added to the scalar z1 ( 0 ) in A1 , the deviation from the original output z1 ( T ) is bounded by the distance between z2 ( T ) and z3 ( T ) . In contrast , in a CNN model , there is no such bound on the deviation from the original output . Thus , we opine that due to this non-intersecting property , ODENets are intrinsically robust . 4 TISODE : BOOSTING THE ROBUSTNESS OF NEURAL ODES In the previous section , we presented an empirical study on the robustness of ODENets and observed that ODENets are more robust compared to CNN models . In this section , we explore how to boost the robustness of the vanilla neural ODE model further . This motivates the proposal of time-invariant steady neural ODEs ( TisODEs ) . 4.1 TIME-INVARIANT STEADY NEURAL ODES In the neural ODE characterized by equation ( 1 ) , the dynamics fθ ( z ( t ) , t ) depends on both the state z ( t ) at time t and the time t itself . In contrast , if the neural ODE is modified to be time-invariant , the time dependence of the dynamics is removed and the dynamics depends only on the state z . So , we can rewrite the dynamics function as fθ ( z ) , and the neural ODE is characterized as $$\frac{dz(t)}{dt} = f_\theta(z(t)) ; \quad z(0) = z_{in} ; \quad z_{out} = z(T) . \quad (2)$$ Let z1 ( t ) be a solution of ( 2 ) on [ 0 , ∞ ) and let ε > 0 be a small positive value . We define the set $M_1 = \{ (z_1(t) , t) \mid t \in [0 , T] , \ \|z_1(t) - z_1(0)\| \le \epsilon \}$ . This set contains all points on the curve of z1 ( t ) during [ 0 , T ] that are also inside the ε-neighborhood of z1 ( 0 ) . For some element ( z1 ( T ′ ) , T ′ ) ∈ M1 , let z̃1 ( t ) be the solution of ( 2 ) which starts from z̃1 ( 0 ) = z1 ( T ′ ) . Then we have $$\tilde{z}_1(t) = z_1(t + T') \quad (3)$$ for all t in [ 0 , ∞ ) . The property shown in equation ( 3 ) is known as the time-invariant property ; it indicates that the integral curve z̃1 ( t ) is the −T ′ shift of z1 ( t ) ( Figure 3 ) . We can regard z̃1 ( 0 ) as a slightly perturbed version of z1 ( 0 ) , and we are interested in how large the difference between z̃1 ( T ) and z1 ( T ) is . In a robust model , the difference should be small . By equation ( 3 ) , we have ‖z̃1 ( T ) − z1 ( T ) ‖ = ‖z1 ( T + T ′ ) − z1 ( T ) ‖ .
Since T ′ ∈ [ 0 , T ] , the difference between z1 ( T ) and z̃1 ( T ) can be bounded as follows : $$\|\tilde{z}_1(T) - z_1(T)\| = \left\| \int_T^{T+T'} f_\theta(z_1(t))\, dt \right\| \le \left\| \int_T^{T+T'} |f_\theta(z_1(t))|\, dt \right\| \le \left\| \int_T^{2T} |f_\theta(z_1(t))|\, dt \right\| , \quad (4)$$ where all norms are ℓ2 norms and $|f_\theta|$ denotes the element-wise absolute value of the vector-valued function $f_\theta$ . That is to say , the difference between z̃1 ( T ) and z1 ( T ) can be bounded using only information about the curve z1 ( t ) . For any t ′ ∈ [ 0 , T ] and element ( z1 ( t ′ ) , t ′ ) ∈ M1 , consider the integral curve that starts from z1 ( t ′ ) ; the difference between the output state of this curve and z1 ( T ) satisfies inequality ( 4 ) . Therefore , we propose to add an additional term Lss to the loss function when training the time-invariant neural ODE : $$L_{ss} = \sum_{i=1}^{N} \left\| \int_T^{2T} |f_\theta(z_i(t))|\, dt \right\| , \quad (5)$$ where N is the number of samples in the training set and zi ( t ) is the solution whose initial state equals the feature of the i-th sample . The regularization term Lss is termed the steady-state loss . The terminology “ steady state ” is borrowed from the dynamical systems literature : in a stable dynamical system , the states stabilize around a fixed point , known as the steady state , as time tends to infinity . If we can ensure that Lss is small , then , for each sample , the outputs of all the points in Mi will stabilize around zi ( T ) . Consequently , the model is robust . This modification of the neural ODE is dubbed the Time-Invariant Steady neural ODE ( TisODE ) . 4.2 EVALUATING ROBUSTNESS OF TISODE-BASED CLASSIFIERS Here , we conduct experiments to evaluate the robustness of our proposed TisODE and compare TisODE-based models with the vanilla ODENets . We train all models with original non-perturbed images together with their Gaussian perturbed versions . The regularization parameter for the steady-state loss Lss is set to 0.1 . All other hyperparameters are exactly the same as those in Section 3.3 . From the results in Table 3 , we can see that our proposed TisODE-based models are clearly more robust than vanilla ODENets . On the MNIST dataset , when combating FGSM-0.3 attacks , the TisODE-based models outperform vanilla ODENets by more than 4 percentage points . For the FGSM-0.5 adversarial examples , the accuracy of the TisODE-based model is 6 percentage points better . On the SVHN dataset , the TisODE-based models perform better against all forms of adversarial examples . On the ImgNet10 dataset , the TisODE-based models also outperform vanilla ODE-based models on all types of perturbations ; in the presence of FGSM and PGD-5/255 examples , the accuracies are enhanced by more than 2 percentage points . 4.3 TISODE - A GENERALLY APPLICABLE DROP-IN TECHNIQUE FOR IMPROVING THE ROBUSTNESS OF DEEP NETWORKS In view of the excellent robustness of the TisODE , we claim that the proposed TisODE can be used as a general drop-in module for improving the robustness of deep networks . We support this claim by showing that the TisODE can work in conjunction with other state-of-the-art techniques and further boost the models ’ robustness . These techniques include the feature denoising ( FDn ) method ( Xie et al. , 2019 ) and the input randomization ( IRd ) method ( Xie et al. , 2017 ) . We conduct experiments on the MNIST and SVHN datasets . All models are trained with original non-perturbed images together with their Gaussian perturbed versions .
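To make the training objective concrete , the following is a minimal PyTorch sketch of a TisODE block under the same Euler discretization used in the experiments ( step 0.1 , T = 1 ) . The forward pass integrates equation ( 2 ) on [ 0 , T ] , then continues on [ T , 2T ] to accumulate the steady-state penalty of equation ( 5 ) ; the convolutional dynamics below are an illustrative stand-in for the architectures in the appendix .

import torch
import torch.nn as nn

class TisODEBlock(nn.Module):
    # Time-invariant dynamics dz/dt = f_theta(z), integrated with a fixed-step
    # Euler scheme. Returns z(T) together with the steady-state penalty
    # sum_i || int_T^{2T} |f_theta(z_i(t))| dt ||_2 over the batch, eq. (5).
    def __init__(self, ch, T=1.0, step=0.1):
        super().__init__()
        self.f = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
                               nn.Conv2d(ch, ch, 3, padding=1))
        self.T, self.step = T, step

    def forward(self, z):
        n = int(round(self.T / self.step))
        for _ in range(n):                       # integrate on [0, T]
            z = z + self.step * self.f(z)
        z_out = z
        acc = torch.zeros_like(z)
        for _ in range(n):                       # continue on [T, 2T]
            dz = self.f(z)
            acc = acc + self.step * dz.abs()     # element-wise |f(z(t))| dt
            z = z + self.step * dz
        loss_ss = acc.flatten(1).norm(dim=1).sum()
        return z_out, loss_ss

# Training sketch, with the coefficient 0.1 from Section 4.2:
# feats = fe(x); z_T, loss_ss = tisode(feats)
# loss = F.cross_entropy(fcc(z_T), y) + 0.1 * loss_ss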
We show that models using the FDn/IRd techniques become much more robust when equipped with the TisODE . In the FDn experiments , the dot-product non-local denoising layer ( Xie et al. , 2019 ) is added at the head of the fully-connected classifier . From Table 4 , we observe that both FDn and IRd can effectively improve the adversarial robustness of vanilla CNN models ( CNN-FDn , CNN-IRd ) . Furthermore , combining our proposed TisODE with FDn or IRd ( TisODE-FDn , TisODE-IRd ) , the adversarial robustness of the resultant model is significantly enhanced . For example , on the MNIST dataset , the additional use of our TisODE increases the accuracies on the PGD-0.3 examples by at least 10 percentage points for both FDn ( 8.2 % to 28.2 % ) and IRd ( 55.5 % to 66.0 % ) . However , on both the MNIST and SVHN datasets , while the IRd technique improves the robustness against adversarial examples , its performance on random Gaussian noise is worse . With the help of the TisODE , this degradation in the robustness against random Gaussian noise can be effectively ameliorated . 5 RELATED WORKS In this section , we briefly review related works on the neural ODE and works concerning improving the robustness of deep neural networks . Neural ODE : The neural ODE ( Chen et al. , 2018 ) method models the input and output as two states of a continuous-time dynamical system by approximating the dynamics of this system with trainable layers . Before the proposal of the neural ODE , the idea of modeling nonlinear mappings using continuous-time dynamical systems was proposed in Weinan ( 2017 ) . Lu et al . ( 2017 ) also showed that several popular network architectures can be interpreted as discretizations of a continuous-time ODE . For example , the ResNet ( He et al. , 2016 ) and PolyNet ( Zhang et al. , 2017 ) are associated with the Euler scheme and the FractalNet ( Larsson et al. , 2016 ) is related to the Runge-Kutta scheme . In contrast to these discretization models , neural ODEs are endowed with an intrinsic invertibility property , which yields a family of invertible models for solving inverse problems ( Ardizzone et al. , 2018 ) , such as the FFJORD ( Grathwohl et al. , 2018 ) . Recently , many researchers have studied neural ODEs from the perspectives of optimization techniques , approximation capabilities , and generalization . Concerning the optimization of neural ODEs , auto-differentiation techniques can effectively train ODENets , but the training procedure is inefficient in terms of computation and memory . To address this problem , Chen et al . ( 2018 ) proposed to compute gradients using the adjoint sensitivity method ( Pontryagin , 2018 ) , in which there is no need to store any intermediate quantities of the forward pass . Also , in Quaglino et al . ( 2019 ) , the authors proposed the SNet , which accelerates neural ODEs by expressing their dynamics as truncated series of Legendre polynomials . Concerning approximation capability , Dupont et al . ( 2019 ) pointed out limitations in the approximation capabilities of neural ODEs because they preserve the input topology ; the authors proposed an augmented neural ODE which increases the dimension of the states by concatenating zeros , so that complex mappings can be learned with a simple flow . The most relevant work to ours concerns strategies to improve the generalization of neural ODEs . In Liu et al .
( 2019 ) , the authors proposed the neural stochastic differential equation ( SDE ) , which injects random noise into the dynamics function , and showed that the generalization and robustness of vanilla neural ODEs could thereby be improved . However , our improvement on neural ODEs is explored from a different perspective , by introducing constraints on the flow . We empirically found that our proposal and the neural SDE can work in tandem to further boost the robustness of neural ODEs . Robustness Improvement : A straightforward way of improving the robustness of a model is to smooth the loss surface by controlling the spectral norm of the Jacobian matrix of the loss function ( Sokolić et al. , 2017 ) . In terms of adversarial examples ( Carlini & Wagner , 2017 ; Chen et al. , 2017 ) , researchers have proposed adversarial training strategies ( Madry et al. , 2017 ; Elsayed et al. , 2018 ; Tramèr et al. , 2017 ) in which the model is fine-tuned with adversarial examples generated in real time . However , generating adversarial examples is not computationally efficient , and there exists a trade-off between adversarial robustness and performance on original non-perturbed images ( Yan et al. , 2018 ; Tsipras et al. , 2018 ) . In Wang et al . ( 2018a ) , the authors model the ResNet as a transport equation , in which adversarial vulnerability can be interpreted as irregularity of the decision boundary ; consequently , a diffusion term is introduced to enhance the robustness of the neural nets . Besides , there are also works that propose novel architectural defense mechanisms against adversarial examples . For example , Xie et al . ( 2017 ) utilized random resizing and random padding to destroy the specific structure of adversarial perturbations ; Wang et al . ( 2018b ) and Wang et al . ( 2018c ) improved the robustness of neural networks by replacing the output layers with novel interpolating functions ; in Xie et al . ( 2019 ) , the authors designed a feature denoising filter that can remove the perturbation ’ s pattern from feature maps . In this work , we explore the intrinsic robustness of a specific novel architecture ( the neural ODE ) , and show that the proposed TisODE can improve the robustness of deep networks and can also work in tandem with these state-of-the-art methods ( Xie et al. , 2017 ; 2019 ) to achieve further improvements . 6 CONCLUSION In this paper , we first empirically study the robustness of neural ODEs . Our studies reveal that neural ODE-based models are superior in terms of robustness compared to CNN models . We then explore how to further boost the robustness of vanilla neural ODEs and propose the TisODE . Finally , we show that the proposed TisODE outperforms the vanilla neural ODE and can also work in conjunction with other state-of-the-art techniques to further improve the robustness of deep networks . Thus , the TisODE method is an effective drop-in module for building robust deep models . ACKNOWLEDGEMENT This work is funded by a Singapore National Research Foundation ( NRF ) Fellowship ( R-263-000D02-281 ) . Jiashi Feng was partially supported by NUS IDS R-263-000-C67-646 , ECRA R-263-000-C87-133 , MOE Tier-II R-263-000-D17-112 and AI.SG R-263-000-D97-490 . REFERENCES Lynton Ardizzone , Jakob Kruse , Sebastian Wirkert , Daniel Rahner , Eric W Pellegrini , Ralf S Klessen , Lena Maier-Hein , Carsten Rother , and Ullrich Köthe . Analyzing inverse problems with invertible neural networks . arXiv preprint arXiv:1808.04730 , 2018 .
Nicholas Carlini and David Wagner . Towards evaluating the robustness of neural networks . In 2017 IEEE Symposium on Security and Privacy ( SP ) , pp . 39–57 . IEEE , 2017 . Pin-Yu Chen , Huan Zhang , Yash Sharma , Jinfeng Yi , and Cho-Jui Hsieh . ZOO : Zeroth order optimization based black-box attacks to deep neural networks without training substitute models . In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security , pp . 15–26 . ACM , 2017 . Tian Qi Chen , Yulia Rubanova , Jesse Bettencourt , and David K Duvenaud . Neural ordinary differential equations . In Advances in Neural Information Processing Systems , pp . 6571–6583 , 2018 . Earl A Coddington and Norman Levinson . Theory of ordinary differential equations . Tata McGraw-Hill Education , 1955 . Jia Deng , Wei Dong , Richard Socher , Li-Jia Li , Kai Li , and Li Fei-Fei . ImageNet : A large-scale hierarchical image database . In 2009 IEEE Conference on Computer Vision and Pattern Recognition , pp . 248–255 . IEEE , 2009 . Emilien Dupont , Arnaud Doucet , and Yee Whye Teh . Augmented neural ODEs . arXiv preprint arXiv:1904.01681 , 2019 . Gamaleldin F Elsayed , Shreya Shankar , Brian Cheung , Nicolas Papernot , Alex Kurakin , Ian Goodfellow , and Jascha Sohl-Dickstein . Adversarial examples that fool both human and computer vision . arXiv preprint arXiv:1802.08195 , 10 , 2018 . Ian J Goodfellow , Jonathon Shlens , and Christian Szegedy . Explaining and harnessing adversarial examples . arXiv preprint arXiv:1412.6572 , 2014 . Will Grathwohl , Ricky TQ Chen , Jesse Bettencourt , Ilya Sutskever , and David Duvenaud . FFJORD : Free-form continuous dynamics for scalable reversible generative models . arXiv preprint arXiv:1810.01367 , 2018 . Kaiming He , Xiangyu Zhang , Shaoqing Ren , and Jian Sun . Deep residual learning for image recognition . In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition , pp . 770–778 , 2016 . Ralph Howard . The Gronwall inequality . Lecture notes , 1998 . Gustav Larsson , Michael Maire , and Gregory Shakhnarovich . FractalNet : Ultra-deep neural networks without residuals . arXiv preprint arXiv:1605.07648 , 2016 . Yann LeCun , Léon Bottou , Yoshua Bengio , Patrick Haffner , et al . Gradient-based learning applied to document recognition . Proceedings of the IEEE , 86 ( 11 ) :2278–2324 , 1998 . Xuanqing Liu , Si Si , Qin Cao , Sanjiv Kumar , and Cho-Jui Hsieh . Neural SDE : Stabilizing neural ODE networks with stochastic noise . arXiv preprint arXiv:1906.02355 , 2019 . Yiping Lu , Aoxiao Zhong , Quanzheng Li , and Bin Dong . Beyond finite layer neural networks : Bridging deep architectures and numerical differential equations . arXiv preprint arXiv:1710.10121 , 2017 . Aleksander Madry , Aleksandar Makelov , Ludwig Schmidt , Dimitris Tsipras , and Adrian Vladu . Towards deep learning models resistant to adversarial attacks . arXiv preprint arXiv:1706.06083 , 2017 . Yuval Netzer , Tao Wang , Adam Coates , Alessandro Bissacco , Bo Wu , and Andrew Y Ng . Reading digits in natural images with unsupervised feature learning . 2011 . Adam Paszke , Sam Gross , Soumith Chintala , Gregory Chanan , Edward Yang , Zachary DeVito , Zeming Lin , Alban Desmaison , Luca Antiga , and Adam Lerer . Automatic differentiation in PyTorch . 2017 . Lev Semenovich Pontryagin . Mathematical theory of optimal processes . Routledge , 2018 . Alessio Quaglino , Marco Gallieri , Jonathan Masci , and Jan Koutník . Accelerating neural ODEs with spectral elements .
arXiv preprint arXiv:1906.07038 , 2019 . Jure Sokolić , Raja Giryes , Guillermo Sapiro , and Miguel RD Rodrigues . Robust large margin deep neural networks . IEEE Transactions on Signal Processing , 65 ( 16 ) :4265–4280 , 2017 . Christian Szegedy , Wojciech Zaremba , Ilya Sutskever , Joan Bruna , Dumitru Erhan , Ian Goodfellow , and Rob Fergus . Intriguing properties of neural networks . arXiv preprint arXiv:1312.6199 , 2013 . Florian Tramèr , Alexey Kurakin , Nicolas Papernot , Ian Goodfellow , Dan Boneh , and Patrick McDaniel . Ensemble adversarial training : Attacks and defenses . arXiv preprint arXiv:1705.07204 , 2017 . Dimitris Tsipras , Shibani Santurkar , Logan Engstrom , Alexander Turner , and Aleksander Madry . Robustness may be at odds with accuracy . arXiv preprint arXiv:1805.12152 , 2018 . B. Wang , B. Yuan , Z. Shi , and S. Osher . ResNets ensemble via the Feynman-Kac formalism to improve natural and robust accuracies . arXiv e-prints , art . arXiv:1811.10745 , Nov 2018a . Bao Wang , Alex T Lin , Zuoqiang Shi , Wei Zhu , Penghang Yin , Andrea L Bertozzi , and Stanley J Osher . Adversarial defense via data dependent activation function and total variation minimization . arXiv preprint arXiv:1809.08516 , 2018b . Bao Wang , Xiyang Luo , Zhen Li , Wei Zhu , Zuoqiang Shi , and Stanley Osher . Deep neural nets with interpolating function as output activation . In Advances in Neural Information Processing Systems , pp . 743–753 , 2018c . E Weinan . A proposal on machine learning via dynamical systems . Communications in Mathematics and Statistics , 5 ( 1 ) :1–11 , 2017 . Cihang Xie , Jianyu Wang , Zhishuai Zhang , Zhou Ren , and Alan Yuille . Mitigating adversarial effects through randomization . arXiv preprint arXiv:1711.01991 , 2017 . Cihang Xie , Yuxin Wu , Laurens van der Maaten , Alan L Yuille , and Kaiming He . Feature denoising for improving adversarial robustness . In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition , pp . 501–509 , 2019 . Ziang Yan , Yiwen Guo , and Changshui Zhang . Deep Defense : Training DNNs with improved adversarial robustness . In Advances in Neural Information Processing Systems , pp . 419–428 , 2018 . Laurent Younes . Shapes and diffeomorphisms , volume 171 . Springer , 2010 . Xingcheng Zhang , Zhizhong Li , Chen Change Loy , and Dahua Lin . PolyNet : A pursuit of structural diversity in very deep networks . In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition , pp . 718–726 , 2017 . 7 APPENDIX 7.1 NETWORKS USED ON THE MNIST , THE SVHN , AND THE IMGNET10 DATASETS In Table 5 , the four arguments of the Conv layer represent the input channel , output channel , kernel size , and stride . The two arguments of the Linear layer represent the input dimension and the output dimension of this fully-connected layer . In the network on ImgNet10 , the BasicBlock refers to the standard architecture in He et al . ( 2016 ) ; the three arguments of the BasicBlock represent the input channel , output channel and stride of the Conv layers inside the block . Note that we replace the BatchNorm layers in the BasicBlocks with GroupNorm layers to guarantee that the dynamics of each datum is independent of the other data in the same mini-batch . 7.2 THE CONSTRUCTION OF IMGNET10 DATASET 7.3 GRONWALL ’ S INEQUALITY We formally state Gronwall ’ s Inequality here , following the version in Howard ( 1998 ) . Theorem 2 . Let U ⊂ Rd be an open set .
Let f : U × [ 0 , T ] → Rd be a continuous function , and let z1 , z2 : [ 0 , T ] → U satisfy the initial value problems : $$\frac{dz_1(t)}{dt} = f(z_1(t) , t) , \quad z_1(0) = x_1 ; \qquad \frac{dz_2(t)}{dt} = f(z_2(t) , t) , \quad z_2(0) = x_2 .$$ Assume there is a constant C ≥ 0 such that , for all t ∈ [ 0 , T ] , $$\|f(z_2(t) , t) - f(z_1(t) , t)\| \le C\, \|z_2(t) - z_1(t)\| .$$ Then , for any t ∈ [ 0 , T ] , $$\|z_1(t) - z_2(t)\| \le \|x_2 - x_1\| \cdot e^{Ct} .$$ 7.4 MORE EXPERIMENTAL RESULTS 7.4.1 COMPARISON IN THE SETTING OF ADVERSARIAL TRAINING We implement adversarial training of the models on the MNIST dataset ; the adversarial examples for training are generated in real time via the FGSM method ( ε = 0.3 ) during each epoch ( Madry et al. , 2017 ) . The results of the adversarially trained models are shown in Table 7 . We can observe that the neural ODE-based models are consistently more robust than CNN models . The proposed TisODE also outperforms the vanilla neural ODE . 7.4.2 EXPERIMENTS ON THE CIFAR10 DATASET We conduct experiments on CIFAR10 to compare the robustness of CNN and neural ODE-based models . We train all the models only with original non-perturbed images and evaluate the robustness of the models against random Gaussian noise and FGSM adversarial attacks . The results are shown in Table 8 . We can observe that the ODENet is more robust than the CNN model in terms of both random noise and the FGSM attack . Besides , our proposal , the TisODE , can improve the robustness of the vanilla neural ODE . Here , we control the number of parameters to be the same for all kinds of models , using a small network consisting of five convolutional layers and one linear layer . 7.4.3 AN EXTENSION OF THE COMPARISON BETWEEN CNNS AND ODENETS Here , we compare CNN and neural ODE-based models by controlling both the number of parameters and the number of function evaluations . We conduct experiments on the MNIST dataset , and all the models are trained only with original non-perturbed images . For the neural ODE-based models , the time range is set from 0 to 1 ; we use the Euler method with step size 0.05 , so the number of function evaluations is 1/0.05 = 20 . For the CNN models ( specifically ResNet ) , we repeatedly concatenate the residual block 20 times , with these 20 blocks sharing the same weights . Our experiments show that , in this condition , the neural ODE-based models still outperform the CNN models ( FGSM-0.15 : 87.5 % vs. 81.9 % , FGSM-0.3 : 53.4 % vs. 49.7 % , PGD-0.2 : 11.8 % vs. 4.8 % ) .
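For reference , the FGSM examples used in the adversarial training above can be generated with a few lines of PyTorch . The sketch below assumes a classifier with inputs scaled to [ 0 , 1 ] ; the function name and interface are our own illustrative choices .

import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    # Fast Gradient Sign Method (Goodfellow et al., 2014):
    # x_adv = x + eps * sign(grad_x loss(model(x), y)), clipped to [0, 1].
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

# Real-time adversarial training step with eps = 0.3, as in Table 7:
# x_adv = fgsm(model, x, y, 0.3)
# loss = F.cross_entropy(model(torch.cat([x, x_adv])), torch.cat([y, y]))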
The paper is concerned with neural ODE-based networks, specifically their robustness. While ODEs are a classical subject in mathematics with many applications in the sciences and beyond, neural ODEs are a recently proposed family of models for nonlinear mappings in the context of machine learning systems. There they show promise and are an active field of research.
SP:12411220098647e9bc26769218f2f64d82867493
On Robustness of Neural Ordinary Differential Equations
1 INTRODUCTION Neural ordinary differential equations ( Chen et al. , 2018 ) form a family of models that approximate nonlinear mappings by using continuous-time ODEs . Due to their desirable properties , such as invertibility and parameter efficiency , neural ODEs have attracted increasing attention recently ( Dupont et al. , 2019 ; Liu et al. , 2019 ) . For example , Grathwohl et al . ( 2018 ) proposed a neural ODE-based generative model—the FFJORD—to solve inverse problems ; Quaglino et al . ( 2019 ) used a higher-order approximation of the states in a neural ODE , and proposed the SNet to accelerate computation . Along with the wider deployment of neural ODEs , robustness issues come to the fore . However , the robustness of neural ODEs is still yet unclear . In particular , it is unclear how robust neural ODEs are in comparison to the widely-used CNNs . Robustness properties of CNNs have been studied extensively . In this work , we present the first systematic study on exploring the robustness properties of neural ODEs . To do so , we consider the task of image classification . We expect that results would be similar for other machine learning tasks such as regression . Neural ODEs are dimension-preserving mappings , but a classification model transforms a high-dimensional input—such as an image—into an output whose dimension is equal to the number of classes . Thus , we consider the neural ODE-based classification network ( ODENet ) whose architecture is shown in Figure 1 . An ODENet consists of three components : the feature extractor ( FE ) consists of convolutional layers which maps an input datum to a multi-channel feature map , a neural ODE that serves as the nonlinear representation mapping ( RM ) , and the fully-connected classifier ( FCC ) that generates a prediction vector based on the output of the RM . The robustness of a classification model can be evaluated through the lens of its performance on perturbed images . To comprehensively investigate the robustness of neural ODEs , we perturb original images with commonly-used perturbations , namely , random Gaussian noise ( Szegedy et al. , 2013 ) and harmful adversarial examples ( Goodfellow et al. , 2014 ; Madry et al. , 2017 ) . We conduct experiments in two common settings—training the model only on authentic non-perturbed images and training the model on authentic images as well as the Gaussian perturbed ones . We observe that ODENets are more robust compared to CNN models against all types of perturbations in both settings . We then provide an insightful understanding of such intriguing robustness of neural ODEs by exploiting a certain property of the flow ( Dupont et al. , 2019 ) , namely that integral curves that start at distinct initial states are nonintersecting . The flow of a continuous-time ODE is defined as the family of solutions/paths traversed by the state , starting from different initial points , and an integral curve is a specific solution for a given initial point . The non-intersecting property indicates that an integral curve starting from some point is constrained by the integral curves starting from that point ’ s neighborhood . Thus , in an ODENet , if a correctly classified datum is slightly perturbed , the integral curve associated to its perturbed version would not change too much from the original one . Consequently , the perturbed datum could still be correctly classified . Thus , there exists intrinsic robustness regularization in ODENets , which is absent from CNNs . 
Motivated by this property of the neural ODE flow , we attempt to explore a more robust neural ODE architecture by introducing stronger regularization on the flow . We thus propose a Time-Invariant Steady neural ODE ( TisODE ) . The TisODE removes the time dependence of the dynamics in an ODE and imposes a steady-state constraint on the integral curves . Removing the time dependence of the derivative results in the time-invariant property of the ODE . To wit , given a solution z1 ( t ) , another solution z̃1 ( t ) , with an initial state z̃1 ( 0 ) = z1 ( T ′ ) for some T ′ > 0 , can be regarded as the −T ′- shift version of z1 ( t ) . Such a time-invariant property would make bounding the difference between output states convenient . To elaborate , let the output of a neural ODE correspond to states at time T > 0 . By the time-invariant property , the difference between outputs , ‖z̃1 ( T ) − z1 ( T ) ‖ , equals to ‖z1 ( T + T ′ ) − z1 ( T ) ‖ . To control this distance , a steady-state regularization term is introduced to the overall objective to constrain the change of a state after time exceeds T . With the time-invariant property and the steady-state term , we show that TisODE even is more robust . We do so by evaluating the robustness of TisODE-based classifiers against various types of perturbations and observe that such models are more robust than vanilla ODE-based models . In addition , some other effective architectural solutions have also been recently proposed to improve the robustness of CNNs . For example , Xie et al . ( 2017 ) randomly resizes or pads zeros into test images to destroy the specific structure of adversarial perturbations . Besides , the model proposed by Xie et al . ( 2019 ) contains feature denoising filters to remove the feature-level patterns of adversarial examples . We conduct experiments to show that our proposed TisODE can work seamlessly and in conjunction with these methods to further boost the robustness of deep models . Thus , the proposed TisODE can be used as a generally applicable and effective component for improving the robustness of deep models . In summary , our contributions are as follows . Firstly , we are the first to provide a systematic empirical study on the robustness of neural ODEs and find that the neural ODE-based models are more robust compared to conventional CNN models . This finding inspires new applications of neural ODEs in improving robustness of deep models , a problem that concerns many deep learning theorists and practitioners alike . Secondly , we propose the TisODE method , which is simple yet effective in significantly boosting the robustness of neural ODEs . Moreover , the proposed TisODE can also be used in conjunction with other state-of-the-art robust architectures . Thus , TisODE can serve as a drop-in module to improve the robustness of deep models effectively . 2 PRELIMINARIES ON NEURAL ODE It has been shown that a residual block ( He et al. , 2016 ) can be interpreted as the discrete approximation of an ODE by setting the discretization step to be one . When the discretization step approaches zero , it yields a family of neural networks , which are called neural ODEs ( Chen et al. , 2018 ) . 
Formally , in a neural ODE , the relation between input and output is characterized by the following set of equations : dz ( t ) dt = fθ ( z ( t ) , t ) , z ( 0 ) = zin , zout = z ( T ) , ( 1 ) where fθ : Rd × [ 0 , ∞ ) → Rd denotes the trainable layers that are parameterized by weights θ and z : [ 0 , ∞ ) → Rd represents the d-dimensional state of the neural ODE . We assume that fθ is continuous in t and globally Lipschitz continuous in z . In this case , the input zin of the neural ODE corresponds to the state at t = 0 , and the output zout is associated to the state at some T ∈ ( 0 , ∞ ) . Because fθ governs how the state changes with respect to time t , we also use fθ to denote the dynamics of the neural ODE . Given input zin , the output zout can be computed by solving the ODE in ( 1 ) . If T is fixed , the output zout only depends on the input zin and the dynamics fθ , which also corresponds to the weighted layers in the neural ODE . Therefore , the neural ODE can be represented as the d-dimensional function φT ( · , · ) of the input zin and the dynamics fθ , i.e. , zout = z ( T ) = z ( 0 ) + ∫ T 0 fθ ( z ( t ) , t ) dt = φT ( zin , fθ ) . The terminal time T of the output state z ( T ) is set to be 1 in practice . Several methods have been proposed for training neural ODEs , such as the adjoint sensitivity method ( Chen et al. , 2018 ) , SNet ( Quaglino et al. , 2019 ) , and the auto-differentiation technique ( Paszke et al. , 2017 ) . In this work , we use the most straightforward technique , i.e. , updating the weights θ with the autodifferentiation technique in the PyTorch framework . 3 AN EMPIRICAL STUDY ON THE ROBUSTNESS OF ODENETS Robustness of deep models has gained increased attention , as it is imperative that deep models employed in critical applications , such as healthcare , are robust . The robustness of a model is measured by the sensitivity of the prediction with respect to small perturbations on the inputs . In this study , we consider three commonly-used perturbation schemes , namely random Gaussian perturbations , FGSM ( Goodfellow et al. , 2014 ) adversarial examples , and PGD ( Madry et al. , 2017 ) adversarial examples . These perturbation schemes reflect noise and adversarial robustness properties of the investigated models respectively . We evaluate the robustness via the classification accuracies on perturbed images , in which the original non-perturbed versions of these images are all correctly classified . For a fair comparison with conventional CNN models , we made sure that the number of parameters of an ODENet is close to that of its counterpart CNN model . Specifically , the ODENet shares the same network architecture with the CNN model for the FE and FCC parts . The only difference is that , for the RM part , the input of the ODE-based RM is concatenated with one more channel which represents the time t , while the RM in a CNN model has a skip connection and serves as a residual block . During the training phase , all the hyperparameters are kept the same , including training epochs , learning rate schedules , and weight decay coefficients . Each model is trained three times with different random seeds , and we report the average performance ( classification accuracy ) together with the standard deviation . 3.1 EXPERIMENTAL SETTINGS Dataset : We conduct experiments to compare the robustness of ODENets with CNN models on three datasets , i.e. , the MNIST ( LeCun et al. , 1998 ) , the SVHN ( Netzer et al. 
, 2011 ) , and a subset of the ImageNet dataset ( Deng et al. , 2009 ) . We call the subset ImgNet10 since it is collected from 10 synsets of ImageNet : dog , bird , car , fish , monkey , turtle , lizard , bridge , cow , and crab . We selected 3,000 training images and 300 test images from each synset and resized all images to 128 × 128 . Architectures : On the MNIST dataset , both the ODENet and the CNN model consist of four convolutional layers and one fully-connected layer . The total number of parameters of the two models is around 140k . On the SVHN dataset , the networks are similar to those for the MNIST ; we only changed the input channels of the first convolutional layer to three . On the ImgNet10 dataset , there are nine convolutional layers and one fully-connected layer for both the ODENet and the CNN model . The number of parameters is approximately 280k . In practice , the neural ODE can be solved with different numerical solvers such as the Euler method and the Runge-Kutta methods ( Chen et al. , 2018 ) . Here , we use the easily-implemented Euler method in the experiments . To balance the computation and the continuity of the flow , we solve the ODE initial value problem in equation ( 1 ) by the Euler method with step size 0.1 . Our implementation builds on the open-source neural ODE code ( https://github.com/rtqichen/torchdiffeq ) . Details on the network architectures are included in the Appendix . Training : The experiments are conducted using two settings on each dataset : training models only with original non-perturbed images , and training models on original images together with their perturbed versions . In both settings , we added a weight decay term into the training objective to regularize the norm of the weights , since this can help control the model ’ s representation capacity and improve the robustness of a neural network ( Sokolić et al. , 2017 ) . In the second setting , images perturbed with random Gaussian noise are used to fine-tune the models , because augmenting the dataset with small perturbations can possibly improve the robustness of models and synthesizing Gaussian noise does not incur excessive computation time . 3.2 ROBUSTNESS OF ODENETS TRAINED ONLY ON NON-PERTURBED IMAGES The first question we are interested in is how robust ODENets are against perturbations if the model is only trained on original non-perturbed images . We train CNNs and ODENets to perform classification on the three datasets and set the weight decay parameter for all models to be 0.0005 . We make sure that both the well-trained ODENets and CNN models have satisfactory performance on original non-perturbed images , i.e. , around 99.5 % for MNIST , 95.0 % for the SVHN , and 80.0 % for ImgNet10 . Since Gaussian noise is ubiquitous in modeling image degradation , we first evaluated the robustness of the models in the presence of zero-mean random Gaussian perturbations . It has also been shown that a deep model is vulnerable to harmful adversarial examples , such as the FGSM ( Goodfellow et al. , 2014 ) . We are also interested in how robust ODENets are in the presence of adversarial examples . The standard deviation σ of the Gaussian noise and the l∞-norm ε of the FGSM attack for each dataset are shown in Table 1 . From the results in Table 1 , we observe that the ODENets demonstrate superior robustness compared to CNNs for all types of perturbations . On the MNIST dataset , in the presence of Gaussian
perturbations with a large σ of 100 , the ODENet produces much higher accuracy on perturbed images compared to the CNN model ( 73.2 % vs. 56.4 % ) . For the FGSM-0.3 adversarial examples , the accuracy of the ODENet is around twice as high as that of the CNN model . On the SVHN dataset , ODENets significantly outperform CNN models , e.g. , for the FGSM-5/255 examples , the accuracy of the ODENet is 43.0 % , which is much higher than that of the CNN model ( 13.7 % ) . On the ImgNet10 , for both cases of σ = 25 and FGSM-8/255 , the ODENet outperforms CNNs by a large margin of around 9 % . 3.3 ROBUSTNESS OF ODENETS TRAINED ON ORIGINAL IMAGES TOGETHER WITH GAUSSIAN PERTURBATIONS Training a model on original images together with their perturbed versions can improve the robustness of the model . As mentioned previously , Gaussian noise is commonly assumed to be present in real-world images . Synthesizing Gaussian noise is also fast and easy . Thus , we add random Gaussian noise into the original images to generate their perturbed versions . ODENets and CNN models are both trained on original images together with their perturbed versions . The standard deviation of the added Gaussian noise is randomly chosen from { 50 , 75 , 100 } on the MNIST dataset , { 15 , 25 , 35 } on the SVHN dataset , and { 10 , 15 , 25 } on the ImgNet10 . All other hyperparameters are kept the same as above . The robustness of the models is evaluated under Gaussian perturbations , FGSM adversarial examples , and PGD ( Madry et al. , 2017 ) adversarial examples . The latter is a stronger attack than the FGSM . The l∞-norm ε of the PGD attack for each dataset is shown in Table 2 . Based on the results , we observe that ODENets consistently outperform CNN models on all three datasets . On the MNIST dataset , the ODENet outperforms the CNN against all types of perturbations . In particular , for the PGD-0.2 adversarial examples , the accuracy of the ODENet ( 64.7 % ) is much higher than that of the CNN ( 32.9 % ) . Besides , for the PGD-0.3 attack , the CNN is completely misled by the adversarial examples , but the ODENet can still classify perturbed images with an accuracy of 13.0 % . On the SVHN dataset , ODENets also show superior robustness in comparison to CNN models . For all the adversarial examples , ODENets outperform CNN models by a margin of at least 10 percentage points . On the ImgNet10 dataset , the ODENet also performs better than CNN models against all forms of adversarial examples . 3.4 INSIGHTS ON THE ROBUSTNESS OF ODENETS From the results in Sections 3.2 and 3.3 , we find that ODENets are more robust compared to CNN models . Here , we attempt to provide an intuitive understanding of the robustness of the neural ODE . In an ODENet , given some datum , the FE extracts an informative feature map from the datum . The neural ODE , serving as the RM , takes as input the feature map and performs a nonlinear mapping . In practice , we use the weight decay technique during training , which regularizes the norm of the weights in the FE part , so that the change of the feature map in response to a small perturbation on the input can be controlled . We aim to show that , in the neural ODE , a small change on the feature map will not lead to a large deviation from the original output associated with the feature map . Theorem 1 ( ODE integral curves do not intersect ( Coddington & Levinson , 1955 ; Younes , 2010 ; Dupont et al. , 2019 ) ) .
Let z1 ( t ) and z2 ( t ) be two solutions of the ODE in ( 1 ) with different initial conditions , i.e. , z1 ( 0 ) ≠ z2 ( 0 ) . In ( 1 ) , fθ is continuous in t and globally Lipschitz continuous in z . Then , it holds that z1 ( t ) ≠ z2 ( t ) for all t ∈ [ 0 , ∞ ) . To illustrate this theorem , consider a simple one-dimensional system in which the state is a scalar . As shown in Figure 2 , equation ( 1 ) has a solution z1 ( t ) starting from A1 = ( 0 , z1 ( 0 ) ) , where z1 ( 0 ) is the feature of some datum . Equation ( 1 ) also has another two solutions z2 ( t ) and z3 ( t ) , whose starting points are A2 = ( 0 , z2 ( 0 ) ) and A3 = ( 0 , z3 ( 0 ) ) , both of which are close to A1 . Suppose A1 is between A2 and A3 . By Theorem 1 , we know that the integral curve z1 ( t ) is always sandwiched between the integral curves z2 ( t ) and z3 ( t ) . Now , let ε < min { |z2 ( 0 ) − z1 ( 0 ) | , |z3 ( 0 ) − z1 ( 0 ) | } . Consider a solution z̃1 ( t ) of equation ( 1 ) . The integral curve z̃1 ( t ) starts from a point Ã1 = ( 0 , z̃1 ( 0 ) ) . The point Ã1 is in the ε-neighborhood of A1 with |z̃1 ( 0 ) − z1 ( 0 ) | < ε . By Theorem 1 , we know that |z̃1 ( T ) − z1 ( T ) | ≤ |z3 ( T ) − z2 ( T ) | . In other words , if any perturbation smaller than ε is added to the scalar z1 ( 0 ) in A1 , the deviation from the original output z1 ( T ) is bounded by the distance between z2 ( T ) and z3 ( T ) . In contrast , in a CNN model , there is no such bound on the deviation from the original output . Thus , we opine that due to this non-intersecting property , ODENets are intrinsically robust . 4 TISODE : BOOSTING THE ROBUSTNESS OF NEURAL ODES In the previous section , we presented an empirical study on the robustness of ODENets and observed that ODENets are more robust compared to CNN models . In this section , we explore how to boost the robustness of the vanilla neural ODE model further . This motivates the proposal of time-invariant steady neural ODEs ( TisODEs ) . 4.1 TIME-INVARIANT STEADY NEURAL ODES In the neural ODE characterized by equation ( 1 ) , the dynamics fθ ( z ( t ) , t ) depends on both the state z ( t ) at time t and the time t itself . In contrast , if the neural ODE is modified to be time-invariant , the time dependence of the dynamics is removed . Consequently , the dynamics depends only on the state z . So , we can rewrite the dynamics function as fθ ( z ) , and the neural ODE is characterized as $\frac{dz(t)}{dt} = f_\theta(z(t))$ ; $z(0) = z_{\mathrm{in}}$ ; $z_{\mathrm{out}} = z(T)$ . ( 2 ) Let z1 ( t ) be a solution of ( 2 ) on [ 0 , ∞ ) and ε > 0 be a small positive value . We define the set M1 = { ( z1 ( t ) , t ) | t ∈ [ 0 , T ] , ‖z1 ( t ) − z1 ( 0 ) ‖ ≤ ε } . This set contains all points on the curve of z1 ( t ) during [ 0 , T ] that are also inside the ε-neighborhood of z1 ( 0 ) . For some element ( z1 ( T ′ ) , T ′ ) ∈ M1 , let z̃1 ( t ) be the solution of ( 2 ) which starts from z̃1 ( 0 ) = z1 ( T ′ ) . Then we have z̃1 ( t ) = z1 ( t + T ′ ) ( 3 ) for all t in [ 0 , ∞ ) . The property shown in equation ( 3 ) is known as the time-invariant property . It indicates that the integral curve z̃1 ( t ) is the −T ′ shift of z1 ( t ) ( Figure 3 ) . We can regard z̃1 ( 0 ) as a slightly perturbed version of z1 ( 0 ) , and we are interested in how large the difference between z̃1 ( T ) and z1 ( T ) is . In a robust model , the difference should be small . By equation ( 3 ) , we have ‖z̃1 ( T ) − z1 ( T ) ‖ = ‖z1 ( T + T ′ ) − z1 ( T ) ‖ .
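As a short added sanity check ( ours , not in the original text ) , the time-invariant property ( 3 ) used in the last step follows directly from the autonomy of the dynamics in ( 2 ) :

```latex
% Define \tilde{z}_1(t) := z_1(t + T'). Differentiating and applying (2),
\frac{d\tilde{z}_1(t)}{dt} = \frac{dz_1(t + T')}{dt}
                           = f_\theta\big(z_1(t + T')\big)
                           = f_\theta\big(\tilde{z}_1(t)\big),
\qquad \tilde{z}_1(0) = z_1(T').
% Hence \tilde{z}_1 solves (2) with initial state z_1(T'); since f_\theta is
% globally Lipschitz, solutions are unique, and equation (3) follows.
```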
Since T ′ ∈ [ 0 , T ] , the difference between z1 ( T ) and z̃1 ( T ) can be bounded as follows : $\|\tilde{z}_1(T) - z_1(T)\| = \left\| \int_T^{T+T'} f_\theta(z_1(t))\, dt \right\| \le \left\| \int_T^{T+T'} |f_\theta(z_1(t))|\, dt \right\| \le \left\| \int_T^{2T} |f_\theta(z_1(t))|\, dt \right\|$ , ( 4 ) where all norms are ℓ2 norms and $|f_\theta|$ denotes the element-wise absolute value of the vector-valued function fθ . That is to say , the difference between z̃1 ( T ) and z1 ( T ) can be bounded using only the information of the curve z1 ( t ) . For any t′ ∈ [ 0 , T ] and element ( z1 ( t′ ) , t′ ) ∈ M1 , consider the integral curve that starts from z1 ( t′ ) . The difference between the output state of this curve and z1 ( T ) satisfies inequality ( 4 ) . Therefore , we propose to add an additional term Lss to the loss function when training the time-invariant neural ODE : $L_{ss} = \sum_{i=1}^{N} \left\| \int_T^{2T} |f_\theta(z_i(t))|\, dt \right\|$ , ( 5 ) where N is the number of samples in the training set and zi ( t ) is the solution whose initial state equals the feature of the ith sample . The regularization term Lss is termed the steady-state loss . This terminology “ steady state ” is borrowed from the dynamical systems literature . In a stable dynamical system , the states stabilize around a fixed point , known as the steady state , as time tends to infinity . If we can ensure that Lss is small , then for each sample , the outputs of all the points in Mi will stabilize around zi ( T ) . Consequently , the model is robust . This modification of the neural ODE is dubbed the Time-invariant steady neural ODE . 4.2 EVALUATING ROBUSTNESS OF TISODE-BASED CLASSIFIERS Here , we conduct experiments to evaluate the robustness of our proposed TisODE , and compare TisODE-based models with the vanilla ODENets . We train all models with original non-perturbed images together with their Gaussian perturbed versions . The regularization parameter for the steady-state loss Lss is set to be 0.1 . All other hyperparameters are exactly the same as those in Section 3.3 . From the results in Table 3 , we can see that our proposed TisODE-based models are clearly more robust compared to vanilla ODENets . On the MNIST dataset , when combating FGSM-0.3 attacks , the TisODE-based models outperform vanilla ODENets by more than 4 percentage points . For the FGSM-0.5 adversarial examples , the accuracy of the TisODE-based model is 6 percentage points better . On the SVHN dataset , the TisODE-based models perform better in terms of all forms of adversarial examples . On the ImgNet10 dataset , the TisODE-based models also outperform vanilla ODE-based models on all types of perturbations . In the presence of FGSM and PGD-5/255 examples , the accuracies are enhanced by more than 2 percentage points . 4.3 TISODE - A GENERALLY APPLICABLE DROP-IN TECHNIQUE FOR IMPROVING THE ROBUSTNESS OF DEEP NETWORKS In view of the excellent robustness of the TisODE , we claim that the proposed TisODE can be used as a general drop-in module for improving the robustness of deep networks . We support this claim by showing that the TisODE can work in conjunction with other state-of-the-art techniques and further boost the models ’ robustness . These techniques include the feature denoising ( FDn ) method ( Xie et al. , 2019 ) and the input randomization ( IRd ) method ( Xie et al. , 2017 ) . We conduct experiments on the MNIST and SVHN datasets . All models are trained with original non-perturbed images together with their Gaussian perturbed versions .
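As an implementation aside before the combination results : the steady-state loss of equation ( 5 ) can be estimated by simply continuing the Euler integration past the terminal time T . The sketch below is our own , assuming an autonomous `dynamics` module and the same step size as in Section 3.1 :

```python
import torch

def steady_state_loss(dynamics, z_T, T=1.0, step=0.1):
    """Euler estimate of sum_i || int_T^{2T} |f_theta(z_i(t))| dt ||_2 (eq. (5))."""
    z, acc = z_T, torch.zeros_like(z_T)
    for _ in range(int(T / step)):
        dz = dynamics(z)               # autonomous dynamics f_theta(z)
        acc = acc + step * dz.abs()    # accumulate the element-wise |f_theta|
        z = z + step * dz              # continue the flow on [T, 2T]
    return acc.flatten(1).norm(dim=1).sum()
```

In training , this term would be added to the task loss with the regularization weight 0.1 quoted above .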
We show that models using the FDn/IRd techniques become much more robust when equipped with the TisODE . In the FDn experiments , the dot-product nonlocal denoising layer ( Xie et al. , 2019 ) is added to the head of the fully-connected classifier . From Table 4 , we observe that both FDn and IRd can effectively improve the adversarial robustness of vanilla CNN models ( CNN-FDn , CNN-IRd ) . Furthermore , by combining our proposed TisODE with FDn or IRd ( TisODE-FDn , TisODE-IRd ) , the adversarial robustness of the resultant model is significantly enhanced . For example , on the MNIST dataset , the additional use of our TisODE increases the accuracies on the PGD-0.3 examples by at least 10 percentage points for both FDn ( 8.2 % to 28.2 % ) and IRd ( 55.5 % to 66.0 % ) . However , on both the MNIST and SVHN datasets , the IRd technique improves the robustness against adversarial examples , but its performance is worse on random Gaussian noise . With the help of the TisODE , this degradation in the robustness against random Gaussian noise can be effectively ameliorated . 5 RELATED WORKS In this section , we briefly review related works on the neural ODE and works concerned with improving the robustness of deep neural networks . Neural ODE : The neural ODE ( Chen et al. , 2018 ) method models the input and output as two states of a continuous-time dynamical system by approximating the dynamics of this system with trainable layers . Before the proposal of the neural ODE , the idea of modeling nonlinear mappings using continuous-time dynamical systems was proposed in Weinan ( 2017 ) . Lu et al . ( 2017 ) also showed that several popular network architectures can be interpreted as the discretization of a continuous-time ODE . For example , the ResNet ( He et al. , 2016 ) and PolyNet ( Zhang et al. , 2017 ) are associated with the Euler scheme and the FractalNet ( Larsson et al. , 2016 ) is related to the Runge-Kutta scheme . In contrast to these discretization models , neural ODEs are endowed with an intrinsic invertibility property , which yields a family of invertible models for solving inverse problems ( Ardizzone et al. , 2018 ) , such as the FFJORD ( Grathwohl et al. , 2018 ) . Recently , many researchers have conducted studies on neural ODEs from the perspectives of optimization techniques , approximation capabilities , and generalization . Concerning the optimization of neural ODEs , auto-differentiation techniques can effectively train ODENets , but the training procedure is computationally and memory inefficient . To address this problem , Chen et al . ( 2018 ) proposed to compute gradients using the adjoint sensitivity method ( Pontryagin , 2018 ) , in which there is no need to store any intermediate quantities of the forward pass . Also , in Quaglino et al . ( 2019 ) , the authors proposed the SNet , which accelerates neural ODEs by expressing their dynamics as truncated series of Legendre polynomials . Concerning the approximation capability , Dupont et al . ( 2019 ) pointed out the limitations in the approximation capabilities of neural ODEs due to the preservation of the input topology . The authors proposed an augmented neural ODE which increases the dimension of the states by concatenating zeros so that complex mappings can be learned with a simple flow . The most relevant work to ours concerns strategies to improve the generalization of neural ODEs . In Liu et al .
( 2019 ) , the authors proposed the neural stochastic differential equation ( SDE ) by injecting random noise into the dynamics function and showed that the generalization and robustness of vanilla neural ODEs could be improved . However , our improvement on neural ODEs is explored from a different perspective , by introducing constraints on the flow . We empirically found that our proposal and the neural SDE can work in tandem to further boost the robustness of neural ODEs . Robustness Improvement : A straightforward way of improving the robustness of a model is to smooth the loss surface by controlling the spectral norm of the Jacobian matrix of the loss function ( Sokolić et al. , 2017 ) . In terms of adversarial examples ( Carlini & Wagner , 2017 ; Chen et al. , 2017 ) , researchers have proposed adversarial training strategies ( Madry et al. , 2017 ; Elsayed et al. , 2018 ; Tramèr et al. , 2017 ) in which the model is fine-tuned with adversarial examples generated in real-time . However , generating adversarial examples is not computationally efficient , and there exists a trade-off between adversarial robustness and the performance on original non-perturbed images ( Yan et al. , 2018 ; Tsipras et al. , 2018 ) . In Wang et al . ( 2018a ) , the authors model the ResNet as a transport equation , in which the adversarial vulnerability can be interpreted as the irregularity of the decision boundary . Consequently , a diffusion term is introduced to enhance the robustness of the neural nets . Besides , there are also some works that propose novel architectural defense mechanisms against adversarial examples . For example , Xie et al . ( 2017 ) utilized random resizing and random padding to destroy the specific structure of adversarial perturbations ; Wang et al . ( 2018b ) and Wang et al . ( 2018c ) improved the robustness of neural networks by replacing the output layers with novel interpolating functions ; in Xie et al . ( 2019 ) , the authors designed a feature denoising filter that can remove the perturbation ’ s pattern from feature maps . In this work , we explore the intrinsic robustness of a specific novel architecture ( the neural ODE ) , and show that the proposed TisODE can improve the robustness of deep networks and can also work in tandem with these state-of-the-art methods ( Xie et al. , 2017 ; 2019 ) to achieve further improvements . 6 CONCLUSION In this paper , we first empirically study the robustness of neural ODEs . Our studies reveal that neural ODE-based models are superior in terms of robustness compared to CNN models . We then explore how to further boost the robustness of vanilla neural ODEs and propose the TisODE . Finally , we show that the proposed TisODE outperforms the vanilla neural ODE and can also work in conjunction with other state-of-the-art techniques to further improve the robustness of deep networks . Thus , the TisODE method is an effective drop-in module for building robust deep models . ACKNOWLEDGEMENT This work is funded by a Singapore National Research Foundation ( NRF ) Fellowship ( R-263-000D02-281 ) . Jiashi Feng was partially supported by NUS IDS R-263-000-C67-646 , ECRA R-263-000-C87-133 , MOE Tier-II R-263-000-D17-112 and AI.SG R-263-000-D97-490 . REFERENCES Lynton Ardizzone , Jakob Kruse , Sebastian Wirkert , Daniel Rahner , Eric W Pellegrini , Ralf S Klessen , Lena Maier-Hein , Carsten Rother , and Ullrich Köthe . Analyzing inverse problems with invertible neural networks . arXiv preprint arXiv:1808.04730 , 2018 .
Nicholas Carlini and David Wagner . Towards evaluating the robustness of neural networks . In 2017 IEEE Symposium on Security and Privacy ( SP ) , pp . 39–57 . IEEE , 2017 . Pin-Yu Chen , Huan Zhang , Yash Sharma , Jinfeng Yi , and Cho-Jui Hsieh . Zoo : Zeroth order optimization based black-box attacks to deep neural networks without training substitute models . In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security , pp . 15–26 . ACM , 2017 . Tian Qi Chen , Yulia Rubanova , Jesse Bettencourt , and David K Duvenaud . Neural ordinary differential equations . In Advances in neural information processing systems , pp . 6571–6583 , 2018 . Earl A Coddington and Norman Levinson . Theory of ordinary differential equations . Tata McGraw-Hill Education , 1955 . Jia Deng , Wei Dong , Richard Socher , Li-Jia Li , Kai Li , and Li Fei-Fei . Imagenet : A large-scale hierarchical image database . In 2009 IEEE conference on computer vision and pattern recognition , pp . 248–255 . IEEE , 2009 . Emilien Dupont , Arnaud Doucet , and Yee Whye Teh . Augmented neural odes . arXiv preprint arXiv:1904.01681 , 2019 . Gamaleldin F Elsayed , Shreya Shankar , Brian Cheung , Nicolas Papernot , Alex Kurakin , Ian Goodfellow , and Jascha Sohl-Dickstein . Adversarial examples that fool both human and computer vision . arXiv preprint arXiv:1802.08195 , 10 , 2018 . Ian J Goodfellow , Jonathon Shlens , and Christian Szegedy . Explaining and harnessing adversarial examples . arXiv preprint arXiv:1412.6572 , 2014 . Will Grathwohl , Ricky TQ Chen , Jesse Bettencourt , Ilya Sutskever , and David Duvenaud . Ffjord : Free-form continuous dynamics for scalable reversible generative models . arXiv preprint arXiv:1810.01367 , 2018 . Kaiming He , Xiangyu Zhang , Shaoqing Ren , and Jian Sun . Deep residual learning for image recognition . In Proceedings of the IEEE conference on computer vision and pattern recognition , pp . 770–778 , 2016 . Ralph Howard . The Gronwall inequality . Lecture notes , 1998 . Gustav Larsson , Michael Maire , and Gregory Shakhnarovich . Fractalnet : Ultra-deep neural networks without residuals . arXiv preprint arXiv:1605.07648 , 2016 . Yann LeCun , Léon Bottou , Yoshua Bengio , Patrick Haffner , et al . Gradient-based learning applied to document recognition . Proceedings of the IEEE , 86 ( 11 ) :2278–2324 , 1998 . Xuanqing Liu , Si Si , Qin Cao , Sanjiv Kumar , and Cho-Jui Hsieh . Neural sde : Stabilizing neural ode networks with stochastic noise . arXiv preprint arXiv:1906.02355 , 2019 . Yiping Lu , Aoxiao Zhong , Quanzheng Li , and Bin Dong . Beyond finite layer neural networks : Bridging deep architectures and numerical differential equations . arXiv preprint arXiv:1710.10121 , 2017 . Aleksander Madry , Aleksandar Makelov , Ludwig Schmidt , Dimitris Tsipras , and Adrian Vladu . Towards deep learning models resistant to adversarial attacks . arXiv preprint arXiv:1706.06083 , 2017 . Yuval Netzer , Tao Wang , Adam Coates , Alessandro Bissacco , Bo Wu , and Andrew Y Ng . Reading digits in natural images with unsupervised feature learning . 2011 . Adam Paszke , Sam Gross , Soumith Chintala , Gregory Chanan , Edward Yang , Zachary DeVito , Zeming Lin , Alban Desmaison , Luca Antiga , and Adam Lerer . Automatic differentiation in pytorch . 2017 . Lev Semenovich Pontryagin . Mathematical theory of optimal processes . Routledge , 2018 . Alessio Quaglino , Marco Gallieri , Jonathan Masci , and Jan Koutník . Accelerating neural odes with spectral elements .
arXiv preprint arXiv:1906.07038 , 2019 . Jure Sokolić , Raja Giryes , Guillermo Sapiro , and Miguel RD Rodrigues . Robust large margin deep neural networks . IEEE Transactions on Signal Processing , 65 ( 16 ) :4265–4280 , 2017 . Christian Szegedy , Wojciech Zaremba , Ilya Sutskever , Joan Bruna , Dumitru Erhan , Ian Goodfellow , and Rob Fergus . Intriguing properties of neural networks . arXiv preprint arXiv:1312.6199 , 2013 . Florian Tramèr , Alexey Kurakin , Nicolas Papernot , Ian Goodfellow , Dan Boneh , and Patrick McDaniel . Ensemble adversarial training : Attacks and defenses . arXiv preprint arXiv:1705.07204 , 2017 . Dimitris Tsipras , Shibani Santurkar , Logan Engstrom , Alexander Turner , and Aleksander Madry . Robustness may be at odds with accuracy . arXiv preprint arXiv:1805.12152 , 2018 . B. Wang , B. Yuan , Z. Shi , and S. Osher . ResNets ensemble via the Feynman-Kac formalism to improve natural and robust accuracies . arXiv preprint arXiv:1811.10745 , 2018a . Bao Wang , Alex T Lin , Zuoqiang Shi , Wei Zhu , Penghang Yin , Andrea L Bertozzi , and Stanley J Osher . Adversarial defense via data dependent activation function and total variation minimization . arXiv preprint arXiv:1809.08516 , 2018b . Bao Wang , Xiyang Luo , Zhen Li , Wei Zhu , Zuoqiang Shi , and Stanley Osher . Deep neural nets with interpolating function as output activation . In Advances in Neural Information Processing Systems , pp . 743–753 , 2018c . E Weinan . A proposal on machine learning via dynamical systems . Communications in Mathematics and Statistics , 5 ( 1 ) :1–11 , 2017 . Cihang Xie , Jianyu Wang , Zhishuai Zhang , Zhou Ren , and Alan Yuille . Mitigating adversarial effects through randomization . arXiv preprint arXiv:1711.01991 , 2017 . Cihang Xie , Yuxin Wu , Laurens van der Maaten , Alan L Yuille , and Kaiming He . Feature denoising for improving adversarial robustness . In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition , pp . 501–509 , 2019 . Ziang Yan , Yiwen Guo , and Changshui Zhang . Deep defense : Training dnns with improved adversarial robustness . In Advances in Neural Information Processing Systems , pp . 419–428 , 2018 . Laurent Younes . Shapes and diffeomorphisms , volume 171 . Springer , 2010 . Xingcheng Zhang , Zhizhong Li , Chen Change Loy , and Dahua Lin . Polynet : A pursuit of structural diversity in very deep networks . In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition , pp . 718–726 , 2017 . 7 APPENDIX 7.1 NETWORKS USED ON THE MNIST , THE SVHN , AND THE IMGNET10 DATASETS In Table 5 , the four arguments of the Conv layer represent the input channel , output channel , kernel size , and stride . The two arguments of the Linear layer represent the input dimension and the output dimension of this fully-connected layer . In the network on the ImgNet10 , the BasicBlock refers to the standard architecture in ( He et al. , 2016 ) ; the three arguments of the BasicBlock represent the input channel , output channel , and the stride of the Conv layers inside the block . Note that we replace the BatchNorm layers in the BasicBlocks with GroupNorm layers to guarantee that the dynamics of each datum is independent of the other data in the same mini-batch . 7.2 THE CONSTRUCTION OF IMGNET10 DATASET 7.3 GRONWALL ’ S INEQUALITY We formally state Gronwall ’ s Inequality here , following the version in ( Howard , 1998 ) . Theorem 2 . Let U ⊂ Rd be an open set .
Let f : U × [ 0 , T ] → Rd be a continuous function and let z1 , z2 : [ 0 , T ] → U satisfy the initial value problems : $\frac{dz_1(t)}{dt} = f(z_1(t), t)$ , $z_1(0) = x_1$ ; $\frac{dz_2(t)}{dt} = f(z_2(t), t)$ , $z_2(0) = x_2$ . Assume there is a constant C ≥ 0 such that , for all t ∈ [ 0 , T ] , ‖f ( z2 ( t ) , t ) − f ( z1 ( t ) , t ) ‖ ≤ C‖z2 ( t ) − z1 ( t ) ‖ . Then , for any t ∈ [ 0 , T ] , ‖z1 ( t ) − z2 ( t ) ‖ ≤ ‖x2 − x1‖ · eCt . 7.4 MORE EXPERIMENTAL RESULTS 7.4.1 COMPARISON IN THE SETTING OF ADVERSARIAL TRAINING We implement the adversarial training of the models on the MNIST dataset , and the adversarial examples for training are generated in real-time via the FGSM method ( ε = 0.3 ) during each epoch ( Madry et al. , 2017 ) . The results of the adversarially trained models are shown in Table 7 . We can observe that the neural ODE-based models are consistently more robust than CNN models . The proposed TisODE also outperforms the vanilla neural ODE . 7.4.2 EXPERIMENTS ON THE CIFAR10 DATASET We conduct experiments on CIFAR10 to compare the robustness of CNN and neural ODE-based models . We train all the models only with original non-perturbed images and evaluate the robustness of the models against random Gaussian noise and FGSM adversarial attacks . The results are shown in Table 8 . We can observe that the ODENet is more robust than the CNN model in terms of both the random noise and the FGSM attack . Besides , our proposal , the TisODE , can improve the robustness of the vanilla neural ODE . Here , we control the number of parameters to be the same for all kinds of models . We use a small network , which consists of five convolutional layers and one linear layer . 7.4.3 AN EXTENSION ON THE COMPARISON BETWEEN CNNS AND ODENETS Here , we compare CNN and neural ODE-based models by controlling both the number of parameters and the number of function evaluations . We conduct experiments on the MNIST dataset , and all the models are trained only with original non-perturbed images . For the neural ODE-based models , the time range is set from 0 to 1 . We use the Euler method , and the step size is set to be 0.05 . Thus the number of evaluations is 1/0.05 = 20 . For the CNN models ( specifically ResNet ) , we repeatedly concatenate the residual block 20 times , and these 20 blocks share the same weights . Our experiments show that , in this condition , the neural ODE-based models still outperform the CNN models ( FGSM-0.15 : 87.5 % vs. 81.9 % , FGSM-0.3 : 53.4 % vs. 49.7 % , PGD-0.2 : 11.8 % vs. 4.8 % ) .
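For completeness , here is a minimal PyTorch sketch of the FGSM attack ( Goodfellow et al. , 2014 ) used in the adversarial training above ; the loss function and the [ 0 , 1 ] pixel range are our assumptions , not specifics from the paper :

```python
import torch

def fgsm(model, loss_fn, x, y, eps):
    """One-step FGSM: move each input by +/- eps along the loss-gradient sign."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```

Adversarial training then regenerates such examples ( e.g. , eps = 0.3 on MNIST ) in each epoch and mixes them into the training batches .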
This paper investigates the robustness of neural ordinary differential equations (ODEs) against corrupted and adversarial examples. The crux of the analysis is based on the separation property of ODE integral curves. The insights from the empirical robustness evaluation show that controlling the difference between neighboring integral curves can improve a neural ODE's robustness. In general, neural ODEs have been a hot research topic in recent years, and a paper advancing knowledge in this area about understanding their various characteristics is certainly welcome. The paper is well motivated and clearly written. One aspect that originally confused me a little is the different effects of getting rid of the dependency on the time t and adding the steady-state regularization. It would be nice to elucidate which part contributes more. Furthermore, to compare the robustness of the new approach with CNNs, the input data consists of original images and their Gaussian-noise-perturbed samples. Since the paper already involves evaluation using adversarial examples, it would make the paper much stronger to show that when training both the new approach and the CNN with adversarial training, the proposed regularization can still lead to better robustness.
SP:12411220098647e9bc26769218f2f64d82867493
Granger Causal Structure Reconstruction from Heterogeneous Multivariate Time Series
Granger causal structure reconstruction is an emerging topic that can uncover the causal relationships behind multivariate time series data . In many real-world systems , it is common to encounter a large amount of multivariate time series data collected from heterogeneous individuals with shared commonalities ; however , there are ongoing concerns regarding its applicability in such large-scale complex scenarios , presenting both challenges and opportunities for Granger causal reconstruction . To bridge this gap , we propose a Granger cAusal StructurE Reconstruction ( GASER ) framework for inductive Granger causality learning and common causal structure detection on heterogeneous multivariate time series . In particular , we address the problem through a novel attention mechanism , called prototypical Granger causal attention . Extensive experiments , as well as an online A/B test on an E-commerce advertising platform , demonstrate the superior performance of GASER . 1 INTRODUCTION . Broadly , machine learning tasks are either predictive or descriptive in nature , often addressed by black-box methods ( Guo et al. , 2018 ) . With the power of uncovering relationships behind the data and providing explanatory analyses , causality inference has drawn increasing attention in many fields , e.g . marketing , economics , and neuroscience ( Pearl , 2000 ; Peters et al. , 2017 ) . Since the cause generally precedes its effects , a notion known as temporal precedence ( Eichler , 2013 ) , an increasing number of studies have recently focused on causal discovery from time series data . They are commonly based on the concept of Granger causality ( Granger , 1969 ; 1980 ) to investigate the causal relationship with quantification measures . In many real-world systems , it is common to encounter a large amount of multivariate time series ( MTS ) data collected from different individuals with shared commonalities , which we define as heterogeneous multivariate time series . The underlying causal structures of such data often vary ( Zhang et al. , 2017 ; Huang et al. , 2019 ) . For example , in the financial market , the underlying causal drivers of stock prices are often heterogeneous across stocks in different sectors . Similar phenomena are also observed in the sales of different products in E-commerce . In this situation , most existing methods have to train separate models for the MTS of each individual , which suffer from over-fitting , especially given limited training samples . Although some works have been proposed to solve this problem ( Zhang et al. , 2017 ; Huang et al. , 2019 ) , they lack the inductive capability to do inference for unseen samples and fall short of fully exploiting the shared causal information among the heterogeneous data which often exists in practice . For instance , the causal structures of products belonging to the same categories are usually similar . Such shared information presents opportunities for causal reconstruction to alleviate overfitting and to do inductive reasoning . However , it is also challenging to detect common and specific causal structures simultaneously . In this paper , we propose a Granger cAusal StructurE Reconstruction ( GASER ) framework for inductive Granger causality learning and common causal structure detection on heterogeneous multivariate time series data . Our approach builds on the idea of quantifying the contributions of each variable series to the prediction of the target variable via a novel prototypical Granger causal attention mechanism .
In order to ensure that the attention captures Granger causality , we first design an attention mechanism based on the Granger causal attribution of the target series and then perform prototype learning that generates both shared and specific prototypes to improve the model ’ s robustness . Extensive experiments demonstrate the superior causal structure reconstruction and prediction performance of GASER . In summary , our specific contributions are as follows : • A novel framework that inductively reconstructs Granger causal structures and uncovers common structures among heterogeneous multivariate time series . • A prototypical Granger causal attention mechanism that summarizes variable-wise contributions towards prediction and generates prototypes representing common causal structures . • Extensive experiments on real-world , benchmark and synthetic datasets , as well as an online A/B test on an E-commerce advertising platform , that demonstrate superior performance on causal discovery and prediction performance comparable to state-of-the-art methods . 2 GASER . In this section , we formally define the problem , introduce the architecture of GASER , and present the prototypical Granger causal attention along with the final objective function . 2.1 PROBLEM DEFINITION . Assume we have a set of heterogeneous multivariate time series from N individuals , i.e. , $\mathcal{X} = \{X_i\}_{i=1}^N$ , with each consisting of S time series of length T , denoted as $X_i = (x_i^1, x_i^2, \ldots, x_i^S)^\top \in \mathbb{R}^{S \times T}$ , where $x_i^s = (x_{i,1}^s, x_{i,2}^s, \ldots, x_{i,T}^s)^\top \in \mathbb{R}^T$ represents the s-th time series of individual i , and one of them is taken as the target series $y_i$ . We aim to train a model that ( 1 ) reconstructs the Granger causal structure among variables for each individual ; ( 2 ) generates K common structures among all the N individuals , each structure represented by a prototype $p_k \in \mathbb{R}^S$ , k = 1 , ... , K ; and ( 3 ) learns a nonlinear mapping to predict the next value of the target variable series for each individual , i.e. , $\hat{y}_{i,T+1} = F(X_i)$ . 2.2 NETWORK ARCHITECTURE . Our GASER framework consists of two parts : a set of parallel encoders , each predicting the target given the past observations , and an attention mechanism that generates prototypical Granger causal attention vectors to quantify variable-wise contributions towards the prediction . Figure 1 illustrates the overall framework of GASER . As illustrated in Figure 1 ( a ) , for an input multivariate time series $X_i$ , the encoder specific to the s-th variable projects the time series $x_i^s$ into a sequence of hidden states , denoted as $h_{i,t}^s = H_s(x_{i,t}^s, h_{i,t-1}^s)$ . The encoder could be any RNN model , such as an LSTM ( Hochreiter & Schmidhuber , 1997 ) or GRU ( Cho et al. , 2014 ) . The last hidden states , $\{h_{i,T}^s\}_{s=1}^S$ , are used as the hidden embeddings of each variable . Then the predicted next value of the target variable conditioned on the historical data of variable s , denoted as $\hat{y}_{i,T+1}^s$ , can be computed by $\hat{y}_{i,T+1}^s = f_s(h_{i,T}^s)$ , where $f_s(\cdot)$ denotes the MLP network specific to variable s. Then we obtain the prediction $\hat{y}_{i,T+1}$ by aggregating the predicted values specific to the variables through the prototypical Granger causal attention described below . 2.3 PROTOTYPICAL GRANGER CAUSAL ATTENTION . We propose a novel attention mechanism in GASER , namely prototypical Granger causal attention , to reconstruct Granger causal relationships for each individual and uncover common causal structures among heterogeneous individuals .
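Before detailing the attention mechanism , the following is a minimal PyTorch sketch of the variable-wise encoders and predictors of Section 2.2 . The GRU choice , hidden size , and head shapes are illustrative assumptions rather than the paper ' s exact configuration :

```python
import torch
import torch.nn as nn

class VariableWiseEncoders(nn.Module):
    """One GRU encoder and one MLP head per variable series (a sketch)."""
    def __init__(self, S, hidden=32):
        super().__init__()
        self.grus = nn.ModuleList(nn.GRU(1, hidden, batch_first=True) for _ in range(S))
        self.heads = nn.ModuleList(
            nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(S))

    def forward(self, X):  # X: (batch, S, T)
        embeddings, preds = [], []
        for s, (gru, head) in enumerate(zip(self.grus, self.heads)):
            _, h = gru(X[:, s, :].unsqueeze(-1))  # h: (1, batch, hidden)
            h = h.squeeze(0)                      # last hidden state of variable s
            embeddings.append(h)
            preds.append(head(h))                 # per-variable prediction of the target
        # (batch, S, hidden) embeddings and (batch, S) per-variable predictions
        return torch.stack(embeddings, dim=1), torch.cat(preds, dim=1)
```

The returned embeddings correspond to $\{h_{i,T}^s\}_{s=1}^S$ and the per-variable predictions to $\hat{y}_{i,T+1}^s$ .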
The goal is to learn attentions that can reflect the Granger causal strength between variables for each individual , and to generate prototypes among heterogeneous individuals . As illustrated in Figure 1 ( b ) , the idea of the prototypical Granger causal attention mechanism is as follows . The Granger causal attribution corresponding to each individual is first computed according to the concept of Granger causality , followed by prototype learning that summarizes common causal structures for the heterogeneous individuals in the training set and produces the attention vector specific to each individual . The details of these two parts are described below . 2.3.1 GRANGER CAUSAL ATTRIBUTION . Granger causality ( Granger , 1969 ; 1980 ) is a concept of causality based on prediction , which declares that if a time series x Granger-causes a time series y , then y can be better predicted using all available information than if the information apart from x had been used . Thus , we obtain the Granger causal attributions by comparing the prediction error when using all available information with the error when using the information excluding one variable series . In particular , given all the hidden embeddings $\{h_{i,T}^s\}_{s=1}^S$ of individual i , we obtain the embedding that encodes all available information and the one that encodes all available information excluding one variable s , denoted as $h_i^{all}$ and $h_i^{all \setminus s}$ respectively , by concatenating the embeddings of the corresponding variables : $h_i^{all} = [h_{i,T}^j]_{j=1}^S$ , $h_i^{all \setminus s} = [h_{i,T}^j]_{j=1, j \neq s}^S$ , ( 1 ) where $[\cdot]$ represents the concatenation operation . Then we feed them into the respective predictors , denoted as $g_{all}(\cdot)$ and $g_s(\cdot)$ , to get the predicted value of the target and compute the squared errors : $\hat{y}_{i,T+1}^{all} = g_{all}(h_i^{all})$ , $\hat{y}_{i,T+1}^{all \setminus s} = g_s(h_i^{all \setminus s})$ , ( 2 ) $\varepsilon_i^{all} = (\hat{y}_{i,T+1}^{all} - y_{i,T+1})^2$ , $\varepsilon_i^{all \setminus s} = (\hat{y}_{i,T+1}^{all \setminus s} - y_{i,T+1})^2$ , ( 3 ) where the predictors $g_{all}(\cdot)$ and $g_s(\cdot)$ can be MLP networks . Inspired by Schwab et al . ( 2019 ) , we define the Granger causal attribution of the target variable corresponding to variable s as the decrease in error when adding the s-th series to the set of available information , computed as : $\Delta\varepsilon_i^s = \mathrm{ReLU}(\varepsilon_i^{all \setminus s} - \varepsilon_i^{all})$ , ( 4 ) where $\mathrm{ReLU}(\cdot)$ is the rectified linear unit . For each individual i , by normalizing the Granger causal attribution , we obtain an attention vector that reflects Granger causality , namely the Granger causal attention , denoted as $q_i$ . The attention factor for variable s can be computed as : $q_i^s = \Delta\varepsilon_i^s / \sum_{j=1}^S \Delta\varepsilon_i^j$ . ( 5 ) 2.3.2 PROTOTYPE LEARNING . The Granger causal attention above is not robust enough to reconstruct the Granger causal structure given limited data ( e.g. , very short time series ) for each individual in training . We address this problem by generating Granger causal prototypes from all the individuals , under the assumption that there should be several common causal structures among heterogeneous individuals . In particular , we assume there exist K Granger causal prototypes , denoted as $\{p_k\}_{k=1}^K$ , and compute the similarity between the Granger causal attention vector $q_i$ of individual i and each prototype vector $p_k$ . Since the attention can be seen as a distribution , we use the cosine similarity : $d_{k,i} = \frac{p_k \cdot q_i}{\|p_k\| \|q_i\|}$ , ( 6 ) Then we output the prototype most similar to $q_i$ by sampling from the similarity distribution $d_i$ using Gumbel-Softmax ( Maddison et al. , 2017 ; Jang et al.
, 2016 ) , which samples from a reparameterized continuous distribution approximation to the categorical one-hot distribution : $e = \mathrm{GumbelSoftmax}(d_i) = \mathrm{softmax}((\log(d_i) + g)/\tau)$ , ( 7 ) where $\mathrm{GumbelSoftmax}(\cdot)$ denotes the Gumbel-Softmax function , $e \in \mathbb{R}^K$ is the sample vector which approaches one-hot , and $g$ is a vector of i.i.d . samples drawn from the Gumbel ( 0 , 1 ) distribution . $\tau$ is the softmax temperature , and the distribution becomes discrete when $\tau$ goes to 0 . With the sample vector $e$ , the output prototype $\hat{p}$ can be obtained as : $\hat{p} = [p_1, p_2, \ldots, p_K] \cdot e$ . ( 8 ) After normalizing the sampled prototype , we obtain an attention vector for individual i , denoted as $r_i$ , namely the prototypical attention . The Granger causal attention reflects the Granger causal structure specific to each individual , while the prototypical attention reflects the common Granger causal structure most similar to the Granger causal structure of each individual . To detect the specific and common causal structures simultaneously , we summarize them together and generate the prototypical Granger causal attention $a_i$ as follows : $a_i = \alpha q_i + (1 - \alpha) r_i$ , ( 9 ) where $\alpha \in [0, 1]$ is a hyperparameter that controls the ratio of the two attention mechanisms . Finally , the prediction of the target variable ’ s next value can be computed as the weighted sum of the predicted values from all variables : $\hat{y}_{i,T+1} = \sum_{s=1}^{S} a_i^s \, \hat{y}_{i,T+1}^s$ . ( 10 )
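Putting equations ( 1 ) – ( 10 ) together , the following PyTorch sketch traces the attention computation end to end . The predictor modules `g_all` and `g_excl` are assumed to be given , and the small numerical guards are our additions ( e.g. , the clamp that keeps the log in ( 7 ) defined when a cosine similarity is non-positive ) :

```python
import torch
import torch.nn.functional as F

def granger_attention(h, y, g_all, g_excl, eps=1e-8):
    """Eqs. (1)-(5): leave-one-out attribution, normalized into attention q_i."""
    B, S, H = h.shape
    err_all = (g_all(h.reshape(B, S * H)).squeeze(-1) - y) ** 2
    deltas = []
    for s in range(S):
        h_wo_s = torch.cat([h[:, :s], h[:, s + 1:]], dim=1).reshape(B, (S - 1) * H)
        err_wo_s = (g_excl[s](h_wo_s).squeeze(-1) - y) ** 2
        deltas.append(torch.relu(err_wo_s - err_all))      # eq. (4)
    d = torch.stack(deltas, dim=1)
    return d / (d.sum(dim=1, keepdim=True) + eps)          # eq. (5)

def prototypical_attention(q, prototypes, alpha=0.5, tau=1.0, eps=1e-8):
    """Eqs. (6)-(9): sample the closest prototype via Gumbel-Softmax, mix with q."""
    qn = q / q.norm(dim=1, keepdim=True).clamp_min(eps)
    pn = prototypes / prototypes.norm(dim=1, keepdim=True).clamp_min(eps)
    d = qn @ pn.t()                                             # cosine similarities, eq. (6)
    e = F.gumbel_softmax(torch.log(d.clamp_min(eps)), tau=tau)  # eq. (7)
    p_hat = e @ prototypes                                      # eq. (8)
    r = p_hat / p_hat.sum(dim=1, keepdim=True).clamp_min(eps)   # normalized prototype
    return alpha * q + (1 - alpha) * r                          # eq. (9)
```

The final prediction of equation ( 10 ) is then simply `(a * y_hat_per_var).sum(dim=1)` , where `a` is the returned attention and `y_hat_per_var` stacks the per-variable predictions .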
This paper proposes a new way of finding the Granger temporal-causal network based on an attention mechanism over the predictions obtained from the individual time series. It describes a surprisingly complex procedure for computing the attention vector based on combining Granger-inspired attentions with attentions obtained during a diverse prototype generation process. There are also extensive experiments demonstrating the success of the proposed method in uncovering the underlying temporal-causal graph.
SP:674372d2a8bfd6460e61cf6d39f85a9128cdf131
The paper proposes a novel way of reconstructing Granger causal structures using a differentiable neural network architecture that contains attention modules that are proportional to the Granger causality of the input layers. Furthermore, the architecture blends individual-specific induced causal structures and cross-population prototypical causal structures. The paper has an extensive experimental section on which the proposed method shows impressive improvements in causal discovery performance and predictive performance on par with state-of-the-art.
Deep Randomized Least Squares Value Iteration
1 INTRODUCTION

In Reinforcement Learning (RL), an agent seeks to maximize the cumulative rewards obtained from interactions with an unknown environment (Sutton et al., 1998). Since the agent can learn only from its interactions with the environment, it faces the exploration-exploitation dilemma: should it take actions that maximize rewards based on its current knowledge, or instead take actions that may improve its knowledge in the hope of achieving better future performance? Thus, to find the optimal policy, the agent needs an appropriate exploration strategy. Classic RL algorithms were designed for the tabular setting, where a table containing a value for each state-action pair can be stored in the computer's memory. For more general settings, where generalization is required, a common practice is to use a hand-designed state (or state-action) representation, upon which a function approximation can be learned to represent the value of each state and action. RL algorithms based on linear function approximation have demonstrated stability and data efficiency, and enjoy convergence guarantees under mild assumptions (Tsitsiklis & Van Roy, 1997; Lagoudakis & Parr, 2003). They require that the desired learned function, e.g., the Q-function, be a linear combination of the state representation. This is, of course, a hard constraint, as the representation is hand-designed and the designer often does not know what the optimal value function will look like. Furthermore, a hand-designed representation is environment-specific and requires re-designing for every new environment. The DQN algorithm (Mnih et al., 2015) has changed RL. Using Deep Neural Networks (DNNs) as function approximators, the DQN algorithm enabled the learning of policies directly from raw high-dimensional data and led to unprecedented achievements over a wide variety of domains (Mnih et al., 2015). Over the years, many improvements to DQN were presented, suggesting better-fitting network architectures (Wang et al., 2015), reducing overestimation (Van Hasselt et al., 2016; Anschel et al., 2017), or improving its data efficiency (Schaul et al., 2015). Despite its great success, DQN uses the overly simple ε-greedy strategy for exploration, one of the simplest exploration strategies that exist: the agent takes a random action with probability ε and takes the optimal action according to its current belief with probability 1 − ε. This strategy is commonly used despite its simplicity and proven inefficiency (Osband et al., 2016). The main shortcoming of ε-greedy and similar strategies derives from the fact that they do not use the observed data to improve exploration: to explore, the agent takes a completely random action, regardless of the experience it has obtained, as the sketch below makes explicit.
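For reference, the whole ε-greedy rule amounts to a few lines; the array of action-value estimates and the generator argument below are illustrative assumptions.

```python
# Sketch of epsilon-greedy action selection: a random action with
# probability epsilon, the greedy action otherwise.
import numpy as np

def epsilon_greedy(q_values, epsilon, rng):
    """q_values: (num_actions,) current action-value estimates;
    rng: a np.random.Generator instance."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))  # explore: ignores all observed data
    return int(np.argmax(q_values))              # exploit: current greedy action
```

Note that the exploratory branch never consults the agent's experience, which is exactly the inefficiency discussed above.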
Thompson Sampling (TS) (Thompson, 1933) is one of the oldest heuristics to address the exploration/exploitation trade-off in sequential decision-making problems. Its variations were proposed in RL (Wyatt, 1998; Strens, 2000) and in various bandit settings (Chapelle & Li, 2011; Scott, 2010). For Multi-Armed Bandit (MAB) problems, TS is very effective both in theory (Agrawal & Goyal, 2012; 2013) and in practice (Chapelle & Li, 2011). Intuitively, TS randomly takes actions according to the probability it believes them to be optimal. In practice, a prior distribution p(w) is assumed over the model's parameters, and a posterior distribution p(w|D) is computed using Bayes' theorem, where D is the observed data. TS acts by sampling models from the posterior distribution and playing the best action according to these samples. Randomized Least Squares Value Iteration (RLSVI) (Osband et al., 2016) is an RL algorithm which uses linear function approximation and is inspired by Thompson Sampling. It explores by sampling plausible Q-functions from uncertainty sets and selecting the action that optimizes the sampled models. This algorithm was proven to be efficient in the tabular setting, with a bound on the expected regret that matches the worst-case lower bound up to logarithmic factors. More importantly, it demonstrates efficiency even when generalization is required. Alas, as it assumes a linearly parametrized value function on a hand-designed state representation, the success of this algorithm crucially depends on the quality of the given state representation. In this paper, we present a new DRL algorithm that combines the exploration mechanism of RLSVI with the representation learning mechanism of DQN; we call it the Deep Randomized Least Squares Value Iteration (DRLSVI) algorithm. We use a standard DQN to learn the state representation and explore by using the activations of the DQN's last layer as the state representation for RLSVI. To compensate for the constantly changing representation and the finite memory of DQN, we use a likelihood matching mechanism, which allows the transfer of the information held by an old representation about past experience. We evaluate our method on a toy problem, the Augmented Chain environment, for a qualitative evaluation of our method on a small MDP with a known optimal value function. Then, we compare our algorithm to the DQN and Rainbow algorithms on several Atari benchmarks. We show that it outperforms DQN both in learning speed and in final performance.

2 RELATED WORK

Thompson Sampling in Multi-Armed Bandit problems: Thompson Sampling (TS) (Thompson, 1933) is one of the oldest heuristics to address the exploration/exploitation trade-off in sequential decision-making problems. Chapelle & Li (2011) sparked much of the interest in Thompson Sampling in recent years. They rewrote the TS algorithm for the Bernoulli bandit and showed impressive empirical results on synthetic and real data sets that demonstrate the effectiveness of the TS algorithm. Their results demonstrate why TS might be a better alternative for balancing exploration and exploitation in sequential decision-making problems than other popular alternatives like the Upper Confidence Bound algorithm (Auer et al., 2002). Agrawal & Goyal (2013) suggested a Thompson Sampling algorithm for the linear contextual bandit problem and supplied a high-probability regret bound for it. They use Bayesian Linear Regression (BLR) with a Gaussian likelihood and a Gaussian prior to design their version of the Thompson Sampling algorithm. Riquelme et al. (2018) suggested performing a BLR on top of the representation of the last layer of a neural network. The predicted value $v_i$ for each action $a_i$ is given by $v_i = \beta_i^T z_x$, where $z_x$ is the output of the last hidden layer of the network for context $x$.
While linear methods directly try to regress the values v on x, they independently trained a DNN to learn a representation z, and then used a BLR to regress v on z, obtaining uncertainty estimates on the β's and making decisions accordingly via Thompson Sampling. The network is thus only used to find a good representation z. Since training the network and updating the BLR can be done independently, they train the network for a fixed number of iterations and then perform a forward pass on all the training data to obtain the new $z_x$, which is then fed to the BLR. This procedure of re-evaluating the new representation for all the observed data is very costly; moreover, it requires unbounded memory, which obviously does not scale. Zahavy & Mannor (2019) suggested matching the likelihood of the reward under the old and new representations to avoid catastrophic forgetting when using such an algorithm with finite memory.

Thompson Sampling in RL: In the Reinforcement Learning setting, Strens (2000) suggested a method named "Posterior Sampling for Reinforcement Learning" (PSRL), which is an application of Thompson Sampling to model-based Reinforcement Learning. PSRL estimates the posterior distribution over MDPs. Each episode, the algorithm samples an MDP from it and finds the optimal policy for this sampled MDP by dynamic programming. Recent work (Osband et al., 2013; Osband & Van Roy, 2017) has shown a theoretical analysis of PSRL that guarantees strong expected performance over a wide range of environments. The main problem with PSRL, like all model-based approaches, is that it may only be applied to relatively small environments. The Randomized Least Squares Value Iteration (RLSVI) algorithm is an application of Thompson Sampling to model-free Reinforcement Learning. It explores by sampling plausible Q-functions from uncertainty sets and selecting the action that optimizes the sampled models.

Thompson Sampling in DRL: Various approaches have been suggested to extend the idea behind RLSVI to DRL. Bootstrapped DQN (Osband et al., 2017) uses an ensemble of Q-networks, each trained with slightly different data samples. To explore, Bootstrapped DQN randomly samples one of the networks and acts greedily with respect to it. Recently, Osband et al. (2018) extended this idea by supplying each member of the ensemble with a different prior. Fortunato et al. (2017) and Plappert et al. (2017) investigate a similar idea and propose to adaptively perturb the parameter space, which can also be thought of as tracking an approximate posterior over the network's parameters. O'Donoghue et al. (2017) proposed TS in combination with the uncertainty Bellman equation, which connects the uncertainty at any time-step to the expected uncertainties at subsequent time-steps. Recently, and most similar to our work, Azizzadenesheli et al. (2018) experimented with a Deep Learning extension to RLSVI. They changed the network architecture to exclude the last layer's weights, optimized the hyperparameters, and used double-DQN. In contrast, we do not change anything in the DQN agent: we use the representation learned by DQN to perform RLSVI, while the network structure, loss, and hyperparameters remain the same. Additionally, unlike our method, they do not compensate for the changing representation and solve the BLR problem with the same arbitrary prior every time.

3 PRELIMINARIES

We consider the standard RL setting (Sutton et al.
, 1998), in which an environment with discrete time steps is modeled by a Markov Decision Process (MDP). An MDP is a tuple $\langle S, A, P, R, \gamma \rangle$, where $S$ is a state space, $A$ a finite action space, $P : S \times A \to \Delta(S)$ a transition kernel, and $R : S \times A \to \mathbb{R}$ a reward function. At each step, the agent receives an observation $s_t \in S$ which represents the current physical state of the system, takes an action $a_t \in A$ which is applied to the environment, receives a scalar reward $r_t = r(s_t, a_t)$, and observes the new state $s_{t+1}$ to which the environment transitions. As mentioned above, the agent seeks an optimal policy $\pi^* : S \to \Delta(A)$, mapping an environment state to probabilities over the agent's executable actions. $\gamma \in (0, 1)$ is the discount factor, a scalar representing the trade-off between immediate and delayed reward. A brief survey of the DQN algorithm can be found in Appendix 1.
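To make the BLR-based Thompson Sampling mechanism described in Section 2 concrete, here is a small sketch; the Gaussian prior and noise parameters, and the per-action posterior bookkeeping, are illustrative assumptions rather than the paper's exact update.

```python
# Sketch of Thompson Sampling with Bayesian linear regression over fixed
# features z (e.g., last-layer DQN activations), assuming a Gaussian prior
# N(0, I/lam) on beta and Gaussian observation noise with variance sigma2.
import numpy as np

def blr_posterior(Z, v, lam=1.0, sigma2=1.0):
    """Z: (n, d) feature matrix; v: (n,) observed target values.
    Returns the posterior mean and covariance of the weight vector beta."""
    d = Z.shape[1]
    precision = lam * np.eye(d) + Z.T @ Z / sigma2
    cov = np.linalg.inv(precision)
    mean = cov @ (Z.T @ v) / sigma2
    return mean, cov

def thompson_action(z_x, posteriors, rng):
    """z_x: (d,) features of the current state; posteriors: one (mean, cov)
    pair per action. Sample a beta per action and act greedily on the samples."""
    sampled = [float(z_x @ rng.multivariate_normal(mean, cov))
               for mean, cov in posteriors]
    return int(np.argmax(sampled))
```

Each call to thompson_action draws a fresh plausible value model from the posterior, so actions with uncertain values are tried occasionally without any explicit randomization schedule.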
The paper proposes to extend the popular linear-control algorithm RLSVI to utilize learned representations. This is done by adapting work from the bandit literature that combines BLR with representations learned by a DNN. The proposed solution is then compared to DQN with a fixed epsilon as the exploration strategy in a chain MDP, and to the Rainbow agent and DQN on 5 selected Atari games to show sample-efficiency improvements.
This paper introduces a deep learning-based adaptation of the RLSVI algorithm, in which the agent uses the representation learned by a deep neural network-based RL agent (DQN). They use the last layer of DQN as the state representation for RLSVI. In order to cope with the changing representations of the deep agent, they propose a likelihood matching mechanism. The approach is applied to two tasks: a) a toy modified n-chain experiment and b) a set of 5 Atari games. They show that their method outperforms DQN with naive exploration.
Noise Regularization for Conditional Density Estimation
1 INTRODUCTION

While regression analysis aims to describe the conditional mean E[y|x] of a response y given inputs x, many problems such as risk management and planning under uncertainty require gaining insight about deviations from the mean and their associated likelihood. The stochastic dependency of y on x can be captured by modeling the conditional probability density p(y|x). Inferring such a density function from a set of empirical observations $\{(x_n, y_n)\}_{n=1}^{N}$ is typically referred to as conditional density estimation (CDE) and is the focus of this paper. In the recent machine learning literature, there has been a resurgence of interest in high-capacity density models based on neural networks (Dinh et al., 2017; Ambrogioni et al., 2017; Kingma & Dhariwal, 2018). Since this line of work mainly focuses on the modelling of images based on large-scale data sets, over-fitting and noisy observations are of minor concern in this context. In contrast, we are interested in CDE in settings where data may be scarce and noisy. When combined with maximum likelihood estimation, the flexibility of such high-capacity models results in over-fitting and poor generalization. While regression typically assumes Gaussian conditional noise, CDE uses expressive distribution families to model deviations from the conditional mean. Hence, the over-fitting problem tends to be even more severe in CDE than in regression. Classical regularization of the neural network weights such as weight decay (Pratt & Hanson, 1989) has been shown to be effective for regression and classification. However, in the context of CDE, the output of the neural network merely controls the parameters of a density model such as a Gaussian Mixture or a Normalizing Flow. This makes standard regularization methods in the parameter space less effective and harder to analyze. Aiming to address this issue, we propose and analyze noise regularization, a method well-studied in the context of regression and classification, for the purpose of conditional density estimation. In doing so, the paper attempts to close a gap in previous research. By adding small random perturbations to the data during training, the conditional density estimate is smoothed and tends to generalize better. In fact, we show that adding noise during maximum likelihood estimation is equivalent to penalizing the second derivatives of the conditional log-probability. Visually, the respective regularization term punishes very curved or even spiky density estimators in favor of smoother variants, which proves to be a favorable inductive bias in many applications. Moreover, under some regularity conditions, we show that the proposed regularization scheme is asymptotically consistent, converging to the unbiased maximum likelihood estimator. This not only supports the soundness of the proposed method but also endows us with useful insight into how to set the regularization intensity relative to the data dimensionality and training set size. Overall, the proposed noise regularization scheme is easy to implement and agnostic to the parameterization of the CDE model. We empirically demonstrate its effectiveness on three different neural network based models. The experimental results show that noise regularization outperforms other regularization methods significantly and consistently across various data sets.
Finally, we demonstrate that, when properly regularized, neural network based CDE is able to improve upon state-of-the-art non-parametric estimators, even when only 400 training observations are available.

2 BACKGROUND

Density Estimation. Let X be a random variable with probability density function (PDF) p(x) defined over the domain $X \subseteq \mathbb{R}^{d_x}$. Given a collection $D = \{x_1, \ldots, x_n\}$ of observations sampled from p(x), the goal is to find a good estimate $\hat{f}(x)$ of the true density function p. In parametric estimation, the PDF $\hat{f}$ is assumed to belong to a parametric family $F = \{\hat{f}_\theta(\cdot) \mid \theta \in \Theta\}$ where the density function is described by a finite-dimensional parameter $\theta \in \Theta$. The standard method for estimating $\theta$ is maximum likelihood estimation, wherein $\theta$ is chosen so that the likelihood of the data D is maximized. This is equivalent to minimizing the Kullback-Leibler divergence between the empirical data distribution $p_D(x) = \frac{1}{n} \sum_{i=1}^{n} \delta(\|x - x_i\|)$ (i.e., a mixture of point masses at the observations $x_i$) and the parametric distribution $\hat{f}_\theta$:

$$\theta^* = \operatorname*{argmax}_{\theta \in \Theta} \sum_{i=1}^{n} \log \hat{f}_\theta(x_i) = \operatorname*{argmin}_{\theta \in \Theta} D_{KL}(p_D \,\|\, \hat{f}_\theta) \tag{1}$$

From a geometric perspective, (1) can be viewed as an orthogonal projection of $p_D(x)$ onto $F$ w.r.t. the reverse KL-divergence. Hence, (1) is also commonly referred to as an M-projection (Murphy, 2012; Nielsen, 2018). In contrast, non-parametric density estimators make implicit smoothness assumptions through a kernel function. The most popular non-parametric method, kernel density estimation (KDE), places a symmetric density function K(z), the so-called kernel, on each training data point $x_n$ (Rosenblatt, 1956; Parzen, 1962). The resulting density estimate reads as

$$\hat{q}(x) = \frac{1}{n h^d} \sum_{i=1}^{n} K\!\left(\frac{x - x_i}{h}\right).$$

One popular choice of $K(\cdot)$ is the Gaussian kernel $K(z) = (2\pi)^{-\frac{d}{2}} \exp(-\frac{1}{2}\|z\|^2)$. Beyond the appropriate choice of $K(\cdot)$, a central challenge is the selection of the bandwidth parameter h, which controls the smoothness of the estimated PDF (Li & Racine, 2007).

Conditional Density Estimation (CDE). Let (X, Y) be a pair of random variables with respective domains $X \subseteq \mathbb{R}^{d_x}$ and $Y \subseteq \mathbb{R}^{d_y}$ and realizations x and y. Let $p(y|x) = p(x, y)/p(x)$ denote the conditional probability density of y given x. Typically, Y is referred to as the dependent (explained) variable and X as the conditional (explanatory) variable. Given a dataset of observations $D = \{(x_n, y_n)\}_{n=1}^{N}$ drawn from the joint distribution $(x_n, y_n) \sim p(x, y)$, the aim of conditional density estimation (CDE) is to find an estimate $\hat{f}(y|x)$ of the true conditional density p(y|x). In the context of CDE, the KL-divergence objective is expressed as an expectation over p(x):

$$\mathbb{E}_{x \sim p(x)}\left[ D_{KL}\big(p(y|x) \,\|\, \hat{f}(y|x)\big) \right] = \mathbb{E}_{(x,y) \sim p(x,y)}\left[ \log p(y|x) - \log \hat{f}(y|x) \right] \tag{2}$$

Corresponding to (1), we refer to the minimization of (2) w.r.t. $\theta$ as a conditional M-projection. Given a dataset D drawn i.i.d. from p(x, y), the conditional MLE following from (2) can be stated as

$$\theta^* = \operatorname*{argmin}_{\theta} \; -\sum_{i=1}^{n} \log \hat{f}_\theta(y_i \mid x_i) \tag{3}$$
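As a concrete instance of the background above, the Gaussian KDE takes only a few lines; the interface below is an illustrative sketch, not a library API.

```python
# Sketch of kernel density estimation with a Gaussian kernel:
# q_hat(x) = (1 / (n h^d)) * sum_i K((x - x_i) / h).
import numpy as np

def gaussian_kde(x, data, h):
    """x: (d,) query point; data: (n, d) observations; h: bandwidth."""
    n, d = data.shape
    z = (x - data) / h                   # (n, d) scaled residuals
    k = (2 * np.pi) ** (-d / 2) * np.exp(-0.5 * np.sum(z ** 2, axis=1))
    return float(k.sum() / (n * h ** d))
```

The bandwidth h plays exactly the role discussed above: small h yields a spiky, over-fit estimate, while large h over-smooths.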
3 RELATED WORK

The first part of this section discusses relevant work in the field of CDE, focusing on high-capacity models that make few prior assumptions. The second part relates our approach to previous regularization and data augmentation methods.

Non-parametric CDE. A vast body of literature in statistics and econometrics studies non-parametric kernel density estimators (KDE) (Rosenblatt, 1956; Parzen, 1962) and the associated bandwidth selection problem, which concerns choosing the appropriate amount of smoothing (Silverman, 1982; Hall et al., 1992; Cao et al., 1994). To estimate conditional probabilities, previous work proposes to estimate both the joint and the marginal probability separately with KDE and then compute the conditional probability as their ratio (Hyndman et al., 1996; Li & Racine, 2007). Other approaches combine non-parametric elements with parametric ones (Tresp, 2001; Sugiyama & Takeuchi, 2010; Dutordoir et al., 2018). Despite their theoretical appeal, non-parametric density estimators suffer from poor generalization in regions where data is sparse (e.g., tail regions), causing rapid performance deterioration as the data dimensionality increases (Scott & Wand, 1991).

CDE based on neural networks. Most work in machine learning focuses on flexible parametric function approximators for CDE. In our experiments, we use the work of Bishop (1994) and Ambrogioni et al. (2017), who propose to use a neural network to control the parameters of a mixture density model. A recent trend in machine learning is latent density models such as cGANs (Mirza & Osindero, 2014) and cVAEs (Sohn et al., 2015). Although such methods have been shown to be successful for estimating distributions of images, the probability density function (PDF) of such models is intractable. More promising in this sense are normalizing flows (Rezende & Mohamed, 2015; Dinh et al., 2017; Trippe & Turner, 2018), since they provide the PDF in tractable form. We employ a neural network controlling the parameters of a normalizing flow as our third CDE model to showcase the empirical efficacy of our regularization approach.

Regularization. Since neural network based CDE models suffer from severe over-fitting when trained with the MLE objective, they require proper regularization. Classical regularization of the parameters such as weight decay (Pratt & Hanson, 1989; Krogh & Hertz, 1992; Nowlan & Hinton, 1992), l1/l2-penalties (Mackay, 1992; Ng, 2004), and Bayesian priors (Murray & Edwards, 1993; Hinton & Van Camp, 1993) has been shown to work well in the regression and classification setting. However, in the context of CDE, it is less clear what kind of inductive bias such regularization imposes on the density estimate. In contrast, our regularization approach is agnostic w.r.t. the parametrization and is shown to penalize strong variations of the log-density function. Regularization methods such as dropout are closely related to ensemble methods (Srivastava et al., 2014); thus, they are orthogonal to our work and can be freely combined with noise regularization.

Adding noise during training. Adding noise during training is a common scheme that has been proposed in various forms. This includes noise on the neural network weights or activations (Wan et al., 2013; Srivastava et al., 2014; Gal & Uk, 2016) and additive noise on the gradients for scalable MCMC posterior inference (Welling & Teh, 2011; Chen et al., 2014). While this line of work corresponds to noise in the parameter space, other research suggests augmenting the training data through random and/or adversarial transformations of the data (Sietsma & Dow, 1991; Burges & Schölkopf, 1996; Goodfellow et al.
, 2015; Yuan et al., 2017). Our approach transforms the training observations by adding small random perturbations. While this form of regularization has been studied in the context of regression and classification problems (Holmstrom & Koistinen, 1992a; Webb, 1994; Bishop, 1995; Natarajan et al., 2013; Maaten et al., 2013), this paper focuses on the regularization of CDE. In particular, we build on top of the results of Webb (1994), who showed that training with noise corresponds to a penalty on strong variations of the log-density, and extend the previous consistency results for regression of Holmstrom & Koistinen (1992a) to the more general setting of CDE. To the best of our knowledge, this is also the first paper to evaluate the empirical efficacy of noise regularization for density estimation.
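The scheme itself is essentially a one-line change to the maximum-likelihood training loop: perturb each (x, y) pair with small Gaussian noise before the update. The model.log_prob interface and the noise standard deviations below are illustrative assumptions.

```python
# Sketch of noise-regularized conditional MLE training. model.log_prob(y, x)
# is an assumed interface returning log f_theta(y | x) per sample.
import torch

def noisy_mle_step(model, optimizer, x, y, std_x=0.1, std_y=0.1):
    x_noisy = x + std_x * torch.randn_like(x)        # perturb the inputs
    y_noisy = y + std_y * torch.randn_like(y)        # perturb the targets
    loss = -model.log_prob(y_noisy, x_noisy).mean()  # perturbed NLL objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In expectation, and as argued above, this perturbed objective behaves like the original MLE objective plus a penalty on the second derivatives of the conditional log-density, which is what smooths the resulting estimate.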
The paper considers the problem of parametric conditional density estimation: given a set of points {(x_n, y_n)} drawn from a distribution p(x, y), the task is to estimate the conditional distribution p(y|x). The paper considers parametric estimation, wherein, given a parametrized family of distributions f_theta, we wish to maximize the likelihood of the given data over theta. In many applications the parametric family consists of highly expressive models like neural networks, which leads to the issue of overfitting in small-data regimes. This has been tackled via regularization over the parameter space, which can be hard to interpret since the associated inductive bias is not well understood and depends on the parametric family under consideration. Instead, the paper proposes to add explicit noise to the examples used during training: irrespective of the optimization procedure (which could be mini-batch SGD), examples are drawn from the data set, noise is explicitly added to them, and a proxy objective is created over the augmented data set.
The paper presents a regularization technique for conditional density estimation. The method is simple: add noise to the data points and train on the noisy data points. The paper also gives an interpretation of the method as a form of smoothing the curvature of the density function, and it proves the consistency of the method.
Mirror Descent View For Neural Network Quantization
1 INTRODUCTION

Despite the success of deep neural networks in various domains, their excessive computational and memory requirements limit their practical usability for real-time applications or on resource-limited devices. Quantization is a prominent technique for network compression, where the objective is to learn a network while restricting the parameters to take values from a small discrete set (usually binary). This leads to a dramatic reduction in memory (a factor of 32 for binary quantization) and inference time, as it enables specialized implementations using bit operations. Neural Network (NN) quantization is usually formulated as a constrained optimization problem $\min_{x \in X} f(x)$, where $f(\cdot)$ denotes the loss function, abstracting out the dependency on the dataset, and $X \subset \mathbb{R}^r$ denotes the set of all possible quantized solutions. The majority of works in the literature (Hubara et al. (2017); Yin et al. (2018); Ajanthan et al. (2019)) convert this into an unconstrained problem by introducing auxiliary variables $\tilde{x}$ and optimize via (stochastic) gradient descent. Specifically, the objective and the update step take the following form:

$$\min_{\tilde{x} \in \mathbb{R}^r} f(P(\tilde{x})), \qquad \tilde{x}^{k+1} = \tilde{x}^k - \eta \, \nabla_{\tilde{x}} f(P(\tilde{x}))\big|_{\tilde{x} = \tilde{x}^k}, \tag{1}$$

where $P : \mathbb{R}^r \to X$ is a mapping from the unconstrained space to the quantized space (sometimes called a projection) and $\eta > 0$ is the learning rate. In cases where the mapping P is not differentiable, a suitable approximation is employed (Hubara et al. (2017)). In this work, by noting that the well-known Mirror Descent (MD) algorithm, widely used for online convex optimization (Bubeck (2015)), provides a theoretical framework to perform gradient descent in the unconstrained space (dual space, $\mathbb{R}^r$) with gradients computed in the quantized space (primal space, X), we introduce an MD framework for NN quantization. In essence, MD extends gradient descent to non-Euclidean spaces, where the Euclidean projection is replaced with a more general projection defined based on the associated distance metric. Briefly, the key ingredient of MD is a concept called the mirror map, which defines both the mapping between the primal and dual spaces and the exact form of the projection. Specifically, in this work, by viewing P in Eq. (1) as a mapping from the dual space to the primal space, we analytically derive the corresponding mirror maps under certain conditions on P. This enables us to derive different variants of the MD algorithm useful for NN quantization. Furthermore, as MD is often found to be numerically unstable (Hsieh et al. (2018)), we discuss a numerically stable implementation of MD that stores an additional set of auxiliary variables, similar to existing methods. As will be shown later, this update is strikingly analogous to the popular Straight Through Estimator (STE) based gradient method (Hubara et al. (2017); Bai et al. (2019)), which is typically viewed as a "trick" to avoid the vanishing-gradients issue, but here we show that it is an implementation method for MD under certain conditions on the mapping P. We believe this connection sheds some light on the practical effectiveness of STE. We evaluate the merits of our MD variants on the CIFAR-10/100 and TinyImageNet classification datasets with convolutional and residual architectures.
Our experiments show that the quantized networks obtained by the MD variants yield accuracies very close to their floating-point counterparts while outperforming directly comparable baselines. Finally, we would like to emphasize that even though our formulation does not necessarily extend the theory of MD, we believe that showing MD to be a suitable framework for NN quantization with superior empirical performance opens up new ways of designing MD-inspired update rules for NNs.

2 PRELIMINARIES

We first provide some background on the MD algorithm and NN quantization. Then we discuss the link between them and provide our MD framework for NN quantization.

2.1 MIRROR DESCENT

The Mirror Descent (MD) algorithm was first introduced in (Nemirovsky & Yudin (1983)) and has been extensively studied in the convex optimization literature ever since. In this section we provide a brief overview and refer the interested reader to Chapter 4 of (Bubeck (2015)). In the context of MD, we consider a problem of the form:

$$\min_{x \in X} f(x), \tag{2}$$

where $f : X \to \mathbb{R}$ is a convex function and $X \subset \mathbb{R}^r$ is a compact convex set. The main concept of MD is to extend gradient descent to a more general non-Euclidean space (a Banach space, i.e., a complete normed vector space whose norm is not necessarily derived from an inner product), thus overcoming the dependency of gradient descent on the Euclidean geometry. The motivation for this generalization is that one might be able to exploit the geometry of the space to optimize much more efficiently. One such example is simplex-constrained optimization, where MD converges at a much faster rate than standard Projected Gradient Descent (PGD). To this end, since the gradients lie in the dual space, optimization is performed by first mapping the primal point $x^k \in B$ (quantized space, X) to the dual space $B^*$ (unconstrained space, $\mathbb{R}^r$), then performing gradient descent in the dual space, and finally mapping the resulting point back to the primal space B. If the new point $x^{k+1}$ lies outside of the constraint set $X \subset B$, it is projected onto the set X. Both the primal/dual mapping and the projection are determined by the mirror map. Specifically, the gradient of the mirror map defines the mapping from primal to dual, and the projection is done via the Bregman divergence of the mirror map. We first provide the definitions of the mirror map and the Bregman divergence and then turn to the MD updates.

Definition 2.1 (Mirror map). Let $C \subset \mathbb{R}^r$ be a convex open set such that $X \subset \bar{C}$ ($\bar{C}$ denotes the closure of the set C) and $X \cap C \neq \emptyset$. Then, $\Phi : C \to \mathbb{R}$ is a mirror map if it satisfies:
1. $\Phi$ is strictly convex and differentiable.
2. $\nabla\Phi(C) = \mathbb{R}^r$, i.e., $\nabla\Phi$ takes all possible values in $\mathbb{R}^r$.
3. $\lim_{x \to \partial C} \|\nabla\Phi(x)\| = \infty$ ($\partial C$ denotes the boundary of C), i.e., $\nabla\Phi$ diverges on the boundary of C.

Definition 2.2 (Bregman divergence). Let $\Phi : C \to \mathbb{R}$ be a continuously differentiable, strictly convex function defined on a convex set C. The Bregman divergence associated with $\Phi$ for points $p, q \in C$ is the difference between the value of $\Phi$ at point p and the value of the first-order Taylor expansion of $\Phi$ around point q evaluated at point p, i.e.,

$$D_\Phi(p, q) = \Phi(p) - \Phi(q) - \langle \nabla\Phi(q), p - q \rangle. \tag{3}$$

Notice that $D_\Phi(p, q) \geq 0$ with $D_\Phi(p, p) = 0$, and $D_\Phi(p, q)$ is convex in p. Now we are ready to provide the mirror descent strategy based on the mirror map $\Phi$. Let $x^0 \in \operatorname*{argmin}_{x \in X \cap C} \Phi(x)$ be the initial point.
Then, for iteration $k \geq 0$ and step size $\eta > 0$, the update of the MD algorithm can be written as:

$$\nabla\Phi(y^{k+1}) = \nabla\Phi(x^k) - \eta \, g^k, \quad \text{where } g^k \in \partial f(x^k) \text{ and } y^{k+1} \in C, \qquad x^{k+1} = \operatorname*{argmin}_{x \in X \cap C} D_\Phi(x, y^{k+1}). \tag{4}$$

Note that in Eq. (4) the gradient $g^k$ is computed at $x^k \in X \cap C$ (the solution space) but the gradient descent step is performed in $\mathbb{R}^r$ (the unconstrained dual space). Moreover, by simple algebraic manipulation, it is easy to show that the above MD update (4) can be compactly written in a proximal form, where the Bregman divergence of the mirror map becomes the proximal term (Beck & Teboulle (2003)):

$$x^{k+1} = \operatorname*{argmin}_{x \in X \cap C} \langle \eta \, g^k, x \rangle + D_\Phi(x, x^k). \tag{5}$$

Note that if $\Phi(x) = \frac{1}{2}\|x\|_2^2$, then $D_\Phi(x, x^k) = \frac{1}{2}\|x - x^k\|_2^2$, which, when plugged back into the above problem and optimized for x, leads to exactly the same update rule as PGD. However, MD allows us to choose various forms of $\Phi$ depending on the problem at hand.
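As a concrete example of Eqs. (4)-(5), taking the negative entropy $\Phi(x) = \sum_j x_j \log x_j$ on the probability simplex turns the Bregman projection into a KL projection, and the whole update reduces to the familiar exponentiated-gradient step. This small sketch is illustrative and is not the paper's quantization-specific variant.

```python
# Sketch of a mirror descent step on the probability simplex with the
# negative-entropy mirror map, for which Eq. (5) reduces to the
# exponentiated-gradient (multiplicative) update.
import numpy as np

def md_simplex_step(x, grad, eta):
    """x: (d,) current point on the simplex; grad: gradient of f at x."""
    y = x * np.exp(-eta * grad)  # dual step: log y = log x - eta * grad
    return y / y.sum()           # Bregman (KL) projection back onto the simplex
```

Because the update is multiplicative, the iterates stay strictly inside the simplex, which is exactly the geometric advantage over Euclidean PGD mentioned above.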
The set $\Delta^m$ is in fact the Cartesian product of standard $(d-1)$-probability simplexes embedded in $\mathbb{R}^d$. Therefore, for a feasible point $u \in \Delta^m$, the vector $u_j$ for each $j$ (the $j$-th row of the matrix $u$) belongs to the probability simplex $\Delta$. Hence, we can interpret the value $u_{j:\lambda}$ as the probability of assigning the discrete label $\lambda$ to the weight $w_j$. This relaxed optimization can then be written as:
$$\min_{u \in \Delta^m} L(uq; \mathcal{D}) := \frac{1}{n} \sum_{i=1}^n \ell(uq; (x_i, y_i)). \qquad (10)$$
In fact, this can be interpreted as finding a probability distribution $u \in \Delta^m$ such that the cost $L(u)$ is minimized. Note that the relaxation of $u$ from $\mathcal{V}^m$ to $\Delta^m$ translates into relaxing $w$ from $\mathcal{Q}^m$ to the convex region $\mathrm{conv}(\mathcal{Q}^m)$. Even in this case, a discrete solution $u \in \mathcal{V}^m$ can be enforced via an annealing hyperparameter or by using rounding schemes.
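To make these updates concrete, here is a minimal NumPy sketch (our own illustration, not the authors' code) of one MD step on the lifted probability-space formulation, using the negative-entropy mirror map $\Phi(u) = \sum_{j,\lambda} u_{j:\lambda} \log u_{j:\lambda}$; for this choice, the dual-space gradient step of Eq. (4) followed by the Bregman projection onto each simplex reduces to a row-wise softmax (exponentiated gradient). The two-level set $\mathcal{Q}$ and the gradient values below are made up for illustration.

```python
import numpy as np

def entropic_md_step(u, grad_u, eta):
    """One MD step (Eq. 4) with the negative-entropy mirror map.

    u      : (m, d) array; row j is the label distribution of weight w_j
    grad_u : (m, d) array; gradient of the loss w.r.t. u
    eta    : step size
    Since grad Phi(u) = 1 + log(u), the dual-space step followed by the
    Bregman projection onto each probability simplex is a row-wise softmax.
    """
    logits = np.log(np.clip(u, 1e-12, None)) - eta * grad_u
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    y = np.exp(logits)
    return y / y.sum(axis=1, keepdims=True)

# Toy example: m = 3 weights, d = 2 levels, Q = {-1, +1}.
q = np.array([-1.0, 1.0])
u = np.full((3, 2), 0.5)              # uniform rows: the argmin of Phi over the simplexes
grad_w = np.array([0.3, -0.7, 0.1])   # stand-in for dL/dw from backprop
grad_u = np.outer(grad_w, q)          # chain rule for w = u q: dL/du = (dL/dw) q^T
u = entropic_md_step(u, grad_u, eta=0.5)
print(u, u @ q)                       # updated distributions and relaxed weights
```

Swapping in a different mirror map changes only the dual step and the projection; the two-step structure of Eq. (4) is unchanged.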
This paper proposes a Mirror Descent (MD) framework for the quantization of neural networks which, unlike previous quantization methods, makes it possible to derive valid mirror maps and the corresponding MD updates. Moreover, the authors provide a numerically stable implementation of MD by storing an additional set of auxiliary dual variables. Experiments on CIFAR-10/100 and TinyImageNet with convolutional and residual architectures show the effectiveness of the proposed method.
SP:96afb20c4d7fe41c083a0217c9cb8d1f21a73a15
Mirror Descent View For Neural Network Quantization
This paper proposes a neural network (NN) quantization method based on the Mirror Descent (MD) framework. The core of the proposal is the construction of the mirror map from the unconstrained auxiliary variables to the quantized space. Building on that core, the authors derive mapping functions from the corresponding projections, e.g., tanh, softmax and shifted tanh. The experimental results on benchmark datasets (CIFAR & TinyImageNet) and standard architectures (VGG & ResNet-18) show that the proposed method is well suited for quantization. The proposed method is a natural extension of ProxQuant, which adopted proximal gradient descent to quantize NNs (i.e., MD with the $\ell_2$ norm). Different projections in NN quantization lead to different Bregman divergences in MD.
SP:96afb20c4d7fe41c083a0217c9cb8d1f21a73a15
Natural Image Manipulation for Autoregressive Models Using Fisher Scores
1 INTRODUCTION. Over the last few decades, unsupervised learning has been a rapidly growing field, with the development of more complex and better probabilistic density models. Autoregressive generative models (Salimans et al., 2017; Oord et al., 2016a; Menick & Kalchbrenner, 2018) are among the most powerful generative models to date, as they generally achieve the best bits per dim compared to other likelihood-based models such as Normalizing Flows or Variational Auto-encoders (VAEs) (Kingma & Welling, 2013; Dinh et al., 2016; Kingma & Dhariwal, 2018). However, it remains a difficult problem to perform any kind of controlled sample generation using autoregressive models. For example, flow models and VAEs are structured as latent variable models and allow meaningful manipulations in latent space, which can then be mapped back to the original data distribution to produce generated samples, either through an invertible map or a learned decoder. In the context of natural image modelling, since discrete autoregressive models do not have continuous latent spaces, there is no natural method for controlled generation. When a latent space is not used, prior works generally perform controlled sample generation by training models conditioned on auxiliary information, such as class labels or facial attributes (Van den Oord et al., 2016). However, this requires a new conditional model to be trained for every new set of labels or features we want to manipulate, which is a time-consuming and tedious task. Ideally, we could structure an unconditional latent space that the autoregressive model could sample from, but where would this latent space come from? In this paper, we propose a method for image interpolation and manipulation in a latent space defined by the Fisher score of both discrete and continuous autoregressive models. We use PixelCNNs to model the natural image distribution and use them to compute Fisher scores. In order to map back from Fisher score space to natural images, we train a decoder by minimizing reconstruction error. We show that interpolations in Fisher score space carry higher-level semantic meaning than baselines such as interpolations in PixelCNN activation space, which produce blurry and incoherent intermediate interpolations similar in nature to interpolations using pixel values. In order to evaluate interpolations quantitatively, for different mixing coefficients α we calculate the FID (Heusel et al., 2017) of the images decoded from a large sample of convex combinations of latent vectors. In summary, we present two key contributions in our paper:
• A novel method for natural image interpolation and semantic manipulation using autoregressive models through Fisher scores
• A new quantitative approach to evaluate the interpolation quality of images

2 RELATED WORK. There exists a substantial amount of work on natural image manipulation using deep generative models. VAEs (Higgins et al., 2017), BigBiGANs (Donahue et al., 2016; Donahue & Simonyan, 2019; Brock et al., 2018) and normalizing flows (Kingma & Dhariwal, 2018) provide learned latent spaces in which realistic image manipulation is a natural task. Interpolating through a latent space allows more natural transitions between images than pixel-value interpolations, which naively overlay images on top of each other.
Other prior methods learn hierarchical latent spaces on GANs and VAEs to encourage semantic manipulations that disentangle different levels of global characteristics, such as skin color, gender, hair color, and facial features (Karras et al., 2019). Aside from using a latent space for controlled image generation, prior methods have also trained generative models conditioned on relevant auxiliary feature labels, such as class labels in ImageNet. Other prior works have used facial feature embeddings and binary facial attributes from CelebA to manipulate facial characteristics of generated images (Van den Oord et al., 2016). Similarly, there has been a large amount of work on using Fisher information in machine learning. Many prior methods use the Fisher kernel in algorithms such as kernel discriminant analysis and kernel support vector machines (Mika et al., 1999; Jaakkola & Haussler, 1999). More recent works introduce the Fisher vector, which uses Fisher scores of mixtures of Gaussians as feature vectors for downstream tasks (Sánchez et al., 2013; Simonyan et al., 2013; Perronnin et al., 2010b; Gosselin et al., 2014; Perronnin et al., 2010a). However, to our knowledge, there has been no work on using Fisher scores for deep generative modelling.

3 BACKGROUND.

3.1 PIXELCNN. PixelCNNs (Oord et al., 2016b) are powerful autoregressive models used to model complex image distributions. They are likelihood-based generative models that directly model the data distribution $p(x)$. This allows us to compute the Fisher score exactly instead of using an approximation, which would be necessary for GANs (Goodfellow et al., 2014) or VAEs, as they either optimize likelihood implicitly or optimize a variational lower bound. PixelCNNs use a series of masked convolutions to define an autoregressive model over image data. Masked convolutions allow PixelCNNs to retain the autoregressive ordering when propagating values through layers. Over the past few years, many improved variants of PixelCNNs have been developed to address certain problems with the original PixelCNN design. Specifically for our work, we use Gated PixelCNNs (Van den Oord et al., 2016), which introduced horizontal and vertical convolutional blocks to remove the blind spot in the original masked convolutions of PixelCNNs, and Pyramid PixelCNNs (Kolesnikov & Lampert, 2017), which use PixelCNN++ (Salimans et al., 2017) architectures in a hierarchical fashion, with each layer of the hierarchy modelling images at different down-sampled resolutions conditioned on the previous layer.

3.2 FISHER SCORE. The Fisher score $\dot{\ell}(x; \theta) = \nabla_\theta \log p_\theta(x)$ is defined as the gradient of the log-probability of a sample with respect to the model parameters $\theta$. Intuitively, Fisher scores describe the contributions of each parameter during the generation process. Similar samples in the data distribution should elicit similar Fisher scores. Similarity between samples can also be evaluated by taking a gradient step for one sample and checking whether the log-likelihood of another sample also increases. This intuition is re-affirmed in the underlying mathematical interpretation of Fisher information: the Fisher score maps data points onto a Riemannian manifold with a local metric given by the Fisher information matrix.
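Since PixelCNNs expose an exact log-likelihood, the Fisher score can be computed directly with automatic differentiation. The sketch below (our own, not the authors' code) shows the generic recipe for any likelihood-based PyTorch model; `log_prob_fn` is a hypothetical helper that evaluates $\log p_\theta(x)$ for the model at hand (for a PixelCNN, the sum of the per-pixel conditional log-probabilities).

```python
import torch

def fisher_score(model, x, log_prob_fn):
    """Return the flattened Fisher score grad_theta log p_theta(x) for one sample x."""
    params = [p for p in model.parameters() if p.requires_grad]
    log_p = log_prob_fn(model, x)           # scalar log-likelihood of the sample
    grads = torch.autograd.grad(log_p, params)
    return torch.cat([g.reshape(-1) for g in grads])
```

Note that the resulting vector has one entry per model parameter, which is what motivates the dimensionality reduction discussed next.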
The underlying kernel of this space is the Fisher kernel, defined as
$$\dot{\ell}(x_i; \theta)^T F^{-1} \dot{\ell}(x_j; \theta) = \big(F^{-\frac{1}{2}} \dot{\ell}(x_i; \theta)\big)^T \big(F^{-\frac{1}{2}} \dot{\ell}(x_j; \theta)\big),$$
where $F^{-1}$ is the inverse Fisher information matrix and $F^{-\frac{1}{2}}$ is its Cholesky decomposition. Applying $F^{-\frac{1}{2}}$ is a normalization process, so the Fisher kernel can be approximated by taking the dot product of standardized Fisher scores. This is useful since computing $F^{-1}$ is normally difficult. As such, we can see the collection of Fisher scores as a high-dimensional embedding space in which more meaningful information about the data distribution can be extracted than from raw pixel values. More complex deep generative models may learn parameters that encode information at high levels of abstraction, which may be reflected as high-level features in Fisher scores.

3.3 SPARSE RANDOM PROJECTIONS. It would be cumbersome to work in the very high-dimensional parameter spaces of deep generative models, so we use dimensionality reduction methods to make our approach more scalable. Sparse random projections allow for memory-efficient and scalable projections of high-dimensional vectors. The Johnson-Lindenstrauss Lemma (Dasgupta & Gupta, 1999) states that under a suitable orthogonal projection, a set of n points in a d-dimensional space can be accurately embedded into some k-dimensional vector space, where k depends only on log n. Therefore, for suitably large k, we can preserve the norms and relative distances between projected points, even for very high-dimensional data. Since sparse random matrices are nearly orthogonal in high-dimensional settings, we can safely and substantially reduce the dimensionality of our embedding spaces using this method. We generate sparse random matrices according to Li et al. (2006). Given an $n \times k$ matrix to project, we define the minimum density of our sparse random matrix as $d = \frac{1}{\sqrt{k}}$, and let $s = \frac{1}{d}$. Suppose that we are projecting the data into $p$ dimensions; then the $n \times p$ projection matrix $P$ is generated according to the following distribution:
$$P_{ij} = \begin{cases} -\sqrt{\frac{s}{p}} & \text{with probability } \frac{1}{2s} \\ 0 & \text{with probability } 1 - \frac{1}{s} \\ +\sqrt{\frac{s}{p}} & \text{with probability } \frac{1}{2s} \end{cases} \qquad (1)$$
where $P_{ij}$ is the element in the $i$-th row and $j$-th column of the projection matrix.

Algorithm 1: Procedural generation of the interpolated embedding dataset for quantitative evaluation. Following FID conventions, we use a sample size of 50000 images.
Input: dataset $D$, projection matrix $P$, pre-trained autoregressive model $p_\theta(x)$, and the learned decoder Dec
Result: α-interpolated dataset $D_\alpha$
$D_\alpha \leftarrow \{\}$
for $i \leftarrow 0$ to 50000 do
  Sample a random pair $x_1, x_2 \sim D$
  $z_1 \leftarrow P \nabla_\theta \log p_\theta(x_1)$
  $z_2 \leftarrow P \nabla_\theta \log p_\theta(x_2)$
  $\hat{z} \leftarrow (1 - \alpha) z_1 + \alpha z_2$
  $\hat{x} \leftarrow \mathrm{Dec}(\hat{z})$
  $D_\alpha \leftarrow D_\alpha \cup \{\hat{x}\}$
end

4 METHOD. We now describe our approach for natural image manipulation and interpolation using autoregressive models. Note that this method is not restricted to autoregressive models and can be applied to any likelihood-based model.

4.1 EMBEDDING SPACE CONSTRUCTION. Given a trained autoregressive model $p_\theta(x)$, we want to construct an embedding space that is more meaningful than raw pixel values. In this paper, our autoregressive models are exclusively variants of PixelCNNs, since we are working in an image domain. Drawing inspiration from popular self-supervised methods (Zhang et al., 2016; Gidaris et al., 2018; Doersch et al., 2015; Pathak et al., 2016; Noroozi & Favaro, 2016), it may seem natural to take the output activations of one of the last few convolutional layers.
However, in our experiments we show that using output activations provides less meaningful embedding manipulation than using the Fisher score. This is especially the case for PixelCNNs, as PixelCNNs are known to generally encode more local statistics than global features of images. Fisher scores, however, lie in parameter space, which may encode more global semantics. In order to construct the embedding space using Fisher scores, we initialize a sparse random matrix $P$, randomly generated following the distribution in Eq. (1). In practice, we found sparse random matrices to be the only feasible method for reasonable dimensionality reduction when scaling to more difficult datasets, where the sizes of the corresponding PixelCNNs grew to tens of millions of parameters. Finally, the embedding space is constructed as follows: each element $z_i$ in the new embedding space is computed as $z_i = P \nabla_\theta \log p_\theta(x_i)$ for each sample $x_i$ in the dataset.
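To ground the construction, the following sketch (ours, under the stated density choice; a real implementation would use a sparse format such as scipy.sparse, since PixelCNN parameter counts reach tens of millions) draws $P$ according to the distribution in Eq. (1) and projects a Fisher score down to $p$ dimensions.

```python
import numpy as np

def sparse_projection_matrix(k, p, rng):
    """(p, k) sparse random projection per Eq. (1), with density 1/sqrt(k)."""
    s = np.sqrt(k)                                   # s = 1/density
    probs = [1 / (2 * s), 1 - 1 / s, 1 / (2 * s)]    # P(-), P(0), P(+)
    values = np.array([-np.sqrt(s / p), 0.0, np.sqrt(s / p)])
    return values[rng.choice(3, size=(p, k), p=probs)]

rng = np.random.default_rng(0)
k, p = 10_000, 128                         # toy sizes for illustration
P = sparse_projection_matrix(k, p, rng)
score = rng.standard_normal(k)             # stand-in for a Fisher score of one sample
z = P @ score                              # the embedding z_i = P grad log p(x_i)
```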
Motivated by the observation that powerful deep autoregressive models such as PixelCNNs lack the ability to produce semantically meaningful latent embeddings and to generate visually appealing interpolated images via latent representation manipulations, this paper proposes using Fisher scores projected to a reasonably low-dimensional space as latent embeddings for image manipulation. A decoder based on a CNN, a Conditional RealNVP, or a Conditional Pyramid PixelCNN is used to decode high-dimensional images from these projected Fisher scores. Experiments with different autoregressive and decoder architectures are conducted on the MNIST and CelebA datasets.
SP:662edd2fd9437de887821ebf7de06415eba13fae
Natural Image Manipulation for Autoregressive Models Using Fisher Scores
This paper focuses on the problem of interpolating between data points using neural autoregressive models. The core idea is that one can use (a lower-dimensional projection of) the Fisher score of the density function defined by the autoregressive model to represent data points in an embedding space, together with a neural decoder for mapping them back to input space. Experiments on both MNIST and CelebA suggest that this is a sensible method, and that it leads to smoother interpolations than relying on the embeddings given by the network activations.
SP:662edd2fd9437de887821ebf7de06415eba13fae
Meta-RCNN: Meta Learning for Few-Shot Object Detection
1 INTRODUCTION. Object detection is the task of identifying the various objects in a given image and localizing them with bounding boxes. It is a widely studied problem in computer vision, and following the success of deep convolutional neural networks (DCNNs) in image classification (Karpathy et al., 2014; Krizhevsky et al., 2012), recent years have witnessed remarkable progress in object detection based on deep learning. A series of detection algorithms based on DCNNs have been proposed which achieve state-of-the-art results on public detection benchmark datasets (Gidaris & Komodakis, 2015; Girshick et al., 2014; Ren et al., 2015; Lin et al., 2017a;b; Liu et al., 2016; Redmon & Farhadi, 2016). However, all these methods are data hungry, and require large amounts of annotated data to learn an immense number of parameters. For object detection, annotating the data is very expensive (much more so than for image classification), as it requires not only identifying the categorical labels for every object in the image but also providing accurate localization information through bounding box coordinates. Moreover, in some applications, such as medical research, it is often impossible to even collect sufficient data to annotate. This warrants a need for effective detectors that can generalize well from small amounts of annotated data. We refer to the problem of learning detectors from limited labeled data as few-shot detection. For example, in one-shot detection, only one image is available with the objects of interest annotated, and a detector needs to train on just this image and generalize. When presented with such small amounts of annotated data, traditional detectors tend to overfit. Inspired by the fact that humans can learn new concepts from little annotated data, we aim to develop a new few-shot detection algorithm. There have been several efforts exploring few-shot learning (Vinyals et al., 2016; Finn et al., 2017; Snell et al., 2017). Many of them follow the principle of meta learning. In meta learning, a set of tasks in a few-shot setting is simulated from a large corpus of annotated data, and the model is optimized to perform well over these few-shot tasks. This trains the model to learn how to solve few-shot tasks. However, most existing efforts in meta learning are focused on classification. Adapting few-shot classification algorithms directly for few-shot detection (e.g., by replacing the region classification branch of a detector with a meta-learner) is non-trivial because of two major concerns: i) Detection algorithms not only require classifying objects but also need to correctly localize objects in cluttered backgrounds by using a Region Proposal Network (RPN) and bounding box (bbox) regressors. It is thus desirable that both the RPN and the bbox regressors also be capable of adapting to few-shot settings. ii) For a given task with one (or few) annotated image(s), the annotated image may contain objects from several classes, but only a few objects of interest are annotated. The goal of the few-shot detector is to detect only these objects of interest. Unfortunately, a naively trained meta-detector's RPN would detect all objects (even objects from classes not of interest) and try to classify them as one of the classes of interest rather than as background (see Figure 1 for an example).
We aim to address these challenges by proposing a novel method for solving few-shot detection using the meta-learning paradigm. We develop Meta-RCNN, an end-to-end trainable meta object detector. The proposed Meta-RCNN follows the episodic learning paradigm of meta-learning (Vinyals et al., 2016), where, based on a given meta-train dataset, multiple few-shot tasks are simulated. For a given task, we first construct a class prototype for each of the annotated object categories in the support set. Using these prototypes, a class-specific feature map of the entire image is constructed, i.e., we obtain a feature map of the entire image for each of the class prototypes. These feature maps are tailored to detect only objects of the prototype's class, by giving higher attention to the appropriate regions of the image containing that object. Finally, all feature maps are merged to produce a combined feature map, followed by an RPN and then classification and bbox regression layers. Meta-RCNN thus learns a few-shot detector in which the whole framework can be trained via meta-learning in an end-to-end manner. In contrast to a naive adaptation of meta-learning for classification into an object detection framework, Meta-RCNN learns the few-shot classifier, the RPN, and the bbox regressor in the meta-learning setting, thus making all three components suitable for handling few-shot scenarios. Moreover, Meta-RCNN learns a class-specific feature map for a given class prototype, enabling easier distinction between classes of interest and background (where other objects in the image from classes not of interest are considered background). We demonstrate the effectiveness of Meta-RCNN on two few-shot detection benchmarks, Pascal VOC and the animal subset of ImageNet, and show that Meta-RCNN significantly improves detection results in few-shot settings.

2 RELATED WORK. Generic Object Detection. Object detection based on deep learning can be broadly divided into two families: two-stage detectors and one-stage detectors. Two-stage detectors such as RCNN (Girshick et al., 2014), Fast RCNN (Gidaris & Komodakis, 2015) and Faster RCNN (Ren et al., 2015) first generate a sparse set of proposal candidates, extract a fixed-length feature vector from each of these candidates, and then apply a categorical classifier and a bounding box regressor. Two-stage detection algorithms have achieved state-of-the-art results on many public benchmarks (He et al., 2016; Lin et al., 2017a), but are relatively slower than one-stage detectors. One-stage detectors such as SSD (Liu et al., 2016), Yolo (Redmon et al., 2016; Redmon & Farhadi, 2016) and RefineDet (Zhang et al., 2018) directly generate categorical proposals from the feature map and thus avoid cascaded region classifiers. One-stage detectors can achieve real-time inference speed, but their detection accuracy is often inferior to that of two-stage detection algorithms. Both detection families assume access to a large set of annotated data and are not suitable for scenarios where the model has access to only small amounts of annotated training data. In contrast, our proposed Meta-RCNN method addresses the detection problem in the few-shot setting and achieves promising results. Meta Learning for few-shot classification. Few-shot learning has been widely explored in image classification, and currently the most promising methods are mainly based on meta learning.
Ravi & Larochelle (2016) optimized the base model via an LSTM-based meta-learner which simulates the traditional SGD optimization procedure. Finn et al. (2017) proposed MAML, which learns a good feature initialization that can adapt to a new task in only one gradient-step update. Building on MAML, Li et al. (2017) proposed Meta-SGD, which learns a set of learnable parameters to control the gradient steps of different tasks. Learning an initialization is potentially a very general idea for few-shot learning; however, the training process can be unstable (Antoniou et al., 2018), especially for complex problems such as detection. Matching networks (Vinyals et al., 2016; Snell et al., 2017) follow a non-parametric principle by learning a differentiable K-Nearest-Neighbour model. Ren et al. (2018) extended this idea to semi-supervised learning by self-learning from unlabeled data. Sung et al. (2018) proposed a relation network to automatically learn the optimal distance metric. These metric-learning based methods are easy to train and effective in addressing few-shot classification. However, directly adapting these techniques for detection is very challenging: just replacing the object classification branch of a detector with a meta-learner is not sufficient, and training the RPN under a meta-learning paradigm is non-trivial. Few-shot Object Detection. Few-shot detection has received considerably less interest from the community. Dong et al. (2018) addressed few-shot detection using large-scale unlabeled data. Their model is based on a semi-supervised method which extracts knowledge from an unlabeled dataset to enrich the training dataset by self-paced learning and multi-modal learning. However, their method may be misled by incorrect predictions from the initial model and also requires re-training the model for every new task. Chen et al. (2018) proposed a Low-shot Transfer Detector (LSTD) that uses regularization to transfer knowledge from a source domain to a target domain by minimizing the gap between the two domains. RepMet (Schwartz et al., 2019) is a few-shot detection algorithm based on meta learning. It replaces the fully connected classification layer of a standard detector with a modified prototypical network. However, it suffers from two limitations: the RPN and bbox regression are not able to handle few-shot settings, and it has difficulty distinguishing object classes of interest from background (which includes object classes not of interest). Our proposed method is also based on meta learning, but it can be optimized end-to-end and addresses these limitations to perform effective few-shot detection.

3 PRELIMINARIES.

3.1 PROBLEM SETTING. In this section we present the formal problem setting of few-shot detection investigated in our paper. Assume we have two datasets $L$ and $S$, where $L$ is a large-scale annotated dataset with categories $L_c$ and $S$ is a dataset with only a few annotated images and categories $S_c$. There is no category overlap between the two datasets: $L_c \cap S_c = \emptyset$. Our goal is to learn a robust detector based on the annotated data in $L$ and $S$ to detect unlabeled objects of $S$. The proposed Meta-RCNN aims to learn a general detection framework which can quickly adapt to different detection tasks that have only a few labeled samples.
We follow the standard training scheme of meta learning, which splits the whole learning procedure into two stages, meta-training and meta-testing, and optimizes the model over multiple few-shot tasks simulated from the meta-training data. Specifically, during meta-training, few-shot detection tasks are sampled from $L$, and each task contains a support set and a query set. For the $i$-th task, $K$ ways (or categories) are randomly selected from $L_c$, and $N$ images per category are randomly selected to build the support set $T^{L,s}_i$. Similarly, $Q$ images per category are randomly selected to build the query set $T^{L,q}_i$. The support set $T^{L,s}_i$ and query set $T^{L,q}_i$ constitute a complete task sampled from $L$ (see Figure 1):
$$T^L_i = \{ T^{L,s}_i, T^{L,q}_i \}, \qquad (1)$$
where both the support set and the query set are used to train the meta-model. The meta-model optimizes the base model with respect to the support set and makes predictions on the query set. Finally, the loss suffered on the query set is used to update the model. In the meta-testing stage, similar to the meta-training stage, a set of few-shot tasks is sampled from $S$:
$$T^S_i = \{ T^{S,s}_i, T^{S,q}_i \}, \qquad (2)$$
where $T^{S,s}_i$ is the support set and $T^{S,q}_i$ is the query set. The model makes predictions on the query set, and these results are averaged across several few-shot tasks to evaluate the expected performance of the few-shot detector over a variety of novel few-shot detection tasks.
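The episode construction above is straightforward to implement; the sketch below (our own, with hypothetical names such as `images_by_class`) samples one $K$-way, $N$-shot task with $Q$ query images per category from annotations pre-grouped by class. It assumes each class has at least $N + Q$ annotated images.

```python
import random

def sample_episode(images_by_class, k_way, n_shot, q_query, rng=random):
    """Sample one few-shot detection task T_i = {support set, query set}."""
    classes = rng.sample(sorted(images_by_class), k_way)  # the K categories of this task
    support, query = [], []
    for c in classes:
        imgs = rng.sample(images_by_class[c], n_shot + q_query)
        support += [(img, c) for img in imgs[:n_shot]]    # N annotated images per class
        query += [(img, c) for img in imgs[n_shot:]]      # Q images used for the loss
    return classes, support, query
```

During meta-training, only annotations of the $K$ sampled classes count as foreground; everything else in the image, including objects of other classes, is treated as background.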
The paper proposes a method for few-shot object detection (FSOD), a variant of few-shot learning (FSL) in which, given a support set of only a few training images for novel categories (usually 1 or 5), the model must not only predict the correct category labels on the query images but also localize the object instances of the novel categories by predicting their bounding boxes. The method proposes a network architecture where the sliding-window features that enter the RPN are first attenuated using support-class prototypes discovered using (a different?) RPN and matched to the few provided box annotations on the support images. The attenuation is done by channel-wise multiplication of the feature map and concatenation of the resulting feature maps (one per support class). After the RPN, ROI-pooling is applied on the concatenated feature map, which is reduced using a 1x1 convolution, with the original feature map (before attenuation) added to the result. Following this, a two-FC-layer classifier is fine-tuned on the support data to form the final
SP:9e36913574414b98fd6c4a66061cf0216dcc536b
Meta-RCNN: Meta Learning for Few-Shot Object Detection
1 INTRODUCTION . Object detection is the task of identifying various objects in a given image , and localizing them with a bounding box . It is a widely studied problem in computer vision , and following the success deep convolutional neural networks ( DCNN ) in image classification ( Karpathy et al. , 2014 ; Krizhevsky et al. , 2012 ) , recent years have witnessed remarkable progress made in object detection based on deep learning . A series of detection algorithms based on DCNNs have been proposed which achieve state-of-the-art results on public detection benchmark datasets ( Gidaris & Komodakis , 2015 ; Girshick et al. , 2014 ; Ren et al. , 2015 ; Lin et al. , 2017a ; b ; Liu et al. , 2016 ; Redmon & Farhadi , 2016 ) . However , all these methods are data hungry , and require large amounts of annotated data to learn an immense number of parameters . For object detection , annotating the data is every expensive ( much more than image classification ) , as it requires not only identifying the categorical labels for every object in the image , but also providing accurate localization information through bounding box coordinates . Moreover , in some applications , such as medical research , it ’ s often impossible to even collect sufficient data to annotate . This warrants a need for effective detectors that can generalize well from small amounts of annotated data . We refer to the problem of learning detectors from limited labeled data as few-shot detection . For example , in one-shot detection , only one image is available with objects of interest annotated , and a detector needs to train on just this image and generalize . When presented with such small amounts of annotated data , traditional detectors tend to suffer from overfitting . Inspired by the fact that humans can learn a new concepts from little annotated data , we aim to develop a new few-shot detection algorithm . There have been several efforts exploring few-shot learning ( Vinyals et al. , 2016 ; Finn et al. , 2017 ; Snell et al. , 2017 ) . Many of them follow the principle of meta learning . In meta learning , a set of tasks in a few-shot setting is simulated from a large corpus of annotated data , and the model is optimized to perform well over these few shot tasks . This trains the model to learn how to solve few-shot tasks . However , most existing efforts of meta learning are mainly focused on classification . Adapting few-shot classification algorithms directly for few-shot detection ( e.g . by replacing the region classification branch of detector with a meta-learner ) is non-trivial because of two major concerns : i ) . Detection algorithms not only require classifying objects but also need to correctly localize objects in cluttered backgrounds by using a Region Proposal Network ( RPN ) and bounding box ( bbox ) regressors . It is thus also desirable that both RPN and bbox regressors should also be capable enough to adapt to few-shot settings . ii ) . For a given task with one ( or few ) annotated image ( s ) , the annotated image may contain objects from several classes . But only a few objects of interest are annotated . The goal of the few-shot detector is to detect only these objects of interest . Unfortunately , a naively trained meta-detector ’ s RPN would detect all objects ( even objects from classes not of interest ) and try to classify them as one of the classes of interest rather than background images ( See Figure 1 for an example ) . 
We aim to address these challenges by proposing a novel method for solving few-shot detection using the meta-learning paradigm . We develop Meta-RCNN , an end to end trainable meta object detector . The proposed Meta-RCNN follows the episodic learning paradigm of meta-learning ( Vinyals et al. , 2016 ) , where based on a give meta-train dataset , multiple few-shot tasks are simulated . For a given task , we first construct a class prototype for each of the annotated object categories in the support set . Using these prototypes , a class-specific feature map of the entire image is constructed , i.e. , we obtain a feature map of the entire image for each of the class prototypes . These feature maps are tailored to detect only objects of the class of the prototype , by giving higher attention to appropriate regions in the image containing that object . Finally , all feature maps are merged to produce a combined feature map , followed by an RPN , and then classification and bbox regression layers . Meta-RCNN learns few-shot detector where the whole framework can be trained via meta-learning in an end-to-end manner . In contrast to the naive adaptation of meta-learning for classification into an object detection framework , Meta-RCNN learns the few-shot classifier , the RPN , and the bbox regressor in the meta-learning setting , thus making all three components suitable for handling fewshot scenarios . Moreover , Meta-RCNN learns a class-specific feature map for a given class prototype enabling easier distinction between classes of interest and backgrounds ( where other objects in the image from classes not of interest are considered as backgrounds ) . We demonstrate the effectiveness of Meta-RCNN on two few-shot detection benchmarks : Pascal VOC and animal subset of ImageNet , and show that Meta-RCNN significantly improves the detection result in few shot settings . 2 RELATED WORK . Generic Object Detection . Object detection based on deep learning can be broadly divided into two families : two-stage detectors and one-stage detectors . Two-stage detectors such as RCNN ( Girshick et al. , 2014 ) , Fast RCNN ( Gidaris & Komodakis , 2015 ) and Faster RCNN ( Ren et al. , 2015 ) , first generate a sparse set of proposal candidates , and a fixed-length feature vector is extracted from each of these candidates , followed by a categorical classifier and a bounding box regressor . Twostage detection algorithms have achieved state-of-the-art results on many public benchmarks ( He et al. , 2016 ; Lin et al. , 2017a ) , but are relatively slower than one-stage detectors . One-stage detectors such as SSD ( Liu et al. , 2016 ) , Yolo ( Redmon et al. , 2016 ; Redmon & Farhadi , 2016 ) and RefineDet ( Zhang et al. , 2018 ) directly generate categorical proposals from the feature map and thus avoid cascaded region classifiers . One-stage detectors can achieve real-time inference speed but the detection accuracy is often inferior to two-stage detection algorithms . Both detection families assume access to a large set of annotated data , and are not suitable for scenarios where the model has access to small amounts of annotated training data . In contrast , our proposed Meta-RCNN method addresses detection problem of few-shot setting , and achieves promising results . Meta Learning for few-shot classification . Few-shot learning has been widely explored in image classification , and currently the most promising methods are mainly based on meta learning . 
Ravi & Larochelle (2016) optimized the base-model via an LSTM-based meta-learner which simulates the traditional SGD optimization method. Finn et al. (2017) proposed MAML, which learns a good feature initialization that can adapt to a new task with only one gradient-step update. Based on MAML, Li et al. (2017) proposed Meta-SGD, which learns a set of learnable parameters to control the gradient step of different tasks. Learning an initialization is potentially a very general idea for few-shot learning; however, the training process can be unstable (Antoniou et al., 2018), especially for complex problems such as detection. Vinyals et al. (2016; Snell et al., 2017) proposed a matching network which follows a non-parametric principle by learning a differentiable K-Nearest-Neighbour model. Ren et al. (2018) extended this idea to semi-supervised learning by self-learning from unlabeled data. Sung et al. (2018) proposed a relation network to automatically define the optimal distance metric. These metric-learning based methods are easy to train and effective in addressing few-shot classification. However, directly adapting these techniques for detection is very challenging, as just replacing the object classification branch of a detector with a meta-learner is not sufficient, and training the RPN under a meta-learning paradigm is non-trivial. Few-shot Object Detection. Few-shot detection has received considerably less interest from the community. Dong et al. (2018) addressed few-shot detection using large-scale unlabeled data. Their model is based on a semi-supervised method which extracts knowledge from an unlabeled dataset to enrich the training dataset via self-paced learning and multi-modal learning. However, their method may be misled by incorrect predictions from the initial model and also requires re-training the model for every new task. Chen et al. (2018) proposed a Low-shot Transfer Detector (LSTD) that uses regularization to transfer knowledge from a source domain to a target domain by minimizing the gap between the two domains. RepMet (Schwartz et al., 2019) is a few-shot detection algorithm based on meta learning. It replaces the fully connected classification layer of a standard detector with a modified prototypical network. However, it suffers from the two limitations of the RPN and bbox regression not being able to handle few-shot settings, and of difficulty in distinguishing object classes of interest from background (including object classes not of interest). Our proposed method is also based on meta learning but can be optimized end-to-end and addresses these limitations to perform effective few-shot detection.

3 PRELIMINARIES. 3.1 PROBLEM SETTING. In this section we present the formal problem setting of few-shot detection investigated in our paper. Assume we have two datasets L and S, where L is a large-scale annotated dataset with Lc categories and S is a dataset with only a few annotated images covering Sc categories. There is no category overlap between the two datasets: Lc ∩ Sc = ∅. Our goal is to learn a robust detector based on the annotated data in L and S to detect unlabeled objects of S. The proposed Meta-RCNN aims to learn a general detection framework which can be quickly adapted to different detection tasks that have only a few labeled samples.
We follow the standard training scheme of meta learning, which splits the whole learning stage into two parts, meta-training and meta-testing, and optimizes the model over multiple few-shot tasks simulated from the meta-training data. Specifically, during meta-training, few-shot detection tasks are sampled from L, and each task contains a support set and a query set. For the i-th task, K ways (or categories) are randomly selected from Lc, with N images per category, to build the support set T_i^{L,s}. Similarly, Q images per category are randomly selected to build the query set T_i^{L,q}. The support set T_i^{L,s} and the query set T_i^{L,q} together constitute a complete task extracted from L (see Figure 1):

T_i^L = { T_i^{L,s}, T_i^{L,q} }    (1)

where both the support set and the query set are used to train the meta-model. The meta-model optimizes the base-model with respect to the support set and makes predictions on the query set. Finally, the loss suffered on the query set is used to update the model. In the meta-testing stage, similarly to the meta-training stage, a set of few-shot tasks is sampled from S:

T_i^S = { T_i^{S,s}, T_i^{S,q} }    (2)

where T_i^{S,s} is the support set and T_i^{S,q} is the query set. The model makes predictions on the query set, and these results are averaged across several few-shot tasks to evaluate the expected performance of the few-shot detector over a variety of novel few-shot detection tasks.
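To make the episodic construction in Eqs. (1) and (2) concrete, below is a minimal sketch of how a K-way support/query task could be sampled from an annotated detection dataset. The dataset interface (a mapping from category to annotated image ids) and the helper names in the comments are illustrative assumptions, not the authors' code.

```python
import random

def sample_episode(images_by_class, K, N, Q):
    """Sample one K-way few-shot detection task, as in Eqs. (1)/(2).

    images_by_class: dict mapping a category id to the list of image ids
    annotated with that category (an assumed interface).
    Returns a support set with N images per class and a query set with Q.
    """
    classes = random.sample(sorted(images_by_class), K)  # pick K ways
    support, query = [], []
    for c in classes:
        picks = random.sample(images_by_class[c], N + Q)  # disjoint picks
        support += [(img, c) for img in picks[:N]]
        query += [(img, c) for img in picks[N:]]
    return {"classes": classes, "support": support, "query": query}

# Schematic meta-training step: the loss on the query set updates the model.
#   task = sample_episode(meta_train_index, K=5, N=1, Q=2)
#   prototypes = build_prototypes(model, task["support"])    # assumed helper
#   loss = detection_loss(model, task["query"], prototypes)  # assumed helper
#   loss.backward(); optimizer.step()
```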
This paper addresses the task of object detection in the few-shot setting. The problem is approached through the meta-learning paradigm: the proposed Meta-RCNN trains the popular Faster-RCNN on several few-shot object detection tasks, while the RPN and the object classification networks are meta-learned across the tasks. Compared to previous work, the paper introduces the meta-learning framework and several changes to the Faster-RCNN detector. A prototype representation is derived from the standard RPN network and its proposed bounding boxes. An attention mechanism chooses the objects of interest and is used to train the final RPN and classification network. Experiments on the popular Pascal VOC 2007 and ImageNet-FSOD show that the proposed system has state-of-the-art performance.
SP:9e36913574414b98fd6c4a66061cf0216dcc536b
Angular Visual Hardness
Although convolutional neural networks (CNNs) are inspired by the mechanisms behind human visual systems, they diverge from them on many measures, such as ambiguity or hardness. In this paper, we make a surprising discovery: there exists a (nearly) universal score function for CNNs whose correlation with human visual hardness is statistically significantly stronger than that of the widely used model confidence. We term this function angular visual hardness (AVH); it is given by the normalized angular distance between a feature embedding and the classifier weights of the corresponding target category in a CNN. We conduct an in-depth scientific study. We observe that CNN models with the highest accuracy also have the best AVH scores. This agrees with an earlier finding that state-of-the-art models tend to improve on the classification of harder training examples. We find that AVH displays interesting dynamics during training: it quickly reaches a plateau even though the training loss keeps improving. This suggests the need to design better loss functions that can target harder examples more effectively. Finally, we empirically show a significant improvement in performance when using AVH as a measure of hardness in self-training methods for domain adaptation.

1 INTRODUCTION. The invention and development of Convolutional Neural Networks (CNNs) were inspired by natural visual processing systems. For example, artificial neurons were designed to mimic neurons taking and transforming information [48], and the neocognitron, the origin of the CNN architecture, was enlightened by early findings of receptive fields in the visual cortex [15]. CNNs have achieved great success in pushing the boundaries in a wide range of computer vision tasks such as image classification [24, 30, 59], face recognition [39, 63, 64], and scene analysis [19, 43, 70]. Specifically, on certain large-scale benchmarks such as ImageNet [10], CNNs have even surpassed human-level accuracy. Despite such notable progress, CNNs are still far from matching human vision on many measures such as robustness, adaptability and few-shot learning [23], and can suffer from various biases. For example, CNNs pre-trained on ImageNet are biased towards textures [18]. These biases can result in CNNs being overconfident, or prone to domain gaps and adversarial attacks. Therefore, to fundamentally solve the above problems, efforts should be made to improve CNNs' capability to model the human visual system [47, 62]. The popular measure of CNN confidence, the softmax score, has been widely used in many applications, yet it causes calibration problems and tends to make CNNs overconfident even when they are wrong [21, 34]. However, this is not the case with human vision. Thus, there is a gap between the measures of hard examples, i.e., those that appear ambiguous or uncertain, in these two systems. We denote by human visual hardness the measure of how hard an example is for the human visual system. In this paper, we bridge the gap by proposing a novel score function on CNNs that correlates closely with human visual hardness. The first piece of this puzzle starts with the question of what is a good measure of human visual hardness. Recently, [52] argued that human selection frequency is a good measure. This is the average number of times an image gets picked by a crowd of annotators when they are asked to pick an image from a pool that belongs to a certain specified category.
Intuitively, human selection frequency depends on various factors like object sizes, poses, special filters applied to images, etc. [52] collected human selection frequency scores on the ImageNet validation set using the MTurk platform. In this paper, we use this dataset to verify several hypotheses on correlations between CNNs and human visual systems in Section 3.2. Moreover, automatic detection of examples that are hard for human vision has numerous applications. [52] showed that state-of-the-art models perform better on hard examples (i.e., hard for humans). This implies that in order to improve generalization, models need to improve accuracy on hard examples. This can be achieved through various learning algorithms such as curriculum learning [2] and self-paced learning [32], where being able to detect hard examples is crucial. Measuring sample confidence is also important in partially-supervised problems such as semi-supervised learning [71, 72], unsupervised domain adaptation [8] and weakly-supervised learning [65], due to their under-constrained nature. For instance, self-training [73] can easily reach trivial solutions if one does not select pseudo-labels carefully based on a correct measure of hardness. Furthermore, by identifying hard examples, one can detect various biases in current CNN models. Sample hardness can also be used to identify implicit distribution imbalance in datasets to ensure fairness and remove societal biases [4]. Our contributions are summarized as follows. Angular visual hardness (AVH): Given a CNN, we propose a novel score function that has a stronger correlation with human visual hardness than the softmax score. It is the normalized angular distance between the image feature embedding and the weights of the target category (see Figure 1). The normalization takes into account the angular distances to the other categories. We argue that the semantic ambiguity that affects human visual hardness correlates more strongly with this score, and we find empirical evidence to support this claim. The AVH score is inspired by the observation from Figure 1, and also from [42], that samples from each class concentrate in a convex cone in the embedding space. In addition, existing theoretical results [61] show that, when minimizing the logistic or cross-entropy loss often used in CNNs, gradient descent converges to the same direction as maximum-margin solutions, irrespective of the ℓ2 norms of the classifier weights or feature embeddings. This also provides the intuition for why the AVH score, rather than the current measure of model confidence, the softmax score, correlates better with human visual hardness. We validate that there is a statistically significant stronger correlation between AVH and human selection frequency across a wide range of CNN models. Hence, AVH serves as a proxy for human selection frequency on datasets where such information is not available, which is beneficial to a number of downstream tasks. We observe the evolution of the AVH score during the training of CNN models: it plateaus early in training even though the training (cross-entropy) loss keeps improving. This suggests the need to design better loss functions that can improve performance on hard examples. We also validate the argument in [52] that improving on hard examples is the key to improving generalization, by verifying that the state-of-the-art models have the best average AVH scores.
Finally, we empirically show the superiority of AVH through its application to self-training for unsupervised domain adaptation. With AVH as an improved confidence measure, our proposed self-training framework yields considerably improved pseudo-label selection and category estimation, leading to state-of-the-art results with significant performance gains over baselines.

2 RELATED WORK. Example hardness measures: Recently, measuring sample hardness for deep learning models has been widely studied using the loss value [58], relative Euclidean distance [56, 66] and gradient norm [28]. On the other hand, there is a rich history in the cognitive and neuroscience communities of work on understanding human visual perception [6, 7, 14, 45], much of which focuses on the mechanisms used by the human brain to translate visual information into mental representations. These representations are subject to many correspondence differences and errors and are thereby not isomorphic to the real world [37]. They can be affected by the ambiguity of different semantics [27] such as occlusion, distortion, motion blur, and inherent similarity among objects. Due to the expensive human labeling process, such detailed semantic information is typically not present in the large-scale image benchmarks used to train CNN models. Angular distance in CNNs: [69] uses deep features to quantify the semantic difference between images, indicating that deep features contain the most crucial semantic information. It empirically shows that the angular distance between feature maps in deep neural networks is very consistent with humans in distinguishing semantic differences. However, because of the different goal mentioned above, they have not studied or shown any strong correlation between human visual hardness and angular distance on natural images. [40] proposes a hyperspherical neural network that constrains the parameters of neurons to a unit hypersphere and uses angular similarity to replace inner-product similarity. [42] decouples the inner product into the norm and the angle, and argues that the norm corresponds to intra-class variation while the angle corresponds to inter-class semantic difference. However, this work does not consider any human factors, while our goal is to bridge the gap between CNNs and human perception. [35, 41] propose well-performing regularizations based on angular diversity to improve network generalization. Image degradation: Because CNNs and humans achieve similar accuracy on a wide range of tasks on benchmark datasets, a number of works have investigated similarities and differences between CNNs and human vision [3, 5, 9, 11, 12, 29, 46, 51, 67]. Since human annotation data is hard to come by, researchers have proposed an alternative measure of visual hardness on images based on image degradation [37]. It involves adding noise or changing image properties such as contrast, blurriness, and brightness. [17] employed psychological studies to validate the degradation method as a way to measure human visual hardness. It should be noted that the artificial visual hardness introduced by degradation is a different concept from natural visual hardness. Hardness based on degradation only reflects the hardness of a single original image under various transformations, while natural visual hardness is based on the ambiguity of human perception across a distribution of natural images.
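As a concrete illustration of the degradation operations just described, below is a minimal sketch of contrast reduction and additive noise on a uint8 image array. The exact parametrization of degradation levels in the cited studies is not specified here, so these functional forms are illustrative assumptions.

```python
import numpy as np

def reduce_contrast(img, level):
    """Blend pixels toward the image mean; level in [0, 1], where 0 keeps
    the original image and 1 collapses it to uniform gray."""
    img = img.astype(np.float32)
    out = (1.0 - level) * img + level * img.mean()
    return np.clip(out, 0, 255).astype(np.uint8)

def add_gaussian_noise(img, sigma):
    """Additive Gaussian pixel noise with standard deviation sigma."""
    noisy = img.astype(np.float32) + np.random.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

# Increasing `level` or `sigma` yields progressively harder versions of the
# same image, giving a controllable surrogate for visual hardness.
```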
In this paper, we consider both as surrogates of human visual hardness. Deep model calibration: Confidence calibration is the problem of predicting probability estimates representative of the true correctness likelihood [21]. It is well known that deep neural networks are mis-calibrated, and there is a rich literature trying to solve this problem [21, 31]. However, this is a somewhat different issue, because confidence calibration is a problem introduced by two measurements of the model and does not involve human visual hardness.

3 A DISCOVERY FROM SCIENTIFIC TESTING: ANGULAR VISUAL HARDNESS. 3.1 NOTATIONS AND SETUP. In order to quantify human visual hardness and model predictions conveniently in experiments, we use corresponding surrogates, which are formally defined as follows and used throughout the paper. We use the ImageNet [10] benchmark in all following experiments. In particular, we take advantage of the human selection frequency information for validation images provided by the recent paper [52]. Recall that such information can serve as one of the proxies for human visual hardness. To test whether our findings with human selection frequency hold for another proxy, image degradation, we create an augmented validation set based on two image degradation methods, decreasing contrast and adding noise, and label the images with the corresponding degradation level (results shown in Appendices A.2 and A.5). Besides, in order to verify that our experimental results hold consistently across models rather than for one particular model, we use four popular ImageNet pre-trained models: AlexNet [30], VGG19 [59], DenseNet121 [26] and ResNet50 [24]. We select ResNet50 as the representative model for some experiments. Most importantly, we also provide experimental results on different datasets, MNIST and CIFAR10/100, in Appendices A.3 and A.4 to better support our proposal. Denote by S^n the unit n-sphere; formally, S^n = { x ∈ R^{n+1} : ‖x‖_2 = 1 }. By A(·, ·) we denote the angular distance between two points on S^n, i.e.,

A(u, v) = arccos( ⟨u, v⟩ / (‖u‖ ‖v‖) ).

Let x be the feature embedding that is input to the final layer of the classifier of the pre-trained CNN model (i.e., the output of the penultimate layer, e.g., FC2 for VGG19). Let C be the number of classes of the classification task. Denote by W = { w_i : 0 < i ≤ C } the set of weights for all C classes in the final layer of the classifier. Definition 1 (Angular Visual Hardness (AVH)). For any (x, y), AVH is defined as

AVH(x) = A(x, w_y) / Σ_{i=1}^{C} A(x, w_i),

where w_y represents the weights of the target class. Definition 2 (Model Confidence). We define model confidence on a single sample as the probability score of the true target class output by the CNN model; formally,

exp(w_y · x) / Σ_{i=1}^{C} exp(w_i · x).

Definition 3 (Human Selection Frequency). We define one way to measure human visual hardness on pictures as human selection frequency. Quantitatively, given m human workers in a labeling process as described in [52], if b out of m label a picture as a particular class and that class is the target class of that picture in the final dataset, then human selection frequency is defined as b/m.
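For concreteness, the following minimal sketch computes the quantities of Definitions 1 and 2 from a penultimate-layer embedding x and the final-layer weight matrix W; the array shapes and names are illustrative assumptions rather than the paper's code.

```python
import numpy as np

def angular_distance(u, V):
    """A(u, v_i) = arccos(<u, v_i> / (||u|| ||v_i||)) for each row v_i of V."""
    cos = (V @ u) / (np.linalg.norm(V, axis=1) * np.linalg.norm(u) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def avh_score(x, W, y):
    """AVH(x) = A(x, w_y) / sum_i A(x, w_i)  (Definition 1).

    x: penultimate-layer embedding, shape (d,)
    W: final-layer class weights, shape (C, d)
    y: index of the target class
    """
    angles = angular_distance(x, W)
    return angles[y] / angles.sum()

def model_confidence(x, W, y):
    """Softmax probability of the target class (Definition 2)."""
    logits = W @ x
    logits -= logits.max()  # subtract the max for numerical stability
    p = np.exp(logits) / np.exp(logits).sum()
    return p[y]

# A lower AVH means the embedding is angularly closer to the target class's
# weights, i.e., the example is "easier" under this score.
```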
This paper defines Angular Visual Hardness (AVH) based on the angle between an image's feature embedding and the weights of the target class. The authors compare the correlation of AVH with human selection frequency against those of model confidence and feature norm. The results show that both AVH and model confidence correlate with human selection frequency, but AVH has the stronger correlation. Differently from the conjecture of [41], the feature norm was not correlated with human selection frequency.
SP:ab1e96008989209a4c5423f723bfae327416e78a
Angular Visual Hardness
This paper tries to bridge the gap between CNNs and the human visual system by proposing a metric (angular visual hardness) and validating that it is correlated with human visual hardness, with a stronger correlation than the softmax score, which has commonly been viewed as a metric measuring the hardness of images for CNNs. Furthermore, the paper proposes a reasonable explanation for this observation, i.e., that the feature norm is possibly not correlated with human visual hardness, and validates it through experiments. Finally, the paper shows that this metric is also useful in other applications.
SP:ab1e96008989209a4c5423f723bfae327416e78a
Retrieving Signals in the Frequency Domain with Deep Complex Extractors
1 INTRODUCTION. Complex-valued neural networks have been studied since long before the emergence of modern deep learning techniques (Georgiou & Koutsougeras, 1992; Zemel et al., 1995; Kim & Adalı, 2003; Hirose, 2003; Nitta, 2004). Nevertheless, deep complex-valued models have only recently started to gain momentum (Reichert & Serre, 2014; Arjovsky et al., 2015; Danihelka et al., 2016; Trabelsi et al., 2017; Jose et al., 2017; Wolter & Yao, 2018b; Choi et al., 2019), with the great majority of models in deep learning still relying on real-valued representations. The motivation for using complex-valued representations in deep learning is twofold. On the one hand, biological nervous systems actively make use of synchronization effects to gate signals between neurons – a mechanism that can be recreated in artificial systems by taking into account phase differences (Reichert & Serre, 2014). On the other hand, complex-valued representations are better suited to certain types of data, particularly those that are naturally expressed in the frequency domain. Other benefits of working with complex-valued inputs in the spectral or frequency domain are computational. In particular, short-time Fourier transforms (STFTs) can be used to considerably reduce the temporal dimension of the representation of an underlying signal. This is a critical advantage, as training recurrent neural networks (RNNs) or convolutional neural networks (CNNs) on long sequences remains challenging due to unstable gradients and the computational requirements of backpropagation through time (BPTT) (Hochreiter, 1991; Bengio et al., 1994). Applying the STFT to the raw signal, on the other hand, is computationally efficient, as in practice it is implemented with the fast Fourier transform (FFT), whose computational complexity is O(n log n). The aforementioned biological, representational and computational considerations provide compelling motivation for designing learning models for tasks where a complex-valued representation of the input and output data is more desirable than its real counterpart. Recent work has provided building blocks for deep complex-valued neural networks (Trabelsi et al., 2017). These building blocks have been shown, in many cases, to avoid numerical problems during training and thereby enable the use of complex-valued representations. These representations are well suited to frequency-domain signals, as they have the ability to explicitly encode frequency magnitude and phase components. This motivates us to design a new signal extraction mechanism operating in the frequency domain. In this work, our contributions are summarized as follows. 1. We present a new signal separation mechanism implementing a local ensembling procedure. More precisely, a complex-valued convolutional version of Feature-wise Linear Modulation (FiLM) (Perez et al., 2018) is used to create multiple separated candidates for each of the signals we aim to retrieve from a mixture of inputs. A signal averaging operation on the candidates is then performed in order to increase the robustness of the signal to noise and interference. Before the averaging procedure, a form of dropout is applied to the signal candidates in order to reduce the amount of interference and noise correlation existing between the different candidates.
2. We propose and explore a new magnitude- and phase-aware loss that explicitly takes into account the magnitude and phase of signals. A key characteristic of our loss is that it is scale- and time-invariant. We test our proposed signal extraction mechanism in the audio source separation setting, where we aim to retrieve the distinct audio signal associated with each speaker in the input mix. Our experiments demonstrate the usefulness of our extraction method and show its regularizing effect.

2 RELATED WORK. 2.1 RELATED WORK ON LEARNING REPRESENTATIONS IN THE FOURIER DOMAIN. Leveraging the convolution theorem to retrieve information was done decades ago in the machine learning community using holographic reduced representations (HRRs) in the context of associative memories (Plate, 1991; 1995). HRRs enable one to store key-value data. Retrieval of the value associated with a given key can be performed by convolving the whole data with the key, or by applying an inner product between the two. By applying a fast Fourier transform (FFT) to the keys and the data, one can perform elementwise multiplication between the Fourier transforms and apply an inverse FFT to convert the result back to the time domain. This is equivalent to performing a circular convolution between the key and the data in the time domain, and has the advantage of being less expensive (see the short numerical sketch below). Recently, Danihelka et al. (2016) used associative memories to augment the capacity of LSTMs and to increase their robustness to noise and interference. For that, they applied independent permutations to the memory to create multiple copies of it, which yields decorrelated noise in each of the permuted copies. A complex multiplication is then performed between the key and each of the copies. Signal averaging over the resulting multiplications eliminates the decorrelated noise and strengthens the signal-to-noise ratio (SNR) of the retrieved signal. Danihelka et al. (2016), however, did not rely on FFTs to convert the temporal signals to the frequency domain. In fact, they assumed that complex-valued multiplication between the key and the data is itself enough to perform retrieval, taking the first half of each input representation to be real and the second half imaginary. During this decade, interest in Fourier-domain representations has started to grow in the machine learning community. Bruna et al. (2013) introduced a generalization of convolutions to graphs using the Graph Fourier Transform, which is defined as the multiplication of a graph signal by the eigenvector matrix of the graph Laplacian. However, the computation of the eigenvector matrix is expensive. Recently, computationally more efficient methods have been introduced by Defferrard et al. (2016) and Kipf & Welling (2016) to avoid an explicit use of the Graph Fourier basis. In the context of Convolutional Neural Networks (CNNs), Rippel et al. (2015) introduced spectral pooling, which allows one to perform pooling in the frequency domain. This makes it possible to maintain the output spatial dimensionality, and thus the technique can retain significantly more information than other pooling approaches. Rippel et al. (2015) also observed that parametrizing the convolution filters in the Fourier domain induces faster convergence during training.
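As a sanity check of the convolution theorem underlying the HRR retrieval described at the start of this subsection, the short sketch below (illustrative code, not from the cited works) verifies numerically that elementwise multiplication in the FFT domain matches circular convolution in the time domain.

```python
import numpy as np

def circular_convolution(a, b):
    """Direct O(n^2) circular convolution, for reference."""
    n = len(a)
    return np.array([sum(a[j] * b[(i - j) % n] for j in range(n))
                     for i in range(n)])

rng = np.random.default_rng(0)
key, data = rng.standard_normal(8), rng.standard_normal(8)

direct = circular_convolution(key, data)                         # O(n^2)
via_fft = np.fft.ifft(np.fft.fft(key) * np.fft.fft(data)).real   # O(n log n)

print(np.allclose(direct, via_fft))  # True: the two routes agree
```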
Arjovsky et al. (2016) designed a recurrent neural network (RNN) in which the hidden transition matrix is unitary. More precisely, the hidden transition matrix is constructed as the product of specific unitary transformations such as diagonal matrices, permutations, rotations, the Discrete Fourier Transform and its inverse. This preserves the norm of the hidden state and, as a consequence, mitigates the problem of vanishing and exploding gradients. Wolter & Yao (2018a) designed an RNN where the input is converted to the frequency domain using a Short-Time Fourier Transform (STFT), and the output is converted back to the time domain by applying an inverse STFT. Zhang et al. (2018) proposed a Fourier Recurrent Unit (FRU) and showed that the FRU has gradient lower and upper bounds independent of the temporal dimension. They also demonstrated the great expressivity of the sparse Fourier basis from which the FRU draws its power. As we consider the task of speech separation as a case study, we review both time-domain and frequency-domain speech separation methods in Section 2.2.

2.2 RELATED WORK ON TIME-DOMAIN AND FREQUENCY-DOMAIN SPEECH SEPARATION. Speech separation has been the subject of extensive study within the audio processing literature for a considerable amount of time. Recently, there has been growing interest in leveraging deep learning techniques (Du et al., 2014; Huang et al., 2014; Hershey et al., 2015; Gao et al., 2018; Ephrat et al., 2018) to tackle the speech separation problem. Hershey et al. (2015) proposed a deep clustering approach to speech separation. The basic idea is to learn high-dimensional embeddings of the mixture signals, which are later exploited to separate the speech targets with standard clustering techniques. A recent attempt to extend deep clustering led to the deep attractor network proposed by Chen et al. (2016). Similarly to deep clustering, high-dimensional embeddings are learned, but the network also creates so-called "attractors" to better cluster time-frequency points dominated by different speakers. The aforementioned approaches estimate only the magnitude of the STFT and reconstruct the time-domain signal with the Griffin-Lim algorithm (Griffin & Lim, 1984) or other similar procedures (Sturmel & Daudet, 2006). Other papers have recently proposed to integrate phase information within a speech separation system without necessarily working in the complex-valued frequency domain. The work by Erdogan et al. (2015), for instance, proposes to train a deep neural network with a phase-sensitive loss. Another noteworthy attempt is described in Wang et al. (2018), where the neural network still estimates the magnitude of the spectrum, but the time-domain speech signals are retrieved by directly integrating the Griffin-Lim reconstruction into the neural layers. Furthermore, methods reported in Wang et al. (2018) integrate phase information within a speech separation system by reconstructing the clean phase of each source starting from the estimated magnitude of each source and the mixture phase. This is fundamentally different from our proposed framework, as we provide an end-to-end solution that performs signal retrieval in the complex-valued frequency domain and processes both spectrogram magnitude and phase information, rather than working only on the magnitude representation with a heuristic reconstruction of the phase.
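The STFT analysis/synthesis round trip relied on by several works above (e.g., the STFT, processing, inverse-STFT pipeline of Wolter & Yao) can be sketched as follows; the SciPy calls exist as used, but the sample rate, window length and identity mask are assumed values for illustration.

```python
import numpy as np
from scipy.signal import stft, istft

fs = 16000                       # sample rate (an assumed value)
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t)  # a 1-second test tone

# Analysis: complex-valued time-frequency representation.
f, seg_t, Z = stft(x, fs=fs, nperseg=512)
print(Z.shape, Z.dtype)          # (freq bins, frames), complex

# A frequency-domain model would transform Z here, e.g. by applying a
# complex-valued mask to keep one source and suppress the rest.
mask = np.ones_like(Z)           # identity mask for this round-trip check
Z_hat = mask * Z

# Synthesis: back to the time domain.
_, x_rec = istft(Z_hat, fs=fs, nperseg=512)
print(np.max(np.abs(x - x_rec[: len(x)])))  # near-perfect reconstruction
```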
Another attempt to estimate the clean phase is reported in Le Roux et al. (2019), where the clean phase of each speaker is estimated using a discrete representation. This is also fundamentally different from our work, as it considers a discrete representation of the phase for source separation, whereas we consider a continuous representation of the complex-domain signal. Instead of explicitly integrating phase information, other recent works perform speech separation directly in the time domain, as described in Venkataramani & Smaragdis (2018). Likewise, the TasNet architecture proposed in Luo & Mesgarani (2017) and ConvTasNet (Luo & Mesgarani, 2018) perform speech separation using the mixed time-domain signal as input. Operating directly on the time-domain signal using the ConvTasNet architecture, which implements temporal convolutional networks (TCNs) (Bai et al., 2018), has led to state-of-the-art results in audio speech separation (Luo & Mesgarani, 2018; Shi et al., 2019). The studies of Lee et al. (2017), Hu & Wang (2004) and Huang et al. (2014) are more closely related to our work, as they address the speech separation problem by processing the complex-valued spectral input of the mixed speech. However, this was done without leveraging the recent advances in complex-valued deep learning.
This paper proposes a new method for source separation using deep U-Nets, complex-valued representations and the Fourier domain. Concretely, the contributions are: i) a complex-valued convolutional version of Feature-wise Linear Modulation, able to optimize the parameters needed to create multiple separated candidates for each of the signal sources, which are then combined using signal averaging; ii) the design of a loss that takes into account magnitude and phase while being scale- and time-invariant. The method is tested and compared with real-valued versions, as well as some state-of-the-art methods.
SP:32ff67eb5376e6c8c52c5adc601c520abc9a648c
Retrieving Signals in the Frequency Domain with Deep Complex Extractors
1 INTRODUCTION . Complex-valued neural networks have been studied since long before the emergence of modern deep learning techniques ( Georgiou & Koutsougeras , 1992 ; Zemel et al. , 1995 ; Kim & Adalı , 2003 ; Hirose , 2003 ; Nitta , 2004 ) . Nevertheless , deep complex-valued models have only started to gain momentum ( Reichert & Serre , 2014 ; Arjovsky et al. , 2015 ; Danihelka et al. , 2016 ; Trabelsi et al. , 2017 ; Jose et al. , 2017 ; Wolter & Yao , 2018b ; Choi et al. , 2019 ) , with the great majority of models in deep learning still relying on real-valued representations . The motivation for using complex-valued representations for deep learning is twofold : On the one hand , biological nervous systems actively make use of synchronization effects to gate signals between neurons – a mechanism that can be recreated in artificial systems by taking into account phase differences ( Reichert & Serre , 2014 ) . On the other hand , complex-valued representations are better suited to certain types of data , particularly those that are naturally expressed in the frequency domain . Other benefits provided by working with complex-valued inputs in the spectral or frequency domain are computational . In particular , short-time Fourier transforms ( STFTs ) can be used to considerably reduce the temporal dimension of the representation for an underlying signal . This is a critical advantage , as training recurrent neural networks ( RNNs ) or convolutional neural networks ( CNNs ) on long sequences remains challenging due to unstable gradients and the computational requirements of backpropagation through time ( BPTT ) ( Hochreiter , 1991 ; Bengio et al. , 1994 ) . Applying the STFT on the raw signal , on the other hand , is computationally efficient , as in practice it is implemented with the fast Fourier transform ( FFT ) whose computational complexity is O ( n log ( n ) ) . The aforementioned biological , representational and computational considerations provide compelling motivations for designing learning models for tasks where the complex-valued representation of the input and output data is more desirable than their real-counterpart . Recent work has provided building blocks for deep complex-valued neural networks ( Trabelsi et al. , 2017 ) . These building blocks have been shown , in many cases , to avoid numerical problems during training and , thereby , enable the use of complex-valued representations . These representations are well-suited for frequency domain signals , as they have the ability to explicitly encode frequency magnitude and phase components . This motivates us to design a new signal extraction mechanism operating in the frequency domain . In this work , our contributions are summarized as follows : 1 . We present a new signal separation mechanism implementing a local ensembling procedure . More precisely , a complex-valued convolutional version of Feature-wise Linear Modulation ( FiLM ) ( Perez et al. , 2018 ) is used to create multiple separated candidates for each of the signals we aim to retrieve from a mixture of inputs . A signal averaging operation on the candidates is then performed in order to increase the robustness of the signal to noise and interference . Before the averaging procedure , a form of dropout is implemented on the signal candidates in order to reduce the amount of interference and noise correlation existing between the different candidates . 2 . 
We propose and explore a new magnitude and phase-aware loss taking explicitly into account the magnitude and phase of signals . A key characteristic of our loss is that it is scale- and time-invariant . We test our proposed signal extraction mechanism in the audio source separation setting where we aim to retrieve distinct audio signals associated with each speaker in the input mix . Our experiments demonstrate the usefulness of our extraction method , and show its regularizing effect . 2 RELATED WORK . 2.1 RELATED WORK ON LEARNING REPRESENTATIONS IN THE FOURIER DOMAIN . Leveraging the Convolution Theorem to retrieve information has been done decades ago in the machine learning community using holographic reduced representations ( HRRs ) in the context of associative memories ( Plate , 1991 ; 1995 ) . HRRs enable one to store key-value data . Retrieval of a value in the data associated with a given key can be performed by convolving the whole data with the key or by applying an inner product between these two . By applying a fast Fourier transform ( FFT ) on the keys and the data , one can perform elementwise multiplication between the Fourier transforms and apply an inverse FFT to convert the result to the time domain . This would be equivalent to performing circular convolution between the key and the data in the time domain and has the advantage of being less expensive . Recently , Danihelka et al . ( 2016 ) have used associative memories to augment the capacity of LSTMs and to increase their robustness to noise and interference . For that , they applied independent permutations on the memory to create multiple copies of it . This enables one to obtain decorrelated noise in each of the permuted copies . A complex multiplication is then performed between the key and each of the copies . A signal averaging on the resulted multiplications eliminates the decorrelated noise in them and strengthens the signal-to-noise ratio ( SNR ) of the retrieved signal . Danihelka et al . ( 2016 ) , however , have not relied on FFTs in order to convert the temporal signals to the frequency domain . In fact , they assumed that complex-valued multiplication between the key and the data is itself enough to perform retrieval , and they have assumed that for each input representation the first half is real and the second one is imaginary . During this decade , interest in Fourier domain representations has started to grow in the machine learning community . Bruna et al . ( 2013 ) introduced a generalization of convolutions to graphs using the Graph Fourier Transform , which is defined as the multiplication of a graph signal by the eigenvector matrix of the graph Laplacian . However , the computation of the eigenvector matrix is expensive . Recently , methods that are computationally more efficient have been introduced in Defferrard et al . ( 2016 ) and Kipf & Welling ( 2016 ) to avoid an explicit use of the Graph Fourier basis . In the context of Convolutional Neural Networks ( CNNs ) , Rippel et al . ( 2015 ) introduced spectral pooling , which allows one to perform pooling in the frequency domain . This allows one to maintain the output spatial dimensionality , and thus the technique can retain significantly more information than other pooling approaches . Rippel et al . ( 2015 ) have also observed that the parametrization of the convolution filters in the Fourier domain induces faster convergence during training . Arjovsky et al . 
( 2016 ) designed a recurrent neural network ( RNN ) where the transition hidden matrix is unitary . More precisely , the hidden transition matrix is constructed using the product of specific unitary transformations such as diagonal matrices , permutations , rotations , the Discrete Fourier Transform and its inverse . This allows one to preserve the norm of the hidden state , and as a consequence , mitigates the problem of vanishing and exploding gradients . Wolter & Yao ( 2018a ) designed an RNN where the input is converted to the frequency domain using a Short Time Fourier Transform ( STFT ) . The output is converted back to the time domain by applying an inverse STFT . Zhang et al . ( 2018 ) proposed a Fourier Recurrent Unit ( FRU ) where they showed that FRU has gradient lower and upper bounds independent of the temporal dimension . They have also demonstrated the great expressivity of the sparse Fourier basis from which the FRU draws its power . As we consider the task of speech separation as case study , we provide a related work section on both time domain and frequency domain speech separation methods in section 2.2 . 2.2 RELATED WORK ON TIME DOMAIN AND FREQUENCY DOMAIN SPEECH SEPARATION . Speech separation has been the subject of extensive study within the audio processing literature for a considerable amount of time . Recently , there has been growing interest in leveraging deep learning techniques ( Du et al. , 2014 ; Huang et al. , 2014 ; Hershey et al. , 2015 ; Gao et al. , 2018 ; Ephrat et al. , 2018 ) to tackle the speech separation problem . Hershey et al . ( 2015 ) proposed a deep clustering approach to speech separation . The basic idea is to learn high-dimensional embeddings of the mixture signals , that is later exploited to separate the speech targets with standard clustering techniques . A recent attempt to extend deep clustering led to the deep attractor network proposed by Chen et al . ( 2016 ) . Similarly to deep clustering , high dimensional embeddings are learned , but the network also creates the so-called “ attractors '' to better cluster time-frequency points dominated by different speakers . The aforementioned approaches estimate only the magnitude of the STFTs and reconstruct the time-domain signal with the Griffin-Lim algorithm ( Griffin & Lim , 1984 ) or other similar procedures ( Sturmel & Daudet , 2006 ) . Other papers have recently proposed to integrate the phase-information within a speech separation system without necessarily working in the complexvalued frequency domain . The work by Erdogan et al . ( 2015 ) , for instance , proposes to train a deep neural network with a phase-sensitive loss . Another noteworthy attempt has been described in Wang et al . ( 2018 ) , where the neural network still estimates the magnitude of the spectrum , but the timedomain speech signals are retrieved by directly integrating the Griffin-Lim reconstruction into the neural layers . Furthermore , methods reported in Wang et al . ( 2018 ) integrate the phase-information within a speech separation system by reconstructing the clean phase of each source starting from the estimated magnitude of each source and the mixture phase . This is fundamentally different from our proposed framework , as we provide an end-to-end solution to perform signal retrieval in the complex-valued frequency domain , and process both spectrogram magnitude and phase information rather than working only on magnitude representation with heuristic reconstruction of phase . 
Another attempt to estimate the clean phase is reported in Le Roux et al . ( 2019 ) , where the clean phase of each speaker is estimated using a discrete representation . This is also fundamentally different from our work , as it considers a discrete representation of the phase for source separation , whereas we consider a continuous representation of the complex-domain signal . Instead of explicitly integrating phase information , other recent works perform speech separation in the time domain directly , as described in Venkataramani & Smaragdis ( 2018 ) . Likewise , the TasNet architecture proposed in Luo & Mesgarani ( 2017 ) and ConvTasNet ( Luo & Mesgarani , 2018 ) perform speech separation using the mixed time signal as input . Operating directly on the time-domain signal using the ConvTasNet architecture , which implements temporal convolutional networks ( TCNs ) ( Bai et al. , 2018 ) , has led to state-of-the-art results in audio speech separation ( Luo & Mesgarani , 2018 ; Shi et al. , 2019 ) . The studies by Lee et al . ( 2017 ) , Hu & Wang ( 2004 ) and Huang et al . ( 2014 ) are more closely related to our work , as they address the speech separation problem by processing the complex-valued spectral input of the mixed speech . However , this was done without leveraging the recent advances in complex-valued deep learning .
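To make the contrast with magnitude-only masking concrete , here is a toy sketch ( our own illustration ; the ideal complex ratio mask used here is a standard construction for analysis , not the model proposed in this paper ) showing that an elementwise complex-valued mask on the mixture STFT can recover a source almost exactly , because it both scales the magnitude and rotates the phase of each time-frequency bin :

```python
import numpy as np
from scipy.signal import stft, istft

fs = 8000
t = np.arange(fs) / fs
s1 = np.sin(2 * np.pi * 440 * t)        # target source
s2 = 0.5 * np.sin(2 * np.pi * 950 * t)  # interfering source
mix = s1 + s2

_, _, S1 = stft(s1, fs, nperseg=256)
_, _, M = stft(mix, fs, nperseg=256)

# Ideal complex ratio mask: corrects magnitude AND phase per bin.
crm = S1 / (M + 1e-8)
_, rec = istft(crm * M, fs, nperseg=256)
print(np.max(np.abs(rec[:fs] - s1)))  # near zero: the complex mask recovers source 1
```

A real-valued magnitude mask applied to the same mixture would keep the mixture phase untouched , which is exactly the limitation the phase-heuristic methods above try to work around .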
This work studies deep complex-valued neural networks. Specifically, it proposes a new signal extraction mechanism that operates in the frequency domain and is applied to the speech separation problem. A loss function is also proposed that explicitly takes into account both the magnitude and phase information of a signal. Related work on learning representations in the frequency domain and on speech separation is thoroughly reviewed. Theoretical analysis is conducted to show the motivation and the connection to signal processing. The architecture of the deep neural networks is presented in detail, including an elaboration of the complex mask generation. An experimental study is conducted on a benchmark dataset comparing the proposed complex-valued networks with real-valued counterparts to demonstrate the improvement.
SP:32ff67eb5376e6c8c52c5adc601c520abc9a648c
Neural Text Generation With Unlikelihood Training
1 INTRODUCTION . Neural text generation is a vital tool in a wide range of natural language applications . However , the standard approach – training a sequence-to-sequence model , e.g . a Transformer ( Vaswani et al. , 2017 ) , to maximize log-likelihood and approximately decoding the most likely sequence – is known to be flawed . Generated text in open-ended applications such as language modeling or dialogue has been observed to be dull , with high-frequency tokens used too often and interesting content words used too rarely ( Holtzman et al. , 2019 ; Dinan et al. , 2019 ) . Moreover , the models repeat themselves at the token , phrase , and sentence levels , and statistics comparing a set of human-generated utterances and model-generated responses indicate a discrepancy between the human and model word distributions . This does not appear to be rectified by training on more data ( Radford et al. , 2019 ) . Recent fixes involve modifying the decoding strategy using sampling or more sophisticated beam search variants . However , these decoding strategies do not address the core issue : the model ’ s underlying sequence probabilities are clearly not correct . Several reasons for exactly why neural text is degenerate have been posited , with the cause currently unknown . Possible candidates include the problem being ( i ) a by-product of the model architecture , e.g . the Transformer architecture preferring repeats ( Holtzman et al. , 2019 ; Vig , 2018 ) , ( ii ) an intrinsic property of human language ( Holtzman et al. , 2019 ) rather than a modeling deficiency , or ( iii ) that a training objective relying on fixed corpora cannot take into account the real goal of using the language ( Choi , 2018 ) . Our work shows that , while the above may be factors , a primary factor is the use of the likelihood objective itself , as we demonstrate that degeneration is alleviated if we replace the likelihood objective with our proposal . While low perplexity in the limit should lead to predicting the correct next target word , there are two major flaws of the likelihood objective : ( i ) it pays relatively little attention to the argmax or the top of the ranked list of next-token probabilities , instead optimizing the likelihood of the entire distribution ; ( ii ) it is not focused on optimizing sequence generation , only on producing the next token . The first issue means that greedy or beam search decoding , which rely on the top of the list to generate , are not optimized – there is a discrepancy between maximizing the log-probability of a ground-truth token and ensuring the rank of the ground-truth token to be one . The second issue means that during sequence generation , any imperfection in next-token prediction leads to error accumulation that is not addressed by likelihood training . In this work , we introduce unlikelihood training , an approach that addresses the two aforementioned issues . It combines two types of updates : a likelihood update on the true target tokens so that they are assigned high probability , and an unlikelihood update on tokens that are otherwise assigned too high a probability . We can collect these unlikely token candidates either during next-token prediction or from generated sequences , allowing us to train at both the token and sequence levels .
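As a rough illustration of combining these two update types , the following PyTorch sketch pairs a standard likelihood ( NLL ) term with a token-level unlikelihood penalty . The `-log ( 1 - p )` form of the penalty , the choice of already-seen context tokens as negative candidates , and the unit weighting are assumptions of this sketch rather than details stated in this excerpt :

```python
import torch
import torch.nn.functional as F

def token_unlikelihood_loss(logits, targets, neg_candidates):
    # logits: (T, V) next-token scores; targets: (T,) ground-truth token ids;
    # neg_candidates: (T, V) 0/1 float mask of tokens that should NOT receive
    # high probability (e.g., tokens already generated earlier in the context).
    log_probs = F.log_softmax(logits, dim=-1)
    nll = F.nll_loss(log_probs, targets)  # likelihood update on true targets
    probs = log_probs.exp()
    # Unlikelihood update: push probability mass off the negative candidates.
    one_minus_p = (1.0 - probs).clamp(min=1e-5)
    ul = -(one_minus_p.log() * neg_candidates).sum(dim=-1).mean()
    return nll + ul
```

The penalty vanishes when a negative candidate already has near-zero probability and grows sharply as its probability approaches one , which is the intended pressure against over-confident repeats .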
Both token- and sequence-level unlikelihood training are shown to improve metrics that measure dullness and repetition of the model , while maintaining performance in other metrics such as perplexity or token accuracy compared to the maximum likelihood baseline . Finally , we assess our models using human evaluations . We find that our generations have vastly improved quality compared to likelihood-trained models when both models use beam search decoding . Moreover , our approach when using beam search also significantly improves over likelihood-trained models using either beam blocking or nucleus sampling , thus outperforming the current state of the art . 2 RELATED WORK . Neural Text Degeneration Recently , several papers have observed various forms of neural text degeneration , especially in open-ended generation tasks . In dialogue , it has been shown that there is a mismatch between model and human word distributions , where generative models are more likely to output frequent words , but less likely to produce rare words , compared to humans . For example , this was observed across all generative models submitted to the ConvAI2 NeurIPS 2018 competition ( Dinan et al. , 2019 ) . In language modeling , the work of Holtzman et al . ( 2019 ) highlighted problems with the word frequency distribution and level of repetition in model generations compared to human text . These issues are not remedied by simply increasing the amount of training data ; e.g . large-scale GPT-2 language models ( Radford et al. , 2019 ) display the same issues . Improved Decoding Algorithms Several methods have been proposed to rectify these issues . The primary ones involve changing the decoding method to a sophisticated beam search variant or to stochastic decoding , e.g . sampling . Different variants of beam search have been explored ( Li et al. , 2016 ; Vijayakumar et al. , 2018 ; Kulikov et al. , 2018 ; Holtzman et al. , 2018 ) which can decrease a model ’ s level of repetition by selecting candidates that are unlike previously chosen ones . Separately , hard or soft beam blocking has been investigated ( Paulus et al. , 2017 ; Klein et al. , 2017 ) , whereby previously generated n-grams are blocked from subsequent generation . This approach is often used in dialogue generation , fixing some token- or phrase-level repetitions but also removing repetitions that would naturally occur in human text . The second major approach is that of sampling from the model at generation time . Top-k sampling ( Fan et al. , 2018 ) and nucleus sampling ( Holtzman et al. , 2019 ) are two methods that sample sequences based on a function of the predicted next-token probability distribution given by the model . Both approaches vastly improve the repetition issue , as the randomization often reduces the number of duplicate tokens in a decoded sequence , even if highly scored paths under the model ( represented by beam search candidates ) contain repetitions . However , as the underlying model is unchanged , it often prefers semantically similar phrasing , depending on the temperature parameter of the sampling ( Holtzman et al. , 2019 ) . Furthermore , this solution is less relevant in less open-ended tasks such as machine translation , where beam search variants are the preferred method . Ideally we would like a model that can work with both beam and sampling decoding methods .
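Hard n-gram blocking as described here is simple to state in code . The following sketch ( our own illustration with hypothetical function names , not a specific library's API ) rejects any hypothesis whose newest n-gram repeats one seen earlier in the same sequence :

```python
def blocked_ngrams(tokens, n=3):
    # Collect every n-gram already present in the generated prefix.
    # Recomputed per call for clarity; an incremental set is cheaper in practice.
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def violates_block(hypothesis, n=3):
    # True if the n-gram ending at the newest token duplicates an earlier one;
    # beam search would prune such a hypothesis before scoring continues.
    if len(hypothesis) < n:
        return False
    return tuple(hypothesis[-n:]) in blocked_ngrams(hypothesis[:-1], n)
```

This makes the limitation noted above easy to see : the check is purely surface-level , so it also forbids n-gram repeats that legitimately occur in human text .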
Improved Learning Algorithms The proposed learning criteria are closely related to structured output prediction methods in which the goal is to increase the scores assigned by a model to true examples while decreasing those assigned to negative examples often generated by the model itself . Some representative algorithms include the structured perceptron ( Collins , 2002 ) , energy-based models ( LeCun et al. , 2006 ) and more recently reflective likelihood ( Dieng et al. , 2018 ) . A particular variant in this family of algorithms , called negative training , was recently used by He & Glass ( 2019 ) to prevent generic and malicious responses in dialogue models . Similarly , these structured prediction algorithms with neural language models have been applied to machine translation in recent years by Shen et al . ( 2015 ) and Edunov et al . ( 2017 ) . 3 NEURAL TEXT GENERATION . Language Modeling In language modeling , our goal is to model a probability distribution $p_*(x)$ over variable-length text sequences $x = (x_1, \dots, x_{|x|})$ composed of tokens from a vocabulary , $x_t \in \mathcal{V}$ . We wish to find a model $p_\theta(x)$ which resembles $p_*(x)$ , meaning that samples $\hat{x} \sim p_\theta$ are similar to samples from $p_*$ , and $p_\theta(x) \approx p_*(x)$ for all $x$ . When $p_\theta(x)$ is parameterized by a neural network , we call $p_\theta$ a neural language model . We assume that $p_\theta$ takes the form $p_\theta(x) = \prod_{t=1}^{|x|} p_\theta(x_t \mid x_{<t})$ . The de facto approach to training such a model is to find parameters $\theta$ that maximize the log-likelihood of a finite set of samples $\mathcal{D}$ from $p_*$ by minimizing : $$\mathcal{L}_{\mathrm{MLE}}(p_\theta, \mathcal{D}) = -\sum_{i=1}^{|\mathcal{D}|} \sum_{t=1}^{|x^{(i)}|} \log p_\theta\big(x_t^{(i)} \mid x_{<t}^{(i)}\big) . \quad (1)$$ Sequence Completion A closely related problem consists of sampling a sub-sequence , or prefix , $x_{1:k} \sim p_*$ , then using $p_\theta$ to conditionally decode a continuation , $\hat{x}_{k+1:N} \sim p_\theta(\cdot \mid x_{1:k})$ . We now want the resulting completion $(x_1, \dots, x_k, \hat{x}_{k+1}, \dots, \hat{x}_N)$ to resemble a sample from $p_*$ . We use sequence completion as a setting to study the behavior of neural language models due to its generality . For instance , sequence completion encompasses story generation ( Fan et al. , 2018 ) , contextual text completion ( Radford et al. , 2019 ) , language modeling ( for $k = 0$ ) , and dialogue modeling ( Zhang et al. , 2018 ) where $x_{1:k}$ is a dialogue history and a continuation is a next utterance . Given $p_\theta$ and a prefix $x_{1:k}$ , finding the optimal continuation is not tractable , so in practice approximate deterministic or stochastic decoding strategies are used to generate continuations . Deterministic Decoding Two widely used deterministic decoding approaches are greedy search and beam search . The former can be seen as a special case of the latter . Greedy search selects the highest-probability token at each time step : $x_t = \arg\max p_\theta(x_t \mid x_{<t})$ . Beam search maintains a fixed-size set of partially decoded sequences , called hypotheses . At each time step , beam search forms new hypotheses by appending each token in the vocabulary to each existing hypothesis , scoring the resulting sequences and then selecting the highest-scoring sequences . As we describe in Section 4 , these deterministic decoding strategies , which depend highly on underlying model probabilities , expose issues with conventionally trained neural language models . Stochastic Decoding An alternative is to sample from a model-dependent distribution at each step , $x_t \sim q(x_t \mid x_{<t}, p_\theta)$ .
In order to prevent sampling low-probability tokens , a typical approach is to restrict sampling to a subset of the vocabulary $U \subset \mathcal{V}$ at each step : $$q(x_t \mid x_{<t}, p_\theta) = \begin{cases} p_\theta(x_t \mid x_{<t}) / Z & x_t \in U \\ 0 & \text{otherwise} , \end{cases}$$ where $Z = \sum_{x \in U} p_\theta(x \mid x_{<t})$ . The top-k sampler restricts sampling to the $k$ most-probable tokens ; i.e . $U$ is the size-$k$ subset of $\mathcal{V}$ which maximizes $\sum_{x \in U} p_\theta(x \mid x_{<t})$ ( Fan et al. , 2018 ) . The nucleus sampler instead restricts sampling to the smallest set of tokens with total mass above a threshold $p \in [0, 1]$ ; i.e . $U$ is the smallest subset with $\sum_{x \in U} p_\theta(x \mid x_{<t}) \geq p$ ( Holtzman et al. , 2019 ) .
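For concreteness , here is a minimal PyTorch sketch of both samplers ( our own illustration of the definitions above , operating on a single step 's logits ) :

```python
import torch
import torch.nn.functional as F

def top_k_sample(logits, k):
    # U = the k most-probable tokens; renormalize over U and sample.
    vals, idx = torch.topk(logits, k)
    return idx[torch.multinomial(F.softmax(vals, dim=-1), 1)]

def nucleus_sample(logits, p):
    # U = the smallest prefix of tokens (by probability) whose total mass >= p.
    sorted_logits, sorted_idx = torch.sort(logits, descending=True)
    probs = F.softmax(sorted_logits, dim=-1)
    cumulative = torch.cumsum(probs, dim=-1)
    cutoff = int((cumulative < p).sum().item()) + 1  # keep one token past the threshold
    kept_probs = probs[:cutoff] / probs[:cutoff].sum()
    return sorted_idx[:cutoff][torch.multinomial(kept_probs, 1)]

# Usage on a toy vocabulary of 5 tokens:
logits = torch.tensor([2.0, 1.0, 0.5, -1.0, -3.0])
print(top_k_sample(logits, k=2), nucleus_sample(logits, p=0.9))
```

Note the structural difference : top-k fixes $|U|$ regardless of how peaked the distribution is , while the nucleus adapts $|U|$ to the shape of the distribution at each step .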
This paper proposes a new training loss, the unlikelihood objective, for mitigating the repetition problem in text generated by recent neural language models. The problem is well motivated by evidence from the existing literature. Specifically, the paper argues that the main cause of the degenerate output is the maximum likelihood objective commonly used to train language models. The main contribution is to introduce additional objectives that penalize “unlikely” word probabilities. The proposed penalty comes in two forms: token level (penalizing previous words in the context) and sequence level (penalizing repeats among future decoded words). The former objective is used along with the MLE loss, while the latter, more expensive one is used for fine-tuning. The authors perform experiments on Wikitext-103 and evaluate models on perplexity and on n-gram statistics such as repetition and uniqueness of the decoded texts. The proposed training scheme (UL-token+seq) is shown to have the closest statistics to the original corpus, while perplexity suffers slightly. An additional manual analysis shows that human annotators prefer the outputs (sentence completions) of the proposed method over the other baselines.
SP:4140a212888e058dc1f0bfaa5233f54e9d87fcee
Neural Text Generation With Unlikelihood Training
The main contribution of this paper lies in the proposed unlikelihood training objective for open-ended text generation. The key idea is to force unlikely generations to be assigned lower probability by the model. Both token- and sequence-level unlikelihood training objectives are provided. Impressively, the authors show that models trained with the proposed method can generate high-quality text using only beam search, without resorting to top-k sampling, nucleus sampling, or beam-blocking methods.
SP:4140a212888e058dc1f0bfaa5233f54e9d87fcee
Solving Packing Problems by Conditional Query Learning
1 INTRODUCTION . How can we pack boxes with the smallest bin size ? With the development of globalization and E-Commerce , this question becomes more and more important . Boxes are packed into various bins such as shipping containers and boxcars . Many of these boxes are made of paper or plastic ; packing boxes in a more efficient way can greatly reduce material cost and shipping energy . The Bin Packing Problem ( BPP ) is one of the classic integer combinatorial optimization problems and has been extensively studied for decades ( Wu et al. , 2010 ) . It has been shown that BPP is a strongly NP-hard problem ( Martello et al. , 2000 ) , which requires exponential time to generate the optimal solution . Some heuristic algorithms ( Baltacioglu et al. , 2006 ) try to obtain a nearly optimal solution within polynomial time , but these methods require explicit rules for every specific problem setting . When the setting changes even slightly , the original method can no longer work properly . In contrast , explicit or hand-crafted rules can be replaced by policies learned by neural networks , which are insensitive to problem settings . Neural networks have achieved great success in many domains , such as computer vision ( Liu et al. , 2017 ) , natural language processing ( Vaswani et al. , 2017 ) , and speech recognition ( Chan et al. , 2016 ) . Inspired by these booming techniques , many studies ( Bello et al. , 2016 ) adopt neural networks and recent advances in artificial intelligence to solve classic combinatorial optimization problems , such as the Travelling Salesman Problem ( TSP ) and the Vehicle Routing Problem ( VRP ) . Some recent works propose the pointer network ( Vinyals et al. , 2015b ) and utilize the attention mechanism with reinforcement learning ( Nazari et al. , 2018 ; Kool et al. , 2019 ) to solve the TSP and routing problems , respectively . There are also some learning-based attempts at the packing problem , which utilize reinforcement learning in the neural network model since the optimal solution is unknown beforehand . For example , MTSL ( Duan et al. , 2019 ) separates the selecting and rotating steps by selected learning , and applies a traditional greedy method to perform the final positioning step . Laterre et al . ( 2018 ) , enlightened by AlphaZero ( Silver et al. , 2017 ) , adopt Monte Carlo Tree Search ( MCTS ) with self-competition reinforcement learning to solve the packing problem , but their method is restricted to packing boxes that have been preliminarily divided from a bin . Cai et al . ( 2019 ) simply use reinforcement learning to get some packing results , which serve as the initialization to accelerate the original heuristic algorithms . However , these approaches either miss the connection between sub-actions or combine hand-crafted rules with a learning algorithm . Without the sub-action connection , the learning process becomes a partially observable Markov Decision Process ( MDP ) ( Sutton & Barto , 2018 ) , in which it is hard to generalize a good policy due to the lack of information . Some methods generate all sub-actions at the same time , but it is still non-trivial for a neural network model to produce many mutually related outputs in a single forward propagation , even though the setting of these methods is a strict MDP . Methods combining hand-crafted rules are not only unlikely to achieve an optimal solution , but also sensitive to problem settings .
To the best of our knowledge , there is no end-to-end learning algorithm that solves standard packing problems . In this paper , we propose a Conditional Query Learning ( CQL ) model that directly addresses the gap between sub-actions . With the inner conditional query mechanism , the learning process becomes a fully observable MDP , which makes it easier to apply reinforcement learning to the problem . Compared with a model that outputs several sub-actions simultaneously , the CQL model has a smaller action space per forward step . Benefiting from the small action space , we can apply a simpler model to fit the policy , which works more efficiently . In addition , the small action space does not sacrifice performance . As a result , we do not require hand-crafted rules to fill the gaps between sub-actions , since they are bridged by conditional queries . Specifically , the packing problem requires three mutually conditioned sub-actions : box selecting , rotating , and positioning . To fill the gap between sub-actions , we adopt the CQL model as follows . First of all , the packing problem is formulated as an MDP so that reinforcement learning can be applied . Then the previous sub-actions are embedded as a query to the next sub-action decoder . After all three sub-actions are generated , the environment performs one packing step and updates the observation . Finally , we adopt the actor-critic algorithm to update the model parameters . We conduct extensive experiments to evaluate the models , and the results show that the CQL model greatly outperforms the vanilla model , which produces sub-actions in one forward propagation without queries . In addition , the CQL model achieves a lower bin gap ratio in both 2D and 3D packing compared with extant learning approaches and heuristic algorithms . Specifically , our model improves the space utilization ratio by 7.2 % in 3D packing ( 30 boxes ) compared with a genetic algorithm , and reduces the bin gap ratio by more than 10 % in almost every case compared with the state-of-the-art learning approaches . Furthermore , numerical results show that CQL greatly outperforms other methods when the scale of the problem increases . The learning curve and the variance of the results also illustrate that CQL makes the training process more stable . The contributions of this paper are summarized as follows : • We propose the first end-to-end learning algorithm that solves the standard packing problem for both 2D and 3D settings ; • We propose the conditional query learning ( CQL ) model to address the packing problem , which has mutually conditioned multi-dimension actions ; • We combine conditional query learning with an attention mechanism to construct a new learning framework ; • We conduct extensive experiments , and the results show that our model outperforms both heuristic algorithms and the state-of-the-art learning-based models . We also release our model implementation and 2D & 3D packing environments for the community to test their algorithms . The rest of the paper is organized as follows . We introduce related works in the next section . We introduce the preliminaries in Section 3 . The design of the CQL model is presented in Section 4 . We implement the CQL model and illustrate the comparison results in Section 5 . Finally , we conclude this paper in Section 6 . 2 RELATED WORKS . Obtaining optimal solutions to combinatorial optimization problems is computationally heavy , so optimal labeled data for supervised learning is expensive .
Hence , when using Neural Networks ( NNs ) to solve these problems , one solution is to use heuristic algorithm results as labeled data , but the performance of this approach cannot exceed that of the heuristic algorithm . The other solution is to apply reinforcement learning , which makes the algorithm learn from its own experience and can thus produce better results than existing algorithms . Here we focus on the reinforcement learning approaches and introduce some related works on reinforcement learning and neural combinatorial optimization . 2.1 SEQUENCE PROBLEMS IN NEURAL COMBINATORIAL OPTIMIZATION . Enlightened by the recent success of Neural Networks ( NNs ) , especially the fast progress of sequence-to-sequence models initially used in Neural Machine Translation ( NMT ) ( Bahdanau et al. , 2014 ; Vinyals et al. , 2015a ; Luong et al. , 2015 ; Vaswani et al. , 2017 ; Shen et al. , 2018 ) , and because many combinatorial optimization problems have input and output structure similar to NMT , many studies adopt NNs to solve sequential combinatorial optimization problems . Vinyals et al . ( 2015b ) propose Pointer Networks , which use attention as a pointer to select a member of the input sequence as the output . Bello et al . ( 2016 ) and Nazari et al . ( 2018 ) view TSP and VRP as MDPs , respectively , and both apply a policy gradient algorithm to train their models . Kool et al . ( 2019 ) further improve the results on routing problems using the attention model ( Vaswani et al. , 2017 ) . 2.2 REINFORCEMENT LEARNING WITH PARAMETERIZED ACTION SPACE . Different from routing problems , which only require one object to be selected from the input sequence per step , the packing problem requires three sub-actions to pack a box into the bin . This kind of action space is called a parameterized action space ( Masson et al. , 2016 ) in reinforcement learning , which requires the agent to select multiple types of actions at every action step . Hausknecht & Stone ( 2015 ) extend DDPG to parameterized action spaces and test it on the RoboCup soccer game . This approach suffers from the tanh saturation problem in continuous action spaces , so they apply an inverse gradients trick to address it . Masson et al . ( 2016 ) introduce the Q-PAMDP algorithm , which alternates between learning the action selection and parameter selection policies to make the training process more stable , but the parameter policy has to output all the parameters for all discrete actions . The output size of the parameter policy can explode when the problem has large , high-dimensional parameters with many discrete actions . The authors of Wei et al . ( 2018 ) propose a hierarchical approach : they condition the parameter policy on the output of the discrete action policy , and they apply a Variational Auto-Encoder ( VAE ) trick to make the model differentiable . However , all these works divide the action into only two parts , namely , a discrete action and a set of actions as parameters , which may include discrete or continuous actions . But problems like packing contain several mutually conditioned actions that cannot be directly viewed as a conventional parameterized action space problem ; otherwise , the number of outputs of the second model would be the product of the candidate outputs of all sub-actions , which gives the model too many output options and makes it hard to generalize and learn to produce a good solution . 2.3 PACKING PROBLEM .
As mentioned before , the packing problem is a problem with mutually conditioned sub-actions , and there are some works trying to solve it with NCO . Enlightened by AlphaZero ( Silver et al. , 2017 ) , Laterre et al . ( 2018 ) apply MCTS with self-play to search for better solutions and learn how to pack , but their method only applies to datasets in which the boxes are obtained by random cuts from a regular bin . Duan et al . ( 2019 ) propose a selected learning approach for the 3D flexible bin packing problem which balances sequence and orientation complexity . They adopt pointer networks with reinforcement learning to obtain the box selection and rotation results , and apply a greedy algorithm to select the locations of boxes , which is not an end-to-end learning method and cannot do better than the greedy algorithm . More importantly , this hybrid method does not view the entire packing problem as one complete optimization process , so the learning process only tries to optimize part of the problem . At the same time , the different algorithms for the sub-actions may have conflicting goals in the optimization process . Different from previous methods , we simply embed the previous actions as an attention query for the model to reason about the subsequent actions , and after the full action step is finished , we perform a one-step optimization along every sub-action model . In this way , we reduce the total output size to the sum of the candidate outputs of all sub-actions , which makes the model smaller and easier to learn .
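To make the conditional query idea concrete , here is a simplified sketch of a chained three-head policy ( our own illustration with hypothetical layer names ; it uses concatenation in place of the attention-based queries of the actual model ) :

```python
import torch
import torch.nn as nn

class ConditionalQueryPolicy(nn.Module):
    """Sketch of a CQL-style policy: each sub-action head is queried with an
    embedding of the sub-actions already chosen (hypothetical architecture)."""

    def __init__(self, d_model, n_boxes, n_rotations, n_positions):
        super().__init__()
        self.select_head = nn.Linear(d_model, n_boxes)
        self.rotate_head = nn.Linear(2 * d_model, n_rotations)
        self.place_head = nn.Linear(3 * d_model, n_positions)
        self.box_embed = nn.Embedding(n_boxes, d_model)
        self.rot_embed = nn.Embedding(n_rotations, d_model)

    def forward(self, state):  # state: (B, d_model) encoded observation
        box = torch.distributions.Categorical(logits=self.select_head(state)).sample()
        # Query the rotation head conditioned on the chosen box.
        q1 = torch.cat([state, self.box_embed(box)], dim=-1)
        rot = torch.distributions.Categorical(logits=self.rotate_head(q1)).sample()
        # Query the position head conditioned on both earlier sub-actions.
        q2 = torch.cat([q1, self.rot_embed(rot)], dim=-1)
        pos = torch.distributions.Categorical(logits=self.place_head(q2)).sample()
        # Actor-critic training would additionally keep the log-probabilities
        # of each sampled sub-action for the policy gradient update.
        return box, rot, pos
```

Note how the output size is the sum of the three heads' sizes rather than the product of their candidate counts , which is exactly the reduction argued for above .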
This paper aims at solving geometric bin packing (2D or 3D) problems using a deep reinforcement learning framework. Namely, the framework is based on the actor-critic paradigm and uses a conditional query learning model for performing composite actions (selections, rotations, placements) in geometric bin packing. Experiments are performed on several instances of 2D-BPP and 3D-BPP, comparing against heuristic and learning-based baselines.
SP:3af65f4601748c89802e82f7e312d169ab8f54f2
Solving Packing Problems by Conditional Query Learning
This paper proposes an end-to-end deep reinforcement learning-based algorithm for the 2D and 3D bin packing problems. Its main contribution is conditional query learning (CQL), which allows effective decisions over mutually conditioned action spaces through a policy expressed as a sequence of conditional distributions. An efficient neural architecture for modeling such a policy is proposed. Experiments validate the effectiveness of the algorithm through comparisons with genetic algorithm and vanilla RL baselines.
SP:3af65f4601748c89802e82f7e312d169ab8f54f2
Learning to Anneal and Prune Proximity Graphs for Similarity Search
This paper studies similarity search , which is a crucial enabler of many feature vector–based applications . The problem of similarity search has been extensively studied in the machine learning community . Recent advances in proximity graphs have achieved outstanding performance by exploiting the navigability of the underlying graph structure . In this work , we introduce the annealable proximity graph ( APG ) method to learn and reshape proximity graphs for efficient and effective similarity search . APG makes proximity graph edges annealable , so that they can be effectively trained with a stochastic optimization algorithm . APG identifies the important edges that best preserve graph navigability and prunes inferior edges without drastically changing graph properties . Experimental results show that APG achieves state-of-the-art results , not only producing proximity graphs with fewer edges but also speeding up the search time by 20–40 % across different datasets with almost no loss of accuracy . 1 INTRODUCTION . Similarity search ( nearest neighbor search ) is an integral and indispensable task in many machine learning applications , such as non-parametric classification/regression , computer vision , information retrieval , and language modeling . Recently , it has been demonstrated that it is possible to build a vector search engine to support semantic search ( Chen et al. , 2018 ; Sullivan , 2018 ; Wang et al. , 2018a ; Johnson et al. , 2017 ) , which leverages high-quality neural ranking models ( Nogueira & Cho , 2019 ; Xiong et al. , 2017 ; Zamani et al. , 2018a ) to encode both natural language queries and documents into dense continuous feature vectors and performs similarity search ( e.g. , based on Euclidean distance ) to retrieve relevant documents from vast data volumes . This approach has demonstrated significant relevance gains in a wide range of applications and outperforms existing term-matching baselines , such as web search ( Huang et al. , 2013 ; Zamani et al. , 2018b ) , question answering ( Yu et al. , 2014 ) , ad-hoc retrieval ( Mitra et al. , 2017 ; Dehghani et al. , 2017 ; Guo et al. , 2016 ) , mobile search ( Aliannejadi et al. , 2018 ) , and product search ( Van Gysel et al. , 2016 ) . The efficiency and effectiveness of similarity search approaches have become a problem of great interest , due to their widespread commercial value and exciting prospects . Recent advances in proximity graphs have demonstrated great potential for fast and accurate nearest neighbor retrieval ( Malkov & Yashunin , 2016 ; Fu et al. , 2019 ) , and the empirical performance of proximity graphs outperforms existing tree–based ( Bentley , 1975 ; Beckmann et al. , 1990 ; Yianilos , 1993 ; Muja & Lowe , 2014 ) , locality sensitive hashing–based ( Gionis et al. , 1999 ) , and product quantization–based methods ( Jegou et al. , 2011 ; Ge et al. , 2013 ; Norouzi & Fleet , 2013 ; Lempitsky , 2012 ; Kalantidis & Avrithis , 2014 ) by a large margin . Proximity graphs exploit the navigability of graph structures , which the search process relies on to converge and achieve good efficiency . In practice , that often results in dense connectivity and large memory consumption , because these graphs need to have sufficient edges to maintain specific graph properties , which is a major limitation of this class of approaches . We wish to improve the efficiency of similarity search .
In this paper , we address the following research question : can we learn to prune the edges of a proximity graph while still accurately finding nearest neighbors ? Specifically , the pruned proximity graph should be more efficient than the state-of-the-art proximity graphs with comparable accuracy . Before providing a definite answer to the question , we briefly review the findings in percolation theory that motivate our research . Percolation describes the phase transition of a physical system when one or more of its properties change abruptly after a slight change in controlling variables ( e.g. , temperature , pressure , or others ) ( Broadbent & Hammersley , 1957 ) . Prototypical percolation processes include water turning into ice or steam , and the spontaneous emergence of magnetization and superconductivity in metals . Percolation theory mathematically models these physical systems as complex networks and phase transitions as dramatic changes in the properties of network connections . We believe that if we can model edge importance as the robustness of proximity graphs to the removal of the edges between vertices , we can produce a proximity graph with fewer edges without dramatically changing the navigability of the graph . We present the Annealable Proximity Graph ( APG ) for simplifying proximity graphs . In particular , we make the following contributions : • We introduce the annealable proximity graph and summarize its key characteristics . • To learn edge importance , we present a percolation-inspired method for identifying important edges and introduce a domain-specific loss derived from search distance errors . • Our formulation makes it possible to leverage a stochastic optimization algorithm to optimize the objective and prune edges with low importance . • We prove the convergence of our optimization process and give a theoretical guarantee on the search quality of the pruned graph . This approach is unique compared with previous proximity graph algorithms , most of which only exploit the structure of the underlying index instead of learning from the query distribution to reshape proximity graphs . We provide a detailed empirical analysis of our approach . Experimental results show that our approach reduces the number of edges of state-of-the-art proximity graphs significantly , by 50 % , while also speeding up the search time by 20–40 % across different datasets with almost no loss of accuracy . 2 RELATED WORK . In this section , we review the main ideas from existing work that are relevant to our approach . Approximate nearest neighbor search ( ANN ) . The problem of similarity search has been extensively studied in the literature on ANN algorithms , which trade the guarantee of exactness for large efficiency improvements . Some representative methods include tree structure–based ( Bentley , 1975 ; Beckmann et al. , 1990 ; Yianilos , 1993 ; Muja & Lowe , 2014 ) , locality sensitive hashing ( LSH ) –based ( Gionis et al. , 1999 ) , product quantization ( PQ ) –based ( Jegou et al. , 2011 ; Ge et al. , 2013 ; Norouzi & Fleet , 2013 ; Lempitsky , 2012 ; Kalantidis & Avrithis , 2014 ) , and nearest neighbor graph–based ( Hajebi et al. , 2011 ; Fu & Cai , 2016 ) approaches .
Although some of these methods, such as LSH, have strong theoretical performance guarantees even in the worst case (Indyk & Motwani, 1998), recent advances in proximity graphs have demonstrated logarithmic search complexity and outperformed prior approaches by a large margin (Malkov & Yashunin, 2016; Douze et al., 2018; Fu et al., 2019; Li et al., 2019). Proximity graphs. A proximity graph exploits the closeness relationships among feature vectors to support similarity search. In particular, let $V = \{v_i \in \mathbb{R}^D \mid i = 1, \dots, N\}$ be a database of vectors; a proximity graph $G(V, E)$ is a directed graph where each vertex corresponds to one of the vectors $v_i$, and the whole graph achieves strong local connectivity (as in a lattice graph) combined with a small graph diameter (as in a random graph) (Malkov et al., 2014; Malkov & Yashunin, 2016; Fu et al., 2019). Such a graph exhibits strong navigability and enables quick search with a greedy best-first search algorithm. During the search, a candidate queue of size L determines the trade-off between search time and accuracy.
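To make the candidate queue of size L just described concrete, here is a minimal Python sketch of greedy best-first search over an adjacency-list proximity graph. The function name and data layout are our own illustration, not the paper's implementation; real systems add optimizations such as bit-set visited markers and SIMD distance kernels.

```python
import heapq
import numpy as np

def greedy_best_first_search(graph, vectors, query, entry, L=10, k=1):
    """Best-first search on a proximity graph.

    graph:   adjacency list, graph[i] = iterable of neighbor ids of vertex i
    vectors: (N, D) array of database vectors
    query:   (D,) query vector
    entry:   id of the entry vertex
    L:       candidate queue size (search breadth vs. accuracy trade-off)
    k:       number of nearest neighbors to return
    """
    dist = lambda i: float(np.linalg.norm(vectors[i] - query))
    visited = {entry}
    candidates = [(dist(entry), entry)]   # min-heap of unexpanded vertices
    results = [(-dist(entry), entry)]     # max-heap (negated) of the best L found
    while candidates:
        d, v = heapq.heappop(candidates)
        if len(results) >= L and d > -results[0][0]:
            break                         # nearest candidate cannot improve results
        for u in graph[v]:
            if u in visited:
                continue
            visited.add(u)
            du = dist(u)
            if len(results) < L or du < -results[0][0]:
                heapq.heappush(candidates, (du, u))
                heapq.heappush(results, (-du, u))
                if len(results) > L:
                    heapq.heappop(results)   # drop the current worst
    return sorted((-d, u) for d, u in results)[:k]
```

Larger L explores more of the graph (more "checks" in the paper's terminology) and raises recall at the cost of latency; the pruning studied in this paper attacks the other axis, the number of neighbors expanded per vertex.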
Recent studies look into optimizing proximity graphs with product quantization (Douze et al., 2018; Baranchuk et al., 2018). However, these approaches often suffer from a considerable amount of recall loss on large datasets, because quantization errors tend to be large on dense continuous feature vectors (e.g., those generated by neural networks). Learning to prune. Pruning is a common method for deriving sparse neural networks and reducing their heavy inference cost (Han et al., 2015a;b). These methods annihilate non-important weights by introducing an L0 or L1 regularizer into the loss function; many of them remove weights while keeping the important ones so as to best preserve accuracy. Neural network pruning can also be viewed as an architecture search technique, where the network is treated as a computational graph whose vertices denote computation nodes and whose edges represent the flow of tensors (Wang et al., 2018b; Liu et al., 2019). To the best of our knowledge, learning to prune has not yet been applied to proximity graphs for similarity search; however, it appears to be a natural fit for restructuring proximity graphs to improve search.

3 CHALLENGES. The navigability of proximity graphs comes from approximating monotonicity graphs, e.g., Delaunay graphs (Lee & Schachter, 1980). According to graph monotonicity theory (Dearholt et al., 1988), a monotonicity graph has a strong guarantee of finding the exact nearest neighbor by following a monotonic path with 1-greedy search (Fu et al., 2019). However, monotonicity graphs in high-dimensional space quickly become almost fully connected, and search in fully connected graphs would be infeasible due to the out-degree explosion problem (Hajebi et al., 2011). To address this problem, proximity graphs limit each node to at most R neighbors, aiming to minimize the loss of graph monotonicity while still letting greedy search be effective. However, several challenges remain. The correct choice of R is not obvious. R cannot be too small, because then the graph tends to lose too much monotonicity and search can frequently get stuck at non-global local minima, hurting accuracy. A sufficiently large R is often required to reach high accuracy, but it also increases the number of edges significantly and decreases efficiency: (1) it uniformly raises the connectivity of both "hubs" (i.e., nodes in dense areas) and nodes in sparse areas; and (2) it makes the graph more densely connected, with many vertices sharing a lot of common neighbors, which increases unnecessary distance computations. Ideally, each vertex should have a different R that best preserves graph monotonicity. The problem goes beyond selecting a good value for R and seems to require more fundamental changes to existing proximity graphs.

4 METHODS. In this section, we propose the annealable proximity graph (APG), which supports the exploration of edge heterogeneity in proximity graphs and learns to prune such graphs with an eye towards reaching the nearest neighbor as quickly as possible with minimal loss of accuracy.

4.1 OVERVIEW. APG starts by augmenting a pre-built proximity graph with a learnable weight associated with each edge, representing its importance for preserving graph monotonicity, and a keep probability (§4.2). The weight is updated iteratively according to a learning rule. APG employs a multi-step learning process, as illustrated in Fig. 1. At initialization, all edges have equal weights. Since no edge heterogeneity has been learned yet, applying pruning at this stage could lead to premature decisions. Therefore, we start by performing a "warm-up" iteration over the edge weights only, without optimization, i.e., a grace period. Once this warm-up ends, we introduce a systematic and principled approach to optimize the edge keep-probability distribution of the APG (§4.4). In particular, APG models edge importance as the robustness of graph monotonicity to edge removal and defines an objective function that reflects the destruction of graph monotonicity, based on relative search distance errors (§4.3). It then generates a sequence of randomized subgraphs through a sampling policy to learn edge importance and uses a predefined annealing schedule to optimize the objective. The process ends once a stopping criterion is met. After this step, APG marks low-weight edges as less important and performs a hard pruning to remove inferior edges, as shown in Fig. 2. Figure 2: Example of a proximity graph before and after pruning.
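The paper's exact update rule, sampling policy, and annealing schedule live in §4.2–§4.4 and are not reproduced here; the following Python sketch only illustrates the overall multi-step structure (warm-up, annealed stochastic optimization of keep probabilities, then hard pruning) under our own simplified assumptions, with a generic score-function update standing in for the paper's learning rule.

```python
import numpy as np

def train_apg(edges, monotonicity_loss, num_steps=1000, warmup=100,
              keep_ratio=0.5, t0=1.0, lr=0.1, seed=0):
    """Illustrative APG-style loop: learn edge weights, then hard-prune.

    edges:              list of (u, v) pairs from a pre-built proximity graph
    monotonicity_loss:  callable(mask) -> float, e.g. the relative search
                        distance error of the subgraph selected by the mask
    """
    rng = np.random.default_rng(seed)
    m = len(edges)
    w = np.zeros(m)                        # equal initial weights -> keep prob 0.5
    for step in range(num_steps):
        temp = t0 * (1.0 - step / num_steps)       # linear annealing schedule
        p = 1.0 / (1.0 + np.exp(-w))               # per-edge keep probabilities
        mask = rng.random(m) < p                   # sample a randomized subgraph
        loss = monotonicity_loss(mask)
        if step < warmup:
            continue                               # grace period: no updates yet
        # Baseline-free score-function estimate: push down the weights of edges
        # whose presence coincided with high loss, scaled by the temperature.
        grad = (mask.astype(float) - p) * loss
        w -= lr * max(temp, 1e-3) * grad
    keep = np.argsort(-w)[: int(keep_ratio * m)]   # hard-prune low-weight edges
    return [edges[i] for i in sorted(keep)]
```

In the paper, the loss is evaluated by running searches for sample queries on each sampled subgraph, so the learned weights reflect the query distribution rather than the index structure alone.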
This paper suggests an approach for learning how to sparsify similarity search graphs. Graph-based methods currently attain state-of-the-art performance for similarity search, and reducing their number of edges may speed them up even further. The paper proposes a learning framework that uses sample queries to determine which edges are more useful for searches, and prunes the less useful edges. This is a sensible and potentially useful approach, in line with the recent flurry of work on improving algorithms with tools from machine learning.
SP:158dd8882013a9a5efa7fd4579ad3900ca76a4b5
Learning to Anneal and Prune Proximity Graphs for Similarity Search
This paper studies the problem of improving proximity graphs for nearest neighbor search. It formulates the task of pruning the graph as learning an annealable proximity graph. A hard pruning process is applied after learning, and the results show that the proposed method can remove 50% of the edges and speed up search time by 16–41%.
Understanding and Improving Information Transfer in Multi-Task Learning
1 INTRODUCTION. Multi-task learning has recently emerged as a powerful paradigm in deep learning to obtain language (Devlin et al. (2018); Liu et al. (2019a;b)) and visual representations (Kokkinos (2017)) from large-scale data. By leveraging supervised data from related tasks, multi-task learning approaches reduce the expensive cost of curating the massive per-task training data sets needed by deep learning methods and provide a shared representation that is also more efficient for learning over multiple tasks. While in some cases great improvements have been reported compared to single-task learning (McCann et al. (2018)), practitioners have also observed problematic outcomes, where the performance of certain tasks decreases due to task interference (Alonso and Plank (2016); Bingel and Søgaard (2017)). Predicting when and for which tasks this occurs is a challenge exacerbated by the lack of analytic tools. In this work, we investigate the key components that determine whether tasks interfere constructively or destructively, from theoretical and empirical perspectives. Based on these insights, we develop methods to improve the effectiveness and robustness of multi-task training. There has been a large body of algorithmic and theoretical studies of kernel-based multi-task learning, but less is known for neural networks. The conceptual message from earlier work (Baxter (2000); Evgeniou and Pontil (2004); Micchelli and Pontil (2005); Xue et al. (2007)) is that multi-task learning is effective over "similar" tasks, where the notion of similarity is based on the single-task models (e.g., decision boundaries are close). The work on structural correspondence learning (Ando and Zhang (2005); Blitzer et al. (2006)) uses alternating minimization to learn a shared parameter and separate task parameters. Zhang and Yeung (2014) use a parameter vector for each task and learn task relationships via $\ell_2$ regularization, which implicitly controls the capacity of the model. These results are difficult to apply to neural networks: it is unclear how to reason about networks whose feature space is given by layer-wise embeddings. To determine whether two tasks interfere constructively or destructively, we investigate an architecture with a shared module for all tasks and a separate output module for each task (Ruder (2017)); see Figure 1 for an illustration. Our motivating observation is that in addition to model similarity, which affects the type of interference, task data similarity plays a second-order effect after controlling for model similarity. To illustrate the idea, we consider three tasks with the same number of data samples, where tasks 2 and 3 have the same decision boundary but different data distributions (see Figure 2 for an illustration). We observe that training task 1 with task 2 or task 3 can either improve or hurt task 1's performance, depending on the amount of contributing data along the decision boundary! This observation shows that by measuring the similarities of the task data and the models separately, we can analyze the interference of tasks and attribute its cause more precisely. Motivated by the above observation, we study the theory of multi-task learning through the shared module in linear and ReLU-activated settings.
Our theoretical contribution involves three components: the capacity of the shared module, task covariance, and the per-task weights of the training procedure. Capacity plays a fundamental role because, if the shared module's capacity is too large, there is no interference between tasks; if it is too small, there can be destructive interference. We then show how to determine the type of interference by proposing a more fine-grained notion, task covariance, which can be used to measure the alignment of task data. By varying task covariances, we observe both positive and negative transfer from one task to another! We then provide sufficient conditions which guarantee that one task transfers positively to another, provided there are sufficiently many data points from the contributing task. Finally, we study how to assign per-task weights in settings where different tasks share the same data but have different labels. Experimental results. Our theory leads to the design of two algorithms of practical interest. First, we propose to align the covariances of the task embedding layers, and present empirical evaluations on well-known benchmarks and tasks. On 5 tasks from the General Language Understanding Evaluation (GLUE) benchmark (Wang et al. (2018b)) trained with the BERT-LARGE model of Devlin et al. (2018), our method improves the result of BERT-LARGE by 2.35% in average GLUE score, the standard metric for the benchmark. Further, we show that our method is applicable to transfer learning settings; we observe up to 2.5% higher accuracy by transferring between six sentiment analysis tasks using the LSTM model of Lei et al. (2018). Second, we propose an SVD-based task reweighting scheme to improve multi-task training in settings where different tasks have the same features but different labels. On the ChestX-ray14 dataset, we compare our method to the unweighted scheme and observe an improvement of 0.4% AUC score on average across all tasks. In conclusion, these evaluations confirm that our theoretical insights are applicable to a broad range of settings and applications.

2 THREE COMPONENTS OF MULTI-TASK LEARNING. We study multi-task learning (MTL) models with a shared module for all tasks and a separate output module for each task. We ask: what are the key components that determine whether MTL is better than single-task learning (STL)? In response, we identify three components: model capacity, task covariance, and the optimization scheme. After setting up the model, we briefly describe the role of model capacity. We then introduce the notion of task covariance, which comprises the bulk of the section. We finish by showing the implications of our results for choosing optimization schemes.

2.1 MODELING SETUP. We are given k tasks. Let $m_i$ denote the number of data samples of task i. For task i, let $X_i \in \mathbb{R}^{m_i \times d}$ denote its covariates and $y_i \in \mathbb{R}^{m_i}$ its labels, where d is the dimension of the data. We assume that all tasks have the same input dimension d. This is not a restrictive assumption and is typically satisfied, e.g., for word embeddings in BERT, or by padding zeros to the input otherwise. Our model assumes the output label is 1-dimensional. We can also model a multi-label problem with k types of labels by having k tasks with the same covariates but different labels.
We consider an MTL model with a shared module $B \in \mathbb{R}^{d \times r}$ and a separate output module $A_i \in \mathbb{R}^r$ for task i, where r denotes the output dimension of B (see Figure 1 for an illustration). We define the objective of finding an MTL model as minimizing the following function over B and the $A_i$'s:
$$f(A_1, A_2, \dots, A_k; B) = \sum_{i=1}^{k} \mathcal{L}(g(X_i B) A_i, y_i), \qquad (1)$$
where $\mathcal{L}$ is a loss function such as the squared loss, and the activation function $g : \mathbb{R} \to \mathbb{R}$ is applied to every entry of $X_i B$. In equation 1, all data samples contribute equally. Because of differences between tasks, such as data size, it is natural to re-weight tasks during training:
$$f(A_1, A_2, \dots, A_k; B) = \sum_{i=1}^{k} \alpha_i \cdot \mathcal{L}(g(X_i B) A_i, y_i). \qquad (2)$$
This setup is an abstraction of the hard parameter sharing architecture (Ruder (2017)). The shared module B provides a universal representation (e.g., an LSTM for encoding sentences) for all tasks. Each task-specific module $A_i$ is optimized for its own output. We focus on two models, as follows. The single-task linear model. The labels y of each task follow a linear model with parameter $\theta \in \mathbb{R}^d$: $y = X\theta + \varepsilon$, where every entry of $\varepsilon$ follows the normal distribution $\mathcal{N}(0, \sigma^2)$ with variance $\sigma^2$, and $g(XB) = XB$. This is a well-studied setting for linear regression (Hastie et al. (2005)). The single-task ReLU model. Denote $\mathrm{ReLU}(x) = \max(x, 0)$ for any $x \in \mathbb{R}$. We also consider a non-linear model where $X\theta$ goes through the ReLU activation, with $a \in \mathbb{R}$ and $\theta \in \mathbb{R}^d$: $y = a \cdot \mathrm{ReLU}(X\theta) + \varepsilon$, which applies ReLU to $X\theta$ entrywise. The encoding function $g(XB)$ then maps to $\mathrm{ReLU}(XB)$. Positive vs. negative transfer. For a source task and a target task, we say the source task transfers positively to the target task if training both through equation 1 improves over training the target task alone (measured on its validation set). Negative transfer is the converse of positive transfer. Problem statement. Our goal is to analyze how three components determine positive vs. negative transfer between tasks: model capacity (r), task covariances ($\{X_i^\top X_i\}_{i=1}^k$), and the per-task weights ($\{\alpha_i\}_{i=1}^k$). We focus on regression tasks under the squared loss, but we also provide synthetic experiments on classification tasks to validate our theory. Notations. For a matrix X, its column span is the set of all linear combinations of the column vectors of X. Let $X^\dagger$ denote its pseudoinverse. Given $u, v \in \mathbb{R}^d$, $\cos(u, v)$ is equal to $u^\top v / (\|u\| \cdot \|v\|)$.
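To make equations 1 and 2 concrete, here is a minimal NumPy sketch of the shared-module model under the squared loss; the synthetic data generation and all variable names are our own illustration, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)
k, d, r = 3, 10, 2                 # tasks, input dim, shared-module capacity
m = [200, 150, 100]                # per-task sample counts

# Synthetic single-task linear data: y_i = X_i @ theta_i + noise
thetas = [rng.normal(size=d) for _ in range(k)]
Xs = [rng.normal(size=(m[i], d)) for i in range(k)]
ys = [Xs[i] @ thetas[i] + 0.1 * rng.normal(size=m[i]) for i in range(k)]

def mtl_objective(B, As, alphas=None, g=lambda z: z):
    """Equation 2: sum_i alpha_i * L(g(X_i B) A_i, y_i) with squared loss."""
    alphas = alphas if alphas is not None else [1.0] * k
    return sum(a * np.sum((g(X @ B) @ A - y) ** 2)
               for a, X, A, y in zip(alphas, Xs, As, ys))

B = rng.normal(size=(d, r))
As = [rng.normal(size=r) for _ in range(k)]
print(mtl_objective(B, As))        # loss of a random shared representation
```

Swapping `g` for `lambda z: np.maximum(z, 0)` gives the ReLU-activated variant of the same objective.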
2.2 MODEL CAPACITY. We begin by revisiting the role of model capacity, i.e., the output dimension of B (denoted by r). We show that, as a rule of thumb, r should be smaller than the sum of the capacities of the STL modules. Example. Suppose we have k linear regression tasks under the squared loss; equation 1 becomes:
$$f(A_1, A_2, \dots, A_k; B) = \sum_{i=1}^{k} \|X_i B A_i - y_i\|_F^2. \qquad (3)$$
The optimal solution of equation 3 for task i alone is $\theta_i = (X_i^\top X_i)^\dagger X_i^\top y_i \in \mathbb{R}^d$; hence a capacity of 1 suffices for each task. We show that if $r \ge k$, then there is no transfer between any two tasks. Proposition 1. Let $r \ge k$. There exists an optimum $B^\star$ and $\{A_i^\star\}_{i=1}^k$ of equation 3 where $B^\star A_i^\star = \theta_i$ for all $i = 1, 2, \dots, k$. To illustrate the idea: as long as $B^\star$ contains $\{\theta_i\}_{i=1}^k$ in its column span, there exists $A_i^\star$ such that $B^\star A_i^\star = \theta_i$, which is optimal for equation 3 with minimum error. But this means there is no transfer between any two tasks. This can hurt generalization if a task has limited data, in which case its STL solution overfits the training data, whereas the MTL solution could leverage other tasks' data to improve generalization. The proof of Proposition 1 and its extension to ReLU settings are in Appendix A.1. Algorithmic consequence. The implication is that limiting the shared module's capacity is necessary to enforce information transfer. If the shared module is too small, tasks may interfere negatively with each other; but if it is too large, there may be no transfer between tasks. In Section 3.3, we verify the need to carefully choose model capacity on a wide range of neural networks, including CNNs, LSTMs, and multi-layer perceptrons.
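Proposition 1 can be checked numerically: with capacity r = k, stacking the k single-task solutions as the columns of B and picking each $A_i$ as a standard basis vector reproduces every STL optimum exactly, i.e., zero transfer. A small sketch, reusing the names (`Xs`, `ys`, `k`, `mtl_objective`) from the snippet above:

```python
# Continuing the snippet above, now with capacity r = k.
theta_hat = [np.linalg.pinv(X.T @ X) @ X.T @ y for X, y in zip(Xs, ys)]
B_star = np.column_stack(theta_hat)            # d x k: columns span all STL optima
A_star = [np.eye(k)[i] for i in range(k)]      # A_i selects column i, so B A_i = theta_i
for i in range(k):
    assert np.allclose(B_star @ A_star[i], theta_hat[i])
# Each task attains its single-task optimum: no interference, but also no transfer.
print(mtl_objective(B_star, A_star))
```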
This paper studies how to improve multi-task learning from both theoretical and experimental viewpoints. More specifically, the authors study an architecture with a shared module for all of the tasks and a separate module specific to each task. They show that data similarity between tasks, measured by task covariance, is an important element in whether the tasks are constructive or destructive for each other. They theoretically find a sufficient condition guaranteeing that one task can transfer positively to the other, i.e., a lower bound on the number of data points that one task has to have. Consequently, they propose an algorithm which essentially applies a covariance alignment method to the input.
SP:23726c6ff50e4ff1beb7f21e31a9f6286a656b1e
Understanding and Improving Information Transfer in Multi-Task Learning
This paper analyzed the principles behind successful transfer in the hard-parameter-sharing multi-task learning model. The authors analyzed three key factors of multi-task learning on linear and ReLU-linear models: model capacity (the output dimension after the common transformation), task covariance (similarity between tasks), and optimization strategy (the influence of a re-weighting algorithm), with theoretical guarantees. Finally, they evaluated their claims on state-of-the-art multi-task frameworks (e.g., GLUE, CheXNet), showing the benefits of the proposed algorithm.
FreeLB: Enhanced Adversarial Training for Natural Language Understanding
1 INTRODUCTION. Adversarial training is a method for creating robust neural networks. During adversarial training, mini-batches of training samples are contaminated with adversarial perturbations (alterations that are small yet cause misclassification) and then used to update network parameters until the resulting model learns to resist such attacks. Adversarial training was originally proposed as a means to enhance the security of machine learning systems (Goodfellow et al., 2015), especially for safety-critical systems like self-driving cars (Xiao et al., 2018) and copyright detection (Saadatpanah et al., 2019). In this paper, we turn our focus away from the security benefits of adversarial training and instead study its effects on generalization. While adversarial training boosts robustness, it is widely accepted among computer vision researchers that it is at odds with generalization, with classification accuracy on non-corrupted images dropping as much as 10% on CIFAR-10 and 15% on ImageNet (Madry et al., 2018; Xie et al., 2019). Surprisingly, the opposite has been observed for language models (Miyato et al., 2017; Cheng et al., 2019): adversarial training can improve both generalization and robustness. We will show that adversarial training significantly improves the performance of state-of-the-art models on many language understanding tasks. In particular, we propose a novel adversarial training algorithm, called FreeLB (Free Large-Batch), which adds adversarial perturbations to word embeddings and minimizes the resulting adversarial loss around input samples. The method leverages recently proposed "free" training strategies (Shafahi et al., 2019; Zhang et al., 2019) to enrich the training data with diversified adversarial samples under different norm constraints at no greater cost than PGD-based (Projected Gradient Descent) adversarial training (Madry et al., 2018), which enables us to perform such diversified adversarial training on large-scale state-of-the-art models. We observe improved invariance in the embedding space for models trained with FreeLB, which is positively correlated with generalization. (Code is available at https://github.com/zhuchen03/FreeLB.) We perform comprehensive experiments to evaluate the performance of a variety of adversarial training algorithms on state-of-the-art language understanding models and tasks. In comparisons with standard PGD (Madry et al., 2018), FreeAT (Shafahi et al., 2019), and YOPO (Zhang et al., 2019), FreeLB stands out as the best on the datasets and models we evaluated. With FreeLB, we achieve state-of-the-art results on several important language understanding benchmarks. On the GLUE benchmark, FreeLB pushes the performance of the BERT-base model from 78.3 to 79.4. The overall score of the RoBERTa-large model on the GLUE benchmark is also lifted from 88.5 to 88.8, achieving the best results on most of its sub-tasks. Experiments also show that FreeLB can boost the performance of RoBERTa-large on question answering tasks, such as the ARC and CommonsenseQA benchmarks. We also provide a comprehensive ablation study and analysis to demonstrate the effectiveness of our training process.

2 RELATED WORK. 2.1 ADVERSARIAL TRAINING. To improve the robustness of neural networks against adversarial examples, many defense strategies and models have been proposed, among which PGD-based adversarial training
(Madry et al., 2018) is widely considered the most effective, since it largely avoids the obfuscated gradient problem (Athalye et al., 2018). It formulates a class of adversarial training algorithms (Kurakin et al., 2017) as solving a minimax problem on the cross-entropy loss, which can be achieved reliably through multiple projected gradient ascent steps followed by an SGD (Stochastic Gradient Descent) step. Despite being verified by Athalye et al. (2018) to avoid obfuscated gradients, Qin et al. (2019) show that PGD-based adversarial training still leads to highly convolved and non-linear loss surfaces when the number of ascent steps K is small, which can readily be broken under stronger adversaries. Thus, to be effective, the cost of PGD-based adversarial training is much higher than that of conventional training. To mitigate this cost, Shafahi et al. (2019) proposed a "free" adversarial training algorithm that simultaneously updates both model parameters and adversarial perturbations on a single backward pass. Using a similar formulation, Zhang et al. (2019) effectively reduce the total number of full forward and backward propagations for obtaining adversarial examples by restricting most of the adversarial updates to the first layer.

2.2 ADVERSARIAL EXAMPLES IN NATURAL LANGUAGES. Adversarial examples have been explored primarily in the image domain, and have recently received much attention in the text domain. Previous works on text adversaries have focused on heuristics for creating adversarial examples in the black-box setting, or on specific tasks. Jia & Liang (2017) propose to add distracting sentences to the input document in order to induce misclassification. Zhao et al. (2018) generate text adversaries by projecting the input data to a latent space using GANs and searching for adversaries close to the original instance. Belinkov & Bisk (2018) manipulate every word in a sentence with synthetic or natural noise in machine translation systems. Iyyer et al. (2018) propose a neural paraphrase model based on back-translated data to produce paraphrases that have different sentence structures. Different from previous work, our aim is not to produce actual adversarial examples, but to reap the benefits of adversarial training for natural language understanding. We are not the first to observe that robust language models may perform better on clean test data. Miyato et al. (2017) extend adversarial and virtual adversarial training (Miyato et al., 2019) to the text domain to improve performance on semi-supervised classification tasks. Ebrahimi et al. (2018) propose character/word replacements for crafting attacks, and show that employing adversarial examples in training renders the models more robust. Ribeiro et al. (2018) show that adversarial attacks can be used as a valuable tool for debugging NLP models. Cheng et al. (2019) also find that crafting adversarial examples can help neural machine translation significantly. Notably, these studies have focused on simple models or text generation tasks. Our work explores how to efficiently use the gradients obtained in adversarial training to boost the performance of state-of-the-art transformer-based models.

3 ADVERSARIAL TRAINING FOR LANGUAGE UNDERSTANDING. Pre-trained large-scale language models, such as BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019b), ALBERT (Lan et al., 2020), and T5 (Raffel et al., 2019), have proven to be highly effective for downstream tasks.
We aim to further improve the generalization of these pre-trained language models on downstream language understanding tasks by enhancing their robustness in the embedding space during fine-tuning. We achieve this goal by creating "virtual" adversarial examples in the embedding space and then performing parameter updates on these adversarial embeddings. Creating actual adversarial examples for language is difficult; even with state-of-the-art language models as guidance (e.g., (Cheng et al., 2019)), it remains unclear how to construct label-preserving adversarial examples via word/character replacement without human evaluation, because the meaning of each word/character depends on its context (Ribeiro et al., 2018). Since we are only interested in the effects of adversarial training, rather than producing actual adversarial examples, we add norm-bounded adversarial perturbations to the embeddings of the input sentences using a gradient-based method. Note that our embedding-based adversary is strictly stronger than a more conventional text-based adversary, as it can make manipulations on word embeddings that are not possible in the text domain. For models that incorporate various input representations, including word or subword embeddings, segment embeddings, and position embeddings, our adversary only modifies the concatenated word or subword embeddings, leaving the other components of the sentence representation unchanged. ("Subword embeddings" refers to the embeddings of subword encodings such as the popular Byte Pair Encoding (BPE) (Sennrich et al., 2016).) Denote the sequence of one-hot representations of the input subwords as $Z = [z_1, z_2, \dots, z_n]$, the embedding matrix as $V$, and the language model (encoder) as a function $y = f_\theta(X)$, where $X = VZ$ is the matrix of subword embeddings, $y$ is the output of the model (e.g., class probabilities for classification models), and $\theta$ denotes all the learnable parameters, including the embedding matrix $V$. We add adversarial perturbations $\delta$ to the embeddings such that the prediction becomes $y' = f_\theta(X + \delta)$. To preserve semantics, we constrain the norm of $\delta$ to be small and assume the model's prediction should not change after the perturbation. This formulation is analogous to Miyato et al. (2017), with the difference that we do not require $X$ to be normalized.

3.1 PGD FOR ADVERSARIAL TRAINING. Standard adversarial training seeks optimal parameters $\theta^*$ that minimize the maximum risk for any $\delta$ within a norm ball:
$$\min_\theta \; \mathbb{E}_{(Z, y) \sim \mathcal{D}} \Big[ \max_{\|\delta\| \le \epsilon} \mathcal{L}(f_\theta(X + \delta), y) \Big], \qquad (1)$$
where $\mathcal{D}$ is the data distribution, $y$ is the label, and $\mathcal{L}$ is some loss function. We use the Frobenius norm to constrain $\delta$. For neural networks, the outer "min" is non-convex and the inner "max" is non-concave. Nonetheless, Madry et al. (2018) demonstrated that this saddle-point problem can be solved reliably with SGD for the outer minimization and PGD (a standard method for large-scale constrained optimization; see (Combettes & Pesquet, 2011) and (Goldstein et al., 2014)) for the inner maximization. In particular, for the constraint $\|\delta\|_F \le \epsilon$, with an additional assumption that the loss function is locally linear, PGD takes the following step (with step size $\alpha$) in each iteration:
$$\delta_{t+1} = \Pi_{\|\delta\|_F \le \epsilon} \big( \delta_t + \alpha \, g(\delta_t) / \|g(\delta_t)\|_F \big), \qquad (2)$$
where $g(\delta_t) = \nabla_\delta \mathcal{L}(f_\theta(X + \delta_t), y)$ is the gradient of the loss with respect to $\delta$, and $\Pi_{\|\delta\|_F \le \epsilon}$ performs a projection onto the $\epsilon$-ball. To achieve high-level robustness, multi-step adversarial examples are needed during training, which is computationally expensive.
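Equation 2 is just a normalized gradient ascent step followed by a projection onto the Frobenius-norm ball. A minimal PyTorch-style sketch of the inner maximization, with function name and hyperparameter values chosen only for illustration:

```python
import torch

def pgd_perturb(model, loss_fn, X, y, eps=1.0, alpha=0.1, steps=5):
    """Inner maximization of Eq. 1 via the PGD step of Eq. 2.

    X: input embeddings (batch, seq_len, dim); y: labels.
    Returns a perturbation delta with ||delta||_F <= eps per example.
    """
    delta = torch.zeros_like(X, requires_grad=True)
    for _ in range(steps):
        loss = loss_fn(model(X + delta), y)
        g, = torch.autograd.grad(loss, delta)
        g_norm = g.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1)
        delta = delta + alpha * g / g_norm                  # normalized ascent step
        d_norm = delta.flatten(1).norm(dim=1).view(-1, 1, 1)
        delta = (delta * (eps / d_norm).clamp(max=1.0))     # project onto eps-ball
        delta = delta.detach().requires_grad_(True)
    return delta.detach()
```

Each of the `steps` iterations costs one full forward-backward pass, which is exactly the overhead discussed next.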
K-step PGD (K-PGD) requires K forward-backward passes through the network, while a standard SGD update requires only one. As a result, the adversary generation step in adversarial training increases run-time by an order of magnitude, a catastrophic amount when training large state-of-the-art language models.

3.2 LARGE-BATCH ADVERSARIAL TRAINING FOR FREE. In the inner ascent steps of PGD, the gradients of the parameters can be obtained with almost no overhead when computing the gradients of the inputs. From this observation, FreeAT (Shafahi et al., 2019) and YOPO (Zhang et al., 2019) have been proposed to accelerate adversarial training. They achieve comparable robustness and generalization to standard PGD-trained models using only the same, or a slightly larger, number of forward-backward passes as natural training (i.e., SGD on clean samples). FreeAT takes one descent step on the parameters together with each of the K ascent steps on the perturbation. As a result, FreeAT may suffer from the "stale gradient" problem (Dutta et al., 2018): in every step t, $\delta_t$ does not necessarily maximize the model with parameter $\theta_t$, since its update is based on $\nabla_\delta \mathcal{L}(f_{\theta_{t-1}}(X + \delta_{t-1}), y)$; and vice versa, $\theta_t$ does not necessarily minimize the adversarial risk with adversary $\delta_t$, since its update is based on $\nabla_\theta \mathcal{L}(f_{\theta_{t-1}}(X + \delta_{t-1}), y)$. This problem may be more significant when the step size is large. Different from FreeAT, YOPO accumulates the gradient of the parameters from each of the ascent steps, and updates the parameters only once after the K inner ascent steps. YOPO also advocates that after each back-propagation, one should take the gradient of the first hidden layer as a constant and perform several additional updates on the adversary using the product of this constant and the Jacobian of the first layer of the network, to obtain strong adversaries. However, when the first hidden layer is a linear layer, as in their implementation, such an operation is equivalent to taking a larger step size on the adversary. The analysis backing the extra update steps also assumes a twice continuously differentiable loss, which does not hold for the ReLU-based neural networks they experimented with, so the reasons for the success of such an algorithm remain obscure. We give empirical comparisons between YOPO and our approach in Sec. 4.3. To obtain better solutions for the inner max and avoid fundamental limitations on the function class, we propose FreeLB, which performs multiple PGD iterations to craft adversarial examples, and simultaneously accumulates the "free" parameter gradients $\nabla_\theta \mathcal{L}$ in each iteration.

Algorithm 1: "Free" Large-Batch Adversarial Training (FreeLB-K)
Require: training samples $X = \{(Z, y)\}$, perturbation bound $\epsilon$, learning rate $\tau$, ascent steps $K$, ascent step size $\alpha$
1: Initialize $\theta$
2: for epoch $= 1 \dots N_{ep}$ do
3:   for minibatch $B \subset X$ do
4:     $\delta_0 \leftarrow \frac{1}{\sqrt{N_\delta}} U(-\epsilon, \epsilon)$
5:     $g_0 \leftarrow 0$
6:     for $t = 1 \dots K$ do
7:       (accumulate gradient of parameters $\theta$)
8:       $g_t \leftarrow g_{t-1} + \frac{1}{K} \mathbb{E}_{(Z,y) \in B} [\nabla_\theta \mathcal{L}(f_\theta(X + \delta_{t-1}), y)]$
9:       (update the perturbation $\delta$ via gradient ascent)
10:      $g_{adv} \leftarrow \nabla_\delta \mathcal{L}(f_\theta(X + \delta_{t-1}), y)$
11:      $\delta_t \leftarrow \Pi_{\|\delta\|_F \le \epsilon}(\delta_{t-1} + \alpha \cdot g_{adv} / \|g_{adv}\|_F)$
12:    end for
13:    $\theta \leftarrow \theta - \tau g_K$
14:  end for
15: end for
After that, it updates the model parameters $\theta$ all at once with the accumulated gradients. The overall procedure is shown in Algorithm 1, in which $X + \delta_t$ is an approximation to the local maximum within the intersection of two balls $\mathcal{I}_t = B_{X+\delta_0}(\alpha t) \cap B_X(\epsilon)$. By taking a descent step along the averaged gradients at $X + \delta_0, \dots, X + \delta_{K-1}$, we approximately optimize the following objective:

$$\min_\theta \mathbb{E}_{(Z,y)\sim\mathcal{D}} \left[ \frac{1}{K} \sum_{t=0}^{K-1} \max_{\delta_t \in \mathcal{I}_t} L(f_\theta(X + \delta_t), y) \right], \quad (3)$$

which is equivalent to replacing the original batch $X$ with a K-times larger virtual batch, consisting of samples whose embeddings are $X + \delta_0, \dots, X + \delta_{K-1}$. Compared with PGD-based adversarial training (Eq. 1), which minimizes the maximum risk at a single estimated point in the vicinity of each training sample, FreeLB minimizes the maximum risk at each ascent step at almost no overhead.

Intuitively, FreeLB could be a learning method with lower generalization error than PGD. Sokolic et al. (2017) proved that the generalization error of a learning method invariant to a set of T transformations may be up to $\sqrt{T}$ smaller than that of a non-invariant learning method. According to their theory, FreeLB could yield a more significant improvement over natural training, since FreeLB enforces invariance to K adversaries from a set of up to K different norm constraints, while PGD only enforces invariance to a single norm constraint. Empirically, FreeLB does lead to higher robustness and invariance than PGD in the embedding space, in the sense that the maximum increase of loss in the vicinity of X for models trained with FreeLB is smaller than that for PGD; see Sec. 4.3 for details. In theory, such improved robustness can lead to better generalization (Xu & Mannor, 2012), which is consistent with our experiments. Qin et al. (2019) also demonstrated that PGD-based methods lead to highly convolved and non-linear loss surfaces in the vicinity of input samples when K is small, indicating a lack of robustness.
- This paper modifies and extends the recent "free" training strategies in adversarial training for representation learning in natural language. The proposed "Free" Large-Batch Adversarial Training is well motivated in comparison with plain PGD-based adversarial training and existing methods like FreeAT and YOPO: it virtually enlarges the batch size and minimizes the maximum risk at every ascent step. The contributions are solid.
FreeLB: Enhanced Adversarial Training for Natural Language Understanding
1 INTRODUCTION

Adversarial training is a method for creating robust neural networks. During adversarial training, mini-batches of training samples are contaminated with adversarial perturbations (alterations that are small yet cause misclassification), and then used to update network parameters until the resulting model learns to resist such attacks. Adversarial training was originally proposed as a means to enhance the security of machine learning systems (Goodfellow et al., 2015), especially for safety-critical systems like self-driving cars (Xiao et al., 2018) and copyright detection (Saadatpanah et al., 2019). In this paper, we turn our focus away from the security benefits of adversarial training, and instead study its effects on generalization. While adversarial training boosts robustness, it is widely accepted by computer vision researchers that it is at odds with generalization, with classification accuracy on non-corrupted images dropping as much as 10% on CIFAR-10 and 15% on ImageNet (Madry et al., 2018; Xie et al., 2019). Surprisingly, the opposite has been observed for language models (Miyato et al., 2017; Cheng et al., 2019): adversarial training can improve both generalization and robustness.

We will show that adversarial training significantly improves the performance of state-of-the-art models on many language understanding tasks. In particular, we propose a novel adversarial training algorithm, called FreeLB (Free Large-Batch), which adds adversarial perturbations to word embeddings and minimizes the resultant adversarial loss around input samples. The method leverages recently proposed "free" training strategies (Shafahi et al., 2019; Zhang et al., 2019) to enrich the training data with diversified adversarial samples under different norm constraints at no greater cost than PGD-based (Projected Gradient Descent) adversarial training (Madry et al., 2018), which enables us to perform such diversified adversarial training on large-scale state-of-the-art models. We observe improved invariance in the embedding space for models trained with FreeLB, which is positively correlated with generalization. (Code is available at https://github.com/zhuchen03/FreeLB.)

We perform comprehensive experiments to evaluate the performance of a variety of adversarial training algorithms on state-of-the-art language understanding models and tasks. In comparisons with standard PGD (Madry et al., 2018), FreeAT (Shafahi et al., 2019) and YOPO (Zhang et al., 2019), FreeLB stands out as the best for the datasets and models we evaluated. With FreeLB, we achieve state-of-the-art results on several important language understanding benchmarks. On the GLUE benchmark, FreeLB pushes the performance of the BERT-base model from 78.3 to 79.4. The overall score of the RoBERTa-large models on the GLUE benchmark is also lifted from 88.5 to 88.8, achieving the best results on most of its sub-tasks. Experiments also show that FreeLB can boost the performance of RoBERTa-large on question answering tasks, such as the ARC and CommonsenseQA benchmarks. We also provide a comprehensive ablation study and analysis to demonstrate the effectiveness of our training process.

2 RELATED WORK

2.1 ADVERSARIAL TRAINING

To improve the robustness of neural networks against adversarial examples, many defense strategies and models have been proposed, among which PGD-based adversarial training (Madry et al., 2018) is widely considered to be the most effective, since it largely avoids the obfuscated gradient problem (Athalye et al., 2018). It formulates a class of adversarial training algorithms (Kurakin et al., 2017) as solving a minimax problem on the cross-entropy loss, which can be achieved reliably through multiple projected gradient ascent steps followed by an SGD (Stochastic Gradient Descent) step. Despite being verified by Athalye et al. (2018) to avoid obfuscated gradients, Qin et al. (2019) show that PGD-based adversarial training still leads to highly convolved and non-linear loss surfaces when K is small, which can be readily broken under stronger adversaries. Thus, to be effective, the cost of PGD-based adversarial training is much higher than that of conventional training. To mitigate this cost, Shafahi et al. (2019) proposed a "free" adversarial training algorithm that simultaneously updates both model parameters and adversarial perturbations on a single backward pass. Using a similar formulation, Zhang et al. (2019) effectively reduce the total number of full forward and backward propagations needed to obtain adversarial examples by restricting most of the adversarial updates to the first layer.

2.2 ADVERSARIAL EXAMPLES IN NATURAL LANGUAGES

Adversarial examples have been explored primarily in the image domain, and have recently received much attention in the text domain. Previous works on text adversaries have focused on heuristics for creating adversarial examples in the black-box setting, or on specific tasks. Jia & Liang (2017) propose adding distracting sentences to the input document to induce misclassification. Zhao et al. (2018) generate text adversaries by projecting the input data to a latent space using GANs, and searching for adversaries close to the original instance. Belinkov & Bisk (2018) manipulate every word in a sentence with synthetic or natural noise in machine translation systems. Iyyer et al. (2018) propose a neural paraphrase model based on back-translated data to produce paraphrases that have different sentence structures. Different from previous work, our goal is not to produce actual adversarial examples, but to reap the benefits of adversarial training for natural language understanding.

We are not the first to observe that robust language models may perform better on clean test data. Miyato et al. (2017) extend adversarial and virtual adversarial training (Miyato et al., 2019) to the text domain to improve performance on semi-supervised classification tasks. Ebrahimi et al. (2018) propose character/word replacements for crafting attacks, and show that employing adversarial examples in training renders models more robust. Ribeiro et al. (2018) show that adversarial attacks can be used as a valuable tool for debugging NLP models. Cheng et al. (2019) also find that crafting adversarial examples can help neural machine translation significantly. Notably, these studies have focused on simple models or text generation tasks. Our work explores how to efficiently use the gradients obtained in adversarial training to boost the performance of state-of-the-art transformer-based models.

3 ADVERSARIAL TRAINING FOR LANGUAGE UNDERSTANDING

Pre-trained large-scale language models, such as BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019b), ALBERT (Lan et al., 2020) and T5 (Raffel et al., 2019), have proven to be highly effective for downstream tasks.
In this paper, the authors present a new adversarial training algorithm and apply it to the finetuning stage of the large-scale language models BERT and RoBERTa. They find that with FreeLB applied to finetuning, both BERT and RoBERTa see small boosts in performance on GLUE, ARC, and CommonsenseQA. The gains on GLUE are quite small (0.3 on the GLUE test score for RoBERTa), but the gains are more substantial on ARC and CommonsenseQA. The paper also presents ablation studies on using the same dropout mask across each ascent step of FreeLB, empirically seeing gains from using the same mask. The authors also present some analysis of robustness in the embedding space, showing that FreeLB leads to greater robustness than other adversarial training methods.
An Inductive Bias for Distances: Neural Nets that Respect the Triangle Inequality
1 INTRODUCTION

Many machine learning tasks involve a distance measure over the input domain. A good measure can make a once hard task easy, even trivial. In many cases, including graph distances, certain clustering algorithms, and general value functions in reinforcement learning (RL), it is either known that distances satisfy the triangle inequality, or required for purposes of theoretical guarantees; e.g., speed and loss guarantees in k-nearest neighbors and clustering (Cover and Hart, 1967; Indyk, 1999; Davidson and Ravi, 2009), or optimality guarantees for A* search (Russell and Norvig, 2016). This also makes the triangle inequality a potentially useful inductive bias for learning distances. For these reasons, numerous papers have studied different ways to learn distances that satisfy the triangle inequality (Xing et al., 2003; Yang and Jin, 2006; Brickell et al., 2008; Kulis et al., 2013).

The usual approach to enforcing the triangle inequality in deep metric learning (Yi et al., 2014; Hoffer and Ailon, 2015; Wang et al., 2018) is to use a Siamese network (Bromley et al., 1994) that computes a Euclidean distance in the latent space. Specifically, the Siamese network models a distance $d_\mathcal{X} : \mathcal{X} \times \mathcal{X} \to \mathbb{R}_+$ on domain $\mathcal{X}$ by learning an embedding $\phi : \mathcal{X} \to \mathbb{R}^n$ and computing $d_\mathcal{X}(x, y)$ as $\|\phi(x) - \phi(y)\|_2$. Successful applications include collaborative filtering (Hsieh et al., 2017), few-shot learning (Snell et al., 2017), and multi-goal reinforcement learning (Schaul et al., 2015).

The use of Euclidean distance, however, has at least two downsides. First, the Euclidean architecture cannot represent asymmetric metrics, which arise naturally in directed graphs and reinforcement learning. Second, it is well known that for some metric spaces $(\mathcal{X}, d_\mathcal{X})$, including large classes of symmetric graphs (e.g., constant-degree expanders and k-regular graphs), there is no embedding $\phi : \mathcal{X} \to \mathbb{R}^n$ that can model $d_\mathcal{X}$ precisely using $\|\cdot\|_2$ (Indyk et al., 2017). A classic example is shown in Figure 1. (Code available at https://github.com/spitis/deepnorms)

[Figure 1, with an inset table reporting fit quality (Norm / MSE): Euclidean, R^n, for all n: 0.057; Deep Norm, R^2: 0.000; Wide Norm, R^2: 0.000; panels labeled Deep Norm, Wide Norm, Mahalanobis.]
Fig. 1: The nodes in the graph (left) cannot be embedded into any R^n so that edge distances are represented by the Euclidean metric: points φ(A) and φ(D) must lie at the midpoint of the segment from φ(B) to φ(C), but then φ(A) and φ(D) coincide, which is incorrect. Our models fit the data in R^2 (middle). The visualization (right) shows learned norm balls in red and embeddings in blue.

In part due to these issues, some have considered non-architectural constraints. He et al. (2016) impose a triangle inequality constraint in RL via an online, algorithmic penalty. Implementing such a penalty can be expensive, and does not provide any guarantees. An approach that does guarantee satisfaction of the triangle inequality is to fix any violations after learning, as done by Brickell et al. (2008). But this does not scale to large problems, and provides no inductive bias during learning. Is it possible to impose the triangle inequality architecturally, without the downsides of Euclidean distance?
In response to this question, we present the following contributions: (1) three novel neural network architectures, Deep Norms, Wide Norms and Neural Metrics, which model symmetric and asymmetric norms and metrics; (2) universal approximation theorems for Deep Norms, Wide Norms and modified Input Convex Neural Networks (Amos et al., 2017); and (3) empirical evaluations of our models on several tasks: modeling norms, metric nearness, modeling shortest path lengths, and learning a general value function (Sutton et al., 2011). Our models are guaranteed to satisfy the triangle inequality, are straightforward to implement, and may be used in place of the usual Euclidean metric should one seek to model asymmetry or increase expressiveness.

2 MODELING NORMS

2.1 PRELIMINARIES

Our goal is to construct expressive models of metrics and quasi-metrics on a domain $\mathcal{X}$. A metric is a function $d : \mathcal{X} \times \mathcal{X} \to \mathbb{R}_+$ satisfying, for all $x, y, z \in \mathcal{X}$:
M1 (Non-negativity): $d(x,y) \ge 0$.
M2 (Definiteness): $d(x,y) = 0 \iff x = y$.
M3 (Subadditivity): $d(x,z) \le d(x,y) + d(y,z)$.
M4 (Symmetry): $d(x,y) = d(y,x)$.

Since we care mostly about the triangle inequality (M3), we relax the other axioms and define a quasi-metric as a function that satisfies M1 and M3, but not necessarily M2 or M4. Given a weighted graph G = (V, E) with non-negative weights, shortest path lengths define a quasi-metric between vertices. When $\mathcal{X}$ is a vector space (we assume over R), many common metrics, e.g., Euclidean and Manhattan distances, are induced by a norm. A norm is a function $\|\cdot\| : \mathcal{X} \to \mathbb{R}$ satisfying, for all $x, y \in \mathcal{X}$ and $\alpha \in \mathbb{R}_+$:
N1 (Positive definiteness): $\|x\| > 0$, unless $x = 0$.
N2 (Positive homogeneity): $\alpha\|x\| = \|\alpha x\|$ for $\alpha \ge 0$.
N3 (Subadditivity): $\|x + y\| \le \|x\| + \|y\|$.
N4 (Symmetry): $\|x\| = \|{-x}\|$.

An asymmetric norm satisfies N1-N3, but not necessarily N4. An (asymmetric) semi-norm is non-negative and satisfies N2 and N3 (and N4), but not necessarily N1. We will use the fact that any asymmetric semi-norm $\|\cdot\|$ induces a quasi-metric via the rule $d(x,y) = \|x - y\|$, and first construct models of asymmetric semi-norms. Any induced quasi-metric d is translation invariant ($d(x,y) = d(x+z, y+z)$) and positive homogeneous ($d(\alpha x, \alpha y) = \alpha d(x,y)$ for $\alpha \ge 0$). If $\|\cdot\|$ is symmetric (N4), so is d (M4); if $\|\cdot\|$ is N1, d is M2. Metrics that are not translation invariant (e.g., Bi et al. (2015)) or positive homogeneous (e.g., our Neural Metrics in Section 3) cannot be induced by a norm. A convex function $f : \mathcal{X} \to \mathbb{R}$ is a function satisfying C1: for all $x, y \in \mathcal{X}$ and $\alpha \in [0,1]$, $f(\alpha x + (1-\alpha)y) \le \alpha f(x) + (1-\alpha) f(y)$. The commonly used ReLU activation, $\mathrm{relu}(x) = \max(0, x)$, is convex.

2.2 DEEP NORMS

It is easy to see that any function satisfying N2 and N3 is convex; thus, all asymmetric semi-norms are convex. This motivates modeling norms as constrained convex functions, using the following proposition.

Proposition 1. All positive homogeneous convex functions are subadditive; i.e., C1 ∧ N2 ⇒ N3. The proof is straightforward (put $\alpha = 1/2$ in C1 and apply N2 to the left side).

To use Proposition 1, we begin with the Input Convex Neural Network (ICNN) (Amos et al., 2017) architecture, which satisfies C1, and further constrain it to be non-negative and satisfy N2. The resulting Deep Norm architecture is guaranteed to be an asymmetric semi-norm. A k-layer Deep Norm is defined as
$$h_i = g_i(W_i^{+} h_{i-1} + U_i x), \quad i = 1, \dots, k, \qquad \|x| := h_k, \quad (1)$$

where x is the input, $h_0 = 0$, $W_1^{+} = 0$, the activation functions $g_i$ preserve C1 and N2 (element-wise), $g_k$ is non-negative, each $W_i^{+}$ is a non-negative matrix, and each $U_i$ is an unconstrained matrix. Compared with the original ICNN architecture, we have omitted the bias terms from Equation 1, have constrained the $g_i$ to preserve positive homogeneity while also allowing them to be any function that preserves element-wise convexity (this is essential to our universal approximation results), and have required $g_k$ to be non-negative. It is easy to verify that the set of valid element-wise $g_i$ is $\{g_{\alpha\beta}(x) = \alpha\,\mathrm{relu}(x) + \beta x \mid \alpha, \beta \ge 0\}$. This includes ReLUs and leaky ReLUs. But we do not restrict ourselves to element-wise activations. Inspired by GroupSort (Anil et al., 2018), we use activations that depend on multiple inputs (and preserve element-wise C1 and N2). In particular, we use the pairwise MaxReLU:

$$\mathrm{maxrelu}(x, y) = [\max(x, y),\; \alpha\,\mathrm{relu}(x) + \beta\,\mathrm{relu}(y)], \quad \text{where } \alpha, \beta \ge 0. \quad (2)$$

Deep Norms satisfy N2 and N3. Using the following propositions, we may also impose N1 and N4.

Proposition 2. If $\|\cdot|$ is an asymmetric semi-norm, then $\|x\| = \|x| + \|{-x}|$ is a semi-norm.

Proposition 3. If $\|\cdot\|_a$ is an (asymmetric) semi-norm, $\|\cdot\|_b$ is a norm (e.g., $\|\cdot\|_b = \|\cdot\|_2$), and $\lambda > 0$, then $\|x\|_{a+\lambda b} = \|x\|_a + \lambda\|x\|_b$ is an (asymmetric) norm.

2.3 WIDE NORMS

In addition to Deep Norms, we propose the following alternative method for constructing norms: a Wide Norm is any combination of (asymmetric) (semi-)norms that preserves N1-N4. It is easy to verify that both (1) non-negative sums and (2) max are valid combinations (indeed, these properties were also used to construct Deep Norms), and so the vector-wise MaxMean combination is valid:

$$\mathrm{maxmean}(x_1, x_2, \dots, x_n) = \alpha \max(x_1, x_2, \dots, x_n) + (1-\alpha)\,\mathrm{mean}(x_1, x_2, \dots, x_n).$$

Although the family of Wide Norms is broad, for computational reasons to be discussed in Subsection 3.5, we focus our attention on the Wide Mahalanobis norm; references to "Wide Norms" in the rest of this paper mean Wide Mahalanobis norms. The Mahalanobis norm of $x \in \mathbb{R}^n$, parameterized by $W \in \mathbb{R}^{m\times n}$, is defined as $\|x\|_W = \|Wx\|_2$. It is easily verified that $\|\cdot\|_W$ is a proper norm when W is a non-singular (square) matrix, and a semi-norm when W is singular or m < n. A k-component Mixture of Mahalanobis norm (hereafter Wide Norm, or Wide Norm with k Euclidean components) is defined as the maxmean of k Mahalanobis norms:

$$\|x\| = \mathrm{maxmean}_i\,(\|W_i x\|_2), \quad \text{where } W_i \in \mathbb{R}^{m_i \times n} \text{ with } m_i \le n. \quad (3)$$

Wide Norms are symmetric by default, and must be asymmetrized to obtain asymmetric (semi-)norms. We use the property below (Bauer et al., 1961) and the following propositions (proofs in Appendix A).

N5: $\|\cdot\|$ is monotonic in the positive orthant if $0 \le x \le y$ (element-wise) implies $\|x\| \le \|y\|$.

Proposition 4. If $\|\cdot\|$ is an N5 (semi-)norm on $\mathbb{R}^{2n}$, then $\|x| = \|\mathrm{relu}(x :: {-x})\|$, where :: denotes concatenation, is an asymmetric (semi-)norm on $\mathbb{R}^n$.

Proposition 5. The Mahalanobis norm with W = DU, with D diagonal and U non-negative, is N5.
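As a concrete illustration, here is a minimal PyTorch sketch of a Deep Norm layer under our reading of Eq. (1). Enforcing non-negativity of $W_i^{+}$ via a softplus reparameterization, the hidden width, and the fixed choice $g(h) = \mathrm{relu}(h) + 0.5h$ from the valid $g_{\alpha\beta}$ family are all our assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeepNorm(nn.Module):
    """Sketch of a k-layer Deep Norm (Eq. 1): h_i = g_i(W_i^+ h_{i-1} + U_i x),
    with each W_i^+ elementwise non-negative, U_i unconstrained, and a
    non-negative final activation g_k, so that ||x| := h_k is a semi-norm."""

    def __init__(self, dim: int, hidden: int = 32, layers: int = 3):
        super().__init__()
        out_sizes = [hidden] * (layers - 1) + [1]  # final layer yields a scalar
        self.U = nn.ParameterList(
            nn.Parameter(0.1 * torch.randn(out, dim)) for out in out_sizes)
        # Raw parameters for W_i^+ (i >= 2); softplus keeps them non-negative.
        self.W_raw = nn.ParameterList(
            nn.Parameter(0.1 * torch.randn(out, prev))
            for prev, out in zip(out_sizes[:-1], out_sizes[1:]))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # g(h) = relu(h) + 0.5 h lies in the valid family a*relu + b*id, a,b >= 0.
        g = lambda h: F.relu(h) + 0.5 * h
        h = g(x @ self.U[0].t())  # W_1^+ = 0, so the first layer sees only x
        for i, (W_raw, U) in enumerate(zip(self.W_raw, self.U[1:])):
            pre = h @ F.softplus(W_raw).t() + x @ U.t()
            h = F.relu(pre) if i == len(self.W_raw) - 1 else g(pre)  # g_k = relu
        return h.squeeze(-1)  # convex, positively homogeneous, non-negative in x
```

A quasi-metric then follows as d(x, y) = norm(x - y) for norm = DeepNorm(dim); since the U_i are unconstrained and no symmetry is imposed, d(x, y) and d(y, x) can differ, which is exactly the asymmetry the Euclidean architecture cannot express.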
This paper proposes a modeling approach for norm and metric learning that ensures the triangle inequality is satisfied by the very design of the architecture. The main idea is that convexity together with homogeneity implies subadditivity, so starting from an input-convex architecture and using activations that preserve homogeneity yields a model that is subadditive everywhere. This architecture is used to model a norm, and in conjunction with a learned embedding, a metric. The authors also propose a mixture-based approach that combines a given set of metrics into a new one using a max-mean combination. Universal approximation results are presented for both architectures. The results are illustrated on a few mostly synthetic examples, including metric nearness for random matrices, value functions for maze MDPs, and distances between nodes of a graph (some problems here are sourced from OpenStreetMap).
This manuscript proposes a general framework to learn non-Euclidean distances from data using neural networks. The authors provide a combination of theoretical and experimental results in support of several neural architectures for learning such distances. In particular, they develop "deep norms" and "wide norms", based on a deep or a shallow neural network, respectively. Metrics are built from norms by combining them with a learnt embedding function mapping the input space to R^n. The theoretical results are mostly applications of textbook results and are intuitive, but the overall work forms a coherent line of research bridging theory and applications, and it establishes well-justified reference approaches for this topic.
Neural Networks for Principal Component Analysis: A New Loss Function Provably Yields Ordered Exact Eigenvectors
1 INTRODUCTION

Ranking among the most widely used and valuable statistical tools, Principal Component Analysis (PCA) represents a given set of data within a new orthogonal coordinate system in which the data are uncorrelated and the variance of the data along each orthogonal axis is successively ordered from highest to lowest. The projection of the data along each axis gives what are called principal components. Theoretically, eigendecomposition of the covariance matrix provides exactly such a transformation. For large data sets, however, classical decomposition techniques are infeasible, and other numerical methods, such as least squares approximation schemes, are employed in practice. An especially notable instance is the problem of dimensionality reduction, where only the largest principal components, as the best representatives of the data, are desired.

Linear autoencoders (LAEs) are one such scheme for dimensionality reduction that is applicable to large data sets. An LAE with a single fully-connected linear hidden layer and a mean squared error (MSE) loss function can discover the linear subspace spanned by the principal components; this subspace is the same as the one spanned by the weights of the decoder. However, it fails to identify the exact principal directions. This is because, when the encoder is transformed by some invertible matrix and the decoder by the inverse of that matrix, the loss is unchanged. In other words, the loss possesses a symmetry under the action of a group of invertible matrices, so that directions (and orderings/permutations thereof) are not discriminated.

The early work of Bourlard & Kamp (1988) and Baldi & Hornik (1989) connected LAEs and PCA and demonstrated the lack of identifiability of principal components. Several neural-network methods compute the exact eigenvectors (Rubner & Tavan, 1989; Xu, 1993; Kung & Diamantaras, 1990; Oja et al., 1992), but they depend on either particular network structures or special optimization methods. It was recently observed (Plaut, 2018; Kunin et al., 2019) that regularization causes the left singular vectors of the decoder to become the exact eigenvectors, but recovering them still requires an extra decomposition step. As Plaut (2018) points out, no existing method recovers the eigenvectors from an LAE in an optimization-independent way on a standard network; this work fills that void.

Moreover, analyzing the loss surface of various architectures of linear/non-linear neural networks is a highly active and prominent area of research (e.g., Baldi & Hornik (1989); Kunin et al. (2019); Pretorius et al. (2018); Frye et al. (2019)). Most of these works extend the results of Baldi & Hornik (1989) for shallow LAEs to more complex networks; however, most retain the original MSE loss, and they prove the same critical point characterization for their specific architecture of interest. Most notably, Zhou & Liang (2018) extend the results of Baldi & Hornik (1989) to deep linear networks and shallow ReLU networks. In contrast, in this work we pursue a loss with better loss-surface properties. We propose a new loss function for performing PCA using LAEs. We show that with the proposed loss function, the decoder converges to the exact ordered unnormalized eigenvectors of the sample covariance matrix.
The idea is simple: to identify p principal directions, we build a total loss function as a sum of p squared-error losses, where the i-th loss identifies only the first i principal directions. This approach breaks the symmetry, since minimizing the first loss yields the first principal direction, which forces the second loss to find the first and the second; this constraint is propagated through the rest of the losses, resulting in all p principal components being identified. For the new loss we prove that all local minima are global minima. Consequently, the proposed loss function has both theoretical and practical implications. Theoretically, it provides a better understanding of the loss surface. Specifically, we show that any critical point of our loss L is a critical point of the original MSE loss, but not vice versa, and conclude that L eliminates those undesirable global minima of the original loss (i.e., exactly those which suffer from the invariance). Given that the set of critical points of L is a subset of the critical points of the MSE loss, many of the previous results on loss surfaces of more complex networks likely extend. In light of the removal of undesirable global minima through L, examining more complex networks is certainly a very promising direction.

As for practical consequences, we show that the loss and its gradients can be compactly vectorized, so that their computational complexity is no different from that of the MSE loss. Therefore, the loss L can be used to perform PCA/SVD on large datasets using any method of optimization, such as stochastic gradient descent (SGD). Chief among the compelling reasons to perform PCA/SVD with this method is that, in recent years, there have been unprecedented gains in the performance of very large SGD optimizations, with autoencoders in particular successfully handling larger numbers of high-dimensional training data (e.g., images). The loss function we offer is attractive in terms of parallelizability and distributability, and does not prescribe any single specific algorithm or implementation, so it stands to continue benefiting from the arms race between SGD and its competitors. More importantly, this single loss function (without an additional post hoc processing step) fits seamlessly into optimization pipelines (where SGD is but one instance). As a result, the loss allows for PCA/SVD computation as a single optimization layer, akin to a fully differentiable building block in a neural network pipeline (Amos & Kolter, 2017), potentially as part of a much larger network.

2 THE PROPOSED LOSS FUNCTION AND REVIEW OF FINAL RESULTS

Let $X \in \mathbb{R}^{n\times m}$ and $Y \in \mathbb{R}^{n\times m}$ be the input and output matrices, where m centered sample points, each n-dimensional, are stacked column-wise. Let $x_j \in \mathbb{R}^n$ and $y_j \in \mathbb{R}^n$ be the j-th sample input and output (i.e., the j-th columns of X and Y, respectively). Define the loss function L(A,B) as

$$L(A,B) := \sum_{i=1}^{p} \sum_{j=1}^{m} \|y_j - A I_{i;p} B x_j\|_2^2 = \sum_{i=1}^{p} \|Y - A I_{i;p} B X\|_F^2, \quad (1)$$

where $\langle\cdot,\cdot\rangle_F$ and $\|\cdot\|_F$ are the Frobenius inner product and norm, and $I_{i;p}$ is the $p\times p$ matrix with all elements zero except the first i diagonal elements, which are one (equivalently, the matrix obtained by setting the last $p-i$ diagonal elements of a $p\times p$ identity matrix to zero; e.g., $I_{2;3} = \mathrm{diag}(1, 1, 0)$). In what follows, we denote the transpose of a matrix M by M'.
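To make Eq. (1) concrete, here is a direct (unoptimized) NumPy sketch of the loss exactly as defined, using one pass per index i; Lemma 1 below gives an equivalent form with a constant number of matrix products.

```python
import numpy as np

def nested_pca_loss(A, B, X, Y):
    """Direct evaluation of Eq. (1): sum over i of ||Y - A I_{i;p} B X||_F^2,
    where I_{i;p} keeps only the first i rows of the code BX."""
    p = A.shape[1]
    BX = B @ X
    total = 0.0
    for i in range(1, p + 1):
        BX_i = BX.copy()
        BX_i[i:, :] = 0.0          # apply I_{i;p}: zero out codes i+1, ..., p
        R = Y - A @ BX_i
        total += np.sum(R * R)      # squared Frobenius norm of the residual
    return total
```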
Moreover, the matrices $A \in \mathbb{R}^{n\times p}$ and $B \in \mathbb{R}^{p\times n}$ can be viewed as the weights of the decoder and encoder parts of an LAE. The results are based on the following standard assumptions, which hold generically.

Assumption 1. For an input X and an output Y, let $\Sigma_{xx} := XX'$, $\Sigma_{xy} := XY'$, $\Sigma_{yx} := \Sigma_{xy}'$ and $\Sigma_{yy} := YY'$ be their sample covariance matrices. We assume:
- The input and output data are centered (zero mean).
- $\Sigma_{xx}$, $\Sigma_{xy}$, $\Sigma_{yx}$ and $\Sigma_{yy}$ are positive definite (of full rank and invertible).
- The covariance matrix $\Sigma := \Sigma_{yx}\Sigma_{xx}^{-1}\Sigma_{xy}$ is of full rank with n distinct eigenvalues $\lambda_1 > \lambda_2 > \cdots > \lambda_n$.
- The decoder matrix A has no zero columns.

Claim. The main result of this work, proved in Theorem 2, is as follows: if the above assumptions hold, then all local minima of L(A,B) are achieved iff A and B are of the form

$$A = U_{1:p} D_p, \qquad B = D_p^{-1} U_{1:p}' \Sigma_{yx}\Sigma_{xx}^{-1},$$

where the i-th column of $U_{1:p}$ is the unit eigenvector of $\Sigma := \Sigma_{yx}\Sigma_{xx}^{-1}\Sigma_{xy}$ corresponding to the i-th largest eigenvalue, and $D_p$ is a diagonal matrix with nonzero diagonal elements. In other words, A contains the ordered unnormalized eigenvectors of Σ corresponding to the p largest eigenvalues. Moreover, all local minima are global minima, and the value of the loss at those global minima is

$$L(A,B) = p\,\mathrm{Tr}(\Sigma_{yy}) - \sum_{i=1}^{p} (p-i+1)\lambda_i,$$

where $\lambda_i$ is the i-th largest eigenvalue of Σ. In the autoencoder case (Y = X), $\Sigma = \Sigma_{xx}$. Finally, while L(A,B) in the given form contains O(p) matrix products, we will show that it can be evaluated with a constant number (fewer than 7) of matrix products, independent of p.

3 NOTATION

In this paper, the underlying field is always R, and positive semidefinite matrices are symmetric by definition. The following constant matrices are used extensively throughout. The matrices $T_p \in \mathbb{R}^{p\times p}$ and $S_p \in \mathbb{R}^{p\times p}$ are defined as

$$(T_p)_{ij} = (p-i+1)\,\delta_{ij}, \quad \text{i.e., } T_p = \mathrm{diag}(p, p-1, \dots, 1), \quad (2)$$

$$(S_p)_{ij} = p - \max(i,j) + 1, \quad \text{e.g., } S_4 = \begin{bmatrix} 4 & 3 & 2 & 1 \\ 3 & 3 & 2 & 1 \\ 2 & 2 & 2 & 1 \\ 1 & 1 & 1 & 1 \end{bmatrix}. \quad (3)$$

Another matrix that will appear in the formulation is $\hat{S}_p := T_p^{-1} S_p T_p^{-1}$. The diagonal matrix $T_p$ is clearly positive definite; as shown in Lemma 2, $S_p$ and $\hat{S}_p$ are positive definite as well.

4 MAIN THEOREMS

The general strategy for proving the above claim is as follows. First, the analytical gradients of the loss are derived in matrix form in Propositions 1 and 2, and compared with those of the original mean squared error (MSE) loss. Next, we analyze the loss surface by solving the gradient equations, which yields the general structure of the critical points based on the rank of the decoder matrix A. We then delineate several interesting properties of the critical points; notably, any critical point of our loss is also a critical point of the MSE loss, but not the other way around. Finally, by performing a second-order analysis of the loss in Theorem 2, the exact equations for local minima are derived, which are shown to be as claimed.

Let $\tilde{L}(A,B)$ and $L(A,B)$ be the original loss and the proposed loss function, respectively, i.e.,

$$\tilde{L}(A,B) := \sum_{j=1}^{m} \|y_j - A B x_j\|_2^2 = \|Y - ABX\|_F^2,$$
$$L(A,B) := \sum_{i=1}^{p} \sum_{j=1}^{m} \|y_j - A I_{i;p} B x_j\|_2^2 = \sum_{i=1}^{p} \|Y - A I_{i;p} B X\|_F^2.$$

The first step is to calculate the gradients with respect to A and B and set them to zero to derive implicit expressions for the critical points.
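Before deriving the gradients, here is a quick sketch of the constant matrices $T_p$ and $S_p$ from Eqs. (2)-(3), which appear throughout; the final assertion checks the positive definiteness of $S_p$ asserted by Lemma 2.

```python
import numpy as np

def T(p):
    """T_p = diag(p, p-1, ..., 1), per Eq. (2)."""
    return np.diag(np.arange(p, 0, -1)).astype(float)

def S(p):
    """(S_p)_{ij} = p - max(i, j) + 1 (1-based indices), per Eq. (3)."""
    idx = np.arange(1, p + 1)
    return (p - np.maximum.outer(idx, idx) + 1).astype(float)

# S(4) reproduces the S_4 shown in Eq. (3), and S_p is positive definite:
assert np.all(np.linalg.eigvalsh(S(4)) > 0)
```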
To calculate these gradients, first, in Lemma 5, for a fixed A, we derive the directional (Gateaux) derivative of the loss with respect to B along an arbitrary direction $W \in \mathbb{R}^{p\times n}$, denoted $d_B L(A,B)W$, i.e.,

$$d_B L(A,B)W = \lim_{\|W\|_F \to 0} \frac{L(A, B+W) - L(A,B)}{\|W\|_F}.$$

As shown in the proof of the lemma, $d_B L(A,B)W$ is derived by writing the norm in the loss as an inner product, expanding it using linearity of the inner product, dismissing second-order terms in W (i.e., $O(\|W\|^2)$), and rearranging the result as the inner product between the gradient with respect to B and the direction W, which yields

$$d_B L(A,B)W = -2\,\mathrm{Tr}\big(W'(T_p A' \Sigma_{yx} - (S_p \circ (A'A)) B \Sigma_{xx})\big) = -2\,\langle T_p A' \Sigma_{yx} - (S_p \circ (A'A)) B \Sigma_{xx},\, W \rangle_F, \quad (4)$$

where ∘ is the Hadamard product and the constant matrices $T_p$ and $S_p$ were defined above. Second, the same process is carried out in Lemma 6 to derive $d_A L(A,B)V$, the derivative of L with respect to A in an arbitrary direction $V \in \mathbb{R}^{n\times p}$ for a fixed B, which is then set to zero to obtain implicit expressions for the critical points. The results are formally stated in the following two propositions.

Proposition 1. For any fixed matrix $A \in \mathbb{R}^{n\times p}$, the function L(A,B) is convex in the coefficients of B and attains its minimum at any B satisfying

$$(S_p \circ (A'A))\, B\, \Sigma_{xx} = T_p A' \Sigma_{yx}, \quad (5)$$

where ∘ is the Hadamard (element-wise) product and $S_p$ and $T_p$ are the constant matrices defined in the previous section. Further, if A has no zero column, then L(A,B) is strictly convex in B and has a unique minimum at

$$B = \hat{B}(A) = (S_p \circ (A'A))^{-1} T_p A' \Sigma_{yx}\Sigma_{xx}^{-1}, \quad (6)$$

which in the autoencoder case becomes

$$B = \hat{B}(A) = (S_p \circ (A'A))^{-1} T_p A'. \quad (6')$$

The proof is given in Appendix A.2.

Remark 1. Note that as long as A has no zero column, $S_p \circ (A'A)$ is nonsingular (we explain why shortly). In practice, A with zero columns can always be avoided by nudging the zero columns of A during the gradient descent process.

Proposition 2. For any fixed matrix $B \in \mathbb{R}^{p\times n}$, the function L(A,B) is convex in A. Moreover, for a fixed B, any matrix A satisfying

$$A\,(S_p \circ (B \Sigma_{xx} B')) = \Sigma_{yx} B' T_p \quad (7)$$

is a critical point of L(A,B). The proof is given in Appendix A.3.

The pair (A,B) is a critical point of L if it makes $d_B L(A,B)W$ and $d_A L(A,B)V$ zero for every pair of directions (V,W). Therefore, the implicit equations for the critical points are given below, next to the ones derived by Baldi & Hornik (1989) for $\tilde{L}(A,B)$:

For $\tilde{L}(A,B)$: $\quad A'AB\Sigma_{xx} = A'\Sigma_{yx}, \qquad AB\Sigma_{xx}B' = \Sigma_{yx}B'.$
For $L(A,B)$: $\quad (S_p \circ (A'A)) B \Sigma_{xx} = T_p A' \Sigma_{yx}, \qquad A (S_p \circ (B\Sigma_{xx}B')) = \Sigma_{yx} B' T_p.$

Remark 2. Notice the similarity: the only differences are the Hadamard product with $S_p$ on the left and the diagonal $T_p$ on the right. Practically, the added computational cost of evaluating the gradients is therefore negligible compared with that of the MSE loss.

The next step is to determine the structure of (A,B) satisfying the above equations, and to find the subset of those solutions that are local minima. For the original loss, the first expression ($A'AB\Sigma_{xx} = A'\Sigma_{yx}$) is used to solve for B, which is then substituted into the second to derive an expression solely in terms of A. Obviously, to solve the first expression for B, two cases are considered separately: the case where A is of full rank p, so that A'A is invertible, and the case where A has rank r < p.
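Here is a NumPy sketch of the closed-form minimizer $\hat{B}(A)$ of Eq. (6), reusing the `T` and `S` helpers above; explicit inverses are replaced by linear solves for numerical stability.

```python
import numpy as np

def B_hat(A, Sxx, Syx):
    """B_hat(A) = (S_p o (A'A))^{-1} T_p A' Syx Sxx^{-1}  (Eq. 6)."""
    p = A.shape[1]
    M = S(p) * (A.T @ A)                  # Hadamard product S_p o (A'A)
    rhs = T(p) @ A.T @ Syx
    Z = np.linalg.solve(M, rhs)           # Z = M^{-1} T_p A' Syx
    return np.linalg.solve(Sxx.T, Z.T).T  # right-multiply by Sxx^{-1}
```

In the autoencoder case (Eq. 6'), the same function applies with Syx = Sxx, which cancels the final solve.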
Here we do the same, with a twist: for us there is only one case. The reason is that as long as A (not necessarily of full rank) has no zero column, $S_p \circ (A'A)$ is positive definite and hence invertible. This is discussed in detail in Lemma 2; we briefly explain it here. As shown in the lemma, $S_p$ is positive definite, and by the Schur product theorem, for any A (of any rank), $S_p \circ (A'A)$ is positive semidefinite. Moreover, by the Oppenheim inequality (Horn & Johnson (2012), Thm 7.8.16), which in our case reads

$$\det(S_p) \prod_i (A'A)_{ii} \le \det(S_p \circ (A'A)),$$

as long as A has no zero column we have $\prod_i (A'A)_{ii} > 0$ and therefore $\det(S_p \circ (A'A)) > 0$. Here we assume that A of any rank $r \le p$ has no zero column (since this can easily be avoided in practice) and consider $S_p \circ (A'A)$ to be always invertible. Therefore, (A,B) define a critical point of the losses $\tilde{L}$ and L if:

For $\tilde{L}(A,B)$ and full-rank A: $\quad B = \hat{B}(A) = (A'A)^{-1} A' \Sigma_{yx}\Sigma_{xx}^{-1}, \qquad AB\Sigma_{xx}B' = \Sigma_{yx}B'.$
For $L(A,B)$ and A with no zero column: $\quad B = \hat{B}(A) = (S_p \circ (A'A))^{-1} T_p A' \Sigma_{yx}\Sigma_{xx}^{-1}, \qquad A(S_p \circ (B\Sigma_{xx}B')) = \Sigma_{yx}B'T_p.$

Before stating the main theorem, we need the following definitions. First, a rectangular permutation matrix $\Pi_r \in \mathbb{R}^{r\times p}$ is a matrix each of whose columns has at most one nonzero element, with value 1. If the rank of $\Pi_r$ is r with r < p, then clearly $\Pi_r$ has p − r zero columns; removing those zero columns leaves an r × r submatrix that is a standard square permutation matrix. Second, under the conditions of Assumption 1, the matrix $\Sigma := \Sigma_{yx}\Sigma_{xx}^{-1}\Sigma_{xy}$ has an eigenvalue decomposition $\Sigma = U\Lambda U'$, where the i-th column of U, denoted $u_i$, is an eigenvector of Σ corresponding to its i-th largest eigenvalue $\lambda_i$, and $\Lambda = \mathrm{diag}(\lambda_1, \dots, \lambda_n)$ is the diagonal matrix of ordered eigenvalues, with $\lambda_1 > \lambda_2 > \cdots > \lambda_n > 0$.

We use the following notation to organize a subset of eigenvectors of Σ into a rectangular matrix. For any $r \le p$, let $\mathcal{I}_r = \{i_1, \dots, i_r\}$ ($1 \le i_1 < \cdots < i_r \le n$) be any ordered r-index set. Define $U_{\mathcal{I}_r} \in \mathbb{R}^{n\times r}$ as $U_{\mathcal{I}_r} = [u_{i_1}, \dots, u_{i_r}]$; that is, the columns of $U_{\mathcal{I}_r}$ are the ordered orthonormal eigenvectors of Σ associated with the eigenvalues $\lambda_{i_1} > \cdots > \lambda_{i_r}$. Clearly, when r = p, we have $U_{\mathcal{I}_p} = [u_{i_1}, \dots, u_{i_p}]$ corresponding to a p-index set $\mathcal{I}_p = \{i_1, \dots, i_p\}$ ($1 \le i_1 < \cdots < i_p \le n$). Similarly, we define $\Lambda_{\mathcal{I}_r} \in \mathbb{R}^{r\times r}$ as $\Lambda_{\mathcal{I}_r} = \mathrm{diag}(\lambda_{i_1}, \dots, \lambda_{i_r})$.

Theorem 1. Let $A \in \mathbb{R}^{n\times p}$ and $B \in \mathbb{R}^{p\times n}$, with A of rank $r \le p$. Under the conditions of Assumption 1 and the above notation, the matrices A and B define a critical point of L(A,B) if and only if, for some r-index set $\mathcal{I}_r$ and a nonsingular diagonal matrix D, A and B are of the form

$$A = U_{\mathcal{I}_r} C D, \quad (8)$$
$$B = \hat{B}(A) = D^{-1} \Pi_C U_{\mathcal{I}_r}' \Sigma_{yx}\Sigma_{xx}^{-1}, \quad (9)$$

where $C \in \mathbb{R}^{r\times p}$ is of full rank r with nonzero, normalized columns, such that $\Pi_C := (S_p \circ (C'C))^{-1} T_p C'$ is a rectangular permutation matrix of rank r and $C\Pi_C = I_r$. For all $1 \le r \le p$, such a C always exists. In particular, if A is of full rank p (i.e., r = p), the two conditions on $\Pi_C$ are satisfied iff the invertible matrix C is a square p × p permutation matrix Π. In this case, (A,B) define a critical point of L(A,B) iff they are of the form

$$A = U_{\mathcal{I}_p} \Pi D, \quad (10)$$
$$B = \hat{B}(A) = D^{-1} \Pi' U_{\mathcal{I}_p}' \Sigma_{yx}\Sigma_{xx}^{-1}. \quad (11)$$

The proof is given in Appendix A.4.

Remark 3.
The above theorem provides explicit equations for the critical points of the loss surface in terms of the rank of the decoder matrixA and the eigenvectors of Σ . This explicit structure allows us to further analyze the loss surface and its local/global minima . Here , we provide a proof sketch for the above theorem to make the claims more clear . Again as a reminder , the EVD of Σ : = ΣyxΣ−1xxΣxy is Σ = UΛU ′ . For both L̃ and L , the corresponding B̂ ( A ) is replaced by B on the RHS of critical point equations . For the loss L ( A , B ) , as shown in the proof of the theorem , results in the following identity U ′A ( Sp ◦ ( B̂ΣxxB̂ ′ ) ) A′U = Λ∆ , ( 12 ) where ∆ : = U ′ATp ( Sp ◦ ( A′A ) ) −1TpA′U is symmetric and positive semidefinite . The LHS of eq . ( 12 ) is symmetric so the RHS is symmetric too , so Λ∆ = ( Λ∆ ) ′ = ∆′Λ′ = ∆Λ . Therefore , ∆ commutes with the diagonal matrix of eigenvalues Λ . Since eigenvalues are assumed to be distinct , ∆ has to be diagonal as well . By Lemma 2 Tp ( Sp ◦ ( A′A ) ) −1Tp is positive definite and U is an orthogonal matrix . Therefore , r = rank ( A ) = rank ( ∆ ) = rank ( U ′∆U ) , which implies that the diagonal matrix ∆ , has r nonzero and positive diagonal entries . There exists an r−index set Ir corresponding to the nonzero diagonal elements of ∆ . Forming a diagonal matrix ∆Ir ∈ Rr×r by filling its diagonal entries ( in order ) by the nonzero diagonal elements of ∆ , we have U∆U ′ = UIr∆IrU ′ Ir Def of ∆ ====⇒ ATp ( Sp ◦ ( A′A ) ) −1TpA′ = UIr∆IrU ′Ir , ( 13 ) which indicates that the matrix A has the same column space as UIr . Therefore , there exists a full rank matrix C̄ ∈ Rr×p such that A = UIrC̄ . Since A has no zero column , C̄ has no zero column . Further , by normalizing the columns of C̄ we can write A = UIrCD , where D ∈ Rp×p is diagonal that contains the norms of columns of C̄ . Baldi & Hornik ( 1989 ) did something similar for full rank A for the loss L̃ to derive ( AL̃ = UIpC̃ ) . But their C̃ can be any invertible p × p matrix . However , in our case , the matrix C ∈ Rr×p corresponding to rank r ≤ p matrix A , has to satisfy eq . ( 13 ) by replacing A by UIrCD and eq . ( 12 ) by replacing B̂ ( A ) by B̂ ( UIrCD ) . In the case of Baldi & Hornik ( 1989 ) , for the original loss L̃ , equations similar to eq . ( 13 ) and eq . ( 12 ) appear but they are are satisfied trivially by any invertible matrix C̃ . Simplifying those equations by using A = UIrCD after some algebraic manipulation results in the following two conditions for C : CTp ( Sp ◦ ( C ′C ) ) −1 TpC ′ =∆Ir and ( 14 ) C ( Sp ◦ ( ( Sp ◦ ( C ′C ) ) −1TpC ′ΛIrCTp ( Sp ◦ ( C ′C ) ) −1 ) ) C ′ =ΛIr∆Ir . ( 15 ) As detailed in proof of Theorem 1 , solving for C leads to its specific structure as laid out in the theorem . Remark 4 . Note that when A is of rank r < p with no zero columns then the invariant matrix C is not necessarily a rectangular permutation matrix but ΠC : = ( Sp ◦ ( C ′C ) ) −1 TpC ′ is a rectangular permutation matrix with CΠC = Ir . It is only when r = p that the invariant matrix C becomes a permutation matrix . Nevertheless , as we show in the following corollary , the global map is always ∀r ≤ p : G = AB = UIrU ′IrΣyxΣ−1xx . It is possible to find further structure ( in terms of block matrices ) for the invariant matrix C when r < p. However , this is not necessary as we soon show that all rank deficient matrix As are saddle points for the loss and ideally should be passed by during the gradient decent process . 
Based on some numerical results our conjecture is that when r < p the matrix C can only start with a r × k rectangular permutation matrix of rank r with r ≤ k ≤ p and the rest of p− k columns of C is arbitrary as long as none of the columns are identically zero . Corollary 1 . Let ( A , B ) be a critical point of L ( A , B ) under the conditions provided in Assumption 1 and rankA = r ≤ p. Then the following are true 1 . The matrix BΣxxB′ is a p× p diagonal matrix of rank r. 2 . For all 1 ≤ r ≤ p , for any critical pair ( A , B ) , the global map G : = AB becomes G = UIrU ′ IrΣyxΣ −1 xx . ( 16 ) For the autoencoder case ( Y = X ) the global map is simply G = UIrU ′ Ir . 3 . ( A , B ) is also the critical point of the classical loss L̃ ( A , B ) = ∑p i=1 ‖Y −ABX‖ 2 F . The proof is given in appendix A.5 . Remark 5 . The above corollary implies that L ( A , B ) not only does not add any extra critical point compare to the original loss L̃ ( A , B ) , it provides the same global map G : = AB . It only limits the structure of the invariance matrix C as described in Theorem 1 so that the decoder matrix A can recover the exact eigenvectors of Σ. Lemma 1 . The loss function L ( A , B ) can be written as L ( A , B ) = pTr ( Σyy ) − 2 Tr ( ATpBΣxy ) + Tr ( B′ ( Sp ◦ ( A′A ) ) BΣxx ) . ( 17 ) The above identity shows that the number of matrix operations required for computing the loss L ( A , B ) is constant and thereby independent of the value of p. The proof is given in appendix A.6 . Theorem 2 . Let A∗ ∈ Rn×p and B∗ ∈ Rp×n such that A∗ is of rank r ≤ p. Under the conditions provided in Assumption 1 , ( A∗ , B∗ ) define a local minima of the proposed loss function iff they are of the form A∗ = U1 : pDp ( 18 ) B∗ = D−1p U ′ 1 : pΣyxΣ −1 xx , ( 19 ) where the ith column of U1 : p is a unit eigenvector of Σ : = ΣyxΣ−1xxΣxy corresponding the i th largest eigenvalue and Dp is a diagonal matrix with nonzero diagonal elements . In other words , A∗ contains ordered unnormalized eigenvectors of Σ corresponding to the p largest eigenvalues . Moreover , all the local minima are global minima with the value of the loss function at those global minima being L ( A∗ , B∗ ) = p Tr ( Σyy ) − p∑ i=1 ( p− i+ 1 ) λi , ( 20 ) where λi is the ith largest eigenvalue of Σ . The proof is given in appendix A.7 . Remark 6 . Finally , the second and third assumptions we made in the beginning in Assumption 1 can be relaxed by requiring only Σxx to be full rank . The output data can have a different dimension than the input . That is Y ∈ Rn×m and X ∈ Rn′×m , where n 6= n′ . The reason is that the given loss function structurally is very similar to MSE loss and can be represented as a Frobenius norm on the space of n × m matrices . In this case the covariance matrix Σ : = ΣyxΣ−1xxΣxy is still n × n. Clearly , for under-constrained systems with n < n′ the full rank assumption of Σ holds . For the overdetermined case , where n′ > n the second and third assumptions in Assumption 1 can be relaxed : we only require Σxx to be full rank since this is the only matrix that is inverted in the theorems . Note that if p > min ( n′ , n ) then ΛIp : the p× p diagonal matrix of eigenvalues of Σ for a p-index-set Ip bounds to have some zeros and will be say rank r < p , which in turn , results in the encoder A with rank r. However , the Theorem 1 is proved for encoder of any rank r ≤ p. 
Finally , following theorem 2 then the first r columns of the encoder A converges to ordered eigenvectors of Σ while the p − r remaining columns span the kernel space of Σ . Moreover , Σ need not to have distinct eigenvectors . In this case ∆Ir becomes a block diagonal matrix , where the blocks correspond to identical eigenvalues ΣIr . In this case , the corresponding eigenvectors in A ∗ are not unique but they span the respective eigenspace .
This paper proposes a new loss function to compute the exact ordered eigenvectors of a dataset. The loss is motivated by the idea of computing the eigenvectors sequentially. However, doing so directly would be computationally expensive, and the authors show that the loss function they propose (a sum of sequential losses) can be evaluated with the same order of computational complexity as the squared loss (a constant number, fewer than 7, of matrix products). A proof of the correctness of the algorithm is given, along with experiments to verify its performance.
SP:3a670c06bf87ba895ed91ed2280d88881defa412
Neural Networks for Principal Component Analysis: A New Loss Function Provably Yields Ordered Exact Eigenvectors
1 INTRODUCTION. Ranking among the most widely used and valuable statistical tools, Principal Component Analysis (PCA) represents a given set of data within a new orthogonal coordinate system in which the data are uncorrelated and the variance of the data along each orthogonal axis is successively ordered from highest to lowest. The projection of the data along each axis gives what are called principal components. Theoretically, eigendecomposition of the covariance matrix provides exactly such a transformation. For large data sets, however, classical decomposition techniques are infeasible and other numerical methods, such as least squares approximation schemes, are employed in practice. An especially notable instance is the problem of dimensionality reduction, where only the largest principal components, as the best representatives of the data, are desired. Linear autoencoders (LAEs) are one such scheme for dimensionality reduction that is applicable to large data sets. An LAE with a single fully-connected linear hidden layer and a Mean Squared Error (MSE) loss function can discover the linear subspace spanned by the principal components. This subspace is the same as the one spanned by the weights of the decoder. However, the LAE fails to identify the exact principal directions. This is due to the fact that, when the encoder is transformed by some invertible matrix, transforming the decoder by the inverse of that matrix yields no change in the loss. In other words, the loss possesses a symmetry under the action of a group of invertible matrices, so that individual directions (and their orderings/permutations) are not discriminated. The early work of Bourlard & Kamp (1988) and Baldi & Hornik (1989) connected LAEs and PCA and demonstrated this lack of identifiability of principal components. Several neural network methods compute the exact eigenvectors (Rubner & Tavan, 1989; Xu, 1993; Kung & Diamantaras, 1990; Oja et al., 1992), but they depend on either particular network structures or special optimization methods. It was recently observed (Plaut, 2018; Kunin et al., 2019) that regularization causes the left singular vectors of the decoder to become the exact eigenvectors, but recovering them still requires an extra decomposition step. As Plaut (2018) points out, no existing method recovers the eigenvectors from an LAE in an optimization-independent way on a standard network; this work fills that void. Moreover, analyzing the loss surface for various architectures of linear/non-linear neural networks is a highly active and prominent area of research (e.g., Baldi & Hornik (1989); Kunin et al. (2019); Pretorius et al. (2018); Frye et al. (2019)). Most of these works extend the results of Baldi & Hornik (1989) for shallow LAEs to more complex networks. However, most retain the original MSE loss and prove the same critical point characterization for their specific architecture of interest. Most notably, Zhou & Liang (2018) extend the results of Baldi & Hornik (1989) to deep linear networks and shallow ReLU networks. In contrast, in this work we pursue a loss with better loss-surface properties. We propose a new loss function for performing PCA using LAEs. We show that with the proposed loss function, the decoder converges to the exact ordered unnormalized eigenvectors of the sample covariance matrix.
The idea is simple: for identifying p principal directions, we build up a total loss function as a sum of p squared error losses, where the ith loss identifies only the first i principal directions. This approach breaks the symmetry, since minimizing the first loss results in the first principal direction, which forces the second loss to find the first and the second. This constraint is propagated through the rest of the losses, resulting in all p principal components being identified. For the new loss we prove that all local minima are global minima. Consequently, the proposed loss function has both theoretical and practical implications. Theoretically, it provides a better understanding of the loss surface. Specifically, we show that any critical point of our loss L is a critical point of the original MSE loss but not vice versa, and conclude that L eliminates those undesirable global minima of the original loss (i.e., exactly those which suffer from the invariance). Given that the set of critical points of L is a subset of the critical points of the MSE loss, many of the previous works on loss surfaces of more complex networks likely extend. In light of the removal of undesirable global minima through L, examining more complex networks is certainly a very promising direction. As for practical consequences, we show that the loss and its gradients can be compactly vectorized so that their computational complexity is no different from that of the MSE loss. Therefore, the loss L can be used to perform PCA/SVD on large datasets using any method of optimization, such as Stochastic Gradient Descent (SGD). Chief among the compelling reasons to perform PCA/SVD using this method is that, in recent years, there have been unprecedented gains in the performance of very large SGD optimizations, with autoencoders in particular successfully handling larger numbers of high-dimensional training data (e.g., images). The loss function we offer is attractive in terms of parallelizability and distributability, and does not prescribe any single specific algorithm or implementation, so it stands to continue to benefit from the arms race between SGD and its competitors. More importantly, this single loss function (without an additional post hoc processing step) fits seamlessly into optimization pipelines (where SGD is but one instance). The result is that the loss allows for PCA/SVD computation as a single optimization layer, akin to a fully differentiable building block in a NN pipeline (Amos & Kolter, 2017), potentially as part of a much larger network.

2 THE PROPOSED LOSS FUNCTION AND REVIEW OF FINAL RESULTS. Let X ∈ R^{n×m} and Y ∈ R^{n×m} be the input and output matrices, where m centered sample points, each n-dimensional, are stacked column-wise. Let x_j ∈ R^n and y_j ∈ R^n be the jth sample input and output (i.e., the jth column of X and Y, respectively). Define the loss function L(A, B) as

L(A, B) := \sum_{i=1}^{p} \sum_{j=1}^{m} \|y_j - A I_{i;p} B x_j\|_2^2 = \sum_{i=1}^{p} \|Y - A I_{i;p} B X\|_F^2,   (1)

where ⟨·,·⟩_F and ‖·‖_F are the Frobenius inner product and norm, and I_{i;p} is a p × p matrix with all elements zero except the first i diagonal elements, which are one. (Equivalently, it is the matrix obtained by setting the last p - i diagonal elements of a p × p identity matrix to zero, e.g.,

I_{2;3} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}.)

In what follows, we shall denote the transpose of a matrix M by M′.
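To make the definition concrete, here is a minimal NumPy sketch that evaluates eq. (1) directly; the function names and toy dimensions are ours, not the paper's.

```python
import numpy as np

def I_ip(i, p):
    # p x p matrix with ones in the first i diagonal entries, zeros elsewhere
    d = np.zeros(p)
    d[:i] = 1.0
    return np.diag(d)

def loss_naive(A, B, X, Y):
    # Direct evaluation of eq. (1): a sum of p squared-error terms
    p = A.shape[1]
    return sum(np.linalg.norm(Y - A @ I_ip(i, p) @ B @ X, 'fro') ** 2
               for i in range(1, p + 1))

# toy usage on random centered data
rng = np.random.default_rng(0)
n, p, m = 5, 2, 50
X = rng.standard_normal((n, m)); Y = X
A = rng.standard_normal((n, p)); B = rng.standard_normal((p, n))
print(loss_naive(A, B, X, Y))
```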
Moreover, the matrices A ∈ R^{n×p} and B ∈ R^{p×n} can be viewed as the weights of the decoder and encoder parts of an LAE. The results are based on the following standard assumptions, which hold generically:

Assumption 1. For an input X and an output Y, let Σ_xx := XX′, Σ_xy := XY′, Σ_yx := Σ′_xy and Σ_yy := YY′ be their sample covariance matrices. We assume
• The input and output data are centered (zero mean).
• Σ_xx, Σ_xy, Σ_yx and Σ_yy are positive definite (of full rank and invertible).
• The covariance matrix Σ := Σ_yx Σ_xx^{-1} Σ_xy is of full rank with n distinct eigenvalues λ_1 > λ_2 > · · · > λ_n.
• The decoder matrix A has no zero columns.

Claim. The main result of this work, proved in Theorem 2, is as follows: if the above assumptions hold, then all the local minima of L(A, B) are achieved iff A and B are of the form

A = U_{1:p} D_p,   B = D_p^{-1} U′_{1:p} Σ_yx Σ_xx^{-1},

where the ith column of U_{1:p} is the unit eigenvector of Σ := Σ_yx Σ_xx^{-1} Σ_xy corresponding to the ith largest eigenvalue, and D_p is a diagonal matrix with nonzero diagonal elements. In other words, A contains the ordered unnormalized eigenvectors of Σ corresponding to the p largest eigenvalues. Moreover, all the local minima are global minima, with the value of the loss function at those global minima being

L(A, B) = p Tr(Σ_yy) - \sum_{i=1}^{p} (p - i + 1) λ_i,

where λ_i is the ith largest eigenvalue of Σ := Σ_yx Σ_xx^{-1} Σ_xy. In the case of an autoencoder (Y = X): Σ = Σ_xx. Finally, while L(A, B) in the given form contains O(p) matrix products, we will show that it can be evaluated with a constant number (fewer than 7) of matrix products, independent of the value of p.

3 NOTATION. In this paper the underlying field is always R, and positive semidefinite matrices are symmetric by definition. The following constant matrices are used extensively throughout. The matrices T_p ∈ R^{p×p} and S_p ∈ R^{p×p} are defined as

(T_p)_{ij} = (p - i + 1) δ_{ij}, i.e., T_p = diag(p, p - 1, · · · , 1),   (2)

(S_p)_{ij} = p - max(i, j) + 1, i.e.,

S_p = \begin{bmatrix} p & p-1 & \cdots & 2 & 1 \\ p-1 & p-1 & \cdots & 2 & 1 \\ \vdots & \vdots & \ddots & 2 & 1 \\ 2 & 2 & 2 & 2 & 1 \\ 1 & 1 & 1 & 1 & 1 \end{bmatrix}, e.g., S_4 = \begin{bmatrix} 4 & 3 & 2 & 1 \\ 3 & 3 & 2 & 1 \\ 2 & 2 & 2 & 1 \\ 1 & 1 & 1 & 1 \end{bmatrix}.   (3)

Another matrix that will appear in the formulation is Ŝ_p := T_p^{-1} S_p T_p^{-1}. Clearly, the diagonal matrix T_p is positive definite. As shown in Lemma 2, S_p and Ŝ_p are positive definite as well.

4 MAIN THEOREMS. The general strategy to prove the above claim is as follows. First, the analytical gradients of the loss are derived in matrix form in Propositions 1 and 2, and compared with those of the original Mean Squared Error (MSE) loss. Next, we analyze the loss surface by solving the gradient equations, which yields the general structure of the critical points based on the rank of the decoder matrix A. We then delineate several interesting properties of the critical points; notably, any critical point of the loss is also a critical point of the MSE loss, but not the other way around. Finally, by performing a second order analysis on the loss in Theorem 2, the exact equations for the local minima are derived, which are shown to be as claimed. Let L̃(A, B) and L(A, B) be the original loss and the proposed loss function, respectively, i.e.,

L̃(A, B) := \sum_{j=1}^{m} \|y_j - A B x_j\|_2^2 = \|Y - A B X\|_F^2,

L(A, B) := \sum_{i=1}^{p} \sum_{j=1}^{m} \|y_j - A I_{i;p} B x_j\|_2^2 = \sum_{i=1}^{p} \|Y - A I_{i;p} B X\|_F^2.

The first step is to calculate the gradients with respect to A and B and set them to zero to derive implicit expressions for the critical points.
In order to do so, first, in Lemma 5, for a fixed A, we derive the directional (Gateaux) derivative of the loss with respect to B along an arbitrary direction W ∈ R^{p×n}, denoted d_B L(A, B)W, i.e.,

d_B L(A, B)W = \lim_{\|W\|_F \to 0} \frac{L(A, B + W) - L(A, B)}{\|W\|_F}.

As shown in the proof of the lemma, d_B L(A, B)W is derived by writing the norm in the loss as an inner product, opening it up using linearity of the inner product, dismissing second order terms in W (i.e., O(‖W‖^2)), and rearranging the result as the inner product between the gradient with respect to B and the direction W, which yields

d_B L(A, B)W = -2 Tr(W′(T_p A′ Σ_yx - (S_p ∘ (A′A)) B Σ_xx)) = -2 ⟨T_p A′ Σ_yx - (S_p ∘ (A′A)) B Σ_xx, W⟩_F,   (4)

where ∘ is the Hadamard product and the constant matrices T_p and S_p were defined in the beginning. Second, the same process is carried out in Lemma 6 to derive d_A L(A, B)V, the derivative of L with respect to A in an arbitrary direction V ∈ R^{n×p} for a fixed B, which is then set to zero to derive the implicit expressions for the critical points. The results are formally stated in the two following propositions.

Proposition 1. For any fixed matrix A ∈ R^{n×p}, the function L(A, B) is convex in the coefficients of B and attains its minimum for any B satisfying the equation

(S_p ∘ (A′A)) B Σ_xx = T_p A′ Σ_yx,   (5)

where ∘ is the Hadamard (element-wise) product operator, and S_p and T_p are the constant matrices defined in the previous section. Further, if A has no zero column, then L(A, B) is strictly convex in B and has a unique minimum, attained at

B = B̂(A) = (S_p ∘ (A′A))^{-1} T_p A′ Σ_yx Σ_xx^{-1},   (6)

which in the autoencoder case becomes

B = B̂(A) = (S_p ∘ (A′A))^{-1} T_p A′.   (6′)

The proof is given in appendix A.2.

Remark 1. Note that as long as A has no zero column, S_p ∘ (A′A) is nonsingular (we will explain the reason soon). In practice, A with zero columns can always be avoided by nudging the zero columns of A during the gradient descent process.

Proposition 2. For any fixed matrix B ∈ R^{p×n}, the function L(A, B) is a convex function in A. Moreover, for a fixed B, any matrix A that satisfies

A (S_p ∘ (B Σ_xx B′)) = Σ_yx B′ T_p   (7)

is a critical point of L(A, B). The proof is given in appendix A.3. The pair (A, B) is a critical point of L if they make d_B L(A, B)W and d_A L(A, B)V zero for any pair of directions (V, W). Therefore, the implicit equations for the critical points are given below, next to the ones derived by Baldi & Hornik (1989) for L̃(A, B).

For L̃(A, B): A′A B Σ_xx = A′ Σ_yx,   A B Σ_xx B′ = Σ_yx B′.

For L(A, B): (S_p ∘ (A′A)) B Σ_xx = T_p A′ Σ_yx,   A (S_p ∘ (B Σ_xx B′)) = Σ_yx B′ T_p.

Remark 2. Notice the similarity, the only differences being the presence of the Hadamard product with S_p on the left and of the diagonal T_p on the right. Therefore, practically, the added computational cost of evaluating the gradients is negligible compared to that of the MSE loss. The next step is to determine the structure of (A, B) satisfying the above equations, and to find the subset of those solutions that account for local minima. For the original loss, the first expression (A′A B Σ_xx = A′ Σ_yx) is used to solve for B, which is then put into the second to derive an expression solely based on A. Obviously, in order to solve the first expression for B, two cases are considered separately: the case where A is of full rank p, so that A′A is invertible, and the case where A is of rank r < p.
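As a quick numerical sanity check of Remark 1 and Proposition 1 (a hedged sketch with our own variable names, not the authors' code): we build T_p and S_p, confirm that S_p ∘ (A′A) stays positive definite even for a rank-deficient A with no zero columns, and verify that B̂(A) from eq. (6) zeroes the gradient factor in eq. (4).

```python
import numpy as np
rng = np.random.default_rng(0)

n, p, m = 6, 3, 100
X = rng.standard_normal((n, m)); Y = rng.standard_normal((n, m))
Sxx, Syx = X @ X.T, Y @ X.T

Tp = np.diag(np.arange(p, 0, -1).astype(float))   # eq. (2)
i, j = np.indices((p, p))
Sp = (p - np.maximum(i, j)).astype(float)         # eq. (3), with 0-based indices

# A is deliberately rank-deficient (rank 2 < p) but has no zero column:
A = rng.standard_normal((n, 2)) @ rng.standard_normal((2, p))
M = Sp * (A.T @ A)                                # Hadamard product Sp o (A'A)
print(np.linalg.eigvalsh(M).min() > 0)            # True: invertible, as in Remark 1

# B_hat from eq. (6) makes the gradient factor in eq. (4) vanish:
Bhat = np.linalg.solve(M, Tp @ A.T @ Syx @ np.linalg.inv(Sxx))
grad_B = Tp @ A.T @ Syx - M @ Bhat @ Sxx
print(np.allclose(grad_B, 0, atol=1e-6))          # True
```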
Here we do the same, but there is a twist: for us there is only one case. The reason is that as long as A (not necessarily of full rank) has no zero column, S_p ∘ (A′A) is positive definite and hence invertible. This is discussed in detail in Lemma 2, and we briefly explain it here. As shown in the lemma, S_p is positive definite, and by the Schur product theorem, for any A (of any rank), S_p ∘ (A′A) is positive semidefinite. Moreover, the Oppenheim inequality (Horn & Johnson (2012), Thm 7.8.16), which in our case translates to det(S_p) ∏_i (A′A)_{ii} ≤ det(S_p ∘ (A′A)), implies that as long as A has no zero column, ∏_i (A′A)_{ii} > 0 and therefore det(S_p ∘ (A′A)) > 0. Here, we assume A of any rank r ≤ p has no zero column (since this can easily be avoided in practice) and consider S_p ∘ (A′A) to be always invertible. Therefore, (A, B) define a critical point of the losses L̃ and L if:

For L̃(A, B) and full rank A: B = B̂(A) = (A′A)^{-1} A′ Σ_yx Σ_xx^{-1},   A B Σ_xx B′ = Σ_yx B′.

For L(A, B) and A with no zero column: B = B̂(A) = (S_p ∘ (A′A))^{-1} T_p A′ Σ_yx Σ_xx^{-1},   A (S_p ∘ (B Σ_xx B′)) = Σ_yx B′ T_p.

Before we state the main theorem, we need the following definitions. First, a rectangular permutation matrix Π_r ∈ R^{r×p} is a matrix in which each column contains at most one nonzero element, with value 1. If the rank of Π_r is r with r < p, then clearly Π_r has p - r zero columns; removing those zero columns leaves a standard r × r square permutation matrix. Second, under the conditions provided in Assumption 1, the matrix Σ := Σ_yx Σ_xx^{-1} Σ_xy has an eigenvalue decomposition Σ = U Λ U′, where the ith column of U, denoted u_i, is an eigenvector of Σ corresponding to the ith largest eigenvalue of Σ, denoted λ_i. Also, Λ = diag(λ_1, · · · , λ_n) is the diagonal matrix of ordered eigenvalues of Σ, with λ_1 > λ_2 > · · · > λ_n > 0. We use the following notation to organize a subset of eigenvectors of Σ into a rectangular matrix. For any r ≤ p, let I_r = {i_1, · · · , i_r} (1 ≤ i_1 < · · · < i_r ≤ n) be any ordered r-index set. Define U_{I_r} ∈ R^{n×r} as U_{I_r} = [u_{i_1}, · · · , u_{i_r}]; that is, the columns of U_{I_r} are the ordered orthonormal eigenvectors of Σ associated with the eigenvalues λ_{i_1} > · · · > λ_{i_r}. Clearly, when r = p, we have U_{I_p} = [u_{i_1}, · · · , u_{i_p}], corresponding to a p-index set I_p = {i_1, · · · , i_p} (1 ≤ i_1 < · · · < i_p ≤ n). Similarly, we define Λ_{I_r} ∈ R^{r×r} as Λ_{I_r} = diag(λ_{i_1}, · · · , λ_{i_r}).

Theorem 1. Let A ∈ R^{n×p} and B ∈ R^{p×n} be such that A is of rank r ≤ p. Under the conditions provided in Assumption 1 and the above notation, the matrices A and B define a critical point of L(A, B) if and only if, for some r-index set I_r and a nonsingular diagonal matrix D ∈ R^{p×p}, A and B are of the form

A = U_{I_r} C D,   (8)
B = B̂(A) = D^{-1} Π_C U′_{I_r} Σ_yx Σ_xx^{-1},   (9)

where C ∈ R^{r×p} is of full rank r with nonzero, normalized columns, such that Π_C := (S_p ∘ (C′C))^{-1} T_p C′ is a rectangular permutation matrix of rank r and C Π_C = I_r. For all 1 ≤ r ≤ p, such a C always exists. In particular, if the matrix A is of full rank p, i.e., r = p, the two given conditions on Π_C are satisfied iff the invertible matrix C is a square p × p permutation matrix Π. In this case (A, B) define a critical point of L(A, B) iff they are of the form

A = U_{I_p} Π D,   (10)
B = B̂(A) = D^{-1} Π′ U′_{I_p} Σ_yx Σ_xx^{-1}.   (11)

The proof is given in appendix A.4.
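The full-rank case of Theorem 1 is easy to check numerically: for a permutation matrix C = Π we have C′C = I, so S_p ∘ (C′C) reduces to T_p and Π_C = Π′. A small sketch (our own code, assuming NumPy):

```python
import numpy as np
p = 4
Tp = np.diag(np.arange(p, 0, -1).astype(float))
i, j = np.indices((p, p))
Sp = (p - np.maximum(i, j)).astype(float)

Pi = np.eye(p)[[2, 0, 3, 1]]              # a 4 x 4 permutation matrix
M = Sp * (Pi.T @ Pi)                      # = Sp o I = diag(p, ..., 1) = Tp
Pi_C = np.linalg.solve(M, Tp @ Pi.T)      # Pi_C := (Sp o C'C)^{-1} Tp C'
print(np.allclose(Pi_C, Pi.T))            # True: Pi_C is itself a permutation
print(np.allclose(Pi @ Pi_C, np.eye(p)))  # True: C Pi_C = I
```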
Remark 3. The above theorem provides explicit equations for the critical points of the loss surface in terms of the rank of the decoder matrix A and the eigenvectors of Σ. This explicit structure allows us to further analyze the loss surface and its local/global minima. Here, we provide a proof sketch for the above theorem to make the claims clearer. Again, as a reminder, the EVD of Σ := Σ_yx Σ_xx^{-1} Σ_xy is Σ = U Λ U′. For both L̃ and L, the corresponding B̂(A) is substituted for B in the critical point equations. For the loss L(A, B), as shown in the proof of the theorem, this results in the following identity:

U′ A (S_p ∘ (B̂ Σ_xx B̂′)) A′ U = Λ∆,   (12)

where ∆ := U′ A T_p (S_p ∘ (A′A))^{-1} T_p A′ U is symmetric and positive semidefinite. The LHS of eq. (12) is symmetric, so the RHS must be symmetric too; hence Λ∆ = (Λ∆)′ = ∆′Λ′ = ∆Λ. Therefore ∆ commutes with the diagonal matrix of eigenvalues Λ. Since the eigenvalues are assumed to be distinct, ∆ has to be diagonal as well. By Lemma 2, T_p (S_p ∘ (A′A))^{-1} T_p is positive definite, and U is an orthogonal matrix; therefore r = rank(A) = rank(∆), which implies that the diagonal matrix ∆ has r nonzero, positive diagonal entries. There exists an r-index set I_r corresponding to the nonzero diagonal elements of ∆. Forming a diagonal matrix ∆_{I_r} ∈ R^{r×r} by filling its diagonal entries (in order) with the nonzero diagonal elements of ∆, we have U ∆ U′ = U_{I_r} ∆_{I_r} U′_{I_r}, which by the definition of ∆ gives

A T_p (S_p ∘ (A′A))^{-1} T_p A′ = U_{I_r} ∆_{I_r} U′_{I_r},   (13)

which indicates that the matrix A has the same column space as U_{I_r}. Therefore, there exists a full rank matrix C̄ ∈ R^{r×p} such that A = U_{I_r} C̄. Since A has no zero column, C̄ has no zero column. Further, by normalizing the columns of C̄ we can write A = U_{I_r} C D, where D ∈ R^{p×p} is a diagonal matrix containing the column norms of C̄. Baldi & Hornik (1989) derived something similar for full rank A and the loss L̃, namely A_{L̃} = U_{I_p} C̃, but their C̃ can be any invertible p × p matrix. In our case, however, the matrix C ∈ R^{r×p} corresponding to a rank-r (r ≤ p) matrix A has to satisfy eq. (13) with A replaced by U_{I_r} C D, and eq. (12) with B̂(A) replaced by B̂(U_{I_r} C D). In the case of Baldi & Hornik (1989), for the original loss L̃, equations similar to eq. (13) and eq. (12) appear, but they are satisfied trivially by any invertible matrix C̃. Simplifying those equations using A = U_{I_r} C D, after some algebraic manipulation, results in the following two conditions on C:

C T_p (S_p ∘ (C′C))^{-1} T_p C′ = ∆_{I_r},   (14)
C (S_p ∘ ((S_p ∘ (C′C))^{-1} T_p C′ Λ_{I_r} C T_p (S_p ∘ (C′C))^{-1})) C′ = Λ_{I_r} ∆_{I_r}.   (15)

As detailed in the proof of Theorem 1, solving for C leads to the specific structure laid out in the theorem.

Remark 4. Note that when A is of rank r < p with no zero columns, the invariant matrix C is not necessarily a rectangular permutation matrix, but Π_C := (S_p ∘ (C′C))^{-1} T_p C′ is a rectangular permutation matrix with C Π_C = I_r. It is only when r = p that the invariant matrix C itself becomes a permutation matrix. Nevertheless, as we show in the following corollary, the global map is always G = AB = U_{I_r} U′_{I_r} Σ_yx Σ_xx^{-1} for all r ≤ p. It is possible to find further structure (in terms of block matrices) for the invariant matrix C when r < p. However, this is not necessary, as we soon show that all rank-deficient matrices A are saddle points of the loss and ideally should be passed by during the gradient descent process.
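Putting the pieces together, one can numerically confirm that the full-rank form (10)-(11) indeed solves both critical point equations; a hedged sketch on random data (all names are ours):

```python
import numpy as np
rng = np.random.default_rng(3)

n, p, m = 6, 3, 300
X = rng.standard_normal((n, m)); Y = rng.standard_normal((n, m))
Sxx, Syx = X @ X.T, Y @ X.T
Sigma = Syx @ np.linalg.inv(Sxx) @ Syx.T
lam, U = np.linalg.eigh(Sigma)
lam, U = lam[::-1], U[:, ::-1]            # descending eigenvalue order

Tp = np.diag(np.arange(p, 0, -1).astype(float))
i, j = np.indices((p, p))
Sp = (p - np.maximum(i, j)).astype(float)

# Full-rank critical point from Theorem 1: A = U_{I_p} Pi D, B = B_hat(A)
Ip = [0, 2, 4]                            # an example p-index set (0-based)
Pi = np.eye(p)[[1, 2, 0]]                 # a permutation matrix
D = np.diag([2.0, -1.0, 0.5])             # nonsingular diagonal
A = U[:, Ip] @ Pi @ D
B = np.linalg.inv(D) @ Pi.T @ U[:, Ip].T @ Syx @ np.linalg.inv(Sxx)

gB = Tp @ A.T @ Syx - (Sp * (A.T @ A)) @ B @ Sxx   # eq. (5) residual
gA = A @ (Sp * (B @ Sxx @ B.T)) - Syx @ B.T @ Tp   # eq. (7) residual
print(np.allclose(gB, 0, atol=1e-6), np.allclose(gA, 0, atol=1e-6))  # True True
```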
Based on some numerical results, our conjecture is that when r < p, the matrix C can only start with an r × k rectangular permutation matrix of rank r, with r ≤ k ≤ p, while the remaining p - k columns of C are arbitrary as long as none of them are identically zero.

Corollary 1. Let (A, B) be a critical point of L(A, B) under the conditions provided in Assumption 1, with rank A = r ≤ p. Then the following hold:
1. The matrix B Σ_xx B′ is a p × p diagonal matrix of rank r.
2. For all 1 ≤ r ≤ p and any critical pair (A, B), the global map G := AB becomes

G = U_{I_r} U′_{I_r} Σ_yx Σ_xx^{-1}.   (16)

For the autoencoder case (Y = X), the global map is simply G = U_{I_r} U′_{I_r}.
3. (A, B) is also a critical point of the classical loss L̃(A, B) = ‖Y - ABX‖_F^2.

The proof is given in appendix A.5.

Remark 5. The above corollary implies that L(A, B) not only does not add any extra critical points compared to the original loss L̃(A, B), it also yields the same global map G := AB. It only restricts the structure of the invariant matrix C, as described in Theorem 1, so that the decoder matrix A can recover the exact eigenvectors of Σ.

Lemma 1. The loss function L(A, B) can be written as

L(A, B) = p Tr(Σ_yy) - 2 Tr(A T_p B Σ_xy) + Tr(B′ (S_p ∘ (A′A)) B Σ_xx).   (17)

The above identity shows that the number of matrix operations required for computing the loss L(A, B) is constant and thereby independent of the value of p. The proof is given in appendix A.6.

Theorem 2. Let A* ∈ R^{n×p} and B* ∈ R^{p×n} be such that A* is of rank r ≤ p. Under the conditions provided in Assumption 1, (A*, B*) define a local minimum of the proposed loss function iff they are of the form

A* = U_{1:p} D_p,   (18)
B* = D_p^{-1} U′_{1:p} Σ_yx Σ_xx^{-1},   (19)

where the ith column of U_{1:p} is a unit eigenvector of Σ := Σ_yx Σ_xx^{-1} Σ_xy corresponding to the ith largest eigenvalue, and D_p is a diagonal matrix with nonzero diagonal elements. In other words, A* contains the ordered unnormalized eigenvectors of Σ corresponding to the p largest eigenvalues. Moreover, all the local minima are global minima, with the value of the loss function at those global minima being

L(A*, B*) = p Tr(Σ_yy) - \sum_{i=1}^{p} (p - i + 1) λ_i,   (20)

where λ_i is the ith largest eigenvalue of Σ. The proof is given in appendix A.7.

Remark 6. Finally, the second and third assumptions made in Assumption 1 can be relaxed by requiring only Σ_xx to be full rank. The output data can have a different dimension than the input; that is, Y ∈ R^{n×m} and X ∈ R^{n′×m}, where n ≠ n′. The reason is that the given loss function is structurally very similar to the MSE loss and can be represented as a Frobenius norm on the space of n × m matrices. In this case the covariance matrix Σ := Σ_yx Σ_xx^{-1} Σ_xy is still n × n. Clearly, for under-constrained systems with n < n′, the full rank assumption on Σ holds. For the overdetermined case, where n′ > n, the second and third assumptions in Assumption 1 can be relaxed: we only require Σ_xx to be full rank, since this is the only matrix that is inverted in the theorems. Note that if p > min(n′, n), then Λ_{I_p}, the p × p diagonal matrix of eigenvalues of Σ for a p-index set I_p, is bound to have some zeros and will have some rank r < p, which in turn results in a decoder A of rank r. However, Theorem 1 is proved for a decoder of any rank r ≤ p.
Finally, following Theorem 2, the first r columns of the decoder A converge to the ordered eigenvectors of Σ, while the remaining p - r columns span the kernel of Σ. Moreover, Σ need not have distinct eigenvalues. In that case, ∆_{I_r} becomes a block diagonal matrix, where the blocks correspond to identical eigenvalues in Λ_{I_r}. The corresponding eigenvectors in A* are then not unique, but they span the respective eigenspace.
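To close the loop, a short sketch (our own, for the autoencoder case) checks Lemma 1's compact form of the loss against the global minimum value predicted by Theorem 2:

```python
import numpy as np
rng = np.random.default_rng(2)

n, p, m = 5, 3, 200
X = rng.standard_normal((n, m)); Y = X.copy()   # autoencoder: Sigma = Sxx
Sxx, Sxy = X @ X.T, X @ Y.T
Syy = Y @ Y.T

Tp = np.diag(np.arange(p, 0, -1).astype(float))
i, j = np.indices((p, p))
Sp = (p - np.maximum(i, j)).astype(float)

def loss_compact(A, B):
    # Lemma 1 / eq. (17): a constant number of matrix products, independent of p
    return (p * np.trace(Syy) - 2 * np.trace(A @ Tp @ B @ Sxy)
            + np.trace(B.T @ (Sp * (A.T @ A)) @ B @ Sxx))

# Theorem 2 global minimum: A* = top-p unit eigenvectors of Sigma, with D_p = I
lam, U = np.linalg.eigh(Sxx)
lam, U = lam[::-1], U[:, ::-1]                  # descending order
A_star, B_star = U[:, :p], U[:, :p].T           # eq. (6') gives B* = A*' here
min_value = p * np.trace(Syy) - sum((p - k) * lam[k] for k in range(p))  # eq. (20)
print(np.isclose(loss_compact(A_star, B_star), min_value))  # True
```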
This paper proposes a new loss function for performing principal component analysis (PCA) using linear autoencoders (LAEs). With this new loss function, the decoder weights of the LAE converge to the exact ordered unnormalized eigenvectors of the sample covariance matrix. The main contribution is to add identifiability of principal components to PCA performed with LAEs. Two empirical experiments were done to show the effectiveness of the proposed loss function, on one synthetic dataset and on the MNIST dataset.
SP:3a670c06bf87ba895ed91ed2280d88881defa412
Hierarchical Foresight: Self-Supervised Learning of Long-Horizon Tasks via Visual Subgoal Generation
1 INTRODUCTION. Developing robotic systems that can complete long horizon visual control tasks, while generalizing to novel scenes and objectives, remains an unsolved and challenging problem. Generalization to unseen objects and scenes requires robots to be trained across diverse environments, meaning that detailed supervision during data collection is not practical to provide. Furthermore, reasoning over long-horizon tasks introduces two additional major challenges. First, the robot must handle large amounts of uncertainty as the horizon increases. Second, the robot must identify how to reach distant goals when only provided with the final goal state, a sparse indication of the task, as opposed to a shaped cost that implicitly encodes how to get there. In this work, we aim to develop a method that can start to address these challenges, leveraging self-supervised models learned using only unlabeled data to solve novel temporally-extended tasks. Model-based reinforcement learning has shown promise in generalizing to novel objects and tasks, as learned dynamics models have been shown to generalize to new objects (Finn & Levine, 2016; Ebert et al., 2018b) and can be used in conjunction with planning to reach goals unseen during training. However, planning to reach temporally distant goals is difficult. As the planning horizon increases, model error compounds, and the cost function often provides only a noisy or sparse signal of the objective. Both of these challenges are exacerbated when planning in visual space. In this work, the key insight that we leverage is that while model error and sparse cost signals can make long horizon planning difficult, we can mitigate these issues by learning to break down long-horizon tasks into short horizon segments. Consider, for example, the task of opening a drawer and putting a book in it, given supervision only in the form of the final image of the open drawer containing the book. The goal image provides nearly no useful cost signal until the last stage of the task, and model predictions are likely to become inaccurate beyond the first stage of the task. However, if we can generate a sequence of good subgoals, such as (1) the robot arm grasping the drawer handle, (2) the open drawer, and (3) the robot arm reaching for the book, then planning from the initial state to (1), from (1) to (2), from (2) to (3), and from (3) to the final goal becomes substantially easier. The subgoals break the problem into short horizon subsegments, each with some useful cost signal coming from the next subgoal image. Our main contribution is a self-supervised hierarchical planning framework, hierarchical visual foresight (HVF), which combines generative models of images and model predictive control to decompose a long-horizon visual task into a sequence of subgoals. In particular, we propose optimizing over subgoals such that the resulting task subsegments have low expected planning cost. However, in the case of visual planning, optimizing over subgoals corresponds to optimizing within the space of natural images. To address this challenge, we train a generative latent variable model over images from the robot's environment and optimize over subgoals in the latent space of this model. (Videos and code are available at https://sites.google.com/stanford.edu/hvf.)
This allows us to optimize over the manifold of images with only a small number of optimization variables. When combined with visual model predictive control, we observe that this subgoal optimization naturally identifies semantically meaningful states in long horizon tasks as subgoals, and that when using these subgoals during planning, we achieve significantly higher success rates on long horizon, multi-stage visual tasks. Furthermore, since our method outputs subgoals conditioned on a goal image, we can use the same model and approach to plan to solve many different long-horizon tasks, even with previously unseen objects. We first demonstrate our approach in simulation on a continuous control navigation task with tight bottlenecks, and then evaluate on a set of four different multi-stage object manipulation tasks in a simulated desk environment, which require interacting with up to 3 different objects. In the challenging desk environment, we find that our method yields at least a 20% absolute performance improvement over prior approaches, including model-free reinforcement learning and a state-of-the-art subgoal identification method. Finally, we show that our approach generates realistic subgoals on real robot manipulation data.

2 RELATED WORK. Developing robots that can execute complex behaviours from only pixel inputs has been a well-studied problem, for example with visual servoing (Mohta et al., 2014; Espiau et al., 1992; Wilson et al., 1996; Yoshimi & Allen, 1994; Jagersand et al., 1997; Lampe & Riedmiller, 2013; Sadeghi et al., 2018; Sadeghi, 2019). Recently, reinforcement learning has shown promise in completing complex tasks from pixels (Ghadirzadeh et al., 2017; Levine et al., 2015; Kalashnikov et al., 2018; Lange et al., 2012; OpenAI et al., 2018; Schenck & Fox, 2016; Matas et al., 2018; James et al., 2017; Singh et al., 2019), including in goal-conditioned settings (Kaelbling, 1993; Schaul et al., 2015; Andrychowicz et al., 2017; Sadeghi et al., 2018; Sadeghi, 2019; Nair et al., 2018a). While model-free RL approaches have illustrated the ability to generalize to new objects (Kalashnikov et al., 2018) and learn tasks such as grasping and pushing through self-supervision (Pinto & Gupta, 2015; Zeng et al., 2018), pure model-free approaches generally lack the ability to explicitly reason over temporally-extended plans, making them ill-suited for the problem of learning long-horizon tasks with limited supervision. Video prediction and planning have also shown promise in enabling robots to complete a diverse set of visuomotor tasks while generalizing to novel objects (Finn & Levine, 2016; Kalchbrenner et al., 2016; Boots et al., 2014; Byravan & Fox, 2016). Since then, a number of video prediction frameworks have been developed specifically for robotics (Babaeizadeh et al., 2017; Lee et al., 2018; Ebert et al., 2017), which, combined with planning, have been used to complete diverse behaviors (Nair et al., 2018b; Ebert et al., 2018b; Paxton et al., 2018; Xie et al., 2019). However, these approaches still struggle with long horizon tasks, which we specifically focus on. One approach to handling long horizon tasks is to add compositional structure to policies, either from demonstrations (Krishnan et al., 2017; Fox et al., 2018), with manually-specified primitives (Xu et al., 2017; Huang et al., 2018), with learned temporal abstractions (Neitz et al., 2018),
or through model-free reinforcement learning (Sutton et al., 1999; Barto & Mahadevan, 2003; Bacon et al., 2016; Nachum et al., 2018; Levy et al., 2019). These works have studied such hierarchy in grid worlds (Bacon et al., 2016) and simulated control tasks (Nachum et al., 2018; Eysenbach et al., 2018; Levy et al., 2019) with known reward functions. In contrast, we study how to incorporate compositional structure into learned model-based planning with video prediction models. Our approach is entirely self-supervised, without motion primitives, demonstrations, or shaped rewards, and scales to vision-based manipulation tasks. Classical planning methods have been successful in solving long-horizon tasks (LaValle, 2006; Choset et al., 2005), but make restrictive assumptions about the state space and reachability between states, limiting their applicability to complex visual manipulation tasks. Similarly, completing long horizon tasks has also been explored with symbolic models (Toussaint et al., 2019) and Task and Motion Planning (TAMP) (Kaelbling & Lozano-Perez, 2011; Srivastava et al., 2014). However, unlike these approaches, our method requires no additional knowledge about the objects in the scene nor any predefined symbolic states. Recently, several works have explored planning in learned latent spaces (Kurutach et al., 2018; Ichter & Pavone, 2018; Watter et al., 2015; Srinivas et al., 2018). This has enabled planning in higher dimensional spaces; however, these methods still struggle with long-horizon tasks. Furthermore, our hierarchical planning framework is agnostic to state space, and could directly operate in one of the above latent spaces. A number of recent works have explored reaching novel goals using only self-supervision (Finn & Levine, 2016; Eysenbach et al., 2019; Kurutach et al., 2018; Wang et al., 2019; Jayaraman et al., 2019; Nair et al., 2018a). In particular, time-agnostic prediction (TAP) (Jayaraman et al., 2019) aims to identify bottlenecks in long-horizon visual tasks, while other prior works (Nair et al., 2018a; Finn & Levine, 2016) reach novel goals using model-free or model-based RL. We compare to all three of these methods in Section 5 and find that HVF significantly outperforms all of them.

3 PRELIMINARIES. We formalize our problem setting as a goal-conditioned Markov decision process (MDP) defined by the tuple (S, A, p, G, C, λ), where s ∈ S is the state space, which in our case corresponds to images, a ∈ A is the action space, p(s_{t+1} | s_t, a_t) governs the environment dynamics, G ⊂ S represents the set of goals, a subset of the possible states, C(s_t, s_g) represents the cost of being in state s_t ∈ S given that the goal is s_g ∈ G, and λ is the discount factor. In practice, acquiring cost functions that accurately reflect the distance between two images is a challenging problem (Yu et al., 2019). We make no assumptions about having a shaped cost, assuming the simple yet sparse distance metric of ℓ2 distance in pixel space in all of our experiments. Approaches that aim to recover more shaped visual cost functions are complementary to the contributions of this work. In visual foresight, or equivalently, visual MPC (Finn & Levine, 2016; Ebert et al., 2018b), the robot collects a dataset of random interactions [(s_1, a_1), (s_2, a_2), ..., (s_T, a_T)] from a pre-determined policy.
This dataset is used to learn a dynamics model f_θ(s_{t+1}, s_{t+2}, ..., s_{t+h} | s_t, a_t, a_{t+1}, ..., a_{t+h-1}) through maximum likelihood supervised learning. Note that the states are camera images, and thus f_θ is an action-conditioned video prediction model. Once the model is trained, the robot is given some objective and plans a sequence of actions that optimizes the objective via the cross-entropy method (CEM) (Rubinstein & Kroese, 2004). In this work, we assume the objective is specified in the form of an image of the goal s_g, and CEM aims to find a sequence of actions that minimizes the cost C between the predicted future frames from f_θ and the goal image. While standard visual foresight struggles with long-horizon tasks, due both to increasing uncertainty in f_θ as the prediction horizon h grows and to the sparsity of C as an objective for CEM, in the next section we describe how our proposed approach uses subgoal generation to mitigate these issues.
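For concreteness, here is a schematic sketch of the CEM planning loop described above, with a stand-in dynamics model and the pixel-space ℓ2 cost; the function names, hyperparameters, and toy model are our own illustration, not the paper's implementation.

```python
import numpy as np

def plan_cem(f_theta, s0, s_goal, horizon=5, act_dim=4,
             iters=3, pop=200, elite=20, rng=np.random.default_rng(0)):
    """Cross-entropy method over action sequences; the cost is the l2
    distance between the model's final predicted frame and the goal image."""
    mu = np.zeros((horizon, act_dim))
    sigma = np.ones((horizon, act_dim))
    for _ in range(iters):
        # sample a population of action sequences from the current Gaussian
        acts = mu + sigma * rng.standard_normal((pop, horizon, act_dim))
        costs = np.array([np.linalg.norm(f_theta(s0, a)[-1] - s_goal)
                          for a in acts])
        # refit the Gaussian to the lowest-cost (elite) sequences
        elites = acts[np.argsort(costs)[:elite]]
        mu, sigma = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mu  # mean elite action sequence

# toy stand-in "video prediction" model: the state drifts by the mean action
f_toy = lambda s, a: np.cumsum(a.mean(axis=1, keepdims=True), axis=0) + s
print(plan_cem(f_toy, s0=np.zeros(1), s_goal=np.ones(1)).shape)  # (5, 4)
```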
This paper introduces a hierarchical extension to existing work in vision-based model predictive control. Here, a hierarchical model is optimised to find subgoals that minimise the planning cost (bottleneck states), so as to allow for improved planning to goal states expressed in high-dimensional state spaces. As expected, results show that this hierarchy improves task execution success rates.
SP:7c442073ca3d80b472665b8bd9ec3534ef010950
This paper proposes hierarchical visual foresight (HVF), a method that learns to break long-horizon tasks down into short-horizon segments. It first generates subgoals conditioned on the final goal; these subgoals are optimized to correspond to meaningful intermediate states and are then used for planning. Experiments on maze navigation, simulated desk manipulation, and real robot manipulation show significant performance gains over planning without subgoals and over model-free RL.
SP:7c442073ca3d80b472665b8bd9ec3534ef010950
Do Image Classifiers Generalize Across Time?
1 INTRODUCTION . Convolutional neural networks ( CNNs ) still exhibit many troubling failure modes . At one extreme , $\ell_p$-adversarial examples cause large drops in accuracy for state-of-the-art models while relying only on visually imperceptible changes to the input image ( Goodfellow et al. , 2014 ; Biggio and Roli , 2018 ) . However , this failure mode usually does not pose a problem outside a fully adversarial context because carefully crafted $\ell_p$-perturbations are unlikely to occur naturally in the real world . To study more realistic failure modes , researchers have investigated benign image perturbations such as rotations & translations , colorspace changes , and various image corruptions ( Fawzi and Frossard , 2015 ; Engstrom et al. , 2017 ; Hendrycks and Dietterich , 2019 ) . However , it is still unclear whether these perturbations reflect the robustness challenges arising in real data since the perturbations also rely on synthetic image modifications . Recent work has therefore turned to videos as a source of naturally occurring perturbations of images ( Zheng et al. , 2016 ; Azulay and Weiss , 2018 ; Gu et al. , 2019 ) . In contrast to other failure modes , the perturbed images are taken from existing image data without further modifications that make the task more difficult . As a result , robustness to such perturbations directly corresponds to performance improvements on real data . However , it is currently unclear to what extent such video perturbations pose a significant robustness challenge . Azulay and Weiss ( 2018 ) and Zheng et al . ( 2016 ) only provide anecdotal evidence from a small number of videos . Gu et al . ( 2019 ) go beyond individual videos and utilize a large video dataset ( Real et al. , 2017 ) in order to measure the effect of video perturbations more quantitatively . In their evaluation , the best image classifiers lose about 3 % accuracy for video frames up to 0.3 seconds away . However , the authors did not employ humans to review the frames in their videos . Hence the accuracy drop could also be caused by significant changes in the video frames ( e.g. , due to fast camera or object motion ) . Since the 3 % accuracy drop is small to begin with , it remains unclear whether video perturbations are a robustness challenge for current image classifiers . We address these issues by conducting a thorough evaluation of robustness to natural perturbations arising in videos . As a cornerstone of our investigation , we introduce two test sets for evaluating model robustness : ImageNet-Vid-Robust and YTBB-Robust , carefully curated from the ImageNet-Vid and Youtube-BB datasets , respectively ( Russakovsky et al. , 2015 ; Real et al. , 2017 ) . All images in the two datasets were screened by a set of expert labelers to ensure high annotation quality and minimize the selection biases that arise when filtering a dataset with CNNs . To the best of our knowledge , these are the first datasets of their kind , containing tens of thousands of images that are human-reviewed and grouped into thousands of perceptually similar sets . In total , our datasets contain 3,139 sets of temporally adjacent and visually similar images ( 57,897 images total ) . We then utilize these datasets to measure the accuracy of current CNNs under small , naturally occurring perturbations . Our testbed contains over 45 different models , varying both architecture and training methodology ( adversarial training , data augmentation , etc. ) .
To better understand the drop in accuracy due to natural perturbations , we also introduce a robustness metric that is more stringent than those employed in prior work . Under this metric , we find that natural perturbations from ImageNet-Vid-Robust and YTBB-Robust induce a median accuracy drop of 16 % and 10 % respectively for classification tasks , and a median 14 point drop in mAP for detection tasks [1] . Even for the best-performing classification models , we observe an accuracy drop of 14 % for ImageNet-Vid-Robust and 8 % for YTBB-Robust . Our results show that robustness to natural perturbations in videos is indeed a significant challenge for current CNNs . As these models are increasingly deployed in safety-critical environments that require both high accuracy and low latency ( e.g. , autonomous vehicles ) , ensuring reliable predictions on every frame of a video is an important direction for future work . 2 CONSTRUCTING A TEST SET FOR ROBUSTNESS . ImageNet-Vid-Robust and YTBB-Robust are sourced from videos in the ImageNet-Vid and Youtube-BB datasets ( Russakovsky et al. , 2015 ; Real et al. , 2017 ) . All object classes in ImageNet-Vid and Youtube-BB are from the WordNet hierarchy ( Miller , 1995 ) and direct ancestors of ILSVRC-2012 classes . Using the WordNet hierarchy , we construct a canonical mapping from ILSVRC-2012 classes to ImageNet-Vid and Youtube-BB classes , which allows us to evaluate off-the-shelf ILSVRC-2012 models on ImageNet-Vid-Robust and YTBB-Robust . We provide more background on the source datasets in Appendix A . 2.1 CONSTRUCTING IMAGENET-VID-ROBUST AND YTBB-ROBUST Next , we describe how we extracted sets of naturally perturbed frames from ImageNet-Vid and Youtube-BB to create ImageNet-Vid-Robust and YTBB-Robust . A straightforward approach would be to select a set of anchor frames and use temporally adjacent frames in the video with the assumption that such frames contain only small perturbations from the anchor . However , as Fig . 2 illustrates , this assumption is frequently violated , especially due to fast camera or object motion . Instead , we first collect preliminary datasets of natural perturbations following the same approach , and then manually review each of the frame sets . For each video , we randomly sample an anchor frame and take k = 10 frames before and after the anchor frame as candidate perturbation images [2] . This results in two datasets containing one anchor frame each from 3,139 videos , with approximately 20 candidate perturbations per anchor frame [3] . [1] We only evaluated detection on ImageNet-Vid-Robust as bounding-box annotations in Youtube-BB were only at 1 frame per second and not dense enough for our evaluation . [2] For YTBB-Robust we use a subset of the anchor frames used by Gu et al . ( 2019 ) . [3] Anchor frames near the start or end of the video may have fewer than 20 candidate frames . ( Fig . 2 shows anchor frames alongside discarded dissimilar frames . ) Next , we curate the dataset with the help of four expert human annotators . The goal of the curation step is to ensure that each anchor frame and its nearby frames are correctly labeled with the same ground-truth class , and that the anchor frame and the nearby frames are visually similar . Denser labels for Youtube-BB . As Youtube-BB contains only a single category label per frame at 1 frame per second , annotators first viewed each anchor frame individually and marked any missing labels .
In total , annotators corrected the labels for 834 frames , adding an average of 0.5 labels per anchor frame . These labels are then propagated to nearby , unlabeled frames at the native frame rate and verified in the next step . ImageNet-Vid densely labels all classes per frame , so we skip this step . Frame pairs review . Next , for each pair of anchor and candidate perturbation frames , a human annotates ( i ) whether the pair is correctly labeled in the dataset , and ( ii ) whether the pair is similar . We took several steps to mitigate the subjectivity of this task and ensure high annotation quality . First , we trained reviewers to mark frames as dissimilar if the scene undergoes any of the following transformations : significant motion , significant background change , or significant blur change . We asked reviewers to mark each dissimilar frame with one of these transformations , or “ other ” , and to mark a pair of images as dissimilar if a distinctive feature of the object is only visible in one of the two frames ( such as the face of a dog ) . If an annotator was unsure about the correct label , she could mark the pair as “ unsure ” . Second , we present only a single pair of frames at a time to reviewers because presenting videos or groups of frames could cause them to miss large changes due to the phenomenon of change blindness ( Pashler , 1988 ) . Verification . In the previous stage , all annotators were given identical labeling instructions and individually reviewed a total of 71,660 image pairs . To increase consistency in annotation , annotators jointly reviewed all frames marked as dissimilar , incorrectly labeled , or “ unsure ” . A frame was only considered similar to its anchor if a strict majority of the annotators marked the pair as such . After the reviewing was complete , we discarded all anchor frames and candidate perturbations that annotators marked as dissimilar or incorrectly labeled . The final datasets contain a combined total of 3,139 anchor frames with a median of 20 similar frames each . 2.2 THE PM-K EVALUATION METRIC Given the datasets introduced above , we propose a metric to measure a model 's robustness to natural perturbations . In particular , let $A = \{ a_1 , \ldots , a_N \}$ be the set of valid anchor frames in our dataset and let $Y = \{ y_1 , \ldots , y_N \}$ be the corresponding labels . We let $N_k ( a_i )$ be the set of frames marked as similar to anchor frame $a_i$ . In our setting , $N_k$ is a subset of the $2k$ temporally adjacent frames ( plus/minus k frames from the anchor ) . Classification . Classification accuracy is defined as $\mathrm{acc}_{\mathrm{orig}} = 1 - \frac{1}{N} \sum_{i=1}^{N} L_{0/1} ( f ( a_i ) , y_i )$ , where $L_{0/1}$ is the standard 0-1 loss function . We define the pm-k analog of accuracy as $$\mathrm{acc}_{\mathrm{pmk}} = 1 - \frac{1}{N} \sum_{i=1}^{N} \max_{b \in N_k ( a_i )} L_{0/1} ( f ( b ) , y_i ) , \qquad ( 1 )$$ which corresponds to picking the worst frame from each set $N_k ( a_i )$ before computing accuracy . Detection . The standard metric for detection is mean average precision ( mAP ) of the predictions at a fixed intersection-over-union ( IoU ) threshold ( Lin et al. , 2014 ) . We define the pm-k metric analogously to that for classification : we replace each anchor frame with the nearby frame that minimizes the average precision ( AP , averaged over recall thresholds ) of the predictions , and compute pm-k as the mAP on these worst-case neighboring frames . 3 MAIN RESULTS . We evaluate a testbed of 45 classification and three detection models on ImageNet-Vid-Robust and YTBB-Robust .
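The pm-k accuracy in Equation (1) is straightforward to compute once the similar-frame sets are fixed. The sketch below is our own rendering of the metric (incorporating the any-correct-label rule for multi-label frames described in Section 3.1), not the benchmark's released code:

```python
def pmk_accuracy(predict, anchors, labels, similar_sets):
    """Worst-case (pm-k) accuracy over human-verified similar frames.

    predict(frame) -> predicted class id; labels[i] is the set of valid
    classes for anchor i (any match counts as correct); similar_sets[i]
    contains the frames marked similar to anchors[i]."""
    n_correct = 0
    for anchor, valid, nbrs in zip(anchors, labels, similar_sets):
        # correct only if the model is right on the anchor AND every neighbor
        if all(predict(f) in valid for f in [anchor, *nbrs]):
            n_correct += 1
    return n_correct / len(anchors)
```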
We first discuss the various types of classification models evaluated with the pm-k classification metric . Second , we evaluate the performance of detection models on ImageNet-Vid-Robust using the bounding-box annotations inherited from ImageNet-Vid and a variant of pm-k for detection . We then analyze the errors made on the detection adversarial examples to isolate the effects of localization errors vs. classification errors . 3.1 CLASSIFICATION . The classification robustness metric is $\mathrm{acc}_{\mathrm{pmk}}$ defined in Equation ( 1 ) . For frames with multiple labels , we count a prediction as correct if the model predicts any of the correct classes for a frame . In Figure 3 , we plot the benign accuracy , $\mathrm{acc}_{\mathrm{orig}}$ , versus the robust accuracy , $\mathrm{acc}_{\mathrm{pmk}}$ , for all classification models in our testbed and find that the relationship between the two is approximately linear . This relationship indicates that improvements in benign accuracy do result in improvements in worst-case accuracy , but do not suffice to resolve the accuracy drop due to natural perturbations . Our testbed consists of five model types with increasing levels of supervision . We present results for representative models from each model type in Table 2 and defer the full classification results table to Appendix B.2 . ILSVRC trained . The WordNet hierarchy enables us to repurpose models trained for the 1,000-class ILSVRC dataset on ImageNet-Vid-Robust and YTBB-Robust ( see Appendix A.1 ) . We evaluate a wide array of ILSVRC-2012 models ( available from Cadene ) against our natural perturbations . Since these datasets present a substantial distribution shift from the original ILSVRC-2012 validation set , we expect the benign accuracy $\mathrm{acc}_{\mathrm{orig}}$ to be lower than the comparable accuracy on the ILSVRC-2012 validation set . However , our main interest here is in the difference between the original and perturbed accuracies , $\mathrm{acc}_{\mathrm{orig}} - \mathrm{acc}_{\mathrm{pmk}}$ . A small drop in accuracy would indicate that the model is robust to small changes that occur naturally in videos . Instead , we find significant drops of 15.0 % and 13.2 % in accuracy on our two datasets , indicating sensitivity to such changes . Noise augmentation . One hypothesis for the drop from original to perturbed accuracy is that subtle artifacts and corruptions introduced by video compression schemes could degrade performance when evaluating on these corrupted frames . The worst-case nature of the pm-k metric could then be focusing on these corrupted frames . One model for these corruptions is the set of perturbations introduced in Hendrycks and Dietterich ( 2019 ) . To test this hypothesis , we evaluate models augmented with a subset of these perturbations ( exactly one of : Gaussian noise , Gaussian blur , shot noise , contrast change , impulse noise , or JPEG compression ) . We found that these augmentation schemes did not improve robustness against our perturbations substantially , and still result in accuracy drops of 15.6 % and 16.6 % on the two datasets . $\ell_\infty$ robustness . We evaluate the model from Xie et al . ( 2018 ) , which currently performs best against $\ell_\infty$ attacks on ImageNet . We find that this model has a smaller accuracy drop than the two aforementioned model types on both datasets . However , we note that the robust model achieves significantly lower original and perturbed accuracy than either of the two model types above , and the robustness gain is modest ( 3 % compared to models of similar benign accuracy ) . Fine-tuning on video frames .
To adapt to the new class vocabulary and the video domain , we fine-tune several network architectures on the ImageNet-Vid and Youtube-BB training sets . For Youtube-BB , we train on the anchor frames used for training in Gu et al . ( 2019 ) , and for ImageNet-Vid we use all frames in the training set . We provide hyperparameters for all models in Appendix K. The resulting models significantly improve in accuracy over their ILSVRC pre-trained counterparts ( e.g. , 13 % on ImageNet-Vid-Robust and 34 % on YTBB-Robust for ResNet-50 ) . This improvement in accuracy results in a modest improvement in the accuracy drop for YTBB-Robust , but a fine-tuned ResNet-50 still suffers from a significant 9.4 % drop . On ImageNet-Vid-Robust , there is almost no change in the accuracy drop ( from 15.0 % to 15.1 % ) . Fine-tuning for detection on video frames . We further analyze whether additional supervision in the form of bounding-box annotations improves robustness . To this end , we train the Faster R-CNN detection model ( Ren et al. , 2015 ) with a ResNet-50 backbone on ImageNet-Vid . Following standard practice , the detection backbone is pre-trained on ILSVRC-2012 . To evaluate this detector for classification , we assign the class of the most confident bounding box as the label for the image . We find that this transformation reduces accuracy compared to the model trained for classification ( 77.6 % vs. 80.8 % ) . While there is a slight reduction in the accuracy drop caused by natural perturbations , the reduction is well within the error bars for this test set .
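The detector-to-classifier conversion used in this evaluation is simple to state in code; here is a minimal sketch (the data layout is our assumption):

```python
def detection_to_label(detections):
    """Collapse detector output into a single image-level label by taking
    the class of the highest-confidence predicted box, as in the paper's
    evaluation of Faster R-CNN for classification.

    detections: list of (class_id, confidence, box) tuples for one image."""
    best = max(detections, key=lambda det: det[1])
    return best[0]
```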
This paper presents new datasets, based on ImageNet-Vid and Youtube-BB, for assessing the consistency of network performance across time. Compared to previous work, it uses human labelers to further validate the datasets and discard frames that are deemed too different from the reference frame. It provides results on image classification and detection using popular network architectures. Based on these results, the paper reports an accuracy drop of 10 to 16%.
SP:ef3afd5d34fbb7c8310a1dc9d6e49c2f37db07e6
In this paper, the authors curate two datasets, ImageNet-Vid-Robust and YTBB-Robust, from ImageNet-Vid and Youtube-BB, consisting of human-reviewed sets of perceptually similar frames. Over 45 models pre-trained on ImageNet are evaluated on these datasets to measure their drop in accuracy under natural perturbations. Three detection models are also evaluated, showing not only that classification models are sensitive to these perturbations, but also that the perturbations lead to localization errors.
SP:ef3afd5d34fbb7c8310a1dc9d6e49c2f37db07e6
Pipelined Training with Stale Weights of Deep Convolutional Neural Networks
The growth in the complexity of Convolutional Neural Networks ( CNNs ) is increasing interest in partitioning a network across multiple accelerators during training and pipelining the backpropagation computations over the accelerators . Existing approaches avoid or limit the use of stale weights through techniques such as micro-batching or weight stashing . These techniques either underutilize accelerators or increase the memory footprint . We explore the impact of stale weights on statistical efficiency and performance in a pipelined backpropagation scheme that maximizes accelerator utilization and keeps memory overhead modest . We use 4 CNNs ( LeNet-5 , AlexNet , VGG and ResNet ) and show that when pipelining is limited to early layers in a network , training with stale weights converges and results in models with inference accuracies comparable to those resulting from non-pipelined training on the MNIST and CIFAR-10 datasets ; a drop in accuracy of 0.4 % , 4 % , 0.83 % and 1.45 % for the 4 networks , respectively . However , when pipelining is deeper in the network , inference accuracies drop significantly . We propose combining pipelined and non-pipelined training in a hybrid scheme to address this drop . We demonstrate the implementation and performance of our pipelined backpropagation in PyTorch on 2 GPUs using ResNet , achieving speedups of up to 1.8X over a 1-GPU baseline , with a small drop in inference accuracy . 1 INTRODUCTION . Modern Convolutional Neural Networks ( CNNs ) have grown in size and complexity to demand considerable memory and computational resources , particularly for training . This growth sometimes makes it difficult to train an entire network with a single accelerator ( Huang et al. , 2018 ; Harlap et al. , 2018 ; Chen et al. , 2012 ) . Instead , the network is partitioned among multiple accelerators , typically by partitioning its layers among the available accelerators , as shown in Figure 1 for an example 8-layer network . The 8 layers are divided into 4 computationally-balanced partitions , P0 ... P3 , and each partition is mapped to one of the 4 accelerators , A0 ... A3 . Each accelerator is responsible for the computations associated with the layers mapped to it . However , the nature of the backpropagation algorithm used to train CNNs ( Rumelhart et al. , 1986 ) is that the computations of a layer are performed only after the computations of the preceding layer in the forward pass of the algorithm , and only after the computations of the succeeding layer in the backward pass . Further , the computations for one batch of input data are only performed after the computations of the preceding batch have updated the parameters ( i.e. , weights ) of the network . These dependences underutilize the accelerators , as shown by the space-time diagram in Figure 2 ; only one accelerator can be active at any given point in time . The underutilization of accelerators can be alleviated by pipelining the computations of the backpropagation algorithm over the accelerators ( Huang et al. , 2018 ; Harlap et al. , 2018 ; Chen et al. , 2012 ) . That is , by overlapping the computations of different input data batches using the multiple accelerators . However , pipelining causes an accelerator to potentially use weights that are yet to be updated by an accelerator further down in the pipeline . The use of such stale weights can negatively affect the statistical efficiency of the network , preventing the convergence of training or producing a model with lower inference accuracy .
( Figure 2 : Schedule of Computations — space-time diagram of the forward and backward computations of partitions P0 ... P3 on accelerators A0 ... A3 , showing idle time . ) Common wisdom is that the use of stale weights must either be avoided , e.g. , with the use of micro-batches ( Huang et al. , 2018 ) , be constrained to ensure the consistency of the weights within an accelerator using stashing ( Harlap et al. , 2018 ) , or be limited to very small networks ( Mostafa et al. , 2017 ) . However , these approaches either underutilize accelerators ( Huang et al. , 2018 ) or inflate memory usage to stash multiple copies of weights ( Harlap et al. , 2018 ) . In this paper we question this common wisdom and explore pipelining that allows for the full utilization of accelerators while using stale weights . This results in a pipelining scheme that , compared to existing schemes , is simpler to implement , fully utilizes the accelerators and has lower memory overhead . We evaluate this pipelining scheme using 4 CNNs : LeNet-5 ( trained on MNIST ) , AlexNet , VGG and ResNet ( all trained on CIFAR-10 ) . We analyze the impact of weight staleness and show that if pipelining is limited to early layers in the network , training does converge and the quality of the resulting models is comparable to that of models obtained with non-pipelined training . For the 4 networks , the drop in accuracy is 0.4 % , 4 % , 0.83 % and 1.45 % , respectively . However , inference accuracies drop significantly when the pipelining is deeper in the network . While this is not a limitation , since the bulk of the computations that can benefit from pipelining are in the early convolutional layers , we address it through a hybrid scheme that combines pipelined and non-pipelined training to maintain inference accuracy while still delivering performance improvement . Evaluation shows that our pipelined training delivers a speedup of up to 1.8X on a 2-GPU system . The remainder of this paper is organized as follows . Section 2 briefly describes backpropagation for the training of CNNs . Section 3 details our pipelining scheme . Section 4 describes how non-pipelined and pipelined backpropagation are combined . Section 5 highlights some of the implementation details . Experimental evaluation is presented in Section 6 . Related work is reviewed in Section 7 . Finally , Section 8 gives concluding remarks and directions for future work . 2 THE BACKPROPAGATION ALGORITHM . The backpropagation algorithm ( Rumelhart et al. , 1986 ) consists of two passes : a forward pass that calculates the output error and a backward pass that calculates the error gradients and updates the weights of the network . The two passes are performed for input data one mini-batch at a time . In the forward pass , a mini-batch is fed into the network , propagating from the first to the last layer . At each layer l , the activations of the layer , denoted by $x^{(l)}$ , are computed using the weights of the layer , denoted by $W^{(l)}$ . When the output of the network ( layer L ) , $x^{(L)}$ , is produced , it is used with the true data label to obtain a training error e for the mini-batch . In the backward pass , the error e is propagated from the last to the first layer . The error gradients with respect to the pre-activations of layer l , denoted by $\delta^{(l)}$ , are calculated . Further , the error gradients with respect to the weights of layer l , $\frac{\partial e}{\partial W^{(l)}}$ , are computed using the activations from layer l−1 ( i.e. , $x^{(l-1)}$ ) and $\delta^{(l)}$ . Subsequently , $\delta^{(l)}$ is used to calculate $\delta^{(l-1)}$ .
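The forward/backward dependences just described (and the weight update that follows, stated next) can be made concrete in a few lines. This is a generic two-layer sketch of backpropagation under our assumptions (ReLU activations, L2 loss, biases omitted), not the paper's code:

```python
import numpy as np

def backprop_step(x, y, W1, W2, lr=0.1):
    """One backpropagation iteration for a 2-layer ReLU network."""
    # forward pass: each layer's activations depend on the previous layer's
    z1 = x @ W1
    x1 = np.maximum(z1, 0.0)          # x^(1)
    x2 = x1 @ W2                      # x^(2), the network output
    e = x2 - y                        # output error
    # backward pass: delta^(l) depends on delta^(l+1)
    d2 = e                            # delta^(2)
    gW2 = x1.T @ d2                   # de/dW^(2) needs x^(1) and delta^(2)
    d1 = (d2 @ W2.T) * (z1 > 0)       # delta^(1)
    gW1 = x.T @ d1                    # de/dW^(1) needs x^(0) and delta^(1)
    # weights are updated only after all gradients have been computed
    return W1 - lr * gW1, W2 - lr * gW2
```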
When $\frac{\partial e}{\partial W^{(l)}}$ has been computed for every layer , the weights are updated using the error gradients . In the forward pass , the activations of layer l , $x^{(l)}$ , cannot be computed until the activations of the previous layer , i.e. , $x^{(l-1)}$ , are computed . In the backward pass , $\frac{\partial e}{\partial W^{(l)}}$ can only be computed once $x^{(l-1)}$ and $\delta^{(l)}$ have been computed . Moreover , $\delta^{(l)}$ depends on $\delta^{(l+1)}$ . Finally , for a given mini-batch the backward pass cannot be started until the forward pass is completed and the error e has been determined . The above dependences ensure that the weights of the layers are updated using the activations and error gradients calculated from the same batch of training data in one iteration of the backpropagation algorithm . Only when the weights are updated is the next batch of training data fed into the network . These dependences limit parallelism when a network is partitioned across multiple accelerators and allow only one accelerator to be active at any point . This results in under-utilization of the accelerators . It is this limitation that pipelining addresses . 3 PIPELINED BACKPROPAGATION . We illustrate our pipelined backpropagation implementation with the L-layer network shown in Figure 3 , using conceptual pipeline registers . Two registers are inserted between layers l and l + 1 ; one register for the forward pass and a second for the backward pass . The forward register stores the activations of layer l , $x^{(l)}$ . The backward register stores the gradients $\delta^{(l+1)}$ of layer l + 1 . This defines a 4-stage pipelined backpropagation . The forward pass for layers 1 to l forms forward stage FS1 . The forward pass for layers l + 1 to L forms forward stage FS2 . Similarly , the backward passes for layers l + 1 to L and 1 to l form backward stages BKS1 and BKS2 , respectively . The forward and backward stages are executed in a pipelined fashion on 3 accelerators : one for FS1 , one for both FS2 and BKS1 , and one for BKS2 [1] . In cycle 0 , mini-batch 0 is fed to FS1 . The computations of the forward pass are done as in the traditional non-pipelined implementation . In cycle 1 , the layer-l activations $x^{(l)}$ are fed to FS2 and mini-batch 1 is fed to FS1 . In cycle 2 , the error for mini-batch 0 computed in FS2 is directly fed to BKS1 , the activations of layer l , $x^{(l)}$ , are forwarded to FS2 , and mini-batch 2 is fed to FS1 . This pipelined execution is illustrated by the space-time diagram in Figure 4 for 5 mini-batches . The figure depicts the mini-batch processed by each accelerator in cycles 0 to 6 . At steady state , all the accelerators are active in each cycle of execution . The above pipelining scheme utilizes weights in FS1 that are yet to be updated by the errors calculated by FS2 and BKS1 . At steady state , the activations of a mini-batch in FS1 are calculated using weights that are 2 execution cycles old , or 2 cycles stale . This is reflected in Figure 4 by indicating the weights used by each forward stage and the weights updated by each backward stage . The weights of a forward stage are subscripted by how stale they are ( negative subscripts ) . Similarly , the weights updated by a backward stage are subscripted by how delayed they are ( positive subscripts ) . Further , since the update of the weights by BKS2 requires the activations calculated for the same mini-batch in FS1 for all layers in the stage , it is necessary to save these activations until the error gradients with respect to the weights are calculated by BKS2 .
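The steady-state occupancy described above can be reproduced with a few lines. The sketch below mirrors our reading of the 4-stage example (FS1 on one accelerator, FS2 and BKS1 sharing a second, BKS2 on a third) and is illustrative rather than a reproduction of the authors' diagram:

```python
def print_schedule(n_batches=5, n_cycles=8):
    """Which mini-batch each stage works on per cycle, for the 4-stage
    pipeline: FS1(b_t) | FS2(b_{t-1}) + BKS1(b_{t-2}) | BKS2(b_{t-3})."""
    def batch(i):
        return f"b{i}" if 0 <= i < n_batches else "--"
    for t in range(n_cycles):
        print(f"cycle {t}: FS1={batch(t)}  FS2={batch(t - 1)}  "
              f"BKS1={batch(t - 2)}  BKS2={batch(t - 3)}")

print_schedule()  # all three accelerators are busy from cycle 3 onward
```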
Only when the weights are updated using the gradients can these activations be discarded . In the general case , we use K pairs of pipeline registers ( each pair consisting of a forward register and a backward register ) inserted between the layers of the network . We describe the placement of the register pairs by the Pipeline Placement Vector , $\mathrm{PPV} = ( p_1 , p_2 , \ldots , p_K )$ , where $p_i$ represents the layer number after which a pipeline register pair is inserted . Such a placement creates K + 1 forward stages , labeled $\mathrm{FS}_i$ , $i = 1 , 2 , \ldots , K + 1$ , and K + 1 backward stages , labeled $\mathrm{BKS}_i$ , $i = 1 , 2 , \ldots , K + 1$ . Forward stage $\mathrm{FS}_i$ and backward stage $\mathrm{BKS}_{K-i+2}$ correspond to the same set of layers . Specifically , stage $\mathrm{FS}_i$ contains layers $p_i + 1$ to $p_{i+1}$ , inclusive . We assign each forward stage and each backward stage to an accelerator , with the exception of $\mathrm{FS}_{K+1}$ and backward stage $\mathrm{BKS}_1$ , which are assigned to the same accelerator to reduce weight staleness by an execution cycle . In total , 2K + 1 accelerators are used . We quantify weight staleness as follows . A forward stage $\mathrm{FS}_i$ and backward stage $\mathrm{BKS}_{K-i+2}$ use the same weights , which are $2 ( K - i + 1 )$ cycles old . A forward stage $\mathrm{FS}_i$ must store the activations of all layers in the stage for all $2 ( K - i + 1 )$ cycles ; these are used by the corresponding backward stage $\mathrm{BKS}_{K-i+2}$ . Thus , we define the Degree of Staleness as $2 ( K - i + 1 )$ , and these saved activations as intermediate activations . For each pair of stages $\mathrm{FS}_i$ and $\mathrm{BKS}_{K-i+2}$ , let there be $N_i$ weights in their corresponding layers . The layers before the last pipeline register pair always use stale weights . Thus , we define the Percentage of Stale Weights as $( \sum_{i=1}^{K} N_i ) / ( \sum_{i=1}^{K+1} N_i )$ . On the one hand , the above pipelined execution allows a potential speedup of 2K + 1 over the non-pipelined implementation , keeping all the accelerators active at steady state . On the other hand , the use of stale weights may prevent training convergence or may result in a model that has inferior inference accuracy . Further , it requires an increase in storage for activations . Our goal is to assess the benefit of this pipelined execution and the impact of its downsides . [1] We combine FS2 and BKS1 on the same accelerator to reduce weight staleness .
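Under the convention that $p_0 = 0$ and $p_{K+1} = L$ (our assumption, to make the stage boundaries explicit), the two staleness quantities defined above can be computed as follows. This is a sketch of the definitions, not the authors' implementation:

```python
def staleness_stats(weights_per_layer, ppv):
    """Degree of Staleness per forward stage and the Percentage of Stale
    Weights for a Pipeline Placement Vector ppv = (p_1, ..., p_K).

    weights_per_layer[l-1] is the weight count of layer l (1-indexed layers).
    """
    K = len(ppv)
    bounds = [0, *ppv, len(weights_per_layer)]     # p_0 = 0, p_{K+1} = L
    stages = []
    for i in range(1, K + 2):                      # stages FS_1 .. FS_{K+1}
        n_i = sum(weights_per_layer[bounds[i - 1]:bounds[i]])
        degree = 2 * (K - i + 1)                   # 0 for the last stage
        stages.append((i, n_i, degree))
    stale = sum(n for _, n, d in stages if d > 0)  # sum of N_i for i <= K
    total = sum(n for _, n, _ in stages)           # sum over all K+1 stages
    return stages, stale / total

# e.g. an 8-layer network with register pairs after layers 2 and 5 (K = 2):
stages, pct_stale = staleness_stats([10, 20, 30, 40, 50, 60, 70, 80], ppv=(2, 5))
```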
This paper investigates the impact of stale weights on the statistical efficiency and performance in a pipelined backpropagation scheme that maximizes accelerator utilization while keeping the memory overhead modest. The paper proposes to combine pipelined and non-pipelined training in a hybrid scheme to address the issue of significant drop in accuracy when pipelining is deeper in the network. The performance of the proposed pipelined backpropagation is demonstrated on 2 GPUs using ResNet with speedups of up to 1.8X over a 1-GPU baseline and a small drop in inference accuracy.
SP:3b0d0ac062a7bc618741cff17c7d507b0b0a7489
Pipelined Training with Stale Weights of Deep Convolutional Neural Networks
The growth in the complexity of Convolutional Neural Networks ( CNNs ) is increasing interest in partitioning a network across multiple accelerators during training and pipelining the backpropagation computations over the accelerators . Existing approaches avoid or limit the use of stale weights through techniques such as micro-batching or weight stashing . These techniques either underutilize of accelerators or increase memory footprint . We explore the impact of stale weights on the statistical efficiency and performance in a pipelined backpropagation scheme that maximizes accelerator utilization and keeps memory overhead modest . We use 4 CNNs ( LeNet-5 , AlexNet , VGG and ResNet ) and show that when pipelining is limited to early layers in a network , training with stale weights converges and results in models with comparable inference accuracies to those resulting from non-pipelined training on MNIST and CIFAR-10 datasets ; a drop in accuracy of 0.4 % , 4 % , 0.83 % and 1.45 % for the 4 networks , respectively . However , when pipelining is deeper in the network , inference accuracies drop significantly . We propose combining pipelined and non-pipelined training in a hybrid scheme to address this drop . We demonstrate the implementation and performance of our pipelined backpropagation in PyTorch on 2 GPUs using ResNet , achieving speedups of up to 1.8X over a 1-GPU baseline , with a small drop in inference accuracy . 1 INTRODUCTION . Modern Convolutional Neural Networks ( CNNs ) have grown in size and complexity to demand considerable memory and computational resources , particularly for training . This growth makes it sometimes difficult to train an entire network with a single accelerator ( Huang et al. , 2018 ; Harlap et al. , 2018 ; Chen et al. , 2012 ) . Instead , the network is partitioned among multiple accelerators , typically by partitioning its layers among the available accelerators , as shown in Figure 1 for an example 8-layer network . The 8 layers are divided into 4 computationally-balanced partitions , P0 ... P3 and each partition is mapped to one of the 4 accelerators , A0 ... A3 . Each accelerator is responsible for the computations associated with the layers mapped to it . However , the nature of the backpropagation algorithm used to train CNNs ( Rumelhart et al. , 1986 ) is that the computations of a layer are performed only after the computations of the preceding layer in the forward pass of the algorithm and only after the computations of the succeeding layer in the backward pass . Further , the computations for one batch of input data are only performed after the computations of the preceding batch have updated the parameters ( i.e. , weights ) of the network . These dependences underutilize the accelerators , as shown by the space-time diagram in Figure 2 ; only one accelerator can be active at any given point in time . The underutilization of accelerators can be alleviated by pipelining the computations of the backpropagation algorithm over the accelerators ( Huang et al. , 2018 ; Harlap et al. , 2018 ; Chen et al. , 2012 ) . That is , by overlapping the computations of different input data batches using the multiple accelerators . However , pipelining causes an accelerator to potentially use weights that are yet to be updated by an accelerator further down in the pipeline . The use of such stale weights can negatively affect the statistical efficiency of the network , preventing the convergence of training or producing a model with lower inference accuracy . 
A0 A1 A2 A3 P0 P1 P2 P3 Idle Time P0 P1 P2 P3 Forward Backward Figure 2 : Schedule of Computations Common wisdom is that the use of stale weights must either be avoided , e.g. , with the use of microbatches ( Huang et al. , 2018 ) , be constrained to ensure the consistency of the weights within an accelerator using stashing ( Harlap et al. , 2018 ) , or by limiting the use of pipelining to very small networks ( Mostafa et al. , 2017 ) . However , these approaches either underutilize accelerators ( Huang et al. , 2018 ) or inflate memory usage to stash multiple copies of weights ( Harlap et al. , 2018 ) . In this paper we question this common wisdom and explore pipelining that allows for the full utilization of accelerators while using stale weights . This results in a pipelining scheme that , compared to existing schemes , is simpler to implement , fully utilizes the accelerators and has lower memory overhead . We evaluate this pipelining scheme using 4 CNNs : LeNet-5 ( trained on MNIST ) , AlexNet , VGG and ResNet ( all trained on CIFAR-10 ) . We analyze the impact of weight staleness and show that if pipelining is limited to early layers in the network , training does converge and the quality of the resulting models is comparable to that of models obtained with non-pipelined training . For the 4 networks , the drop in accuracy is 0.4 % , 4 % , 0.83 % and 1.45 % , respectively . However , inference accuracies drop significantly when the pipelining is deeper in the network . While this is not a limitation since the bulk of computations that can benefit from pipelining are in the early convolutional layers , we address this through a hybrid scheme that combines pipelined and non-pipelined training to maintain inference accuracy while still delivering performance improvement . Evaluation shows that our pipelined training delivers a speedup of up to 1.8X on a 2-GPU system . The remainder of this paper is organized as follows . Section 2 briefly describes the backpropagation for training of CNNs . Section 3 details our pipelining scheme . Section 4 describes how non-pipelined and pipelined backpropagation are combined . Section 5 highlights some of the implementation details . Experimental evaluation is presented in Section 6 . Related work is reviewed in Section 7 . Finally , Section 8 gives concluding remarks and directions for future work . 2 THE BACKPROPAGATION ALGORITHM . The backpropagation algorithm ( Rumelhart et al. , 1986 ) consists of two passes : a forward pass that calculates the output error and a backward pass that calculates the error gradients and updates the weights of the network . The two passes are performed for input data one mini-batch at a time . In the forward pass , a mini-batch is fed into the network , propagating from the first to the last layer . At each layer l , the activations of the layer , denoted by x ( l ) , are computed using the weights of the layer , denoted by W ( l ) . When the output of the network ( layer L ) x ( L ) is produced , it is used with the true data label to obtain a training error e for the mini-batch . In the backward pass , the error e is propagated from the last to the first layer . The error gradients with respect to pre-activations of layer l , denoted by δ ( l ) , are calculated . Further , the error gradients with respect to weights of layer l , ∂e ∂W ( l ) , are computed using the activations from layer l − 1 ( i.e. , x ( l−1 ) ) and δ ( l ) . Subsequently , δ ( l ) is used to calculate the δ ( l−1 ) . 
When ∂e ∂W ( l ) is computed for every layer , the weights are updated using the error gradients . In the forward pass , the activations of the layer l , x ( l ) , can not be computed until the activations of the previous layers , i.e. , x ( l−1 ) , are computed . In backward pass , ∂e ∂W ( l ) can only be computed once x ( l−1 ) and δ ( l ) have been computed . Moreover , δ ( l ) depends on δ ( l+1 ) . Finally , for a given mini-batch the backward pass can not be started until the forward pass is completed and the error e has been determined . The above dependences ensure that the weights of the layers are updated using the activations and error gradients calculated from the same batch of training data in one iteration of the backpropagation algorithm . Only when the weights are updated is the next batch of training data fed into the network . These dependences limit parallelism when a network is partitioned across multiple accelerators and allow only one accelerator to be active at any point . This results in under-utilization of the accelerators . It is this limitation that pipelining addresses . 3 PIPELINED BACKPROPAGATION . We illustrate our pipelined backpropagation implementation with the L layer network shown in Figure 3 , using conceptual pipeline registers . Two registers are inserted between layers l and l + 1 ; one register for the forward pass and a second for the backward pass . The forward register stores the activations of layer l ( x ( l ) ) . The backward register stores the gradients δ ( l+1 ) of layer l+1 . This defines a 4-stage pipelined backpropagation . The forward pass for layers 1 to l forms forward stage FS1 . The forward pass for layers l + 1 to L form forward stage FS2 . Similarly , the backwards pass for layers l + 1 to L and 1 to l form backward stages BKS1 and BKS2 respectively . The forward and backward stages are executed in a pipelined fashion on 3 accelerators : one for FS1 , one for both FS2 and BKS1 , and one for BKS21 . In cycle 0 , mini-batch 0 is fed to FS1 . The computations of the forward pass are done as in the traditional non-pipelined implementation . In cycle 1 , layer l activations x ( l ) are fed to FS2 and mini-batch 1 is fed to FS1 . In cycle 2 , the error for mini-batch 0 computed in FS2 is directly fed to BKS1 , the activations of layer l x ( l ) are forwarded to FS2 and mini-batch 2 is fed to FS1 . This pipelined execution is illustrated by the space-time diagram in Figure 4 for 5 mini-batches . The figure depicts the mini-batch processed by each accelerator cycles 0 to 6 . At steady state , all the accelerators are active in each cycle of execution . The above pipelining scheme utilizes weights in FS1 that are yet to be updated by the errors calculated by FS2 and BKS1 . At steady state , the activations of a mini-batch in FS1 are calculated using weights that are 2 execution cycles old , or 2 cycles stale . This is reflected in Figure 4 by indicating the weights used by each forward stage and the weights updated by each backward stage . The weights of a forward stage are subscripted by how stale they are ( -ve subscripts ) . Similarly , the weights updated by a backward stage are subscripted by how delayed they are ( +ve subscripts ) . Further , since the updates of the weights by BKS2 requires activations calculated for the same mini-batch in FS1 for all layers in the stage , it is necessary to save these activations until the error gradients with respect to the weights are calculated by BKS2 . 
Only when the weights are updated using the gradients can these activations be discarded . In the general case , we use K pairs of pipeline registers ( each pair consisting of a forward register and a backward register ) inserted between the layers of the network . We describe the placement of the register pairs by the Pipeline Placement Vector , PPV = ( p1 , p2 , ... , pK ) , where pi represents the layer number after which a pipeline register pair is inserted . Such a placement creates ( K + 1 ) forward stages , labeled FS_i , i = 1 , 2 , ... , K + 1 , and ( K + 1 ) backward stages , labeled BKS_i , i = 1 , 2 , ... , K + 1 . Forward stage FS_i and backward stage BKS_{K−i+2} correspond to the same set of layers . Specifically , stage FS_i contains layers p_{i−1} + 1 to p_i , inclusive ( taking p_0 = 0 and p_{K+1} = L ) . We assign each forward stage and each backward stage to an accelerator , with the exception of forward stage FS_{K+1} and backward stage BKS_1 , which are assigned to the same accelerator to reduce weight staleness by an execution cycle . In total 2K + 1 accelerators are used . We quantify weight staleness as follows . A forward stage FS_i and backward stage BKS_{K−i+2} use the same weights , which are 2 ( K − i + 1 ) cycles old . A forward stage FS_i must store the activations of all layers in the stage for all 2 ( K − i + 1 ) cycles , since they are used by the corresponding backward stage BKS_{K−i+2} . Thus , we define the Degree of Staleness as 2 ( K − i + 1 ) , and these saved activations as intermediate activations . For each pair of stages FS_i and BKS_{K−i+2} , let there be N_i weights in their corresponding layers . The layers before the last pipeline register pair always use stale weights . Thus , we define the Percentage of Stale Weights as ( ∑_{i=1}^{K} N_i ) / ( ∑_{i=1}^{K+1} N_i ) . On the one hand , the above pipelined execution allows a potential speedup of 2K + 1 over the non-pipelined implementation , keeping all the accelerators active at steady state . On the other hand , the use of stale weights may prevent training convergence or may result in a model with inferior inference accuracy . Further , it requires an increase in storage for activations . Our goal is to assess the benefit of this pipelined execution and the impact of its downsides .
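As a quick illustration of the two metrics just defined , here is a hypothetical helper ; the placement vector and per-layer weight counts are made up .

```python
def staleness_metrics(ppv, layer_weights):
    """ppv: layers after which register pairs sit, e.g. (2,).
    layer_weights: number of weights in each of the L layers."""
    K = len(ppv)
    bounds = [0, *ppv, len(layer_weights)]
    stage_weights = [sum(layer_weights[bounds[i]:bounds[i + 1]])
                     for i in range(K + 1)]                 # N_1 .. N_{K+1}
    degree = [2 * (K - i + 1) for i in range(1, K + 2)]     # per forward stage
    pct_stale = sum(stage_weights[:K]) / sum(stage_weights)
    return degree, pct_stale

degree, pct = staleness_metrics(ppv=(2,), layer_weights=[100, 200, 300, 400])
print(degree)  # [2, 0]: FS_1 is 2 cycles stale, FS_2 (with BKS_1) is not
print(pct)     # 0.3 = (100 + 200) / 1000
```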
This paper proposes a new pipelined training approach to speed up the training of neural networks. The approach separates the forward and backward passes into multiple stages, caches the activations and gradients between stages, processes the stages simultaneously, and then uses the stored activations to compute the gradients for updating the weights. The approach leads to stale weights and gradients. The authors study the relation between weight staleness and model quality, and show that the quality degradation mainly correlates with the percentage of weights that are stale in the pipeline. The degradation can also be remedied by turning off pipelining in the later training steps, while overall training is still faster than without pipelining.
SP:3b0d0ac062a7bc618741cff17c7d507b0b0a7489
Efficient and Information-Preserving Future Frame Prediction and Beyond
1 INTRODUCTION . Deep learning has enjoyed tremendous success in recent years due to its ability to capture complex dependencies and non-linearities in large datasets ( Krizhevsky et al . ( 2012 ) ; He et al . ( 2016 ) ; Gomez et al . ( 2017 ) ) . Excellent performance has been achieved on a wide range of supervised machine learning tasks , ranging from image classification ( He et al . ( 2016 ) ) and object detection ( Ren et al . ( 2015 ) ) to speech recognition ( Amodei et al . ( 2016 ) ) . Despite the significant breakthroughs in supervised learning , the potential of applying deep architectures to unsupervised learning problems remains largely unexplored . Lately there has been a surge of interest in the task of video prediction , i.e. , predicting future frames of a video sequence ( Wang et al . ( 2017 ; 2018 ) ; Denton et al . ( 2017 ) ; Denton & Fergus ( 2018 ) ; Villegas et al . ( 2017 ) ; Lee et al . ( 2018 ) ) . The significance of video prediction primarily lies in its potential for discovering dynamics in the physical world . The self-supervised nature of video prediction aligns well with how humans learn , without requiring large amounts of labeled data . In addition , videos can provide an abundant and virtually unlimited source of visual information . This allows video prediction models to serve as a generative pre-training strategy for feature representation learning for a variety of downstream supervised tasks . To date , most of the existing models for video prediction employ a hybrid of convolutional and recurrent layers as the underlying architecture ( Wang et al . ( 2017 ) ; Shi et al . ( 2015 ) ; Lotter et al . ( 2016 ) ) . Such an architectural design enables the model to simultaneously exploit the ability of convolutional units to model spatial relationships and the potential of recurrent units to capture temporal dependencies . Despite their prevalence in the literature , classical video prediction architectures suffer from two major limitations . Firstly , in dense prediction tasks such as video prediction , models are required to make pixel-wise predictions , which emphasizes the demand for the preservation of information through layers . Prior works attempt to address this demand through the extensive use of resolution-preserving blocks ( Wang et al . ( 2017 ; 2018 ) ; Kalchbrenner et al . ( 2016 ) ) . Nevertheless , these resolution-preserving blocks are not guaranteed to preserve all the relevant information , and they greatly increase the memory consumption and computational cost of the models . The second drawback of existing video prediction models is that they can not efficiently take advantage of 3D convolutions , as that would make these already cumbersome architectures even larger . 3D convolutions have been shown to be a very effective alternative to RNNs for capturing temporal relations in a variety of video tasks ( Liu et al . ( 2018 ) ; Carreira & Zisserman ( 2017 ) ) , and are thus desirable to exploit . Recently , reversible architectures ( Dinh et al . ( 2014 ) ; Gomez et al . ( 2017 ) ; Jacobsen et al . ( 2018 ) ) have attracted attention due to their light memory demand and their information-preserving property by design . However , the effectiveness of reversible models remains greatly unexplored in the video literature . In this paper , we introduce a novel , conditionally reversible video prediction model , CrevNet , in the sense that when conditioned on previous hidden states , it can exactly reconstruct the input from its predictions .
The contribution of this work can be summarized as follows : • We introduce a two-way autoencoder that uses the forward and backward passes of an invertible network as its encoder and decoder ( Fig 1 ) . The volume-preserving two-way autoencoder not only greatly reduces the memory demand and computational cost , but also enjoys a theoretically guaranteed property of no information loss . The lightweight nature of our model enables us to incorporate 3D convolutions without concern for a memory bottleneck . • We propose the reversible predictive module ( RPM ) , as illustrated in Fig 2b , which extends the reversibility from the spatial to the temporal domain . RPM , together with the two-way autoencoder , provides a conditionally reversible architecture ( CrevNet ) for spatiotemporal learning . CrevNet achieves state-of-the-art results on Moving MNIST , Traffic4cast and KITTI . • We evaluate the effectiveness of features learnt through self-supervision by adapting our CrevNet for object detection on KITTI . Our competitive results indicate the potential of using CrevNet as a generative pre-training strategy to guide downstream CV tasks . 2 APPROACH . We first outline the general pipeline of our method . Our CrevNet consists of two subnetworks : an autoencoder network , with an encoder E and decoder D , and a recurrent predictor P bridging the encoder and decoder . Let xt ∈ R^{w×h×c} represent the tth frame in video x , where w , h , and c denote its width , height , and number of channels . Given x_{0 : t−1} , the model predicts the next frame x̂t as follows : x̂t = D ( P ( E ( xt−1 ) | x_{0 : t−2} ) ) ( 1 ) In the case of 3D convolution , xt ∈ R^{k×w×h×c} denotes the short video clip from t to t + k − 1 instead of a single frame at timestep t , where k is the temporal dimension of the input or output . During the multi-frame generation process , without access to the ground truth frames , the model uses its previous predictions instead . 2.1 THE INVERTIBLE TWO-WAY AUTOENCODER . We propose a bijective two-way autoencoder based on the additive coupling layer introduced in NICE ( Dinh et al . ( 2014 ) ) . We begin by describing the building block of the two-way autoencoder ( Fig 2a ) . Formally , the input x is first reshaped and split channelwise into two groups , denoted x1 and x2 . During the forward pass of each building block , one group , e.g . x1 , passes through several convolutions and activations and is then added to the other group , x2 , like a residual block : x̂2 = x2 + F1 ( x1 ) , x̂1 = x1 + F2 ( x̂2 ) ( 2 ) where F is a composite non-linear operator consisting of convolutions and activations , and x̂1 and x̂2 are the updated x1 and x2 . Note that x1 and x2 can be simply recovered from x̂2 and x̂1 by the inverse computation ( Fig 2c ) as follows : x1 = x̂1 − F2 ( x̂2 ) , x2 = x̂2 − F1 ( x1 ) ( 3 ) Multiple building blocks are stacked , alternating between x1 and x2 , to construct a two-way autoencoder , as shown in Fig 2a . A series of forward and inverse computations builds a one-to-one and onto , i.e . bijective , mapping between the input and the features . Such invertibility ensures that there is no information loss during feature extraction , which is presumably more favorable for video prediction since the model is expected to restore future frames with fine-grained details . To enable the invertibility of the entire autoencoder , our two-way autoencoder uses a bijective downsampling , the pixel shuffle layer ( Shi et al .
( 2016 ) ) , which changes the shape of a feature from ( w , h , c ) to ( w/n , h/n , c × n² ) . The resulting volume-preserving architecture can greatly reduce memory consumption compared with existing resolution-preserving methods . We further argue that for generative tasks , e.g . video prediction , we can effectively utilize a single two-way autoencoder , using its forward and backward passes as the encoder and the decoder , respectively . The predicted frame x̂t is thus given by x̂t = E^{−1} ( P ( E ( xt−1 ) | x_{0 : t−2} ) ) ( 4 ) where E^{−1} is the backward pass of E . Our rationale is that such a setting not only reduces the number of parameters in the model , but also encourages the model to explore the shared feature space between the inputs and the targets . As a result , our method does not require any form of information sharing , e.g . skip connections , between the encoder and decoder . In addition , our two-way autoencoder enjoys a lower computational cost in the multi-frame prediction phase , where the encoding pass is no longer needed and the predictor directly takes the output from the previous timestep as input , as shown in Fig 1 , since E ( E^{−1} ) is an identity mapping . 2.2 REVERSIBLE PREDICTIVE MODULE . In this section , we describe the second part of our video prediction model , the predictor P , which computes dependencies along both the space and time dimensions . Although the traditional stacked-ConvRNN architecture is the most straightforward choice of predictor , we find through experiments that it fails to establish a consistent temporal dependency when equipped with our two-way autoencoder . Therefore , we propose a novel reversible predictive module ( RPM ) , which can be regarded as a recurrent extension of the two-way autoencoder . In the RPM , we substitute all standard convolutions with layers from the ConvRNN family ( e.g . ConvLSTM or spatiotemporal LSTM ) and introduce a soft attention ( weighting gates ) mechanism to form a weighted sum of the two groups instead of the direct addition . The main operations of the RPM used in this paper are given as follows : h¹_t = ConvRNN ( x¹_t , h¹_{t−1} ) ( ConvRNN ) , g_t = φ ( W2 ∗ ReLU ( W1 ∗ h¹_t + b1 ) + b2 ) ( attention module ) , x̂²_t = ( 1 − g_t ) ⊙ x²_t + g_t ⊙ h¹_t ( weighted sum ) , where x¹_t and x²_t denote the two groups of features at timestep t , h¹_t denotes the hidden state of the ConvRNN layer , φ is the sigmoid activation , ∗ is the standard convolution operator and ⊙ is the Hadamard product . The architecture of the reversible predictive module is also shown in Fig 2b . The RPM adopts a similar architectural design to the two-way autoencoder to ensure a pixel-wise alignment between the input and the output , i.e . each position of the features can be traced back to a certain pixel , and is thus compatible with our two-way autoencoder . It also mitigates vanishing gradient issues across stacked layers since the coupling layer has a nice property w.r.t . its Jacobian ( Dinh et al . ( 2014 ) ) . In addition , the attention mechanism in the RPM enables the model to focus on objects in motion instead of the background , which further improves the video prediction quality . Similarly , multiple RPMs alternate between the two groups to form a predictor . We call this predictor conditionally reversible since , given ht−1 , we are able to reconstruct xt−1 from x̂t if there are no numerical errors : xt−1 = E^{−1} ( P^{−1} ( E ( x̂t ) | ht−1 ) ) ( 5 ) where P^{−1} is the inverse computation of the predictor P .
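A minimal PyTorch sketch of one coupling block and its inverse ( Eqs . 2 and 3 ) follows ; the two-convolution form of F and the channel sizes are our own illustrative assumptions , since the exact F is not pinned down here .

```python
import torch
import torch.nn as nn

class CouplingBlock(nn.Module):
    """One additive coupling block: forward implements Eq. (2),
    inverse implements Eq. (3)."""
    def __init__(self, channels):
        super().__init__()
        def make_f():
            return nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
                nn.Conv2d(channels, channels, 3, padding=1))
        self.f1, self.f2 = make_f(), make_f()

    def forward(self, x1, x2):
        x2 = x2 + self.f1(x1)   # x2_hat = x2 + F1(x1)
        x1 = x1 + self.f2(x2)   # x1_hat = x1 + F2(x2_hat)
        return x1, x2

    def inverse(self, x1, x2):
        x1 = x1 - self.f2(x2)   # undo the second addition first
        x2 = x2 - self.f1(x1)
        return x1, x2

block = CouplingBlock(channels=8)
x1, x2 = torch.randn(1, 8, 16, 16), torch.randn(1, 8, 16, 16)
y1, y2 = block(x1, x2)
r1, r2 = block.inverse(y1, y2)
print(torch.allclose(r1, x1, atol=1e-5), torch.allclose(r2, x2, atol=1e-5))
```

For the bijective downsampling , torch.nn.PixelUnshuffle ( n ) realizes the ( w , h , c ) → ( w/n , h/n , c × n² ) reshaping described above , with torch.nn.PixelShuffle ( n ) as its exact inverse .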
We name the video prediction model that uses the two-way autoencoder as its backbone and RPMs as its predictor CrevNet . Another key factor of the RPM is the choice of ConvRNN . In this paper , we mainly employ ConvLSTM ( Shi et al . ( 2015 ) ) and spatiotemporal LSTM ( ST-LSTM , Wang et al . ( 2017 ) ) to enable a fair comparison with baselines .
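To make the gating concrete , here is a structural sketch of the RPM update for one group ; a single convolution with tanh stands in for the ConvLSTM/ST-LSTM cell , and the channel sizes are illustrative , so this is not the paper 's exact module .

```python
import torch
import torch.nn as nn

class RPMSketch(nn.Module):
    def __init__(self, c):
        super().__init__()
        self.rnn = nn.Conv2d(2 * c, c, 3, padding=1)  # stand-in for ConvRNN
        self.w1 = nn.Conv2d(c, c, 3, padding=1)       # attention, first conv
        self.w2 = nn.Conv2d(c, c, 3, padding=1)       # attention, second conv

    def forward(self, x1, x2, h1):
        h1 = torch.tanh(self.rnn(torch.cat([x1, h1], dim=1)))  # h1_t
        g = torch.sigmoid(self.w2(torch.relu(self.w1(h1))))    # gates g_t
        x2 = (1 - g) * x2 + g * h1                             # weighted sum
        return x1, x2, h1                                      # x1 passes through

c = 8
rpm = RPMSketch(c)
x1, x2 = torch.randn(2, c, 16, 16), torch.randn(2, c, 16, 16)
h1 = torch.zeros(2, c, 16, 16)
x1, x2, h1 = rpm(x1, x2, h1)
```

Note that , given h¹_t , the update on x² can be undone as x² = ( x̂² − g ⊙ h¹ ) / ( 1 − g ) whenever the gates stay strictly below 1 , which is one way to see why the module is conditionally reversible .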
This paper introduces the Conditionally Reversible Network (CrevNet), which consists of an invertible two-way autoencoder and a reversible predictive module (RPM). The two-way autoencoder is an invertible network that preserves volume with no information loss while reducing memory consumption by using bijective downsampling. The RPM is a recurrent extension of the two-way autoencoder that provides reversibility in the temporal domain. Experiments on Moving MNIST, Traffic4cast, KITTI, and 2D object detection on KITTI show improvements compared to other state-of-the-art models.
SP:00b48a43a5037915e21ddb2f0941cdd26a69d44d
In this paper, the authors propose a new method of self-supervised feature learning from videos based on future frame prediction. The idea is similar to BERT-like pre-training in NLP, but for videos the computational and memory costs could be very large. To solve this problem efficiently, the authors adopt several existing techniques such as the pixel shuffle layer, 3D-CNNs, ConvRNNs and an attention module to efficiently and effectively capture video information. Experiments on several datasets are conducted to show the effectiveness of the proposed method.
Global Concavity and Optimization in a Class of Dynamic Discrete Choice Models
Discrete choice models with unobserved heterogeneity are commonly used Econometric models for dynamic Economic behavior which have been adopted in practice to predict the behavior of individuals and firms , from schooling and job choices to strategic decisions in market competition . These models feature optimizing agents who choose among a finite set of options in a sequence of periods and receive choice-specific payoffs that depend both on variables that are observed by the agent and recorded in the data and on variables that are only observed by the agent but not recorded in the data . Existing work in Econometrics assumes that optimizing agents are fully rational and requires finding a functional fixed point to find the optimal policy . We show that in an important class of discrete choice models the value function is globally concave in the policy . This means that simple algorithms that do not require fixed point computation , such as the policy gradient algorithm , globally converge to the optimal policy . This finding can both be used to relax behavioral assumptions regarding the optimizing agents and to facilitate Econometric analysis of dynamic behavior . In particular , we demonstrate significant computational advantages of using a simple implementation of the policy gradient algorithm over existing “ nested fixed point ” algorithms used in Econometrics . 1 INTRODUCTION . The dynamic discrete choice model with unobserved heterogeneity is , arguably , the most popular model currently used for Econometric analysis of the dynamic behavior of individuals and firms in Economics and Marketing ( e.g . see surveys in Eckstein and Wolpin ( 1989 ) , Dubé et al . ( 2002 ) , Abbring and Heckman ( 2007 ) , Aguirregabiria and Mira ( 2010 ) ) . Even the most recent Econometric papers on single-agent dynamic decision-making use this setup to showcase their results ( e.g . Arcidiacono and Miller , 2011 ; Aguirregabiria and Magesan , 2016 ; Müller and Reich , 2018 ) . In this model , pioneered in Rust ( 1987 ) , the agent chooses between a discrete set of options ( typically 2 ) in a sequence of discrete time periods to maximize the expected cumulative discounted payoff . The reward in each period is a function of the state variable , which follows a Markov process and is observed in the data , and also a function of an idiosyncratic random variable that is only observed by the agent but is not reported in the data . The unobserved idiosyncratic component is designed to reflect the heterogeneity of agents , who may value the same choice differently . Despite significant empirical success in the prediction of dynamic economic behavior under uncertainty , dynamic discrete choice models frequently lead to seemingly unrealistic optimization problems that economic agents need to solve . For instance , in Hendel and Nevo ( 2006 ) , consumers buying laundry detergent in the supermarket face an elaborate functional fixed point problem with constraints , which is computationally intensive , especially in continuous state spaces . The common approach to this functional fixed point problem is value function iteration ( see Section 2.3 for more discussion ) . At the same time , the rich literature on Markov Decision Processes ( cf . Sutton and Barto , 2018 ) has developed several effective optimization algorithms , such as the policy gradient algorithm and its variants , that do not require solving for a functional fixed point .
However , the drawback of policy gradient is that the value function of a generic Markov Decision problem is not concave in the policy . This means that gradient-based algorithms have no guarantees of global convergence for a generic MDP . For some specific and simple models where closed-form characterizations exist , convergence results have been shown by model-specific techniques that are hard to generalize ( e.g . Fazel et al. , 2018 , for the linear quadratic regulator ) . In this paper our main goal is to resolve the dichotomy in the empirical social science literature whereby the rationality of consumers requires them to be able to solve the functional fixed point problem , which is computationally intensive . Our main theoretical contribution is the proof that , in the class of dynamic discrete choice models with unobserved heterogeneity , the value function of the optimizing agent is globally concave in the policy . This implies that a large set of policy gradient algorithms , which place only a modest computational requirement on the optimizing agents , have a fast convergence guarantee in our considered class of dynamic discrete choice models . The importance of this result is twofold . First , it gives a promise that seemingly complicated dynamic optimization problems faced by consumers can be solved by relatively simple algorithms that do not require fixed point computation or functional optimization . This means that policy gradient-style methods have an important behavioral interpretation . As a result , consumer behavior following policy gradient can serve as a behavioral assumption for estimating consumer preferences from data , one that is more natural for consumer choice settings than other assumptions that have been used in the past for estimation of preferences ( e.g . ε-regret learning in Nekipelov et al . ( 2015 ) ) . Second , and more importantly , our result showing fast convergence of the policy gradient algorithm makes it an attractive alternative to the search for the functional fixed point in this class of problems . While the goal of the Econometric analysis of data from dynamically optimizing consumers is to estimate consumer preferences by maximizing the likelihood function , this requires sequentially solving the dynamic optimization problem for each value of the utility parameters along the parameter search path . Existing work in Economics prescribes the use of fixed point iterations for the value function to solve the dynamic optimization problem ( see Rust ( 1987 ) , Aguirregabiria and Mira ( 2007 ) ) . The replacement of the fixed point iterations with the policy gradient method significantly speeds up the maximization of the likelihood function . This makes the policy gradient algorithm our recommended approach for use in Econometric analysis , and establishes the practical relevance of many newer reinforcement learning algorithms for the social sciences from a behavioral perspective . 2 PRELIMINARIES . In this section , we introduce the concepts of the Markov decision process ( MDP ) with choice-specific payoff heterogeneity , the conditional choice probability ( CCP ) representation and the policy gradient algorithm . 2.1 MARKOV DECISION PROCESS .
A discrete-time Markov decision process ( MDP ) with choice-specific heterogeneity is defined as a 6-tuple 〈S , A , r , ε , P , β〉 , where S is a compact convex state space with diam ( S ) ≤ S̄ < ∞ , A is the set of actions , r : S × A → R+ is the reward function , such that r ( s , a ) is the immediate non-negative reward for the state-action pair ( s , a ) , ε are independent random variables , P is a Markov transition model where p ( s′|s , a ) defines the transition density between state s and s′ under action a , and β ∈ [ 0 , 1 ) is the discount factor for future payoffs . We assume that the random variables ε are observed by the optimizing agent but not recorded in the data . These variables reflect idiosyncratic differences in the preferences of different optimizing agents over choices . In the following discussion we refer to these variables as “ random choice-specific shocks . '' In each period t = 1 , 2 , . . . , ∞ , nature realizes the current state st based on the Markov transition P given the state-action pair ( st−1 , at−1 ) in the previous period t − 1 , and the choice-specific shocks ε_t = { ε_{t , a} }_{a∈A} are drawn i.i.d . from the distribution of ε . The optimizing agent chooses an action a ∈ A , and her current period payoff is the sum of the immediate reward and the choice-specific shock , i.e. , r ( s , a ) + ε_{t , a} . Given initial state s1 , the agent ’ s long-term payoff is E_{ε_1 , s_2 , ε_2 , ... } [ ∑_{t=1}^{∞} β^{t−1} ( r ( s_t , a_t ) + ε_{t , a_t} ) ] . This expression makes it clear that the random shocks play a crucial role in this model by allowing us to define the ex ante value function of the optimizing agent , which reflects the expected reward from the agent ’ s choices before the agent observes the realization of ε_t . When the distribution of shocks is sufficiently smooth ( differentiable ) , the corresponding ex ante value function is smooth ( differentiable ) as well . This allows us to characterize the impact of the agent ’ s policy on the expected value by considering functional derivatives of the value function with respect to the policy . In the remainder of the paper , we rely on the following assumptions . Assumption 2.1 . The state space S is compact in R and the action space A is binary , i.e. , A = { 0 , 1 } . Assumption 2.2 . For all states s , the immediate reward r ( s , 0 ) for the state-action pair ( s , 0 ) is zero , i.e. , r ( s , 0 ) = 0 , and the immediate reward r ( s , 1 ) for the state-action pair ( s , 1 ) is bounded between [ Rmin , Rmax ] . Assumption 2.3 . Choice-specific shocks ε are Type I Extreme Value random variables with location parameter 0 ( cf . Hotz and Miller , 1993 ) which are independent over choices and time periods . Assumptions 2.1 , 2.2 and 2.3 are present in most of the papers on dynamic decision-making in economics , marketing and finance ( e.g . Dubé et al. , 2002 ; Aguirregabiria and Mira , 2010 ; Arcidiacono and Miller , 2011 ; Aguirregabiria and Magesan , 2016 ; Müller and Reich , 2018 ) . The policy and the value function . A stationary Markov policy is a function σ : S × R^A → A which maps the current state s and the choice-specific shocks ε to an action . In our further discussion we will show that there is a natural , more restricted definition of the set of all feasible policies in this model . Given any stationary Markov policy σ , the value function Vσ : S → R is a mapping from the initial state to the long-term payoff under policy σ , i.e. , Vσ ( s_1 ) = E_{ε_1 , s_2 , ε_2 , ... } [ ∑_{t=1}^{∞} β^{t−1} ( r ( s_t , σ ( s_t , ε_t ) ) + ε_{t , σ ( s_t , ε_t )} ) ] .
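To make the setup concrete , here is a small simulation of one agent in a toy instance of this MDP ; the state grid , reward , transition kernel and policy are invented for illustration , and NumPy 's Gumbel sampler draws exactly the Type I Extreme Value shocks of Assumption 2.3 .

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 0.95

def reward(s, a):
    return 0.5 + 0.4 * s if a == 1 else 0.0   # r(s, 0) = 0 (Assumption 2.2)

def next_state(s, a):
    drift = 0.1 if a == 1 else -0.1           # a = 1 shifts the state upward
    return float(np.clip(s + drift + rng.normal(0, 0.05), 0.0, 1.0))

def simulate(policy, s1, T=200):
    s, payoff = s1, 0.0
    for t in range(T):
        eps = rng.gumbel(loc=0.0, scale=1.0, size=2)   # Type I EV shocks
        a = policy(s, eps)
        payoff += beta ** t * (reward(s, a) + eps[a])  # discounted payoff
        s = next_state(s, a)
    return payoff

# A myopic agent: maximize the current-period payoff only.
myopic = lambda s, eps: int(reward(s, 1) + eps[1] > reward(s, 0) + eps[0])
print(simulate(myopic, s1=0.5))
```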
Since the reward is non-negative and bounded , and the discount factor β ∈ [ 0 , 1 ) , the value function Vσ is well-defined and the optimal policy σ̃ ( i.e. , Vσ̃ ( s ) ≥ Vσ ( s ) for all policies σ and states s ) exists . Furthermore , the following Bellman equation holds for all policies σ : Vσ ( s ) = E_ε [ r ( s , σ ( s , ε ) ) + ε_{σ ( s , ε )} + β E_{s′} [ Vσ ( s′ ) | s , σ ( s , ε ) ] ] ( 1 ) 2.2 CONDITIONAL CHOICE PROBABILITY REPRESENTATION . Based on the Bellman equation ( 1 ) evaluated at the optimal policy , the optimal Conditional Choice Probability δ̃ ( a|s ) ( i.e. , the probability of choosing action a given state s under the optimal policy σ̃ ) can be defined as δ̃ ( a|s ) = E_ε [ 1 { r ( s , a ) + ε_a + β E_{s′} [ Vσ̃ ( s′ ) | s , a ] ≥ r ( s , a′ ) + ε_{a′} + β E_{s′} [ Vσ̃ ( s′ ) | s , a′ ] , ∀a′ } ] . The optimal policy σ̃ can , therefore , be equivalently characterized by the threshold function π̃ ( s , a ) = r ( s , a ) + β E_{s′} [ Vσ̃ ( s′ ) | s , a ] , such that the optimizing agent chooses the action a† which maximizes the sum of the threshold and the choice-specific shock , i.e. , a† = argmax_a { π̃ ( s , a ) + ε_a } . Similarly , all non-optimal policies can be characterized by the corresponding threshold functions , denoted π . Under Assumption 2.3 the conditional choice probability δ can be explicitly expressed in terms of the respective threshold π as ( cf . Rust , 1996 ) δ ( a|s ) = exp ( π ( s , a ) ) / ∑_{a′∈A} exp ( π ( s , a′ ) ) . We note that this expression induces a one-to-one mapping from the thresholds to the conditional choice probabilities . Therefore , all policies are fully characterized by their respective conditional choice probabilities . For notational simplicity , since we consider the binary action space A = { 0 , 1 } and the reward r ( s , 0 ) is normalized to 0 , we denote the immediate reward r ( s , 1 ) as r ( s ) , the conditional choice probability δ ( 0|s ) as δ ( s ) , and π ( s , 1 ) as π ( s ) . In the subsequent discussion , given that the characterization of a policy σ via its threshold is equivalent to its characterization by the conditional choice probability δ , we interchangeably refer to δ as the “ policy . '' Then we rewrite the Bellman equation for a given policy δ as Vδ ( s ) = ( 1 − δ ( s ) ) r ( s ) − δ ( s ) log ( δ ( s ) ) − ( 1 − δ ( s ) ) log ( 1 − δ ( s ) ) + β E_{ε , s′} [ Vδ ( s′ ) | s ] ( 2 ) Now we make two additional assumptions that are compatible with standard assumptions in the Econometrics literature . Assumption 2.4 . For all states s ∈ S , the conditional distribution of the next period Markov state p ( ·|s , 1 ) first-order stochastically dominates the distribution p ( ·|s , 0 ) , i.e. , for all ŝ ∈ S , Pr_{s′} [ s′ ≤ ŝ | s , 1 ] ≤ Pr_{s′} [ s′ ≤ ŝ | s , 0 ] . Assumption 2.5 . Under the optimal policy δ̃ , the value function is non-decreasing in the state , i.e. , Vδ̃ ( s ) ≤ Vδ̃ ( s′ ) for all s , s′ ∈ S s.t . s < s′ . Consider the myopic policy δ̄ ( s ) = ( exp ( r ( s ) ) + 1 )^{−1} , which uses the threshold π̄ ( s ) = r ( s ) . This policy corresponds to the agent optimizing the immediate reward without considering how current actions impact future rewards . Under Assumption 2.4 and Assumption 2.5 , the threshold of the optimal policy is at least the threshold of the myopic policy , i.e. , π̃ ( s ) ≥ π̄ ( s ) . Hence , Lemma 2.1 holds . Lemma 2.1 . The optimal policy δ̃ chooses action 0 with weakly lower probability than the myopic policy δ̄ in all states s ∈ S , i.e. , δ̃ ( s ) ≤ δ̄ ( s ) .
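Since β < 1 makes Eq . ( 2 ) a contraction in Vδ , it can be evaluated for any fixed policy δ by iterating it to its fixed point . The sketch below does this on a discretized state space ; the reward and the transition matrices are illustrative stand-ins and not taken from the paper .

```python
import numpy as np

n = 11
s = np.linspace(0.0, 1.0, n)    # discretized compact state space
r = 0.5 + 0.4 * s               # r(s) = r(s, 1); r(s, 0) = 0
beta = 0.95

def shift_kernel(drift):
    # Deterministic row-stochastic kernel that drifts the state index.
    P = np.zeros((n, n))
    for i in range(n):
        P[i, np.clip(i + drift, 0, n - 1)] = 1.0
    return P

P0, P1 = shift_kernel(-1), shift_kernel(+1)   # p(.|s,1) dominates p(.|s,0)

def evaluate(delta, iters=2000):
    # Iterate Eq. (2); delta(s) is the probability of choosing action 0.
    V = np.zeros(n)
    ent = -delta * np.log(delta) - (1 - delta) * np.log(1 - delta)
    for _ in range(iters):
        EV = delta * (P0 @ V) + (1 - delta) * (P1 @ V)
        V = (1 - delta) * r + ent + beta * EV
    return V

delta_myopic = 1.0 / (np.exp(r) + 1.0)        # threshold pi(s) = r(s)
print(evaluate(delta_myopic))
```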
This paper considers reinforcement learning for discrete choice models with unobserved heterogeneity, which is useful for analyzing dynamic Economic behavior. Random choice-specific shocks in the reward are accommodated, which are only observed by the agent but not recorded in the data. Existing optimization approaches rely on finding a functional fixed point, which is computationally expensive. The main contribution of the paper lies in formulating discrete choice models as an MDP and showing that the value function is concave with respect to the policy (represented by the conditional choice probability), so that the policy gradient algorithm provably converges to the global optimum. Conditions on the parameters for global concavity are identified and rates of convergence are established. Finally, significant computational advantages were demonstrated on the data from Rust (1987), compared with the "nested fixed point" algorithms that are commonly used in Econometrics.
SP:5a0e35b51548e82135b965e7b692e8a0af1289f8
This paper deals with a certain class of models, known as discrete choice models. These models are popular in econometrics, and aim at modelling the complex behavioural patterns of individuals or firms. Entities in these models are typically modelled as rational agents that behave optimally in pursuit of maximizing a certain objective function, such as the expected cumulative discounted payoff over a fixed period.
On the implicit minimization of alternative loss functions when training deep networks
1 INTRODUCTION . In the last few years , deep learning has succeeded in establishing state of the art performances in a wide variety of tasks in fields like computer vision , natural language processing and bioinformatics ( LeCun et al. , 2015 ) . Understanding when and how these networks generalize better is important to keep improving their performance . Many works , starting mainly from Neyshabur et al . ( 2015 ) , Zhang et al . ( 2017 ) and Keskar et al . ( 2017 ) , hint at a rich interplay between regularization and the optimization process of learning the weights of the network . The idea is that a form of inductive bias can be realized implicitly by the optimization algorithm . In this paper , we investigate the implicit bias induced by using different learning rates and batch sizes when minimizing the cross-entropy loss with SGD . A common theory is that more noise in the gradient biases the solution toward flatter minima ( Keskar et al. , 2017 ) . We draw a connection between a particular measure of flatness and margin-based loss functions ( the concept of margin has been linked to the generalization of deep networks ; see for example Bartlett et al . ( 2017 ) , Poggio et al . ( 2019 ) and Jiang et al . ( 2019 ) ) . Our contributions are the following : 1 . A new loss function ( Gcdf loss ) that can be interpreted as a measure of flatness for the 0−1 loss ( for the top layer ’ s weights of the network ) . 2 . A methodology consisting in tracking alternative loss functions during training and comparing them at a given training loss value , to try to uncover implicit biases of the optimization algorithm ; we apply it to varying the learning rate and batch size in SGD . 3 . Experimental results on CIFAR10 and MNIST showing that larger learning rates and smaller batch sizes are better at implicitly minimizing the cross-entropy loss with a larger temperature parameter , the hinge loss with a larger margin parameter and the Gcdf loss with a larger standard deviation parameter . Conversely , smaller learning rates and larger batch sizes are better at implicitly minimizing the cross-entropy loss , the hinge loss and the Gcdf loss with smaller values of their respective parameters . We do not propose to modify optimization algorithms to try to improve large batch training ; we instead try to offer new insights on how the solutions it produces differ from the solutions resulting from small batch training ( or larger learning rates ) . The hope is to eventually succeed at incorporating the inductive bias in the objective being optimized instead of relying on the implicit bias of the optimization algorithm . It is not yet clear to what extent this goal can be realized , and by what means ( see for example Arora et al . ( 2019 ) on the difficulty of capturing the implicit bias of gradient descent with norms ) , and we certainly do not claim to be reaching it . We offer only a partial understanding of some of the differences between large batch training ( or using small learning rates ) and small batch training ( or using large learning rates ) through the behavior of alternative loss functions during training . 2 RELATED WORK . It was observed by Zhang et al . ( 2017 ) that deep networks can often obtain good results without explicit regularization even if they have the capacity to essentially memorize the training set . They hypothesized that SGD is probably acting as an implicit regularizer . Also , the earlier work of Neyshabur et al . ( 2015 ) brought forward the idea that optimization might be implicitly biasing the trajectory toward low-norm models .
Since then, many works have investigated the idea of implicit regularization for neural networks (linear or non-linear). For example, Arora et al. (2019) studied how gradient descent finds low-rank solutions for matrix completion with deep linear networks. Soudry et al. (2018) showed that gradient descent converges to the max-margin solution for logistic regression, and Lyu & Li (2019) provide an extension to deep non-linear homogeneous networks. In contrast to these works, we study empirically how the optimization algorithm implicitly minimizes alternative loss functions during the course of training.

A highly studied source of implicit bias from the optimization algorithm is the ability to reach flatter minima. In Keskar et al. (2017), the worst loss that can be obtained when slightly perturbing the parameters is considered as a measure of sharpness, while Neyshabur et al. (2017) considered the expected loss under Gaussian noise in the weights. We consider a measure of sharpness (Section 3) similar to Neyshabur et al. (2017), and we apply it to the 0-1 loss directly instead of the usual surrogate cross-entropy loss. The batch size and the learning rate are two ways to control the noise in the gradient, which might influence the sharpness of the resulting solution (see for example Smith & Le (2018), Smith et al. (2018)). In conjunction with increasing the learning rate, different strategies like training for more epochs (Hoffer et al., 2017), "warm up" (Goyal et al., 2017) and using a separate learning rate for each layer based on the norm of the weights (You et al., 2017) have been proposed to improve the performance of large batch training. Instead of offering a new modification to the optimization algorithm, we try here to capture the inductive bias in loss functions that are computationally efficient to use, in the hope of eventually simplifying the design of optimization algorithms.

3 GCDF LOSS. This section introduces a loss function based on the idea of flat minima. It is defined as a measure of sharpness for the 0-1 loss. The main motivation for introducing this loss function is that it is simultaneously a measure of sharpness and a margin-based loss function, establishing a clear relationship between these ideas. Furthermore, as opposed to the cross-entropy loss and the hinge loss, it is bounded and non-convex (see Section 4.1 for a visual comparison). It thus adds diversity to the loss functions investigated in this paper. We start with the binary linear case in 3.1 and then extend to the multi-class case in 3.2. For deep networks, this loss will be applied on the top layer. A possible extension to our work is to consider loss functions applied on multiple layers, perhaps in a similar fashion to Elsayed et al. (2018). ²See for example Arora et al. (2019) about the difficulty of capturing the implicit bias of gradient descent with norms.

3.1 BINARY LINEAR CASE. Let $f(w, x) = w^T x + b$, where $w, x \in \mathbb{R}^n$ and $b \in \mathbb{R}$. Consider the 0-1 loss for a binary linear classifier: $L(f(w, x), y) = \mathbf{1}[y(w^T x + b) < 0]$, where $\mathbf{1}$ is the indicator function. Note that we will write all the loss functions for single examples $(x, y)$ throughout the paper, and it will be understood that the training loss is obtained by taking the mean over the training set. We smooth (or "robustify") the 0-1 loss by considering its expectation under Gaussian noise in the weights.
This loss function will be denoted by $L_\sigma(w, x, y)$ when the standard deviation is $\sigma$. Consider the random variable $\epsilon \sim N(0, \sigma^2 I)$, where $N(0, \sigma^2 I)$ is a zero-mean isotropic Gaussian distribution with covariance matrix $\sigma^2 I$. Since $(w + \epsilon)^T x + b$ is distributed as a Gaussian with mean $w^T x + b$ and variance $\sigma^2 \|x\|^2$, we get that $y((w + \epsilon)^T x + b)$ is distributed as a Gaussian with mean $y(w^T x + b)$ and the same variance. Therefore,

$$L_\sigma(w, x, y) = \mathbb{E}_\epsilon\, L(f(w + \epsilon, x), y) = \Phi\left(\frac{-y(w^T x + b)}{\sigma \|x\|}\right), \quad (1)$$

where $\Phi$ is the Gaussian cumulative distribution function (Gcdf) given by

$$\Phi(z) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{z} \exp\left(\frac{-t^2}{2}\right) dt. \quad (2)$$

If we assume that $x$ is normalized, the loss $L_\sigma$ is a (decreasing) function of $y f(w, x)$ (it is a margin-based loss function in the terminology of Lin (2004), for example).

3.2 MULTI-CLASS CASE. Suppose the number of classes is $m$ and now consider the affine mapping $f(W, x) = Wx + b$ with $x \in \mathbb{R}^n$, $b \in \mathbb{R}^m$ and $W \in \mathbb{R}^{m \times n}$. For some fixed $x \in \mathbb{R}^n$ and denoting by $w_j$ the $j$th row of $W$, let $s_j := w_j^T x + b_j$ be the corresponding score for class $j$. Finally, let $s_j(\epsilon_j) := (w_j + \epsilon_j)^T x + b_j$ be the perturbed score, with $\epsilon_j$ an isotropic Gaussian random variable with mean 0 and covariance matrix $\sigma^2 I$. For a given class $y$, we get

$$P\{s_y(\epsilon_y) \neq \max_j s_j(\epsilon_j)\} \leq \sum_{j \neq y} P\{s_j(\epsilon_j) > s_y(\epsilon_y)\} \quad (3)$$
$$= \sum_{j \neq y} P\{s_j - s_y > (\epsilon_y - \epsilon_j)^T x\} \quad (4)$$
$$= \sum_{j \neq y} \Phi\left(\frac{s_j - s_y}{\|x\| \sigma \sqrt{2}}\right), \quad (5)$$

since $(\epsilon_y - \epsilon_j)^T x$ follows a zero-mean Gaussian distribution with variance $2\sigma^2 \|x\|^2$. We define

$$L_\sigma(W, x, y) := \sum_{j \neq y} \Phi\left(\frac{s_j - s_y}{\|x\| \sigma \sqrt{2}}\right). \quad (6)$$

This is an upper bound on the probability that the classifier does not predict $y$ under Gaussian noise on $W$. We will experiment with this Gcdf loss function on top of feedforward neural networks (and also with other loss functions) in the following sections. In all the experiments, we use normalization to enforce $\|x\| = 1$ (this $x$ now represents the feature vector for the top layer).

4 IMPLICIT MINIMIZATION OF DIFFERENT LOSS FUNCTIONS. In this section, we track different loss functions while training deep neural networks with the cross-entropy loss, varying the learning rates and batch sizes in SGD with momentum. The results in the main text are obtained while training on CIFAR10; results on MNIST are given in Appendix A. The following loss functions are considered: cross-entropy with different values of temperature, hinge loss with different margin parameters, and the Gcdf loss with different standard deviation parameters. For the cross-entropy loss, the temperature $T$ divides the scores $s_j$ before the softmax function, so the probability for class $j$ is given by

$$\frac{\exp(s_j / T)}{\sum_k \exp(s_k / T)}. \quad (7)$$

Remark that the positive homogeneity of the ReLU implies that normalizing each layer of the network is equivalent to taking $T$ equal to the product of the norms of the layers. The cross-entropy loss after normalization at the end of training is investigated in Liao et al. (2018); in contrast, we consider here multiple values for $T$ and investigate the behavior during training. Given the probabilities for each class, the cross-entropy loss (on a single example) is the negative log-probability of the correct class. For its part, the multi-class hinge loss with margin parameter $\gamma$ (on a single example) is given by

$$\sum_{j \neq y} \max\{0, \gamma + (s_j - s_y)\}. \quad (8)$$

The Gcdf loss with standard deviation parameter $\sigma$ has been described and motivated in the previous section.
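To make the tracked losses concrete, here is a minimal PyTorch sketch (our own illustration, not the authors' released code) of the temperature-scaled cross-entropy of Eq. (7), the multi-class hinge loss of Eq. (8), and the multi-class Gcdf loss of Eq. (6), assuming the top-layer features have been normalized so that $\|x\| = 1$; `scores` holds the class scores $s_j$ and `y` the integer labels:

```python
import torch
import torch.nn.functional as F

def gcdf_loss(scores, y, sigma):
    """Multi-class Gcdf loss of Eq. (6), assuming ||x|| = 1."""
    s_y = scores.gather(1, y.unsqueeze(1))            # (batch, 1) correct-class scores
    z = (scores - s_y) / (sigma * 2.0 ** 0.5)         # (s_j - s_y) / (sigma * sqrt(2))
    phi = 0.5 * (1.0 + torch.erf(z / 2.0 ** 0.5))     # standard Gaussian CDF, Eq. (2)
    mask = F.one_hot(y, scores.size(1)).bool()        # exclude the j = y term
    return phi.masked_fill(mask, 0.0).sum(dim=1).mean()

def temperature_ce_loss(scores, y, T):
    """Cross-entropy with scores divided by temperature T, Eq. (7)."""
    return F.cross_entropy(scores / T, y)

def multiclass_hinge_loss(scores, y, gamma):
    """Multi-class hinge loss with margin gamma, Eq. (8)."""
    s_y = scores.gather(1, y.unsqueeze(1))
    margins = torch.clamp(gamma + scores - s_y, min=0.0)
    mask = F.one_hot(y, scores.size(1)).bool()
    return margins.masked_fill(mask, 0.0).sum(dim=1).mean()
```

Tracking then amounts to evaluating these functions on the training set at regular intervals during cross-entropy training, for several values of $T$, $\gamma$ and $\sigma$.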
This paper aims to show that minimizing the cross-entropy loss simultaneously minimizes the hinge loss with different margins, the cross-entropy loss with different temperatures, and a newly introduced Gcdf loss with different standard deviations. The main contribution is the new Gcdf loss, based on Gaussian-perturbed parameters. However, this loss is only derived for linear models; for deep models, the authors suggest measuring it only on the top layer of the network.
This paper takes a step towards understanding the implicit bias of optimization algorithms in deep learning. The authors consider alternative loss functions for deep networks: (1) the temperature-scaled cross-entropy loss with different values of the temperature; (2) the hinge loss with different values of the margin parameter; (3) the Gcdf loss with different values of the standard deviation parameter. The paper introduces the Gcdf loss, which is derived as a smoothing of the 0-1 loss under noise in the parameters of the linear output layer. The authors propose to use the alternative losses as measures of margin and sharpness associated with a solution found by an optimization algorithm. The experiments show how SGD in different learning scenarios (low/high learning rate and small/large batch) performs implicit minimization of the alternative loss functions with different parameters. Specifically, using larger learning rates/smaller batch sizes is shown to implicitly minimize the losses corresponding to higher values of the temperature/margin/standard deviation. The results provide insights about the margins and sharpness of solutions found by different modes of SGD.
The Curious Case of Neural Text Degeneration
Context: In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.

Beam Search, b=32: "The study, published in the Proceedings of the National Academy of Sciences of the United States of America (PNAS), was conducted by researchers from the Universidad Nacional Autónoma de México (UNAM) and the Universidad Nacional Autónoma de México (UNAM/Universidad Nacional Autónoma de México/Universidad Nacional Autónoma de México/Universidad Nacional Autónoma de México/Universidad Nacional Autónoma de ..."

Pure Sampling: They were cattle called Bolivian Cavalleros; they live in a remote desert uninterrupted by town, and they speak huge, beautiful, paradisiacal Bolivian linguistic thing. They say, 'Lunch, marge.' They don't tell what the lunch is," director Professor Chuperas Omwell told Sky News. "They've only been talking to scientists, like we're being interviewed by TV reporters. We don't even stick around to be interviewed by TV reporters. Maybe that's how they figured out that they're cosplaying as the Bolivian Cavalleros."

Figure 1: Even with substantial human context and the powerful GPT-2 Large language model, Beam Search (size 32) leads to degenerate repetition (highlighted in blue) while pure sampling leads to incoherent gibberish (highlighted in red). When b ≥ 64, both GPT-2 Large and XL (774M and 1542M parameters, respectively) prefer to stop generating immediately after the given context.

1 INTRODUCTION. On February 14th 2019, OpenAI surprised the scientific community with an impressively high-quality article about Ovid's Unicorn, written by GPT-2.¹ Notably, the top-quality generations obtained from the model rely on randomness in the decoding method, in particular through top-k sampling, which samples the next word from the top k most probable choices (Fan et al., 2018; Holtzman et al., 2018; Radford et al., 2019), instead of aiming to decode text that maximizes likelihood. ¹https://openai.com/blog/better-language-models/

In fact, decoding strategies that optimize for output with high probability, such as beam search, lead to text that is incredibly degenerate, even when using state-of-the-art models such as GPT-2 Large, as shown in Figure 1. This may seem counter-intuitive, as one would expect that good models would assign higher probability to more human-like, grammatical text. Indeed, language models do generally assign high scores to well-formed text, yet the highest scores for longer texts are often generic, repetitive, and awkward. Figure 2 exposes how different the distributions of probabilities assigned to beam search decoded text and to naturally occurring text really are. Perhaps equally surprising is the right side of Figure 1, which shows that pure sampling (sampling directly from the probabilities predicted by the model) results in text that is incoherent and almost unrelated to the context. Why is text produced by pure sampling so degenerate? In this work we show that the "unreliable tail" is to blame. This unreliable tail is composed of tens of thousands of candidate tokens with relatively low probability that are over-represented in the aggregate. To overcome these issues we introduce Nucleus Sampling (§3.1).
The key intuition of Nucleus Sampling is that the vast majority of the probability mass at each time step is concentrated in the nucleus, a small subset of the vocabulary that tends to range between one and a thousand candidates. Instead of relying on a fixed top-k, or using a temperature parameter to control the shape of the distribution without sufficiently suppressing the unreliable tail, we propose sampling from the top-p portion of the probability mass, expanding and contracting the candidate pool dynamically.

In order to compare current methods to Nucleus Sampling, we compare various distributional properties of generated text to the reference distribution, such as the likelihood of veering into repetition and the perplexity of generated text. The latter reveals that text generated by maximization or top-k sampling is too probable, indicating a lack of diversity and a divergence in vocabulary usage from the human distribution. On the other hand, pure sampling produces text that is significantly less likely than the gold, corresponding to lower generation quality. Vocabulary usage and Self-BLEU (Zhu et al., 2018) statistics reveal that high values of k are needed to make top-k sampling match human statistics. Yet, generations based on high values of k often have high variance in likelihood, hinting at qualitatively observable incoherency issues. Nucleus Sampling can easily match reference perplexity through tuning the value of p, avoiding the incoherence caused by setting k high enough to match distributional statistics. Finally, we perform Human Unified with Statistical Evaluation (HUSE; Hashimoto et al., 2019) to jointly assess the overall quality and diversity of the decoding strategies, which cannot be captured using either human or automatic evaluation alone. The HUSE evaluation demonstrates that Nucleus Sampling is the best overall decoding strategy. We include generated examples for qualitative analysis; see Figure 3 for a representative example, and further examples in the appendix.² ²Code and all generations are available at https://github.com/ari-holtzman/degen

2 BACKGROUND. 2.1 TEXT GENERATION DECODING STRATEGIES. A number of recent works have alluded to the disadvantages of generation by maximization, which tends to produce output with high grammaticality but low diversity (Kulikov et al., 2019; Holtzman et al., 2018; Fan et al., 2018). Generative Adversarial Networks (GANs) have been a prominent research direction (Yu et al., 2017; Xu et al., 2018), but recent work has shown that when quality and diversity are considered jointly, GAN-generated text fails to outperform generations from language models (Caccia et al., 2018; Tevet et al., 2019; Semeniuta et al., 2018). Work on neural dialog systems has proposed methods for diverse beam search, using a task-specific diversity scoring function or constraining beam hypotheses to be sufficiently different (Li et al., 2016a; Vijayakumar et al., 2018; Kulikov et al., 2019; Pal et al., 2006). While such utility functions encourage desirable properties in generations, they do not remove the need to choose an appropriate decoding strategy, and we believe that Nucleus Sampling will have complementary advantages in such approaches.
Finally, Welleck et al. (2020) begin to address the problem of neural text degeneration through an "unlikelihood loss", which decreases training loss on repeated tokens and thus implicitly reduces gradients on frequent tokens as well. Our focus is on exposing neural text degeneration and providing a decoding solution that can be used with arbitrary models, but future work will likely combine training-time and inference-time solutions.

2.2 OPEN-ENDED VS DIRECTED GENERATION. Many text generation tasks are defined through (input, output) pairs, such that the output is a constrained transformation of the input. Example applications include machine translation (Bahdanau et al., 2015), data-to-text generation (Wiseman et al., 2017), and summarization (Nallapati et al., 2016). We refer to these tasks as directed generation. Typically encoder-decoder architectures are used, often with an attention mechanism (Bahdanau et al., 2015; Luong et al., 2015) or using attention-based architectures such as the Transformer (Vaswani et al., 2017). Generation is usually performed using beam search; since output is tightly scoped by the input, repetition and genericness are not as problematic. Still, similar issues have been reported when using large beam sizes (Koehn & Knowles, 2017) and more recently with exact inference (Stahlberg & Byrne, 2019), a counter-intuitive observation since more comprehensive search better maximizes probability.

Open-ended generation, which includes conditional story generation and contextual text continuation (as in Figure 1), has recently become a promising research direction due to significant advances in neural language models (Clark et al., 2018; Holtzman et al., 2018; Fan et al., 2018; Peng et al., 2018; Radford et al., 2019). While the input context restricts the space of acceptable output generations, there is a considerable degree of freedom in what can plausibly come next, unlike in directed generation settings. Our work addresses the challenges faced by neural text generation with this increased level of freedom, but we note that some tasks, such as goal-oriented dialog, may fall somewhere in between open-ended and directed generation.

3 LANGUAGE MODEL DECODING. Given an input text passage as context, the task of open-ended generation is to generate text that forms a coherent continuation of the given context. More formally, given a sequence of $m$ tokens $x_1 \ldots x_m$ as context, the task is to generate the next $n$ continuation tokens to obtain the completed sequence $x_1 \ldots x_{m+n}$. We assume that models compute $P(x_{1:m+n})$ using the common left-to-right decomposition of the text probability,

$$P(x_{1:m+n}) = \prod_{i=1}^{m+n} P(x_i \mid x_1 \ldots x_{i-1}), \quad (1)$$

which is used to generate text token-by-token using a particular decoding strategy.

Maximization-based decoding. The most commonly used decoding objective, in particular for directed generation, is maximization-based decoding. Assuming that the model assigns higher probability to higher quality text, these decoding strategies search for the continuation with the highest likelihood. Since finding the optimal argmax sequence from recurrent neural language models or Transformers is not tractable (Chen et al., 2018), common practice is to use beam search (Li et al., 2016b; Shen et al., 2017; Wiseman et al., 2017).
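For reference, the following is a minimal sketch of beam search over an arbitrary conditional token distribution; `next_token_logprobs` is a hypothetical stand-in for a language model's per-step log-probabilities and is not part of the paper:

```python
import heapq

def beam_search(next_token_logprobs, context, n_steps, beam_size):
    """Minimal beam search. `next_token_logprobs(seq)` is assumed to return
    a dict {token: log P(token | seq)} from some language model."""
    beams = [(0.0, list(context))]            # (cumulative log-prob, sequence)
    for _ in range(n_steps):
        candidates = []
        for logp, seq in beams:
            for tok, tok_logp in next_token_logprobs(seq).items():
                candidates.append((logp + tok_logp, seq + [tok]))
        # keep only the beam_size highest-scoring continuations
        beams = heapq.nlargest(beam_size, candidates, key=lambda c: c[0])
    return max(beams, key=lambda b: b[0])[1]  # highest-likelihood sequence
```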
However, several recent studies on open-ended generation have reported that maximization-based decoding does not lead to high quality text (Fan et al., 2018; Holtzman et al., 2018).

3.1 NUCLEUS SAMPLING. We propose a new stochastic decoding method: Nucleus Sampling. The key idea is to use the shape of the probability distribution to determine the set of tokens to be sampled from. Given a distribution $P(x \mid x_{1:i-1})$, we define its top-p vocabulary $V^{(p)} \subset V$ as the smallest set such that

$$\sum_{x \in V^{(p)}} P(x \mid x_{1:i-1}) \geq p. \quad (2)$$

Let $p' = \sum_{x \in V^{(p)}} P(x \mid x_{1:i-1})$. The original distribution is re-scaled to a new distribution, from which the next word is sampled:

$$P'(x \mid x_{1:i-1}) = \begin{cases} P(x \mid x_{1:i-1}) / p' & \text{if } x \in V^{(p)} \\ 0 & \text{otherwise.} \end{cases} \quad (3)$$

In practice this means selecting the highest probability tokens whose cumulative probability mass exceeds the pre-chosen threshold p. The size of the sampling set adjusts dynamically based on the shape of the probability distribution at each time step. For high values of p, this is a small subset of the vocabulary that takes up the vast majority of the probability mass: the nucleus.

3.2 TOP-k SAMPLING. Top-k sampling has recently become a popular alternative sampling procedure (Fan et al., 2018; Holtzman et al., 2018; Radford et al., 2019). Nucleus Sampling and top-k both sample from truncated neural LM distributions, differing only in the strategy of where to truncate. Choosing where to truncate can be interpreted as determining the generative model's trustworthy prediction zone. At each time step, the top k possible next tokens are sampled from according to their relative probabilities. Formally, given a distribution $P(x \mid x_{1:i-1})$, we define its top-k vocabulary $V^{(k)} \subset V$ as the set of size $k$ which maximizes $\sum_{x \in V^{(k)}} P(x \mid x_{1:i-1})$. Let $p' = \sum_{x \in V^{(k)}} P(x \mid x_{1:i-1})$. The distribution is then re-scaled as in equation 3, and sampling is performed based on that distribution. Note that the scaling factor $p'$ can vary wildly at each time step, in contrast to Nucleus Sampling.

Difficulty in choosing a suitable value of k. While top-k sampling leads to considerably higher quality text than either beam search or sampling from the full distribution, the use of a constant k is sub-optimal across varying contexts. As illustrated on the left of Figure 5, in some contexts the head of the next-word distribution can be flat across tens or hundreds of reasonable options (e.g., nouns or verbs in generic contexts), while in other contexts most of the probability mass is concentrated in one or a small number of tokens, as on the right of the figure. Therefore if k is small, in some contexts there is a risk of generating bland or generic text, while if k is large the top-k vocabulary will include inappropriate candidates whose probability of being sampled is increased by the renormalization. Under Nucleus Sampling, the number of candidates considered rises and falls dynamically, corresponding to changes in the model's confidence region over the vocabulary, which top-k sampling fails to capture for any one choice of k.
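A minimal NumPy sketch of the two truncation strategies may help fix ideas (our illustration; `probs` stands for the model's next-token distribution $P(x \mid x_{1:i-1})$ as an array over the vocabulary):

```python
import numpy as np

def nucleus_sample(probs, p, rng=None):
    """Sample from the smallest set of tokens whose cumulative mass is >= p
    (Eq. 2), renormalized as in Eq. (3)."""
    rng = rng if rng is not None else np.random.default_rng()
    order = np.argsort(probs)[::-1]            # tokens by descending probability
    cum = np.cumsum(probs[order])
    cutoff = np.searchsorted(cum, p) + 1       # smallest prefix with mass >= p
    nucleus = order[:cutoff]
    return rng.choice(nucleus, p=probs[nucleus] / probs[nucleus].sum())

def top_k_sample(probs, k, rng=None):
    """Sample from the k most probable tokens, renormalized."""
    rng = rng if rng is not None else np.random.default_rng()
    top = np.argsort(probs)[::-1][:k]
    return rng.choice(top, p=probs[top] / probs[top].sum())
```

For a fixed p the size of the nucleus grows and shrinks with the flatness of `probs`, whereas top-k always keeps exactly k tokens.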
This paper is motivated by the observation that maximization-based decoding approaches such as beam search can lead to incoherent and repetitive sentences in open-ended long-form text generation with neural language models such as GPT-2. To solve the problem, this paper proposes a sampling method called Nucleus Sampling. Similar to top-k sampling, Nucleus Sampling truncates the probability distribution over the words in the vocabulary. Instead of re-normalizing the probabilities of the top-k words, Nucleus Sampling re-normalizes the original probabilities over the smallest set of words whose cumulative probability mass exceeds a pre-chosen threshold p. Quantitative and qualitative results show that the proposed sampling method can generate long-form texts with some nice properties.
This paper studies an important problem, i.e., how to find a good decoding strategy for open-ended text generation. To this end, the authors provide a deep analysis of the most common decoding methods, and propose Nucleus Sampling, a very simple yet effective method to generate higher-quality text. Compared with top-k sampling, the key idea behind the proposed method is to sample from the dynamic nucleus of tokens containing the majority of the probability mass. Experiments demonstrate that nucleus sampling is an effective decoding strategy in practice.
Implementation Matters in Deep RL: A Case Study on PPO and TRPO
1 INTRODUCTION. Deep reinforcement learning (RL) algorithms have fueled many of the most publicized achievements in modern machine learning (Silver et al., 2017; OpenAI, 2018; Abbeel & Schulman, 2016; Mnih et al., 2013). However, despite these accomplishments, deep RL methods are still not nearly as reliable as their (deep) supervised learning counterparts. Indeed, recent research found existing deep RL methods to be brittle (Henderson et al., 2017; Zhang et al., 2018), hard to reproduce (Henderson et al., 2017; Tucker et al., 2018), unreliable across runs (Henderson et al., 2017; 2018), and sometimes outperformed by simple baselines (Mania et al., 2018). The prevalence of these issues points to a broader problem: we do not understand how the parts comprising deep RL algorithms impact agent training, either separately or as a whole. This unsatisfactory understanding suggests that we should re-evaluate the inner workings of our algorithms. Indeed, the overall question motivating our work is: how do the multitude of mechanisms used in deep RL training algorithms impact agent behavior?

Our contributions. We analyze the underpinnings of agent behavior, both through the traditional metric of cumulative reward and by measuring more fine-grained algorithmic properties. As a first step towards tackling this question, we conduct a case study of two of the most popular deep policy-gradient methods: Trust Region Policy Optimization (TRPO) (Schulman et al., 2015a) and Proximal Policy Optimization (PPO) (Schulman et al., 2017). These two methods are closely related: PPO was originally developed as a refinement of TRPO. We find that much of PPO's observed improvement in performance comes from seemingly small modifications to the core algorithm that either can be found only in a paper's original implementation, or are described as auxiliary details and are not present in the corresponding TRPO baselines.¹ We pinpoint these modifications, and perform an ablation study demonstrating that they are instrumental to PPO's performance.

∗Equal contribution. Work done in part as an intern at Two Sigma. ¹Note that these code-level optimizations are separate from "implementation choices" like the choice of PyTorch versus TensorFlow, in that they intentionally change the training algorithm's operation.

This observation prompts us to study how such code-level optimizations change agent training dynamics, and whether we can truly think of them as merely auxiliary improvements. Our results indicate that these optimizations fundamentally change the algorithms' operation, and go even beyond improvements in agent reward. We find that they majorly impact a key algorithmic principle behind TRPO's and PPO's operation: trust region enforcement. Ultimately, we discover that the PPO code-level optimizations are more important in terms of final reward achieved than the choice of general training algorithm (TRPO vs. PPO). This result is in stark contrast to the previous view that the central PPO clipping method drives the gains seen in Schulman et al. (2017). In doing so, we demonstrate that the algorithmic changes imposed by such optimizations make rigorous comparisons of algorithms difficult. Without a rigorous understanding of the full impact of code-level optimizations, we cannot hope to gain any reliable insight from comparing algorithms on benchmark tasks.
Our results emphasize the importance of building RL methods in a modular manner. To progress towards more performant and reliable algorithms, we need to understand each component's impact on agent behavior and performance, both individually and as part of a whole. Code for all the results shown in this work is available at https://github.com/MadryLab/implementation-matters.

2 RELATED WORK. The idea of using gradient estimates to update neural network–based RL agents dates back at least to the work of Williams (1992), who proposed the REINFORCE algorithm. Later, Sutton et al. (1999) established a unifying framework that casts the previous algorithms as instances of the policy gradient method. Our work focuses on proximal policy optimization (PPO) (Schulman et al., 2017) and trust region policy optimization (TRPO) (Schulman et al., 2015a), which are two of the most prominent policy gradient algorithms used in deep RL. Much of the original inspiration for the usage of trust regions stems from the conservative policy update of Kakade (2001). This policy update, similarly to TRPO, uses a natural gradient descent-based greedy policy update. TRPO also bears similarity to the relative policy entropy search method of Peters et al. (2010), which constrains the distance between marginal action distributions (whereas TRPO constrains the conditionals of such action distributions). Notably, Henderson et al. (2017) point out a number of brittleness, reproducibility, and experimental practice issues in deep RL algorithms. Importantly, we build on the observation of Henderson et al. (2017) that the final reward for a given algorithm is greatly influenced by the codebase used. Rajeswaran et al. (2017) and Mania et al. (2018) also demonstrate that on many of the benchmark tasks, the performance of PPO and TRPO can be matched by fairly elementary randomized search approaches. Additionally, Tucker et al. (2018) showed that one of the recently proposed extensions of the policy gradient framework, i.e., the usage of baseline functions that are also action-dependent (in addition to being state-dependent), might not lead to better policies after all.

3 ATTRIBUTING SUCCESS IN PROXIMAL POLICY OPTIMIZATION. Our overarching goal is to better understand the underpinnings of the behavior of deep policy gradient methods. We thus perform a careful study of two tightly linked algorithms: TRPO and PPO (recall that PPO is motivated as TRPO with a different trust region enforcement mechanism). To better understand these methods, we start by thoroughly investigating their implementations in practice. We find that in comparison to TRPO, the PPO implementation contains many non-trivial optimizations that are not (or only barely) described in its corresponding paper. Indeed, the standard implementation of PPO² contains the following additional optimizations (²from the OpenAI baselines GitHub repository: https://github.com/openai/baselines):

1. Value function clipping: Schulman et al. (2017) originally suggest fitting the value network via regression to target values, $L^V = (V_{\theta_t} - V_{targ})^2$, but the standard implementation instead fits the value network with a PPO-like objective, $L^V = \min\left[(V_{\theta_t} - V_{targ})^2,\ \left(\mathrm{clip}\left(V_{\theta_t},\, V_{\theta_{t-1}} - \varepsilon,\, V_{\theta_{t-1}} + \varepsilon\right) - V_{targ}\right)^2\right]$, where $V_\theta$ is clipped around the previous value estimates (and $\varepsilon$ is fixed to the same value used in (2) to clip the probability ratios).
2. Reward scaling: Rather than feeding the rewards directly from the environment into the objective, the PPO implementation performs a certain discount-based scaling scheme. In this scheme, the rewards are divided by the standard deviation of a rolling discounted sum of the rewards (without subtracting and re-adding the mean); see Algorithm 1 in Appendix A.2.

3. Orthogonal initialization and layer scaling: Instead of using the default weight initialization scheme for the policy and value networks, the implementation uses an orthogonal initialization scheme with scaling that varies from layer to layer.

4. Adam learning rate annealing: Depending on the task, the implementation sometimes anneals the learning rate of Adam (Kingma & Ba, 2014) (an already adaptive method) for optimization.

5. Reward clipping: The implementation also clips the rewards within a preset range (usually [−5, 5] or [−10, 10]).

6. Observation normalization: In a similar manner to the rewards, the raw states are also not fed into the optimizer. Instead, the states are first normalized to mean-zero, variance-one vectors.

7. Observation clipping: Analogously to rewards, the observations are also clipped within a range, usually [−10, 10].

8. Hyperbolic tan activations: As also observed by Henderson et al. (2017), implementations of policy gradient algorithms also use hyperbolic tangent activations between layers in the policy and value networks.

9. Global gradient clipping: After computing the gradient with respect to the policy and the value networks, the implementation clips the gradients such that the "global ℓ2 norm" (i.e., the norm of the concatenated gradients of all parameters) does not exceed 0.5.

These optimizations may appear to be merely surface-level or insignificant algorithmic changes to the core policy gradient method at hand. However, we find that they dramatically affect the performance of PPO. To demonstrate this, we start by performing a full ablation study on the first four optimizations mentioned above.³ Figure 1 shows a histogram of the final rewards of agents trained with every possible configuration of these optimizations; for each configuration, a grid search for the optimal learning rate is performed, and we measure the reward of randomly seeded agents trained using the identified learning rate. Our findings suggest that many code-level optimizations are necessary for PPO to attain its claimed performance. ³Due to restrictions on computational resources, we could only perform a full ablation on the first four of the identified optimizations.

The above findings show that our ability to understand PPO from an algorithmic perspective hinges on the ability to distill out its fundamental principles from such algorithm-independent optimizations (in the sense that these optimizations can be implemented for any policy gradient method). We thus consider a variant of PPO called PPO-MINIMAL (PPO-M), which implements only the core of the algorithm. PPO-M uses the standard value network loss, no reward scaling, the default network initialization, and Adam with a fixed learning rate. Importantly, PPO-M ignores all the code-level optimizations listed at the beginning of Section 3. We then explore PPO-M alongside PPO and TRPO. We list all the algorithms we study and their defining properties in Table 1.
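To illustrate two of the ablated optimizations, here is a minimal Python sketch (ours; the actual OpenAI baselines code differs, e.g., it maintains a running variance estimator rather than recomputing the standard deviation from the full history):

```python
import numpy as np

class RewardScaler:
    """Optimization 2 (reward scaling): divide each reward by the standard
    deviation of a rolling discounted sum of rewards, without mean-centering.
    Simplified sketch; a real implementation would keep a running variance
    estimate instead of storing the full history."""
    def __init__(self, gamma=0.99, eps=1e-8):
        self.gamma, self.eps = gamma, eps
        self.ret = 0.0        # rolling discounted sum of raw rewards
        self.history = []
    def __call__(self, reward):
        self.ret = self.gamma * self.ret + reward
        self.history.append(self.ret)
        return reward / (np.std(self.history) + self.eps)

def clipped_value_loss(v_new, v_old, v_targ, eps):
    """Optimization 1 (value function clipping): PPO-like value objective,
    element-wise min of the unclipped and clipped squared errors."""
    v_clip = np.clip(v_new, v_old - eps, v_old + eps)
    return np.minimum((v_new - v_targ) ** 2, (v_clip - v_targ) ** 2).mean()
```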
Overall, our results on the importance of these optimizations both corroborate previous results demonstrating the brittleness of deep policy gradient methods, and demonstrate that even beyond environmental brittleness, the algorithms themselves exhibit high sensitivity to implementation choices (which might also help explain the differences between codebases observed in Henderson et al. (2017)).

4 CODE-LEVEL OPTIMIZATIONS HAVE ALGORITHMIC EFFECTS

In the previous section, we found that canonical implementations of PPO contain many code-level optimizations: implementation choices that are not integral to the method but profoundly impact performance. The seemingly disproportionate effect of code-level optimizations identified in our ablation study may lead us to ask: how do these seemingly superficial optimizations impact underlying agent behavior? In this section, we demonstrate that code-level optimizations fundamentally alter agent behavior. Rather than merely improving ultimate cumulative reward, such optimizations directly impact the principles motivating the core algorithms.

Trust region optimization. A key property of policy gradient algorithms is that update steps computed at any specific policy $\pi_{\theta_t}$ are only guaranteed to be predictive in a neighborhood around $\theta_t$. Thus, to ensure that the update steps we derive remain predictive, many policy gradient algorithms ensure that these steps stay in the vicinity of the current policy. The resulting "trust region" methods (Kakade, 2001; Schulman et al., 2015a; 2017) try to constrain the local variation of the parameters in policy-space by restricting the distributional distance between successive policies. A popular method in this class is trust region policy optimization (TRPO) (Schulman et al., 2015a). TRPO constrains the KL divergence between successive policies on the optimization trajectory, leading to the following problem:

$$\max_\theta\ \mathbb{E}_{(s_t, a_t) \sim \pi}\left[\frac{\pi_\theta(a_t|s_t)}{\pi(a_t|s_t)}\, \hat{A}_\pi(s_t, a_t)\right] \quad \text{s.t.} \quad D_{KL}\left(\pi_\theta(\cdot\,|\,s)\,\|\,\pi(\cdot\,|\,s)\right) \le \delta, \ \forall s. \quad (1)$$

In practice, we maximize this objective with a second-order approximation of the KL divergence and natural gradient descent, and replace the worst-case KL constraint over all possible states with an approximation of the mean KL based on the states observed in the current trajectory.

Proximal policy optimization. One disadvantage of the TRPO algorithm is that it can be computationally costly—the step direction is estimated with nonlinear conjugate gradients, which requires the computation of multiple Hessian-vector products. To address this issue, Schulman et al. (2017) propose proximal policy optimization (PPO), which tries to enforce a trust region with a different objective that does not require computing a projection. Concretely, PPO proposes replacing the KL-constrained objective (1) of TRPO by clipping the objective function directly:

$$\max_\theta\ \mathbb{E}_{(s_t, a_t) \sim \pi}\left[\min\left(\text{clip}\left(\rho_t, 1 - \varepsilon, 1 + \varepsilon\right)\hat{A}_\pi(s_t, a_t),\ \rho_t \hat{A}_\pi(s_t, a_t)\right)\right] \quad (2)$$

where

$$\rho_t = \frac{\pi_\theta(a_t|s_t)}{\pi(a_t|s_t)}. \quad (3)$$

Note that this objective can be optimized without an explicit projection step, leading to a simpler parameter update during training. In addition to its simplicity, PPO is intended to be faster and more sample-efficient than TRPO (Schulman et al., 2017).
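As a concrete illustration of the mean-KL approximation used in practice, the snippet below computes the KL divergence between successive diagonal-Gaussian policies (the standard parameterization for continuous-control benchmarks), averaged over a batch of visited states. This is a generic sketch with our own naming, not code from either algorithm's implementation.

```python
import torch

def mean_gaussian_kl(mu_old, log_std_old, mu_new, log_std_new):
    # Closed-form KL( pi_old || pi_new ) for diagonal Gaussians, summed
    # over action dimensions and averaged over sampled states -- the
    # mean-KL surrogate that replaces TRPO's worst-case constraint.
    var_old = (2 * log_std_old).exp()
    var_new = (2 * log_std_new).exp()
    kl = (log_std_new - log_std_old
          + (var_old + (mu_old - mu_new) ** 2) / (2 * var_new) - 0.5)
    return kl.sum(dim=-1).mean()
```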
Trust regions in TRPO and PPO. Enforcing a trust region is a core algorithmic property of different policy gradient methods. However, whether or not a trust region is enforced is not directly observable from the final rewards. So, how does this algorithmic property vary across state-of-the-art policy gradient methods? In Figure 2 we measure the mean KL divergence between successive policies in a training run of both TRPO and PPO-M (PPO without code-level optimizations). Recall that TRPO is designed specifically to constrain this KL-based trust region, while the clipping mechanism of PPO attempts to approximate it. Indeed, we find that TRPO precisely enforces this trust region (this is unsurprising, and holds nearly by construction). We thus turn our attention to the trust regions induced by training with PPO and PPO-M. First, we consider mathematically the contribution of a single state-action pair to the gradient of the PPO objective, which is given by

$$\nabla_\theta L^{PPO} = \begin{cases} \nabla_\theta L_\theta & \text{if } \frac{\pi_\theta(a|s)}{\pi(a|s)} \in [1 - \varepsilon,\, 1 + \varepsilon] \ \text{ or } \ L_\theta < L^C_\theta \\ 0 & \text{otherwise,} \end{cases}$$

where

$$L_\theta := \mathbb{E}_{(s,a) \in \tau \sim \pi}\left[\frac{\pi_\theta(a|s)}{\pi(a|s)}\, A_\pi(s, a)\right] \quad \text{and} \quad L^C_\theta := \mathbb{E}_{(s,a) \in \tau \sim \pi}\left[\text{clip}\left(\frac{\pi_\theta(a|s)}{\pi(a|s)},\, 1 - \varepsilon,\, 1 + \varepsilon\right) A_\pi(s, a)\right]$$

are respectively the standard and clipped versions of the surrogate objective. As a result, since we initialize $\pi_\theta$ as $\pi$ (and thus the ratios all start equal to one), the first step we take is identical to a maximization step over the unclipped surrogate objective. It thus stands to reason that the nature of the trust region enforced is heavily dependent on the method with which the clipped PPO objective is optimized, rather than on the objective itself. Therefore, the size of the step we take is determined solely by the steepness of the surrogate landscape (i.e., the Lipschitz constant of the optimization problem we solve), and we can end up moving arbitrarily far from the trust region. We hypothesize that this dependence of PPO on properties of the optimizer, rather than on the optimization objective, contributes to the brittleness of the algorithm to hyperparameters such as learning rate and momentum, as observed by Henderson et al. (2018) and others. The results we observe (shown in Figure 2) corroborate this intuition. For agents trained with optimal parameters, all three algorithms are able to maintain a KL-based trust region. First, we note that all three algorithms fail to maintain a ratio-based trust region, despite PPO and PPO-M being trained directly with a ratio-clipping objective. Furthermore, the nature of the KL trust region enforced differs between PPO and PPO-M, despite the fact that the core algorithm remains constant between the two methods; while the PPO-M KL trends up as the number of iterations increases, the PPO KL peaks halfway through training before trending down again. The findings from this experiment and the corresponding calculations demonstrate that perhaps a key factor in the behavior of PPO-trained agents, even from an algorithmic viewpoint, comes from auxiliary optimizations rather than the core methodology.
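The zero-gradient region described above is easy to verify numerically. In the hypothetical snippet below, a ratio outside the clip range with a positive advantage contributes no gradient, while a ratio equally far outside the range with a negative advantage still does—so an aggressive optimizer can keep pushing such ratios arbitrarily far from the trust region.

```python
import torch

def ppo_surrogate(ratio, adv, eps=0.2):
    # Per-sample clipped PPO objective from eq. (2), averaged over a batch.
    clipped = torch.clamp(ratio, 1 - eps, 1 + eps) * adv
    return torch.min(ratio * adv, clipped).mean()

ratio = torch.tensor([1.0, 1.5, 1.5], requires_grad=True)
adv = torch.tensor([1.0, 1.0, -1.0])
ppo_surrogate(ratio, adv).backward()
print(ratio.grad)  # ~ [0.33, 0.00, -0.33]: the clipped-and-better sample
                   # (middle) gets zero gradient; the clipped-and-worse
                   # sample (last) still receives a full gradient.
```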
This paper calls attention to the importance of specifying all performance-altering implementation details that are currently inherent in state-of-the-art deep policy gradient methods. Specifically, this paper builds very closely on the work started by Henderson et al. (2017), building a conversation around the importance of more rigorous and careful scientific study of published algorithms. This paper identifies many "code-level optimizations" that account for the differences between the popular TRPO and PPO deep policy gradient algorithms. The paper then selects four of these optimizations and carefully investigates their impact on the final performance of each algorithm. The clear conclusion from the paper is that the touted algorithmic improvement of PPO over TRPO has negligible effect on performance, and any previously reported differences are due only to what were considered unimportant implementation details.
SP:133403fbbb8b1195da7a017675d19d3b7b270811
Implementation Matters in Deep RL: A Case Study on PPO and TRPO
1 INTRODUCTION

Deep reinforcement learning (RL) algorithms have fueled many of the most publicized achievements in modern machine learning (Silver et al., 2017; OpenAI, 2018; Abbeel & Schulman, 2016; Mnih et al., 2013). However, despite these accomplishments, deep RL methods are still not nearly as reliable as their (deep) supervised learning counterparts. Indeed, recent research has found existing deep RL methods to be brittle (Henderson et al., 2017; Zhang et al., 2018), hard to reproduce (Henderson et al., 2017; Tucker et al., 2018), unreliable across runs (Henderson et al., 2017; 2018), and sometimes outperformed by simple baselines (Mania et al., 2018). The prevalence of these issues points to a broader problem: we do not understand how the parts comprising deep RL algorithms impact agent training, either separately or as a whole. This unsatisfactory understanding suggests that we should re-evaluate the inner workings of our algorithms. Indeed, the overall question motivating our work is: how do the multitude of mechanisms used in deep RL training algorithms impact agent behavior?

Our contributions. We analyze the underpinnings of agent behavior—both through the traditional metric of cumulative reward, and by measuring more fine-grained algorithmic properties. As a first step towards tackling this question, we conduct a case study of two of the most popular deep policy gradient methods: trust region policy optimization (TRPO) (Schulman et al., 2015a) and proximal policy optimization (PPO) (Schulman et al., 2017). These two methods are closely related: PPO was originally developed as a refinement of TRPO. We find that much of PPO's observed improvement in performance comes from seemingly small modifications to the core algorithm that either can be found only in a paper's original implementation, or are described as auxiliary details and are not present in the corresponding TRPO baselines. (Note that these code-level optimizations are separate from "implementation choices," like the choice of PyTorch versus TensorFlow, in that they intentionally change the training algorithm's operation.) We pinpoint these modifications and perform an ablation study demonstrating that they are instrumental to PPO's performance. This observation prompts us to study how such code-level optimizations change agent training dynamics, and whether we can truly think of them as merely auxiliary improvements. Our results indicate that these optimizations fundamentally change the algorithms' operation, and go even beyond improvements in agent reward. We find that they majorly impact a key algorithmic principle behind TRPO and PPO's operation: trust region enforcement. Ultimately, we discover that the PPO code-level optimizations are more important in terms of final reward achieved than the choice of general training algorithm (TRPO vs. PPO). This result is in stark contrast to the previous view that the central PPO clipping method drives the gains seen in Schulman et al. (2017). In doing so, we demonstrate that the algorithmic changes imposed by such optimizations make rigorous comparisons of algorithms difficult. Without a rigorous understanding of the full impact of code-level optimizations, we cannot hope to gain any reliable insight from comparing algorithms on benchmark tasks.
This paper investigates the impact of implementation "details", with existing implementations of TRPO and PPO as examples. The main takeaway is that the performance gains observed in PPO (compared to TRPO) are actually caused by differences in implementation, and not by the differences between the two learning algorithms. In particular, adding to the TRPO code the same implementation changes as in PPO makes TRPO perform on par with (and possibly even better than) PPO. The clipping objective of PPO is also found to have no significant impact on its performance. This calls for more careful comparisons between algorithms (by minimizing implementation changes and performing more in-depth ablation studies) than has typically been done until now in the RL research community.
SP:133403fbbb8b1195da7a017675d19d3b7b270811
Effective Use of Variational Embedding Capacity in Expressive End-to-End Speech Synthesis
Recent work has explored sequence-to-sequence latent variable models for expressive speech synthesis (supporting control and transfer of prosody and style), but has not presented a coherent framework for understanding the trade-offs between the competing methods. In this paper, we propose embedding capacity (the amount of information the embedding contains about the data) as a unified method of analyzing the behavior of latent variable models of speech, comparing existing heuristic (non-variational) methods to variational methods that are able to explicitly constrain capacity using an upper bound on representational mutual information. In our proposed model (Capacitron), we show that by adding conditional dependencies to the variational posterior such that it matches the form of the true posterior, the same model can be used for high-precision prosody transfer, text-agnostic style transfer, and generation of natural-sounding prior samples. For multi-speaker models, Capacitron is able to preserve target speaker identity during inter-speaker prosody transfer and when drawing samples from the latent prior. Lastly, we introduce a method for decomposing embedding capacity hierarchically across two sets of latents, allowing a portion of the latent variability to be specified and the remaining variability sampled from a learned prior. Audio examples are available on the web (https://variational-embedding-capacity.github.io/demos/).

1 INTRODUCTION

The synthesis of realistic human speech is a challenging problem that is important for natural human-computer interaction. End-to-end neural network-based approaches have seen significant progress in recent years (Wang et al., 2017; Taigman et al., 2018; Ping et al., 2018; Sotelo et al., 2017), even matching human performance for short assistant-like utterances (Shen et al., 2018). However, these neural models are sometimes viewed as less interpretable or controllable than more traditional models composed of multiple stages of processing that each operate on reified linguistic or phonetic representations. Text-to-speech (TTS) is an underdetermined problem, meaning the same text input has an infinite number of reasonable spoken realizations. In addition to speaker and channel characteristics, important sources of variability in TTS include intonation, stress, and rhythm (collectively referred to as prosody). These attributes convey linguistic, semantic, and emotional meaning beyond what is present in the lexical representation (i.e., the text) (Wagner & Watson, 2010). Recent end-to-end TTS research has aimed to model and/or directly control the remaining variability in the output. Skerry-Ryan et al. (2018) augment a Tacotron-like model (Wang et al., 2017) with a deterministic encoder that projects reference speech into a learned embedding space. The system can be used for prosody transfer between speakers ("say it like this"), but does not work for transfer between unrelated sentences, and does not preserve the pitch range of the target speaker. Lee & Kim (2019) partially address the pitch range problem by centering the learned embeddings using speaker-wise means. Other work targets style transfer, a text-agnostic variation on prosody transfer. The Global Style Token (GST) system (Wang et al., 2018) uses a modified attention-based reference encoder to transfer global style properties to arbitrary text,
and Ma et al. (2019) use an adversarial objective to disentangle style from text. Hsu et al. (2019) and Zhang et al. (2019) use a variational approach (Kingma & Welling, 2014) to tackle the style task. Advantages of this approach include its ability to generate style samples via the accompanying prior and the potential for better disentangling between latent style factors (Burgess et al., 2018). Additionally, Hsu et al. (2019) use a Gaussian mixture prior over the latents, which (when interpreting the mixture component index as a high-level discrete latent) allows a form of hierarchical control. This work extends the above approaches by providing the following contributions:

1. We propose a unified approach for analyzing the characteristics of TTS latent variable models, independent of architecture, using the capacity of the learned embeddings (i.e., the representational mutual information between the embedding and the data).
2. We target specific capacities for our proposed model using a Lagrange multiplier-based optimization scheme, and show that capacity is correlated with perceptual reference similarity.
3. We show that modifying the variational posterior to match the form of the true posterior enables style and prosody transfer in the same model, helps preserve target speaker identity during inter-speaker transfer, and leads to natural-sounding prior samples even at high embedding capacities.
4. We introduce a method for controlling what fraction of the variation represented in an embedding is specified, allowing the remaining variation to be sampled from the model.

2 MEASURING REFERENCE EMBEDDING CAPACITY

2.1 LEARNING A REFERENCE EMBEDDING SPACE

Existing heuristic (non-variational) end-to-end approaches to prosody and style transfer (Skerry-Ryan et al., 2018; Wang et al., 2018; Lee & Kim, 2019; Henter et al., 2018) typically start with the teacher-forced reconstruction loss, (1), used to train Tacotron-like sequence-to-sequence models, and simply augment the model with a deterministic reference encoder, $g_e(x)$, as shown in eq. (2):

$$\mathcal{L}(x, y_T, y_S) \equiv -\log p(x|y_T, y_S) = \|f_\theta(y_T, y_S) - x\|_1 + K \quad (1)$$
$$\mathcal{L}'(x, y_T, y_S) \equiv -\log p(x|y_T, y_S, g_e(x)) = \|f_\theta(y_T, y_S, g_e(x)) - x\|_1 + K \quad (2)$$

where $x$ is an audio spectrogram, $y_T$ is the input text, $y_S$ is the target speaker (if training a multi-speaker model), $f_\theta(\cdot)$ is a deterministic function that maps the inputs to spectrogram predictions, and $K$ is a normalization constant. Teacher-forcing implies that $f_\theta(\cdot)$ is dependent on $x_{<t}$ when predicting spectrogram frame $x_t$. In practice, $f_\theta(\cdot)$ serves as the greedy deterministic output of the model, and transfer is accomplished by pairing the embedding computed by the reference encoder with different text or speakers during synthesis. In these heuristic models, the architecture chosen for the reference encoder determines the transfer characteristics of the model. This decision affects the information capacity of the embedding and allows the model to target a specific trade-off between transfer precision (how closely the output resembles the reference) and generality (how well an embedding works when paired with arbitrary text). Higher-capacity embeddings prioritize precision and are better suited for prosody transfer to similar text, while lower-capacity embeddings prioritize generality and are better suited for text-agnostic style transfer.
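As a sketch of how eq. (2) is typically realized, the hypothetical training step below conditions a teacher-forced decoder on the reference embedding. Here `model` and `ref_encoder` stand in for a Tacotron-like decoder and the reference encoder; their interfaces are our assumption, not the paper's API.

```python
import torch

def reference_recon_loss(model, ref_encoder, x, y_text, y_speaker):
    # x: target spectrogram [batch, frames, bins]. The reference encoder
    # embeds the same x that the decoder is asked to reconstruct.
    z = ref_encoder(x)                            # deterministic g_e(x)
    # Teacher forcing: the decoder sees ground-truth frames x_{<t} when
    # predicting frame x_t.
    x_pred = model(y_text, y_speaker, z, teacher_frames=x)
    return (x_pred - x).abs().mean()              # L1 term of eq. (2)
```

At synthesis time, $z$ would instead be computed from a different reference utterance (for transfer) and paired with arbitrary text or speakers.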
The variational extensions of Hsu et al. (2019) and Zhang et al. (2019) augment the reconstruction loss in eq. (2) with a KL divergence term. This encourages a stochastic reference encoder (variational posterior), $q(z|x)$, to align well with a prior, $p(z)$ (eq. (3)). The overall loss is then equivalent to the negative evidence lower bound (ELBO) of the marginal likelihood of the data (Kingma & Welling, 2014):

$$\mathcal{L}_{ELBO}(x, y_T, y_S) \equiv \mathbb{E}_{z \sim q(z|x)}\left[-\log p(x|z, y_T, y_S)\right] + D_{KL}\left(q(z|x)\,\|\,p(z)\right) \quad (3)$$
$$-\log p(x|y_T, y_S) \le \mathcal{L}_{ELBO}(x, y_T, y_S) \quad (4)$$

Controlling embedding capacity in variational models can be accomplished more directly by manipulating the KL term in (3). Recent work has shown that the KL term provides an upper bound on the mutual information between the data, $x$, and the latent embedding, $z \sim q(z|x)$ (Hoffman & Johnson, 2016; Makhzani et al., 2015; Alemi et al., 2018):

$$R_{AVG} \equiv \mathbb{E}_{x \sim p_D(x)}\left[D_{KL}(q(z|x)\,\|\,p(z))\right], \qquad R \equiv D_{KL}(q(z|x)\,\|\,p(z)) \quad (5)$$
$$I_q(X; Z) \equiv \mathbb{E}_{x \sim p_D(x)}\left[D_{KL}(q(z|x)\,\|\,q(z))\right], \qquad q(z) \equiv \mathbb{E}_{x \sim p_D(x)}\, q(z|x) \quad (6)$$
$$R_{AVG} = I_q(X; Z) + D_{KL}(q(z)\,\|\,p(z)) \quad (7)$$
$$\implies I_q(X; Z) \le R_{AVG} \quad (8)$$

where $p_D(x)$ is the data distribution, $R$ is the KL term in (3), $R_{AVG}$ is the KL term averaged over the data distribution, $I_q(X; Z)$ is the representational mutual information (the capacity of $z$), and $q(z)$ is the aggregated posterior. This brief derivation is expanded in Appendix C.1. The bound in (8) follows from (7) and the non-negativity of the KL divergence, and (7) shows that the slack on the bound is $D_{KL}(q(z)\,\|\,p(z))$, the aggregate KL. In addition to providing a tighter bound, having a low aggregate KL is desirable when sampling from the model via the prior, because then the samples of $z$ that the decoder sees during training will be very similar to samples from the prior. Various approaches to controlling the KL term have been proposed, including varying a weight on the KL term, $\beta$ (Higgins et al., 2017), and penalizing its deviation from a target value (Alemi et al., 2018; Burgess et al., 2018). Because we would like to smoothly optimize for a specific bound on the embedding capacity, we adapt the Lagrange multiplier-based optimization approach of Rezende & Viola (2018) by applying it to the KL term rather than the reconstruction term:

$$\min_\theta \max_{\beta \ge 0}\ \left\{\mathbb{E}_{z \sim q_\theta(z|x)}\left[-\log p_\theta(x|z, y_T, y_S)\right] + \beta\left(D_{KL}(q_\theta(z|x)\,\|\,p(z)) - C\right)\right\} \quad (9)$$

where $\theta$ are the model parameters, $\beta$ serves as an automatically tuned weight on the KL term, $C$ is the capacity limit, and updates to $\theta$ and $\beta$ are interleaved during training. We constrain $\beta$ to be non-negative by passing an unconstrained parameter through a softplus non-linearity, which makes the capacity constraint a limit rather than a target. This approach is less tedious than tuning $\beta$ by hand and leads to more consistent behavior from run to run. It also allows more stable optimization than directly penalizing the $\ell_1$ deviation from the target KL.
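A minimal sketch of the min-max objective in eq. (9) follows, with the softplus reparameterization of $\beta$ and interleaved updates. All names are ours, and the exact update scheme is an assumption rather than the paper's implementation.

```python
import torch
import torch.nn.functional as F

beta_raw = torch.zeros(1, requires_grad=True)  # unconstrained dual parameter
C = 50.0                                       # capacity limit, in nats

def capacitron_losses(recon_nll, kl):
    beta = F.softplus(beta_raw)                # beta >= 0 by construction
    # theta minimizes the Lagrangian; beta is treated as a constant here,
    # and the -beta*C term is constant in theta, so it can be dropped.
    model_loss = recon_nll + beta.detach() * kl
    # beta maximizes the Lagrangian, so its loss carries a flipped sign.
    dual_loss = -beta * (kl.detach() - C)
    return model_loss, dual_loss
```

When the KL exceeds $C$, the dual loss drives $\beta$ up, tightening the constraint; once the KL falls below $C$, $\beta$ decays toward zero and the objective reduces to plain reconstruction—which is what makes $C$ a limit rather than a target.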
2.2 ESTIMATING EMBEDDING CAPACITY

Estimating heuristic embedding capacity. Unfortunately, the heuristic methods do not come packaged with an easy way to estimate embedding capacity. We can estimate an effective capacity ordering, however, by measuring the test-time reconstruction loss when using the reference encoder from each method. In Figure 1, we show how the reconstruction loss varies with embedding dimensionality for the tanh-based prosody transfer (PT) and softmax-based global style token (GST) bottlenecks (Skerry-Ryan et al., 2018; Wang et al., 2018) and for variational models (Var.) with different capacity limits, $C$. We also compare to a baseline Tacotron model without a reference encoder. For this preliminary comparison, we use the expressive single-speaker dataset and training setup described in Section 4.2. Looking at the heuristic methods in Figure 1, we see that the GST bottleneck is much more restrictive than the PT bottleneck, which hurts transfer precision but allows sufficient embedding generality for text-agnostic style transfer.

Bounding variational embedding capacity. We saw in (8) that the KL term is an upper bound on embedding capacity, so we can directly target a specific capacity limit by constraining the KL term using the objective in eq. (9). For the three values of $C$ in Figure 1, we can see that the reconstruction loss flattens out once the embedding reaches a certain dimensionality. This gives us a consistent way to control embedding capacity, as it only requires using a reference encoder architecture with sufficient structural capacity (at least $C$) to achieve the desired representational capacity in the variational embedding. Because of this, we use 128-dimensional embeddings in all of our experiments, which should be sufficient for the range of capacities we target.
1. Summary: This paper proposes Capacitron, a conditional variational latent variable model for TTS which allows for controllable latent variable capacity. They optimize the Lagrangian dual of the ELBO and restrict the capacity of the rate term through a learnable, non-negative multiplier. They demonstrate the effectiveness of their approach on a range of TTS tasks such as same-text prosody transfer and inter-text style transfer, and provide extensive analyses of their latent variable capacity (in addition to comparisons to non-variational approaches based on Tacotron).
SP:af656384c8eec0891912cc1893a5d827bc6efb78
Effective Use of Variational Embedding Capacity in Expressive End-to-End Speech Synthesis
In this work the authors present a regularized variational autoencoder method for speech synthesis. To endow the latent space with more capacity, the authors employ a modified variational autoencoder objective, which uses a learnable Lagrange multiplier to impose a capacity limit on the KL divergence between the latent posterior and the prior. The authors furthermore propose to decompose the latent embedding space into a two-level hierarchical representation to give the generative process more control over style transfer and sample-to-sample variance. They extend earlier theoretical results providing upper bounds on the mutual information between data and its latent embedding to their hierarchical latent representation. In numerical experiments the authors evaluate their approach on a number of speech synthesis tasks involving same-text prosody transfer, inter-text style transfer, and inter-speaker prosody transfer. They also analyze speech samples generated from latent samples drawn from the prior.
SP:af656384c8eec0891912cc1893a5d827bc6efb78
Hyperbolic Discounting and Learning Over Multiple Horizons
1 INTRODUCTION

The standard treatment of the reinforcement learning (RL) problem is the Markov Decision Process (MDP), which includes a discount factor $0 \le \gamma \le 1$ that exponentially reduces the present value of future rewards (Bellman, 1957; Sutton & Barto, 1998). A reward $r_t$ received in $t$ time steps is devalued to $\gamma^t r_t$, a discounted utility model introduced by Samuelson (1937). This establishes a time-preference for rewards realized sooner rather than later. The decision to exponentially discount future rewards by $\gamma$ leads to value functions that satisfy theoretical convergence properties (Bertsekas, 1995). The magnitude of $\gamma$ also plays a role in stabilizing the learning dynamics of RL algorithms (Prokhorov & Wunsch, 1997; Bertsekas & Tsitsiklis, 1996) and has recently been treated as a hyperparameter of the optimization (OpenAI, 2018; Xu et al., 2018). However, both the magnitude and the functional form of this discounting function establish priors over the solutions learned. The magnitude of $\gamma$ chosen establishes an effective horizon for the agent of $1/(1-\gamma)$, far beyond which rewards are neglected (Kearns & Singh, 2002). This effectively imposes a time-scale on the environment, which may not be accurate. Further, the exponential discounting of future rewards is consistent with a prior belief that there is a known constant per-time-step hazard rate (Sozou, 1998) or probability of dying of $1-\gamma$ (Lattimore & Hutter, 2011). Additionally, discounting future values exponentially and according to a single discount factor $\gamma$ does not harmonize with the measured value preferences in humans and animals (Mazur, 1985; 1997; Ainslie, 1992; Green & Myerson, 2004; Maia, 2009). A wealth of empirical evidence has been amassed that humans, monkeys, rats and pigeons instead discount future returns hyperbolically, where $d_k(t) = \frac{1}{1+kt}$ for some positive $k > 0$ (Ainslie, 1975; 1992; Mazur, 1985; 1997; Frederick et al., 2002; Green et al., 1981; Green & Myerson, 2004). (Time-preference reversals are one implication: consider a stranger offering \$1M now versus \$1.1M tomorrow, against the same stranger instead offering \$1M in 99 days versus \$1.1M in 100 days.) This discrepancy between the time-preferences of animals and the exponentially discounted measure of value might be presumed irrational. But Sozou (1998) showed that hyperbolic time-preferences are mathematically consistent with the agent maintaining some uncertainty over the prior belief of the hazard rate in the environment. The hazard rate $h(t)$ measures the per-time-step risk the agent incurs as it acts in the environment due to a potential early death. Precisely, if $s(t)$ is the probability that the agent is alive at time $t$, then the hazard rate is $h(t) = -\frac{d}{dt}\ln s(t)$. We consider the case where there is a fixed, but potentially unknown, hazard rate $h(t) = \lambda \ge 0$. The prior belief over the hazard rate, $p(\lambda)$, implies a specific discount function (Sozou, 1998). Under this formalism, the canonical case in RL of discounting future rewards according to $d(t) = \gamma^t$ is consistent with the belief that there exists a single hazard rate $\lambda = -\ln\gamma$ known with certainty. Further details are available in Appendix A. Common RL environments are also characterized by risk, but often in a narrower sense. In deterministic environments like the original Arcade Learning Environment (ALE) (Bellemare et al., 2013),
stochasticity is often introduced through techniques like no-ops (Mnih et al., 2015) and sticky actions (Machado et al., 2018), where the action execution is noisy. Physics simulators may have noise, and the randomness of the policy itself induces risk. But even with these stochastic injections, the risk to reward emerges in a more restricted sense. In Section 2 we show that a prior distribution reflecting the uncertainty over the hazard rate has an associated discount function, in the sense that an MDP with either this hazard distribution or the discount function has the same value function for all policies. This equivalence implies that learning policies with a discount function can be interpreted as making them robust to the associated hazard distribution. Thus, discounting serves as a tool to ensure that policies deployed in the real world perform well even under risks they were not trained under. We propose an algorithm that approximates hyperbolic discounting while building on successful Q-learning (Watkins & Dayan, 1992) tools and their associated theoretical guarantees. We show that learning many Q-values, each discounting exponentially with a different discount factor $\gamma$, can be aggregated to approximate hyperbolic (and other non-exponential) discount factors. We demonstrate the efficacy of our approximation scheme in our proposed Pathworld environment, which is characterized by an uncertain per-time-step risk to the agent. Conceptually, Pathworld emulates a foraging environment where an agent must balance easily realizable, small meals against more distant, fruitful meals. We then consider higher-dimensional deep RL agents in the ALE, where we measure the benefits of hyperbolic discounting. This approximation mirrors the work of Kurth-Nelson & Redish (2009) and Redish & Kurth-Nelson (2010), which empirically demonstrates that modeling a finite set of µAgents simultaneously can approximate a hyperbolic discounting function. Our method then generalizes to other non-hyperbolic discount functions and uses deep neural networks to model the different Q-values from a shared representation. Surprisingly, and in addition to enabling new non-exponential discounting schemes, we observe that learning a set of Q-values is beneficial as an auxiliary task (Jaderberg et al., 2016). Adding this multi-horizon auxiliary task often improves over a state-of-the-art baseline, Rainbow (Hessel et al., 2018), in the ALE (Bellemare et al., 2013). This work questions the RL paradigm of learning policies through a single discount function which exponentially discounts future rewards, through the following contributions:

1. Hazardous MDPs. We formulate MDPs with hazard present and demonstrate an equivalence between undiscounted values learned under hazards and (potentially non-exponentially) discounted values without hazard.
2. Hyperbolic (and other non-exponential) agent. A practical approach for training an agent which discounts future rewards by a hyperbolic (or other non-exponential) discount function and acts accordingly.
3. Multi-horizon auxiliary task. A demonstration of multi-horizon learning over many $\gamma$ simultaneously as an effective auxiliary task.

2 HAZARD IN MDPS

To study MDPs with hazard distributions and general discount functions, we introduce two modifications. The hazardous MDP is now defined by the tuple $\langle \mathcal{S}, \mathcal{A}, R, P, \mathcal{H}, d \rangle$. In standard form, the state space $\mathcal{S}$ and the action space $\mathcal{A}$ may be discrete or continuous.
The learner observes samples from the environment transition probability P ( s_{t+1} | s_t , a_t ) for going from s_t ∈ S to s_{t+1} ∈ S given a_t ∈ A . We will consider the case where P is a sub-stochastic transition function , which defines an episodic MDP . The environment emits a bounded reward r : S × A → [ r_min , r_max ] on each transition . In this work we consider non-infinite episodic MDPs . The first difference is that at the beginning of each episode , a hazard λ ∈ [ 0 , ∞ ) is sampled from the hazard distribution H . This is equivalent to sampling a continuing probability γ = e^{−λ} . During the episode , the hazard-modified transition function will be P_λ , in that P_λ ( s′ | s , a ) = e^{−λ} P ( s′ | s , a ) . The second difference is that we now consider a general discount function d ( t ) . This differs from the standard approach of exponential discounting in RL with γ according to d ( t ) = γ^t , which is a special case . This setting makes a close connection to the partially observable Markov decision process ( POMDP ) ( Kaelbling et al. , 1998 ) , where one might consider λ as an unobserved variable . However , the classic POMDP definition contains an explicit discount factor γ as part of its definition , which does not appear here . A policy π : S → A is a mapping from states to actions . The state-action value function Q^{H,d}_π ( s , a ) is the expected discounted reward after taking action a in state s and then following policy π until termination :

Q^{H,d}_π ( s , a ) = E_λ E_{π,P_λ} [ ∑_{t=0}^∞ d ( t ) R ( s_t , a_t ) | s_0 = s , a_0 = a ] ( 1 )

where λ ∼ H and E_{π,P_λ} implies that s_{t+1} ∼ P_λ ( · | s_t , a_t ) and a_t ∼ π ( · | s_t ) . 2.1 EQUIVALENCE BETWEEN HAZARD AND DISCOUNTING . In the hazardous MDP setting we observe the same connections between hazard and discount functions delineated in Appendix A . This expresses an equivalence between the value function of an MDP with a discount function and an MDP with a hazard distribution . For example , there exists an equivalence between the exponential discount function d ( t ) = γ^t and the undiscounted case where the agent is subject to a ( 1 − γ ) per-time-step probability of dying ( Lattimore & Hutter , 2011 ) . The typical Q-value ( left side of Equation 2 ) is when the agent acts in an environment without hazard , λ = 0 or H = δ ( 0 ) , and discounts future rewards according to d ( t ) = γ^t = e^{−λt} , which we denote as Q^{δ(0),γ^t}_π ( s , a ) . The alternative Q-value ( right side of Equation 2 ) is when the agent acts under hazard rate λ = − ln γ but does not discount future rewards , which we denote as Q^{δ(−ln γ),1}_π ( s , a ) :

Q^{δ(0),γ^t}_π ( s , a ) = Q^{δ(−ln γ),1}_π ( s , a ) ∀ π , s , a , ( 2 )

where δ ( x ) denotes the Dirac delta distribution at x . This follows from P_λ ( s′ | s , a ) = e^{−λ} P ( s′ | s , a ) :

E_{π,P} [ ∑_{t=0}^∞ γ^t R ( s_t , a_t ) | s_0 = s , a_0 = a ] = E_{π,P} [ ∑_{t=0}^∞ e^{−λt} R ( s_t , a_t ) | s_0 = s , a_0 = a ] = E_{π,P_λ} [ ∑_{t=0}^∞ R ( s_t , a_t ) | s_0 = s , a_0 = a ]

We also show a similar equivalence between hyperbolic discounting and the specific hazard distribution p_k ( λ ) = ( 1/k ) exp ( −λ/k ) , where again λ ∈ [ 0 , ∞ ) , in Appendix E :

Q^{δ(0),Γ_k}_π ( s , a ) = Q^{p_k,1}_π ( s , a )

For notational brevity later in the paper , we will omit the explicit hazard distribution H-superscript if the environment is not hazardous . This formulation builds upon Sozou ( 1998 ) ’ s relation between hazard rates and discount functions , and shows that it holds for generalized Q-values in reinforcement learning .
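As a concrete check of Equation 2, the following minimal NumPy sketch (ours, not the authors' code) compares the exponentially discounted value of a fixed policy on a small random Markov chain against the undiscounted value under the sub-stochastic transitions P_λ with λ = −ln γ; the chain, rewards, and truncation horizon are invented for illustration.

```python
import numpy as np

# Minimal check of Equation 2 on a toy 3-state Markov chain under a fixed
# policy. The chain, rewards and horizon are illustrative assumptions.

rng = np.random.default_rng(0)
n, gamma, horizon = 3, 0.9, 300

P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)   # row-stochastic transitions under pi
r = rng.random(n)                   # expected reward per state

lam = -np.log(gamma)                # hazard rate lambda = -ln(gamma)
P_lam = np.exp(-lam) * P            # sub-stochastic hazard transitions P_lambda

V_discount = np.zeros(n)            # sum_t gamma^t (P^t r): discounted, no hazard
V_hazard = np.zeros(n)              # sum_t (P_lambda^t r): undiscounted, hazard
Pt, Plt = np.eye(n), np.eye(n)
for t in range(horizon):
    V_discount += gamma**t * (Pt @ r)
    V_hazard += Plt @ r
    Pt, Plt = Pt @ P, Plt @ P_lam

assert np.allclose(V_discount, V_hazard)  # the two value functions coincide
print(V_discount)
```

The two value vectors agree term by term, exactly as the derivation above shows, since each summand satisfies e^{−λt} P^t = P_λ^t.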
The paper investigates hyperbolic discounting as a more biologically plausible alternative to exponential discounting in reinforcement learning. First, it formulates a notion of hazard in MDPs, relates constant-hazard MDPs to exponential discounting, and shows that hyperbolic discounting is consistent with uncertainty over the hazard rate. The paper then shows how value functions learned with exponential discounting can be used to approximate value functions with other forms of discounting. Specifically, the paper shows in Section 4 how exponentially discounted value functions can be used to approximate hyperbolically discounted value functions. The paper then presents experiments on a small MDP and Atari 2600 games, showing that learning discounted action values with many different discount rates as an auxiliary task improves performance on most Atari games.
SP:87d54568606a9c1f752dae1420484a7b02a7ab1f
Hyperbolic Discounting and Learning Over Multiple Horizons
1 INTRODUCTION . The standard treatment of the reinforcement learning ( RL ) problem is the Markov Decision Process ( MDP ) , which includes a discount factor 0 ≤ γ ≤ 1 that exponentially reduces the present value of future rewards ( Bellman , 1957 ; Sutton & Barto , 1998 ) . A reward r_t received in t time steps is devalued to γ^t r_t , a discounted utility model introduced by Samuelson ( 1937 ) . This establishes a time-preference for rewards realized sooner rather than later . The decision to exponentially discount future rewards by γ leads to value functions that satisfy theoretical convergence properties ( Bertsekas , 1995 ) . The magnitude of γ also plays a role in stabilizing the learning dynamics of RL algorithms ( Prokhorov & Wunsch , 1997 ; Bertsekas & Tsitsiklis , 1996 ) and has recently been treated as a hyperparameter of the optimization ( OpenAI , 2018 ; Xu et al. , 2018 ) . However , both the magnitude and the functional form of this discounting function establish priors over the solutions learned . The magnitude of γ chosen establishes an effective horizon for the agent of 1 / ( 1 − γ ) , far beyond which rewards are neglected ( Kearns & Singh , 2002 ) . This effectively imposes a time-scale on the environment , which may not be accurate . Further , the exponential discounting of future rewards is consistent with a prior belief that there is a known constant per-time-step hazard rate ( Sozou , 1998 ) or probability of dying of 1 − γ ( Lattimore & Hutter , 2011 ) . Additionally , discounting future values exponentially and according to a single discount factor γ does not harmonize with the measured value preferences in humans¹ and animals ( Mazur , 1985 ; 1997 ; Ainslie , 1992 ; Green & Myerson , 2004 ; Maia , 2009 ) . A wealth of empirical evidence has been amassed that humans , monkeys , rats and pigeons instead discount future returns hyperbolically , where d_k ( t ) = 1 / ( 1 + kt ) for some k > 0 ( Ainslie , 1975 ; 1992 ; Mazur , 1985 ; 1997 ; Frederick et al. , 2002 ; Green et al. , 1981 ; Green & Myerson , 2004 ) . This discrepancy between the time-preferences of animals and the exponentially discounted measure of value might be presumed irrational . But Sozou ( 1998 ) showed that hyperbolic time-preferences are mathematically consistent with the agent maintaining some uncertainty over the prior belief of the hazard rate in the environment . The hazard rate h ( t ) measures the per-time-step risk the agent incurs as it acts in the environment due to a potential early death . Precisely , if s ( t ) is the probability that the agent is alive at time t , then the hazard rate is h ( t ) = − ( d/dt ) ln s ( t ) . We consider the case where there is a fixed , but potentially unknown , hazard rate h ( t ) = λ ≥ 0 . The prior belief of the hazard rate p ( λ ) implies a specific discount function ( Sozou , 1998 ) . Under this formalism , the canonical case in RL of discounting future rewards according to d ( t ) = γ^t is consistent with the belief that there exists a single hazard rate λ = − ln γ known with certainty . Further details are available in Appendix A . Common RL environments are also characterized by risk , but often in a narrower sense . ¹ Time-preference reversals are one implication . Consider two hypothetical choices : ( 1 ) a stranger offers $ 1M now or $ 1.1M tomorrow ; ( 2 ) a stranger instead offers $ 1M in 99 days versus $ 1.1M in 100 days .
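Sozou's result quoted above can be checked directly: averaging the exponential survival term e^{−λt} over the exponential prior p_k(λ) = (1/k)exp(−λ/k) yields exactly the hyperbolic discount d_k(t) = 1/(1 + kt). The sketch below (ours, with invented constants, not the paper's code) verifies this by Monte Carlo and also shows the finite weighted sum of exponential discounts that a multi-γ agent would aggregate.

```python
import numpy as np

# Check of Sozou's (1998) identity: E_{lambda ~ p_k}[exp(-lambda * t)]
# equals the hyperbolic discount 1 / (1 + k t). Constants are illustrative.

rng = np.random.default_rng(0)
k = 0.5
t = np.arange(0, 50)

lam = rng.exponential(scale=k, size=100_000)    # samples lambda ~ p_k
d_mc = np.exp(-np.outer(t, lam)).mean(axis=1)   # Monte Carlo over the prior
d_hyp = 1.0 / (1.0 + k * t)                     # hyperbolic discount d_k(t)
print(np.max(np.abs(d_mc - d_hyp)))             # small; sampling error only

# The same integral, discretized over a grid of hazard rates, becomes the
# weighted sum of exponential discounts gamma_i^t = exp(-lambda_i * t)
# that the multi-gamma scheme aggregates:
lam_grid = np.linspace(0.0, 8.0 * k, 128)
w = np.exp(-lam_grid / k)
w /= w.sum()                                    # normalized prior weights
d_sum = (w * np.exp(-np.outer(t, lam_grid))).sum(axis=1)
print(np.max(np.abs(d_sum - d_hyp)))            # small discretization error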
This paper argues that hyperbolic and other non-exponential discounting mechanisms describe the value preferences of humans and animals better than the exponential discounting widely used in the RL literature. The authors claim that hyperbolic discounting mechanisms are especially appropriate when maintaining uncertainty over the prior belief of the hazard rate in the environment, and propose an efficient approximation of the Q function with hyperbolic and other non-exponential discounting mechanisms as a weighted sum of Q-functions with the standard exponential discounting factor. The paper shows empirical evidence that the hyperbolic discounting function can more accurately estimate the value in a vanilla Pathworld environment, and also demonstrates that the approximated multi-horizon Q functions can improve performance on the ALE, which is largely attributed to learning over multiple horizons as an auxiliary task.
SP:87d54568606a9c1f752dae1420484a7b02a7ab1f
Information-Theoretic Local Minima Characterization and Regularization
1 INTRODUCTION . Recently , there has been a surge of interest in acquiring a theoretical understanding of the behavior of deep neural networks . Breakthroughs have been made in characterizing the optimization process , showing that learning algorithms such as stochastic gradient descent ( SGD ) tend to end up in one of the many local minima which have close-to-zero training loss ( Choromanska et al. , 2015 ; Dauphin et al. , 2014 ; Kawaguchi , 2016 ; Nguyen & Hein , 2018 ; Du et al. , 2018 ) . However , these numerically similar local minima typically exhibit very different behaviors in terms of generalizability . It is , therefore , natural to ask two closely related questions : ( a ) What kind of local minima can generalize better ? ( b ) How to find those better local minima ? To our knowledge , existing work has focused only on one of the two questions . For the “ what ” question , various definitions of “ flatness/sharpness ” have been introduced and analyzed ( Keskar et al. , 2017 ; Neyshabur et al. , 2018 ; 2017 ; Wu et al. , 2017 ; Liang et al. , 2017 ) . However , they suffer from one or more of the following problems : ( 1 ) being mostly theoretical with no or poor empirical evaluations on modern neural networks , ( 2 ) lack of theoretical analysis and understanding , ( 3 ) in practice not applicable to finding better local minima . Regarding the “ how ” question , existing approaches ( Hochreiter & Schmidhuber , 1997 ; Sokolić et al. , 2017 ; Chaudhari et al. , 2017 ; Hoffer et al. , 2017 ; Neyshabur et al. , 2015a ; Izmailov et al. , 2018 ) share some common drawbacks : ( 1 ) derived only from intuitions , with no specific metrics provided to characterize local minima , ( 2 ) no or weak analysis of such metrics , ( 3 ) not applicable , or yielding no consistent generalization improvement , for modern DNNs . In this paper , we tackle both the “ what ” and the “ how ” questions in a unified manner . Our answer provides both the theory and the applications for the problem of generalization across different local minima . Based on the determinant of the Fisher information estimated from the training set , we propose a metric that solves all the aforementioned issues . The metric captures well the properties that characterize local minima of different generalization ability . We provide its theoretical analysis , primarily a generalization bound based on PAC-Bayes ( McAllester , 1999b ; a ) . For modern DNNs in practice , it is necessary to provide a tractable approximation of our metric . We propose an intuitive and efficient approximation to compare it across different local minima . Our empirical evaluations fully illustrate the effectiveness of the metric as a strong indicator of local minima ’ s generalizability . Moreover , from the metric we further derive and design a practical regularization technique that guides the optimization process in finding better generalizable local minima . The experiments on image classification datasets demonstrate that our approach gives a consistent generalization boost for a range of DNN architectures . 2 RELATED WORK . It has been empirically shown that larger batch sizes lead to worse generalization ( Keskar et al. , 2017 ) . Hoffer et al . ( 2017 ) analyzed how the training dynamics are affected by different batch sizes and presented a perturbed batch normalization technique for better generalization . While it effectively improves generalization for large-batch training , a specific metric that indicates the generalizability is missing . Similarly , Elsayed et al .
( 2018 ) employed a structured margin loss to improve the performance of DNNs w.r.t . noise and adversarial attacks , yet no metric was proposed . Furthermore , this approach provided essentially no generalization gain in the normal training setup . The local entropy of the loss landscape was proposed to measure “ flatness ” in Chaudhari et al . ( 2017 ) , which also designed an entropy-guided SGD that achieves faster convergence in training DNNs . However , the method does not consistently improve generalization , e.g. , a decrease of performance on CIFAR-10 ( Krizhevsky & Hinton , 2009 ) . Another method that focused on modifying the optimization process is the Path-SGD proposed by Neyshabur et al . ( 2015a ) . Specifically , the authors derived an approximate steepest descent algorithm that utilizes path-wise norm regularization to achieve better generalization . The authors only evaluated it on a two-layer neural network , likely because the path norm is computationally expensive to optimize during training . A flat minimum search algorithm was proposed by Hochreiter & Schmidhuber ( 1997 ) based on the “ flatness ” of local minima , defined as the volume of local boxes . Yet since the boxes have their axes aligned to the axes of the model parameters , their volumes could be significant underestimations of “ flatness ” for over-parametrized networks , due to the specific spectral density of the Hessian of DNNs studied in Pennington & Worah ( 2018 ) ; Sagun et al . ( 2018 ) . The authors of Wu et al . ( 2017 ) also characterized “ flatness ” by volumes . They considered the inverse volume of the basin of attraction and proposed to use the Frobenius norm of the Hessian at the local minimum as a metric . In our experiments , we show that their metric does not accurately capture the generalization ability of local minima under different scenarios . Moreover , they did not derive a regularizer from their metric . Based on a “ robustness ” metric , Sokolić et al . ( 2017 ) derived a regularization technique that successfully improves generalization on multiple image classification datasets . Nevertheless , we show that their metric fails to capture the generalizability across different local minima . By using the Bayes factor , MacKay ( 1992 ) studied the generalization ability of different local minima obtained by varying the coefficient of L2 regularization , and derived a formula involving the determinant of the Hessian , similar to the one in ours . However , this approach applies only in restricted settings and , without an efficient approximation , its metric is not applicable to modern DNNs , let alone able to serve as a regularizer . A generalization bound is missing in MacKay ( 1992 ) as well . In a broader context of the “ what ” question , properties that capture the generalization of neural networks have been extensively studied . Various complexity measures for DNNs have been proposed based on norm , margin , Lipschitz constant , compression and robustness ( Bartlett & Mendelson , 2002 ; Neyshabur et al. , 2015b ; Sokolić et al. , 2017 ; Xu & Mannor , 2012 ; Bartlett et al. , 2017 ; Zhou et al. , 2019 ; Dziugaite & Roy , 2017 ; Arora et al. , 2018 ; Jiang et al. , 2019 ) . While some of them aimed to provide tight generalization bounds and some to provide better empirical results , none of the above approaches explored the “ how ” question at the same time . Very recently , Karakida et al .
( 2019 ) and Sun & Nielsen ( 2019 ) studied the Fisher information of neural networks through the lens of its spectral density . Specifically , Karakida et al . ( 2019 ) applied mean field theory to study the statistics of the spectrum and the appropriate size of the learning rate . Also taking an information-theoretic approach , Sun & Nielsen ( 2019 ) derived a novel formulation of the minimum description length in the context of deep learning by utilizing tools from singular semi-Riemannian geometry . 3 OUTLINE AND NOTATIONS . In a typical K-way classification setting , each sample x ∈ X belongs to a single class denoted c_x ∈ { 1 , ... , K } according to the probability vector y ∈ Y , where Y is the K-dimensional probability simplex , so that p ( c_x = i ) = y_i and ∑_i y_i = 1 . Denote a feed-forward DNN parametrized by w ∈ R^W as f_w : X → Y , which uses nonlinear activation functions and a softmax layer at the end . Denote the cross-entropy loss as ℓ ( f_w ( x ) , y ) = − ∑_i y_i ln f_w ( x )_i . Denote the training set as S , defined over X × Y with |S| = N . The training objective is given as L ( S , w ) = ( 1/N ) ∑_{ ( x , y ) ∈ S } ℓ ( f_w ( x ) , y ) . Assuming S is sampled from some true data distribution denoted D , we can define the expected loss L ( D , w ) = E_{ ( x , y ) ∼ D } [ ℓ ( f_w ( x ) , y ) ] . Throughout this paper , we refer to a local minimum of L ( S , w ) corresponding to a local minimizer w_0 simply as the local minimum w_0 . Given such a w_0 , our paper ’ s outline as well as our main achievements are : • In Section 4 we relate Fisher information to neural network training as a prerequisite . • In Section 5.1 we propose a metric γ ( w_0 ) that well captures local minima ’ s generalizability . • In Section 5.2 we provide a generalization bound related to γ ( w_0 ) . • In Section 5.3 we propose an approximation γ̂ ( w_0 ) for γ ( w_0 ) , which is shown to be very effective in Section 7.1 via extensive empirical evaluations . • In Section 6 we devise a practical regularizer from γ ( w_0 ) that consistently improves generalizability across different DNNs , as evaluated in Section 7.2 . 3.1 OTHER NOTATIONS . Denote ∇_w as the gradient , J_w [ · ] as the Jacobian matrix , ∇²_w as the Hessian , D_KL ( ·‖· ) as the KL divergence , ‖·‖_2 as the spectral or Euclidean norm , ‖·‖_F as the Frobenius norm , |·| as the determinant , tr ( · ) as the trace , ρ ( · ) as the spectral radius , ℓℓ_S ( w ) as the log-likelihood on S , and [ · ]_i for selecting the i-th entry . We define ℓ_x ( w ) ∈ R^K whose i-th entry is − ln f_w ( x )_i , so that ℓ ( f_w ( x ) , y ) = ℓ_x ( w )^T y . We define ȳ as argmax ( y ) and ỹ ∈ R^K as the one-hot vector whose ȳ-th entry is 1 and whose other entries are 0 . Then we define L̃ ( S , w ) ∈ R^N as the “ simplified ” loss vector of S whose entries are ℓ ( f_w ( x ) , ỹ ) for ( x , y ) ∈ S , i.e. , we approximate the cross-entropy loss ℓ ( f_w ( x ) , y ) by ℓ ( f_w ( x ) , ỹ ) . 4 LOCAL MINIMUM AND FISHER INFORMATION . First of all , if y is strictly one-hot , no local minimum with 100 % training accuracy will even exist , since the cross-entropy loss will always be positive . To admit good local minima in the first place , we assume the widely used label smoothing ( Szegedy et al. , 2016 ) is applied to train all models in our analysis . Label smoothing enables us to assume a local minimum w_0 ( in this case , also a global minimum ) of the training loss with ∑_{ ( x , y ) ∈ S } D_KL ( f_{w_0} ( x ) ‖ y ) = 0 . Each sample ( x , y ) ∈ S has its label c_x sampled by p ( c_x = i | x ) = y_i , denoted as ( x , c_x ) ∼ S .
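As a small illustration of this notation (our sketch with invented sizes, not the authors' code), the following shows the loss vector ℓ_x(w), the identity ℓ(f_w(x), y) = ℓ_x(w)^T y, and a smoothed label alongside its one-hot counterpart ỹ.

```python
import numpy as np

# Illustrative sketch of the notation above; K, eps and the logits are
# invented for the example.

rng = np.random.default_rng(0)
K, eps = 5, 0.1

logits = rng.normal(size=K)
f = np.exp(logits - logits.max())
f /= f.sum()                        # softmax output f_w(x)

y_tilde = np.eye(K)[2]              # one-hot vector, the "simplified" target
y = (1 - eps) * y_tilde + eps / K   # label smoothing (Szegedy et al., 2016)

ell_x = -np.log(f)                  # loss vector, i-th entry -ln f_w(x)_i
loss = ell_x @ y                    # cross entropy as the inner product ell_x^T y
loss_simplified = ell_x @ y_tilde   # entry of the simplified loss vector

print(loss, loss_simplified)
```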
The joint probability p ( x , c_x ) modeled by the DNN is p ( x , c_x = i ; w ) = p ( c_x = i | x ; w ) p ( x ) = [ f_w ( x ) ]_i p ( x ) with p ( x ) = 1/N . We can relate the training loss L ( S , w ) to the negative log-likelihood −ℓℓ_S ( w ) = − ∑_{ ( x , y ) ∈ S } E_{c_x ∼ y} ln p ( x , c_x ; w ) by :

L ( S , w ) = ( 1/N ) ∑_{ ( x , y ) ∈ S } ℓ_x ( w )^T y = − ( 1/N ) ∑_{ ( x , y ) ∈ S } E_{c_x ∼ y} ln p ( c_x | x ; w ) = − ( 1/N ) ℓℓ_S ( w ) + ln ( 1/N )

And so w_0 also corresponds to a local maximum of the likelihood function . The observed Fisher information evaluated at w_0 is defined as I ( w_0 ) = − ( 1/N ) ∇²_w ℓℓ_S ( w_0 ) . We can further derive :

I ( w_0 ) = ∇²_w L ( S , w_0 ) = E_{ ( x , c_x ) ∼ S } [ ∇_w ln p ( c_x | x ; w_0 ) ∇_w ln p ( c_x | x ; w_0 )^T ] ( 1 )

The first equality is straightforward ; the second has its proof in Appendix A . Since p ( c_x = i | x ) = y_i and ln p ( c_x = i | x ; w_0 ) = − [ ℓ_x ( w_0 ) ]_i , we can further simplify Equation 1 to :

I ( w_0 ) = ( 1/N ) ∑_{ ( x , y ) ∈ S } ∑_{i=1}^{K} ( ∇_w [ ℓ_x ( w_0 ) ]_i ) ( ∇_w [ ℓ_x ( w_0 ) ]_i )^T ( 2 )

Remark : when we assume global optimality , we have ∇_w ℓ ( f_{w_0} ( x ) , y ) = 0 since D_KL ( f_{w_0} ( x ) ‖ y ) = 0 ; yet this does not imply that I ( w_0 ) ∈ R^{W×W} in Equation 2 vanishes .
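To make Equations 1 and 2 concrete, here is a hedged NumPy sketch of the observed Fisher information for a tiny softmax-regression model. We implement the expectation in Equation 1 directly, weighting each class-conditional gradient by y_i; the model, data sizes, and damping constant are our illustrative choices, not the authors' implementation.

```python
import numpy as np

# Observed Fisher information (Equation 1) for softmax regression.
# All sizes and data below are invented for illustration.

rng = np.random.default_rng(0)
N, D, K = 100, 4, 3
X = rng.normal(size=(N, D))
Y = rng.dirichlet(np.ones(K), size=N)   # stand-ins for smoothed labels y
W = 0.1 * rng.normal(size=(K, D))       # stand-in local minimizer w0

F = np.zeros((K * D, K * D))            # Fisher over the flattened weights
for x, y in zip(X, Y):
    z = W @ x
    f = np.exp(z - z.max())
    f /= f.sum()                        # softmax probabilities f_w(x)
    for i in range(K):
        g = np.outer(np.eye(K)[i] - f, x).ravel()  # grad_W ln p(c_x = i | x; w)
        F += y[i] * np.outer(g, g) / N  # expectation over (x, c_x) ~ S

# The softmax Fisher is rank-deficient (logits are shift-invariant), so we
# add a small damping term before taking the log-determinant that the
# determinant-based metric builds on.
sign, logdet = np.linalg.slogdet(F + 1e-8 * np.eye(K * D))
print(sign, logdet)
```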
This paper contributes to the deep learning generalization theory, mainly from the theoretical perspective with experimental verifications. The key proposition is given by the unnumbered simple equation in the middle of page 4 (please number it), where \mathcal{I} is the Fisher information matrix. According to the authors, this simple metric, which is the log-determinant of the Fisher information matrix, can characterize the generalization of a DNN.
SP:39c5ad94a057196b513d4a96d3478ddf73add838
This paper provides a metric to characterize local minima of deep network loss landscapes based on the Fisher information matrix of the model parameterized by the deep network. The authors connect the Fisher information to the curvature of the loss landscape (the loss considered is the negative log-likelihood) and obtain generalization bounds through PAC-Bayes analysis. They further propose regularizing the training of deep networks with the local curvature of the loss. In the final experimental section of the paper, the relationship between the empirical measures and generalization is shown on a variety of networks.
SP:39c5ad94a057196b513d4a96d3478ddf73add838
Dynamic Model Pruning with Feedback
1 INTRODUCTION . Highly overparametrized deep neural networks show impressive results on machine learning tasks . However , with the increase in model size comes also the demand for memory and compute power at inference time , two resources that are scarcely available on low-end devices . Pruning techniques have been successfully applied to remove a significant fraction of the network weights while preserving the test accuracy attained by dense models . In some cases , the generalization of compressed networks has even been found to be better than that of full models ( Han et al. , 2015 ; 2017 ; Mocanu et al. , 2018 ) . The sparsity of a network is the number of weights that are identically zero , and can be obtained by applying a sparsity mask on the weights . There are several different approaches to find sparse models . For instance , one-shot pruning strategies find a suitable sparsity mask by inspecting the weights of a pretrained network ( Mozer & Smolensky , 1989 ; LeCun et al. , 1990 ; Han et al. , 2017 ) . While these algorithms achieve a substantial size reduction of the network with little degradation in accuracy , they are computationally expensive ( training and refinement on the dense model ) , and they are outperformed by algorithms that explore different sparsity masks instead of a single one . In dynamic pruning methods , the sparsity mask is readjusted during training according to different criteria ( Mostafa & Wang , 2019 ; Mocanu et al. , 2018 ) . However , these methods require fine-tuning of many hyperparameters . We propose a new pruning approach to obtain sparse neural networks with state-of-the-art test accuracy . Our compression scheme uses a new saliency criterion that identifies important weights in the network throughout training to propose candidate masks . As a key feature , our algorithm not only evolves the pruned sparse model alone , but jointly also a ( closely related ) dense model that is used in a natural way to correct for pruning errors during training . This results in better generalization properties on a wide variety of tasks . The simplicity of the scheme further allows us to study it from a theoretical point of view , and to provide further insights and interpretation . We do not require tuning of additional hyperparameters , and no retraining of the sparse model is needed ( though it can further improve performance ) . Contributions . • A novel dynamic pruning scheme that incorporates error feedback in a natural way ( Sec . 3 ) and finds a trained sparse model in one training pass ( Sec . 5 ) . • We demonstrate state-of-the-art performance ( in accuracy and sparsity ) , outperforming all previously proposed pruning schemes ( Sec . 5 ) . • We complement our results with an ablation study that provides further insights ( Sec . 6 ) and a convergence analysis for convex and non-convex objectives ( Sec . 4 ) . 2 RELATED WORK . Previous works on obtaining pruned networks can ( loosely ) be divided into three main categories . Pruning after training . Training approaches to obtain sparse networks usually include a three-stage pipeline , namely training of a dense model , one-shot pruning , and fine-tuning , e.g. , Han et al . ( 2015 ) . Their results ( i.e. , moderate sparsity levels with minor quality loss ) made them the standard method for network pruning and led to several variations ( Guo et al. , 2016 ; Carreira-Perpinán & Idelbayev , 2018 ) . Pruning during training .
Zhu & Gupta ( 2017 ) propose magnitude-based pruning that gradually increases the sparsity ratio while training the model from scratch . A pruning schedule determines when the new masks are computed ( extending and simplifying Narang et al . ( 2017 ) ) . He et al . ( 2018 ) ( SFP ) prune entire filters of the model at the end of each epoch , but allow the pruned filters to be updated when training the model . Deep Rewiring ( DeepR ) ( Bellec et al. , 2018 ) allows for even more adaptivity by performing pruning and regrowth decisions periodically . This approach is computationally expensive and challenging to apply to large networks and datasets . Sparse evolutionary training ( SET ) ( Mocanu et al. , 2018 ) simplifies prune–regrowth cycles by using heuristics for random growth at the end of each training epoch , and NeST ( Dai et al. , 2019 ) by inspecting gradient magnitudes . Dynamic Sparse Reparameterization ( DSR ) ( Mostafa & Wang , 2019 ) implements a prune–redistribute–regrowth cycle where target sparsity levels are redistributed among layers based on loss gradients ( in contrast to SET , which uses fixed , manually configured sparsity levels ) . Sparse Momentum ( SM ) ( Dettmers & Zettlemoyer , 2019 ) follows the same cycle but instead uses the mean momentum magnitude of each layer during the redistribute phase . SM outperforms DSR on ImageNet for unstructured pruning by a small margin but shows no performance difference in the CIFAR experiments . Our approach also falls in the dynamic category , but we use an error compensation mechanism instead of hand-crafted redistribute–regrowth cycles . Pruning before training . Recently , spurred by the lottery ticket hypothesis ( LT ) ( Frankle & Carbin , 2019 ) , methods which try to find a sparse mask that can be trained from scratch have attracted increased interest . For instance , Lee et al . ( 2019 ) propose SNIP to find a pruning mask by inspecting connection sensitivities and identifying structurally important connections in the network for a given task . Pruning is applied at initialization , and the sparsity mask remains fixed throughout training . Note that Frankle & Carbin ( 2019 ) ; Frankle et al . ( 2019 ) do not propose an efficient pruning scheme to find the mask ; instead they rely on iterative pruning , repeated for several full training passes . Further Approaches . Srinivas et al . ( 2017 ) ; Louizos et al . ( 2018 ) learn gating variables ( e.g. , through ℓ0 regularization ) that minimize the number of nonzero weights ; recent parallel work studies filter pruning for pre-trained models ( You et al. , 2019 ) . Gal et al . ( 2017 ) ; Neklyudov et al . ( 2017 ) ; Molchanov et al . ( 2017 ) take Bayesian perspectives , learning dropout probabilities during training so as to prune and sparsify networks as dropout weight probabilities reach 1 . Gale et al . ( 2019 ) extensively study recent unstructured pruning methods on large-scale learning tasks , and find that complex techniques ( Molchanov et al. , 2017 ; Louizos et al. , 2018 ) perform inconsistently . Simple magnitude pruning approaches achieve comparable or better results ( Zhu & Gupta , 2017 ) . 3 METHOD . We consider the training of a non-convex loss function f : R^d → R . We assume that , for a weight vector w ∈ R^d , we have access to a stochastic gradient g ( w ) ∈ R^d such that E [ g ( w ) ] = ∇f ( w ) . This corresponds to the standard machine learning setting , with g ( w ) representing a ( mini-batch ) gradient of one ( or several ) components of the loss function .
Stochastic Gradient Descent ( SGD ) computes a sequence of iterates by the update rule

w_{t+1} := w_t − γ_t g ( w_t ) , ( SGD )

for some learning rate γ_t . To obtain a sparse model , a general approach is to prune some of the weights of w_t , i.e. , to set them to zero . Such pruning can be implemented by applying a mask m ∈ { 0 , 1 }^d to the weights , resulting in a sparse model w̃_t := m ⊙ w_t , where ⊙ denotes the entry-wise ( Hadamard ) product . The mask could potentially depend on the weights w_t ( e.g. , smallest-magnitude pruning ) , or depend on t ( e.g. , the sparsity is incremented over time ) . Before we introduce our proposed dynamic pruning scheme , we formalize the three main existing types of pruning methodologies ( summarized in Figure 1 ) . These approaches differ in the way the mask is computed and the moment when it is applied.¹ Pruning before training . A mask m_0 ( depending on , e.g. , the initialization w_0 or the network architecture of f ) is applied and ( SGD ) is used for training on the resulting subnetwork f̃ ( w ) := f ( m_0 ⊙ w ) , with the advantage that only pruned weights need to be stored and updated² , and that by training with SGD a local minimum of the subnetwork f̃ ( but not of f , the original training target ) can be reached . In practice however , it remains a challenge to efficiently determine a good mask m_0 , and a wrongly chosen mask at the beginning strongly impacts the performance . ¹ The methods introduced in Section 2 typically follow one of these broad themes loosely , with slight variations in detail . For the sake of clarity we omit a too technical and detailed discussion here . ² When training on f̃ ( w ) , it suffices to access stochastic gradients of f̃ ( w ) , denoted by g̃ ( w ) , which can potentially be computed more cheaply than by naively applying the mask to g ( w ) ( note g̃ ( w ) = m_0 ⊙ g ( w ) ) . Pruning after training ( one-shot pruning ) . A dense model is trained , and pruning is applied to the trained model w_T . As the pruned model w̃_T = m_T ⊙ w_T is very likely not at a local optimum of f , fine-tuning ( retraining with the fixed mask m_T ) is necessary to improve performance . Pruning during training ( incremental and dynamic pruning ) . Dynamic schemes change the mask m_t every ( few ) iterations based on observations during training ( i.e. , by observing the weights and stochastic gradients ) . Incremental schemes monotonically increase the sparsity pattern ; fully dynamic schemes can also reactivate previously pruned weights . In contrast to previous dynamic schemes that relied on elaborate heuristics to adapt the mask m_t , we propose a simpler approach : Dynamic pruning with feedback ( DPF , Algorithm 1 ) . Our scheme evaluates a stochastic gradient at the pruned model w̃_t = m_t ⊙ w_t and applies it to the ( simultaneously maintained ) dense model w_t :

w_{t+1} := w_t − γ_t g ( m_t ⊙ w_t ) = w_t − γ_t g ( w̃_t ) . ( DPF )

Applying the gradient to the full model allows it to recover from “ errors ” , i.e. , prematurely masking out important weights : when the accumulated gradient updates from the following steps drastically change a specific weight , it can become activated again ( in contrast to incremental pruning approaches that have to stick to sub-optimal decisions ) . For illustration , observe that ( DPF ) can equivalently be written as w_{t+1} = w_t − γ_t g ( w_t + e_t ) , where e_t := w̃_t − w_t is the error produced by the compression . This provides a different intuition of the behavior of ( DPF ) and connects it with the concept of error feedback ( Stich et al.
, 2018 ; Karimireddy et al. , 2019 ) .³ We illustrate this principle in Figure 2 and give detailed pseudocode and further implementation details in Appendix A.1 . The DPF scheme can also be seen as an instance of a more general class of schemes that apply ( arbitrary ) perturbed gradient updates to the dense model . For instance , straight-through gradient estimators ( Bengio et al. , 2013 ) that are used to empirically simplify backpropagation can be seen as such perturbations . Our stronger assumptions on the structure of the perturbation allow us to derive non-asymptotic convergence rates in the next section , though our analysis could also be extended to the setting in Yin et al . ( 2019 ) if the perturbations can be bounded .
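To illustrate the (DPF) update outside of deep networks, here is a minimal NumPy sketch on a sparse least-squares problem; the magnitude-based mask, constant step size, and problem sizes are our illustrative choices, not the authors' Algorithm 1 verbatim.

```python
import numpy as np

# Minimal sketch of the (DPF) update on sparse least squares: a dense model
# w is maintained throughout, the gradient is evaluated at the pruned model
# m * w, and the update is applied to the dense w. Sizes are illustrative.

rng = np.random.default_rng(0)
n, d, s = 200, 20, 5                  # s = number of weights kept by the mask
A = rng.normal(size=(n, d))
w_true = np.zeros(d)
w_true[:s] = rng.normal(size=s)       # an s-sparse ground truth
b = A @ w_true + 0.01 * rng.normal(size=n)

def magnitude_mask(w, s):
    """Binary mask keeping the s largest-magnitude entries of w."""
    m = np.zeros_like(w)
    m[np.argsort(-np.abs(w))[:s]] = 1.0
    return m

w = rng.normal(size=d)                # dense model, never pruned itself
gamma = 0.01                          # learning rate gamma_t (kept constant)
for t in range(3000):
    m = magnitude_mask(w, s)          # recompute the mask as training evolves
    w_tilde = m * w                   # pruned model \tilde{w}_t = m * w_t
    idx = rng.integers(0, n, size=32)                 # mini-batch
    g = A[idx].T @ (A[idx] @ w_tilde - b[idx]) / 32   # stochastic gradient at w_tilde
    w -= gamma * g                    # (DPF): gradient applied to the dense w

print(np.linalg.norm(magnitude_mask(w, s) * w - w_true))  # small if support recovered
```

The dense iterate w is never pruned itself; only the gradient is evaluated at m ⊙ w, which is what lets prematurely pruned coordinates grow again and re-enter the mask.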
This work proposes a simple pruning method that dynamically sparsifies the network during training. This is achieved by performing magnitude-based pruning, for either individual weights or entire neurons, at fixed intervals. While similar methods have been explored before, this work proposes a slight twist; instead of updating the weights of the model by following the gradient of the parameters of the dense model, they update the parameters of the dense model according to the gradients of the sparse model. Essentially, this corresponds to a variant of the straight-through estimator [1], where in the forward pass we evaluate the compressed model, but in the backward pass we update the model as if the compression didn't take place. The authors argue that this process allows for "feedback" in the pruning mechanism, as the pruned weights still receive gradient updates and hence can be "re-activated" at later stages of training. They then provide a convergence analysis of the optimization procedure with such a gradient, and show that for strongly convex functions the method converges to the vicinity of the global optimum, whereas for non-convex functions it converges to the neighbourhood of a stationary point. Finally, the authors perform extensive experimental evaluation and show that their method is better than the baselines that they considered.
SP:cf0aed09560d12961f718e915b72a2c5403c4e4a
Dynamic Model Pruning with Feedback
1 INTRODUCTION . Highly overparametrized deep neural networks show impressive results on machine learning tasks . However , with the increase in model size comes also the demand for memory and computer power at inference stage—two resources that are scarcely available on low-end devices . Pruning techniques have been successfully applied to remove a significant fraction of the network weights while preserving test accuracy attained by dense models . In some cases , the generalization of compressed networks has even been found to be better than with full models ( Han et al. , 2015 ; 2017 ; Mocanu et al. , 2018 ) . The sparsity of a network is the number of weights that are identically zero , and can be obtained by applying a sparsity mask on the weights . There are several different approaches to find sparse models . For instance , one-shot pruning strategies find a suitable sparsity mask by inspecting the weights of a pretrained network ( Mozer & Smolensky , 1989 ; LeCun et al. , 1990 ; Han et al. , 2017 ) . While these algorithms achieve a substantial size reduction of the network with little degradation in accuracy , they are computationally expensive ( training and refinement on the dense model ) , and they are outperformed by algorithms that explore different sparsity masks instead of a single one . In dynamic pruning methods , the sparsity mask is readjusted during training according to different criteria ( Mostafa & Wang , 2019 ; Mocanu et al. , 2018 ) . However , these methods require fine-tuning of many hyperparameters . We propose a new pruning approach to obtain sparse neural networks with state-of-the-art test accuracy . Our compression scheme uses a new saliency criterion that identifies important weights in the network throughout training to propose candidate masks . As a key feature , our algorithm not only evolves the pruned sparse model alone , but jointly also a ( closely related ) dense model that is used in a natural way to correct for pruning errors during training . This results in better generalization properties on a wide variety of tasks , since the simplicity of the scheme allows us further to study it from a theoretical point of view , and to provide further insights and interpretation . We do not require tuning of additional hyperparameters , and no retraining of the sparse model is needed ( though can further improve performance ) . Contributions .. • A novel dynamic pruning scheme , that incorporates an error feedback in a natural way Sec . 3 and finds a trained sparse model in one training pass . Sec . 5 • We demonstrate state-of-the-art performance ( in accuracy and sparsity ) , Sec . 5 ourperforming all previously proposed pruning schemes . Sec . 5 • We complement our results by an ablation study that provides further insights . Sec . 6 and convergence analysis for convex and non-convex objectives . Sec . 4 2 RELATED WORK . Previous works on obtaining pruned networks can ( loosely ) be divided into three main categories . Pruning after training . Training approaches to obtain sparse networks usually include a three stage pipeline—training of a dense model , one-shot pruning and fine-tuning—e.g. , ( Han et al. , 2015 ) . Their results ( i.e. , moderate sparsity level with minor quality loss ) made them the standard method for network pruning and led to several variations ( Guo et al. , 2016 ; Carreira-Perpinán & Idelbayev , 2018 ) . Pruning during training . 
Zhu & Gupta ( 2017 ) propose the use of magnitude-based pruning and to gradually increase the sparsity ratio while training the model from scratch . A pruning schedule determines when the new masks are computed ( extending and simplifying ( Narang et al. , 2017 ) ) . He et al . ( 2018 ) ( SFP ) prune entire filters of the model at the end of each epoch , but allow the pruned filters to be updated when training the model . Deep Rewiring ( DeepR ) ( Bellec et al. , 2018 ) allows for even more adaptivity by performing pruning and regrowth decisions periodically . This approach is computationally expensive and challenging to apply to large networks and datasets . Sparse evolutionary training ( SET ) ( Mocanu et al. , 2018 ) simplifies prune–regrowth cycles by using heuristics for random growth at the end of each training epoch and NeST ( Dai et al. , 2019 ) by inspecting gradient magnitudes . Dynamic Sparse Reparameterization ( DSR ) ( Mostafa & Wang , 2019 ) implements a prune– redistribute–regrowth cycle where target sparsity levels are redistributed among layers , based on loss gradients ( in contrast to SET , which uses fixed , manually configured , sparsity levels ) . Sparse Momentum ( SM ) ( Dettmers & Zettlemoyer , 2019 ) follows the same cycle but instead using the mean momentum magnitude of each layer during the redistribute phase . SM outperforms DSR on ImageNet for unstructured pruning by a small margin but has no performance difference on CIFAR experiments . Our approach also falls in the dynamic category but we use error compensation mechanisms instead of hand crafted redistribute–regrowth cycles . Pruning before training . Recently—spurred by the lottery ticket hypothesis ( LT ) ( Frankle & Carbin , 2019 ) —methods which try to find a sparse mask that can be trained from scratch have attracted increased interest . For instance , Lee et al . ( 2019 ) propose SNIP to find a pruning mask by inspecting connection sensitivities and identifying structurally important connections in the network for a given task . Pruning is applied at initialization , and the sparsity mask remains fixed throughout training . Note that Frankle & Carbin ( 2019 ) ; Frankle et al . ( 2019 ) do not propose an efficient pruning scheme to find the mask , instead they rely on iterative pruning , repeated for several full training passes . Further Approaches . Srinivas et al . ( 2017 ) ; Louizos et al . ( 2018 ) learn gating variables ( e.g . through ` 0 regularization ) that minimize the number of nonzero weights , recent parallel work studies filter pruning for pre-trained models ( You et al. , 2019 ) . Gal et al . ( 2017 ) ; Neklyudov et al . ( 2017 ) ; Molchanov et al . ( 2017 ) prune from Bayesian perspectives to learn dropout probabilities during training to prune and sparsify networks as dropout weight probabilities reach 1 . Gale et al . ( 2019 ) extensively study recent unstructured pruning methods on large-scale learning tasks , and find that complex techniques ( Molchanov et al. , 2017 ; Louizos et al. , 2018 ) perform inconsistently . Simple magnitude pruning approaches achieve comparable or better results ( Zhu & Gupta , 2017 ) . 3 METHOD . We consider the training of a non-convex loss function f : Rd → R. We assume for a weight vector w ∈ Rd to have access to a stochastic gradient g ( w ) ∈ Rd such that E [ g ( w ) ] = ∇f ( w ) . This corresponds to the standard machine learning setting with g ( w ) representing a ( mini-batch ) gradient of one ( or several ) components of the loss function . 
Stochastic Gradient Descent (SGD) computes a sequence of iterates by the update rule

w_{t+1} := w_t − γ_t g(w_t), (SGD)

for some learning rate γ_t. To obtain a sparse model, a general approach is to prune some of the weights of w_t, i.e., to set them to zero. Such pruning can be implemented by applying a mask m ∈ {0, 1}^d to the weights, resulting in a sparse model w̃_t := m ⊙ w_t, where ⊙ denotes the entry-wise (Hadamard) product. The mask could potentially depend on the weights w_t (e.g., smallest-magnitude pruning), or depend on t (e.g., the sparsity is incremented over time). Before we introduce our proposed dynamic pruning scheme, we formalize the three main existing types of pruning methodologies (summarized in Figure 1). These approaches differ in the way the mask is computed, and the moment when it is applied.¹

Pruning before training. A mask m_0 (depending on, e.g., the initialization w_0 or the network architecture of f) is applied, and (SGD) is used for training on the resulting subnetwork f̃(w) := f(m_0 ⊙ w), with the advantage that only the weights of the pruned subnetwork need to be stored and updated², and that by training with SGD a local minimum of the subnetwork f̃ (but not of f—the original training target) can be reached. In practice, however, it remains a challenge to efficiently determine a good mask m_0, and a wrongly chosen mask at the beginning strongly impacts the performance.

¹The methods introduced in Section 2 typically follow one of these broad themes loosely, with slight variations in detail. For the sake of clarity we omit a too technical and detailed discussion here.
²When training on f̃(w), it suffices to access stochastic gradients of f̃(w), denoted by g̃(w), which can potentially be computed more cheaply than by naively applying the mask to g(w) (note g̃(w) = m_0 ⊙ g(w)).

Pruning after training (one-shot pruning). A dense model is trained, and pruning is applied to the trained model w_T. As the pruned model w̃_T = m_T ⊙ w_T is very likely not at a local optimum of f, fine-tuning (retraining with the fixed mask m_T) is necessary to improve performance.

Pruning during training (incremental and dynamic pruning). Dynamic schemes change the mask m_t every (few) iterations based on observations during training (i.e., by observing the weights and stochastic gradients). Incremental schemes monotonically increase the sparsity pattern; fully dynamic schemes can also reactivate previously pruned weights.

In contrast to previous dynamic schemes that relied on elaborate heuristics to adapt the mask m_t, we propose a simpler approach:

Dynamic pruning with feedback (DPF, Algorithm 1). Our scheme evaluates a stochastic gradient at the pruned model w̃_t = m_t ⊙ w_t and applies it to the (simultaneously maintained) dense model w_t:

w_{t+1} := w_t − γ_t g(m_t ⊙ w_t) = w_t − γ_t g(w̃_t). (DPF)

Applying the gradient to the full model allows it to recover from "errors", i.e., prematurely masking out important weights: when the accumulated gradient updates from the following steps drastically change a specific weight, it can become activated again (in contrast to incremental pruning approaches that have to stick to sub-optimal decisions). For illustration, observe that (DPF) can equivalently be written as

w_{t+1} = w_t − γ_t g(w_t + e_t),

where e_t := w̃_t − w_t is the error produced by the compression. This provides a different intuition of the behavior of (DPF), and connects it with the concept of error feedback (Stich et al.
, 2018; Karimireddy et al., 2019).³ We illustrate this principle in Figure 2 and give detailed pseudocode and further implementation details in Appendix A.1. The DPF scheme can also be seen as an instance of a more general class of schemes that apply (arbitrary) perturbed gradient updates to the dense model. For instance, straight-through gradient estimators (Bengio et al., 2013), which are used to empirically simplify backpropagation, can be seen as such perturbations. Our stronger assumptions on the structure of the perturbation allow us to derive non-asymptotic convergence rates in the next section, though our analysis could also be extended to the setting in (Yin et al., 2019) if the perturbations can be bounded.
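To make the (DPF) update concrete, here is a minimal sketch on the same toy least-squares setup as above. The magnitude-based mask, the 80% sparsity level, the mask-refresh period, and the learning rate are illustrative choices of ours, not the authors' released implementation (see their Algorithm 1 and Appendix A.1 for the actual procedure).

```python
import torch

def magnitude_mask(w: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Binary mask m zeroing out the `sparsity` fraction of smallest-magnitude weights."""
    k = int(sparsity * w.numel())
    if k == 0:
        return torch.ones_like(w)
    thresh = w.abs().flatten().kthvalue(k).values
    return (w.abs() > thresh).float()

torch.manual_seed(0)
X, y = torch.randn(256, 20), torch.randn(256)   # toy least-squares data
w = torch.randn(20)                              # dense model w_t, maintained throughout

def g(v: torch.Tensor) -> torch.Tensor:          # mini-batch stochastic gradient
    idx = torch.randint(0, 256, (32,))
    Xb, yb = X[idx], y[idx]
    return 2.0 * Xb.t() @ (Xb @ v - yb) / 32

mask = magnitude_mask(w, sparsity=0.8)
for t in range(500):
    if t % 10 == 0:                              # periodically recompute m_t:
        mask = magnitude_mask(w, sparsity=0.8)   # pruned weights may reactivate here
    w_tilde = mask * w                           # pruned model  w~_t = m_t ⊙ w_t
    w = w - 0.05 * g(w_tilde)                    # (DPF): gradient at w~_t, applied to dense w_t

w_sparse = magnitude_mask(w, 0.8) * w            # final sparse model
```

Note how the dense iterate w never stops receiving updates at pruned coordinates; this is exactly the error-feedback view, since the update can be rewritten as g(w + e) with e = w̃ − w.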
In this paper, the authors proposed a novel model compression method that uses error feedback to dynamically allocate sparsity patterns during training. The authors provided a systematic overview of a good number of existing model compression algorithms, organized by the relative order of the pruning and training processes. The effectiveness of the proposed algorithm is illustrated by comparing its generalization performance with 6 existing algorithms (and their variants) on two standard datasets and various networks of standard structures. The authors also showed the convergence rate and the fundamental limit of the proposed algorithm with two theorems.
SP:cf0aed09560d12961f718e915b72a2c5403c4e4a
Contextual Text Style Transfer
1 INTRODUCTION. Text style transfer has recently been applied to many applications with remarkable success (e.g., sentiment manipulation, formalized writing). Early work relied on parallel corpora with a sequence-to-sequence learning framework (Bahdanau et al., 2015; Jhamtani et al., 2017). However, collecting annotations for parallel data is highly time-consuming. There has been a recent surge of interest in developing text style transfer models using non-parallel data (Hu et al., 2017; Li et al., 2018; Prabhumoye et al., 2018; Subramanian et al., 2018), assuming that disentangling style information from semantic content can be achieved in an auto-encoding fashion with the introduction of additional regularizers (e.g., adversarial discriminators (Shen et al., 2017) or language models (Yang et al., 2018)). Despite promising results, these techniques are still a long way from practical use. Specifically, existing models mostly focus on sentence-level rewriting. However, in real-world applications, sentences typically reside in a proper context such as a paragraph. For example, in the formalized writing task, the rewritten span should align well with the surrounding context (e.g., personal email, scientific content) to keep a coherent text flow. Taking a single sentence as the sole input of a style transfer model may fail to preserve topical coherency of the generated sentence with its surrounding context, resulting in poor semantic and logical consistency on the paragraph level (see Example C in Table 4).

Motivated by this, we propose and investigate a new task: Contextual Text Style Transfer. Given a paragraph, the system aims to automatically edit sentences into a desired style, while keeping the edited section topically coherent with its surrounding context. To achieve this goal, we propose a novel Context-Aware Style Transfer (CAST) model, by jointly considering style transfer and context alignment. For parallel training data, CAST uses two separate encoders to encode the source sentence and its surrounding context, respectively, and a decoder to translate the encoded features into the target sentence.¹ A pre-trained coherence classifier is further applied to regularize the generated target sentence to be consistent with the context. To overcome the data sparsity issue, we further leverage non-parallel data in a hybrid approach. With a large-scale non-parallel corpus, the training of the sentence encoder and decoder is enhanced via additional self-reconstruction and back-translation objectives. A pre-trained style classifier is also used for style regularization. The final CAST model is jointly trained with both parallel and non-parallel data.

¹Source code and the newly collected datasets will be released upon acceptance.

As this is a newly proposed task, we also introduce two new datasets, Enron-Context and Reddit-Context, collected via crowdsourcing. The former contains 14,734 formal vs. informal paired samples from Enron (Klimt & Yang, 2004) (an email dataset), and the latter contains 23,158 offensive vs. non-offensive paired samples from Reddit (Serban et al., 2017). Each paired sample contains an original sentence and a human-rewritten sentence with the desired style, accompanied by its paragraph context.
Besides this, in order to enhance model training, we exploit an additional 28,375/29,774 formal/informal non-parallel sentences from GYAFC (Rao & Tetreault, 2018), and 53,028/53,714 offensive/non-offensive non-parallel sentences from Reddit (dos Santos et al., 2018).

The main contributions of this work are summarized as follows: (i) We propose a new task, Contextual Text Style Transfer, which aims to translate an input sentence into a desired style, while preserving its style-irrelevant semantics and topical consistency with the surrounding context. (ii) We introduce two new datasets for this task, Enron-Context and Reddit-Context, which provide reliable benchmarks for measuring contextual style transfer models. (iii) We present a new model, Context-Aware Style Transfer (CAST), which jointly optimizes the generation quality of the target sentence and its topical coherency with adjacent sentences. Extensive experiments on these two new datasets demonstrate that the proposed CAST model outperforms state-of-the-art baselines.

2 RELATED WORK. Text Style Transfer. Text style transfer aims to modify an input sentence into a desired style while preserving its style-independent semantics. Previous work has explored this as a sequence-to-sequence learning task using parallel corpora with paired source/target sentences in different styles. For example, Jhamtani et al. (2017) pre-trained word embeddings by leveraging external dictionaries mapping Shakespearean words to modern English words and additional text. However, available parallel data in different styles are very limited. Therefore, there has been a recent surge of interest in a more realistic setting, where only non-parallel stylized corpora are available. A typical approach is: (i) disentangling the latent space into content and style features; then (ii) generating stylistic sentences by tweaking the style-relevant features and passing them through a decoder, together with the original content-relevant features (Xu et al., 2018). Many of these approaches borrow the idea of an adversarial discriminator/classifier from the Generative Adversarial Network (GAN) framework (Goodfellow et al., 2014). For example, Shen et al. (2017); Fu et al. (2018); Lample et al. (2018) used adversarial classifiers to force the decoder to transfer the encoded source sentence into a different style/language. Alternatively, Li et al. (2018) achieved disentanglement by filtering the stylistic words of input sentences. Another direction for text style transfer without parallel data is back-translation (Prabhumoye et al., 2018) with a denoising auto-encoding objective (Logeswaran et al., 2018; Subramanian et al., 2018). Regarding tasks, sentiment transfer is one of the most widely studied problems. Informality-to-formality transfer (Rao & Tetreault, 2018) is another direction, aiming to change the style of a given sentence to more formal text. dos Santos et al. (2018) presented an approach to transferring offensive text to non-offensive text based on social network data. In Prabhumoye et al. (2018), the authors proposed the political slant transfer task. However, none of these previous studies directly considered context-aware text style transfer, which is the main focus of this work.

Context-aware Text Generation. Our work is related to context-aware text generation (Mikolov & Zweig, 2012; Tang et al.
, 2016), which can be applied to many NLP tasks (Mangrulkar et al., 2018). For example, previous work has investigated language modeling with context information (Wang & Cho, 2015; Wang et al., 2017), treating the preceding sentences as context. There are also studies on response generation for conversational systems (Sordoni et al., 2015b; Wen et al., 2015), where the dialogue history is treated as context. Zang & Wan (2017) introduced a neural model to generate long reviews from aspect-sentiment scores given the topics. Vinyals & Le (2015) proposed a model to predict the next sentence given the previous sentences in a dialogue session. Sordoni et al. (2015a) presented a hierarchical recurrent encoder-decoder model to encode dialogue context. Our work is the first to explore context information in the text style transfer task.

3 CONTEXTUAL TEXT STYLE TRANSFER. In this section, we first describe the problem definition and provide an overview of the model architecture in Section 3.1. Section 3.2 presents the proposed Context-Aware Style Transfer (CAST) model with parallel data, and Section 3.3 further introduces how to augment the CAST model with non-parallel data in a hybrid approach.

3.1 OVERVIEW. Problem Definition. The problem of contextual text style transfer is defined as follows. Given a style-labelled parallel dataset P = {(x_i, l_i), (y_i, l̃_i), c_i}_{i=1}^M, the i-th instance contains the original sentence x_i in style l_i, its corresponding rewritten sentence y_i in another style l̃_i, and the paragraph context c_i. x_i and y_i are expected to contain the same semantic content, but in different language styles (i.e., l_i ≠ l̃_i). The goal is to transform x_i in style l_i to y_i in style l̃_i, while keeping the sentence y_i semantically coherent with its context c_i. In practice, labelled parallel data may be difficult to garner. Therefore, we assume that additional non-parallel data U = {(x_i, l_i)}_{i=1}^N can be leveraged to enhance overall model training.

Training Objective. The overall architecture of the proposed CAST model is illustrated in Figure 1. The hybrid model training process consists of two paths, one for parallel data and the other for non-parallel data. In the parallel path, a Seq2Seq loss and a contextual coherence loss are defined, to learn the two encoders (sentence encoder and context encoder) and the sentence decoder with labeled parallel data. The non-parallel path is designed to further enhance the sentence encoder and decoder with three additional losses: (i) a self-reconstruction loss; (ii) a back-translation loss; and (iii) a style classification loss. The overall training objective, taking both parallel and non-parallel paths into consideration, can be written as:

L_{final}^{P,U} = L_{c-s2s}^P + λ_1 L_{cohere}^P + λ_2 L_{recon}^U + λ_3 L_{back-trans}^U + λ_4 L_{style}^U, (1)

where λ_1, λ_2, λ_3 and λ_4 are hyper-parameters to balance the different objectives. Each of these loss terms will be explained in the following sub-sections.

3.2 CAST WITH PARALLEL DATA. In this subsection, we discuss the training objective associated with parallel data, consisting of (i) a contextual Seq2Seq loss; and (ii) a contextual coherence loss.

Contextual Seq2Seq Loss. When parallel data are available, a Seq2Seq model can be directly learned for text style transfer. We denote the Seq2Seq model as (E, D), where the semantic representation of sentence x_i is extracted by the encoder E (i.e.
, E(x_i)), and the decoder D aims to learn a conditional distribution of y_i given the encoded feature E(x_i) and style l̃_i:

L_{s2s}^P = − E_{x_i, y_i ∼ P} log p_D(y_i | E(x_i), l̃_i). (2)

However, in such a sentence-to-sentence style transfer setting, the context of the paragraph is ignored, which, if well utilized, could help improve the quality of the generated text (such as paragraph-level topical coherence). Thus, to take advantage of the paragraph context c_i, we use two separate encoders E_s and E_c to encode the sentence and the context independently. The outputs of the two encoders are combined via a linear layer to obtain a context-aware sentence representation, which is used for generating the target sentence. The model is trained to minimize the following loss:

L_{c-s2s}^P = − E_{x_i, c_i, y_i ∼ P} log p_D(y_i | E_s(x_i), E_c(c_i), l̃_i). (3)

Compared with Eqn. (2), the use of E_c(c_i) makes the text style transfer process context-dependent. The generated sentence can be denoted as ỹ_i = D(E_s(x_i), E_c(c_i), l̃_i).

Contextual Coherence Loss. To enforce contextual coherence (i.e., to make the generated sentence ỹ_i align with the surrounding context c_i), we train a coherence classifier that aims to distinguish whether c_i is the context of y_i, by adopting a language model with an objective similar to next sentence prediction (Devlin et al., 2019). Specifically, assume that y_i is the t-th sentence of a paragraph p_i (i.e., y_i = p_i^{(t)}), and c_i = {p_i^{(0)}, ..., p_i^{(t−1)}, p_i^{(t+1)}, ..., p_i^{(T)}} is its surrounding context. We first reconstruct the paragraph p_i = {p_i^{(0)}, ..., p_i^{(T)}} by inserting y_i into the proper position in c_i, denoted as [c_i; y_i]. Based on this, we obtain a paragraph representation u_i via a language model encoder. Then, we apply a linear layer to the representation, followed by a tanh function and a softmax layer, to predict a binary label s_i, which indicates whether c_i is the context of y_i:

u_i = LM([c_i; f(y_i)]), (4)
p_LM(s_i | c_i, y_i) = softmax(tanh(W u_i + b)), (5)

where LM represents the language model encoder, and s_i = 1 indicates that c_i is the context of y_i. Note that since ỹ_i consists of discrete tokens, which are non-differentiable, we use the continuous features that generate ỹ_i, denoted as f(ỹ_i), as the input to the language model. We construct paired data {y_i, c_i, s_i}_{i=1}^N for training the classifier, where the negative samples are generated by replacing a sentence in a paragraph with another random sentence. After pre-training, the coherence classifier is used to obtain the contextual coherence loss:

L_{cohere}^P = − E_{x_i, c_i ∼ P} log p_LM(s_i = 1 | c_i, f(ỹ_i)). (6)

Intuitively, minimizing L_{cohere}^P encourages ỹ_i to blend better into its context c_i. Note that the coherence classifier is pre-trained and kept fixed during the training of the CAST model. The above coherence loss can be used to update the parameters of E_s, E_c and D during model training.
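As an illustration of Eqns. (4)–(6), the sketch below implements the coherence classifier and loss over continuous sentence features. The mean-pooling encoder, all dimensions, and all names are placeholder assumptions of ours; the paper uses a pre-trained language-model encoder, not this toy module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MeanPoolEncoder(nn.Module):
    """Toy stand-in for the pre-trained LM encoder in Eqn. (4)."""
    def __init__(self, emb_dim: int, hidden_dim: int):
        super().__init__()
        self.proj = nn.Linear(emb_dim, hidden_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (batch, seq, emb_dim)
        return self.proj(x.mean(dim=1))                    # paragraph representation u_i

class CoherenceClassifier(nn.Module):
    """Predicts s_i: does the (soft) sentence fit its surrounding context?"""
    def __init__(self, encoder: nn.Module, hidden_dim: int):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Linear(hidden_dim, 2)               # W u_i + b

    def forward(self, context_emb, sent_feats):
        inp = torch.cat([context_emb, sent_feats], dim=1)  # [c_i ; f(y_i)], Eqn. (4)
        u = self.encoder(inp)
        return torch.tanh(self.head(u))                    # scores before softmax, Eqn. (5)

def coherence_loss(clf, context_emb, gen_feats):
    """Eqn. (6): -log p_LM(s_i = 1 | c_i, f(y~_i)); cross_entropy adds the softmax."""
    scores = clf(context_emb, gen_feats)
    target = torch.ones(scores.size(0), dtype=torch.long, device=scores.device)
    return F.cross_entropy(scores, target)

# Toy usage: the classifier is pre-trained and then frozen; gradients still flow
# through it into the generator's continuous features f(y~_i).
clf = CoherenceClassifier(MeanPoolEncoder(emb_dim=32, hidden_dim=64), hidden_dim=64)
for p in clf.parameters():
    p.requires_grad_(False)
ctx = torch.randn(4, 10, 32)                               # embedded context c_i
gen = torch.randn(4, 3, 32, requires_grad=True)            # features f(y~_i) from the decoder
loss = coherence_loss(clf, ctx, gen)
loss.backward()                                            # updates would reach E_s, E_c, D via gen
```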
The authors propose the task of contextual text style transfer: transferring the style of one text into another (e.g., informal to formal, or offensive to non-offensive), when the text is present within some larger, provided context. The authors propose a model (CAST) which takes advantage of the additional context to perform the style transfer. CAST outperforms previous style transfer models according to several automatic metrics, as well as human evaluation.
SP:27e5d87807bde38fd23e80517608417aaaf724f3
The paper proposes a new task for text style transfer, based on the idea that the surrounding context of a sentence is important, whereas previous such tasks have only looked at sentences in isolation. Two new crowd-sourced datasets are created, and a combination of now fairly standard neural components is shown to outperform some strong baselines on the new datasets, on a variety of evaluation metrics. An ablation analysis of the components - including some auto-encoding auxiliary losses - shows that all the various parts are helpful to performance.
SP:27e5d87807bde38fd23e80517608417aaaf724f3