Modal Uncertainty Estimation via Discrete Latent Representations
1 INTRODUCTION.

Making predictions in the real world means facing various uncertainties. One of the arguably most common sources of uncertainty is partial or corrupted observation, which is often insufficient for making a unique, deterministic prediction. For example, when inspecting whether a single CT scan of a patient contains a lesion, radiologists may reach different conclusions without more information, as a result of the different hypotheses they hold about the image. In such an ambiguous scenario, the question is thus: given the observable, which one(s) out of the many possibilities would be more reasonable than others? Mathematically, this is a one-to-many mapping problem and can be formulated as follows. Suppose the observed information is x ∈ X in the input space; we are asked to estimate the conditional distribution p(y|x) for y ∈ Y in the prediction space, based on the training sample pairs (x, y). There are immediate challenges that prevent p(y|x) from being estimated directly in practical situations. First, both X and Y, e.g. as spaces of images, can be embedded in very high dimensional spaces with very complex structures. Second, only the unorganized pairs (x, y), not the one-to-many mappings x ↦ {y_i}_i, are explicitly available. Fortunately, recent advances in conditional generative models based on the Variational Auto-Encoder (VAE) framework of Kingma & Welling (2014) shed light on how to tackle our problem. By modelling through latent variables c = c(x), one aims to explain the underlying mechanism of how y is assigned to x. Hopefully, variation of c will result in variation in the output ŷ(x, c), which will approximate the true one-to-many mapping distributionally. Many current conditional generative models, including cVAE in Sohn et al. (2015), BicycleGAN in Zhu et al. (2017b), Probabilistic U-Net in Kohl et al. (2018), etc., are developed upon the VAE framework, with a Gaussian distribution with diagonal covariance as the de facto parametrization of the latent variables. However, in the following we show that such a parametrization creates a dilemma between model training and actual inference, as a form of what is known as the posterior collapse problem in the VAE literature Alemi et al. (2018); Razavi et al. (2018). This issue is particularly easy to understand in our setting, where we assume there are multiple y's for a given x. Recall that one key ingredient of the VAE framework is to minimize the KL-divergence between the latent prior distribution p(c|x) and the latent variational approximation p_φ(c|x, y) of the posterior, where φ denotes the parameters of the "recognition model" in the VAE. It does not matter whether the prior is fixed, p(c|x) = p(c) Kingma & Welling (2014), or learned, p(c|x) = p_θ(c|x) Sohn et al. (2015), as long as both prior and variational posterior are parameterized by Gaussians. Now suppose that for a particular x there are two modes y1, y2 for the corresponding predictions. Since the minimization is performed over the entire training set, p(c|x) is forced to approximate a posterior mixture p(c|x, y(·)) of two Gaussians from modes y1 and y2.
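The following minimal NumPy illustration (ours, not from the paper, with arbitrarily chosen values) makes the consequence concrete: even the best single-Gaussian approximation to such a two-mode posterior mixture keeps the KL divergence bounded well away from zero.

```python
import numpy as np

# Two well-separated posterior modes q(c|x,y1), q(c|x,y2), mixed 50/50.
grid = np.linspace(-10.0, 10.0, 4001)
dx = grid[1] - grid[0]

def gauss(x, mu, sig):
    return np.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))

q_mix = 0.5 * gauss(grid, -3.0, 1.0) + 0.5 * gauss(grid, 3.0, 1.0)

# The Gaussian p minimizing KL(q_mix || p) matches the mixture's moments.
mu = np.sum(grid * q_mix) * dx
var = np.sum((grid - mu) ** 2 * q_mix) * dx
p = gauss(grid, mu, np.sqrt(var))

kl = np.sum(q_mix * np.log(q_mix / p)) * dx
print(f"KL(q_mix || best Gaussian) = {kl:.3f} nats")  # ~0.46: bounded away from 0
```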
When the minimization is successful, meaning the KL divergence is small, the mixture of the variational posteriors must be close to a Gaussian, i.e. the posterior collapses as in Fig. 1(b), and hence the multi-modal information is lost. Put contrapositively, if multi-modal information is to be conveyed by the variational posterior, then the minimization will not be successful, meaning a higher KL divergence. This may partly explain why training a conditional VAE can be a delicate matter. The situation is schematically illustrated in Figure 1 in one dimension. Note that the case in Figure 1(a) is usually preferable; however, the density values of the prior used during testing cannot reflect the uncertainty level of the outputs. We demonstrate this quantitatively in Section 4 and Fig. 2. One direction for solving the above problem is to modify the strength of the KL-divergence or the variational lower bound while keeping the Gaussian parametrization; this has been explored extensively in the literature, as in Higgins et al. (2017); Alemi et al. (2018); Rezende & Viola (2018). However, besides requiring extensive parameter tuning, these approaches are not tailored to the multi-modal posterior collapse problem described above, and thus do not solve the inaccurate uncertainty estimation problem. Mixtures or compositions of Gaussian priors have also been proposed in Nalisnick et al. (2016); Tomczak & Welling (2018), but the number of Gaussians in the mixture is usually fixed a priori. Making such a model conditional further complicates the matter, since the number of mixture components should depend on the input. We therefore take another direction, which is to use a latent distribution parameterization other than Gaussians, one that can naturally exhibit multiple modes. The simplest choice is to constrain the latent space to be a finite set, as proposed in van den Oord et al. (2017), so that we can learn the conditional distribution as a categorical distribution. We argue that the discrete latent space approach is particularly beneficial in our setting. First, different from unconditional or weakly conditional generative modelling tasks where diversity is the main consideration, making accurate predictions based on partial information often leads to a significantly restricted output space. Second, there is no longer noise injection during training, so the decoder can utilize the information from the latent variable more effectively. This makes it less prone to ignoring the latent variable completely, in contrast to many conditional generation methods that use noise inputs. Third, the density values learned on the latent space are more interpretable, since the learned prior can approximate the variational posterior better. In our case, the latent variables can now represent latent mode hypotheses for making the corresponding most likely predictions. We call our approach modal uncertainty estimation (MUE). The main contributions of this work are: (1) We solve the MUE problem using a c-VAE and justify the use of a discrete latent space from the perspective of the multi-modal posterior collapse problem. (2) Our uncertainty estimation improves significantly over the existing state of the art.
(3) In contrast to models using noise inputs, which require sampling at the testing stage, our model can directly produce results ordered by their latent mode hypothesis probabilities, and is thus more informative and convenient for practical use.

The rest of the paper is organized as follows. In Section 2 we survey works related to ours and stress the key differences from them. In Section 3 we lay out our general framework and model details. We conduct a series of experiments on both synthetic and real datasets, described in Section 4. The paper is concluded in Section 5.

2 RELATED WORK.

Conditional generative models aim to capture the conditional distribution of the data and generate samples according to some given information. Thanks to recent advances in deep learning techniques, especially generative adversarial networks (GANs) Goodfellow et al. (2014) and variational auto-encoders (VAEs) Kingma & Welling (2014), conditional generative models have been effectively applied to various computer vision and graphics tasks such as image synthesis, style transfer, image in-painting, etc. Early works in this direction focused on learning uni-modal mappings, as in Isola et al. (2017) and Zhu et al. (2017a). They are called uni-modal because the mapping is between fixed categories, namely a one-to-one mapping. There are no latent codes to sample from, so the generation is deterministic. In these works, images of a specific category are translated to another category while keeping the desired semantic content. These methods achieve the goal through a meta-supervision technique known as the adversarial loss, as in the GAN framework, where one only needs to supply weak supervision for whether the generated image belongs to a certain category or not. The adversarial loss is known to produce a sharp visual look, but it alone cannot guarantee faithful distribution approximation; issues known as mode collapse and mode dropping often occur for complicated data distributions Srivastava et al. (2017). In Isola et al. (2017) it is noted that additional noise input to the conditional model in fact fails to increase variability in the output. How to ensure good approximation of the output distribution for GANs is still an active area of research. Therefore, the above frameworks might not be suitable for approximating the distribution of one-to-many mappings. Many works have extended to the setting of one-to-many mappings by learning disentangled representations of, e.g., "content" and "style", and consequently some form of auto-encoding has to be used. Conditional generation can then be accomplished by corresponding latent code sampling and decoding. This includes the approaches of Zhu et al. (2017b); Huang et al. (2018) for multi-modal image-to-image translation, Zheng et al. (2019) for image in-painting, and many others. Since the main objectives of these works are the visual quality and diversity of the outputs, they are usually not evaluated in terms of the approximation quality of the output distribution. One notable exception is the Probabilistic U-Net proposed in Kohl et al. (2018), which is based on the conditional VAE framework Sohn et al. (2015) and is close in spirit to ours. The Probabilistic U-Net has shown superior performance over various other methods for calibrated uncertainty estimation, including the ensemble methods of Lakshminarayanan et al.
(2017), the multi-heads of Rupprecht et al. (2017); Ilg et al. (2018), the drop-out of Kendall et al. (2015), and the Image2Image VAE of Zhu et al. (2017b). However, as discussed in Section 1, the Probabilistic U-Net cannot solve the multi-modal posterior collapse problem since it uses a Gaussian latent parameterization. Therefore, when the conditional distribution varies across input data, its performance is expected to degrade. Furthermore, the learned latent prior density has no interpretation, and thus the model cannot rank its predictions. To perform uncertainty estimation with the Probabilistic U-Net, one must perform extensive sampling and clustering. Our framework improves significantly upon the Probabilistic U-Net by introducing a discrete latent space. With this latent parameterization we can directly output the uncertainty estimation, and we can rank our predictions easily. The discrete latent space was proposed in the vq-VAE framework of van den Oord et al. (2017). Such a latent space removes the noise sampling, which enables the latent variable to be utilized more effectively by the decoder and produces outputs with better visual quality. Our use of a discrete latent space, in contrast, is motivated by the multi-modal posterior collapse problem. The major technical difference compared to our framework is that an image in the vq-VAE framework is encoded by a collection of codes arranged in spatial order. As such, the joint distribution of the codes cannot be obtained directly, and has to be estimated or sampled using, e.g., an auto-regressive model over the spatial dimension, such as PixelCNN Van den Oord et al. (2016). In contrast, we learn disentangled representations, and only the information necessary to produce different outputs goes into the discrete latent space. In particular, we model each mode of y given x by a single latent code, so our model enjoys much simpler sampling. Besides vq-VAE van den Oord et al. (2017), the use of discrete latent variables in neural networks has been explored in various previous works, including the early works of Mnih & Gregor (2014) and Mnih & Rezende (2016) that use single- or multiple-sample objectives with variance reduction techniques to help training. Others have explored continuous approximations to discrete distributions, known as the Concrete Maddison et al. (2016) or Gumbel-Softmax Jang et al. (2016) distributions. As noted in van den Oord et al. (2017), these approaches have in general fallen short of their continuous counterparts. Worth mentioning is a recently proposed neural dialogue generation method Zhao et al. (2018) that uses the Gumbel-Softmax approximation and treats dialogue generation as a one-to-many mapping problem. Our method diverges from theirs in its assumptions about the model. In Zhao et al. (2018), the learned discrete representation of an utterance is designed to be "context-free". This is in contrast to our assumption that the latent hypothesis for an input should depend on the input itself. Taking medical image segmentation as an example, if we encode the hypotheses from the segmentation alone as in Zhao et al. (2018), there will likely be either two modes (benign vs. malignant) or a huge number of modes if the shape of the segmentation is taken into account.
Moreover, it will not contain any information about what kind of actual biological tissue they might be, which can, on the other hand, be judged from the actual scan image. In our case, we have deliberately separated the recognition task learning, e.g. segmenting the image, from the hypothesis learning, so that together they can approximate the variation of the outputs given the input. Finally, we briefly summarize the differences between MUE and existing uncertainty estimation methodologies in deep learning. Many existing works Gal & Ghahramani (2016); Gal (2016); Kendall et al. (2015); Kendall & Gal (2017) focus on model uncertainty, aiming to capture the calibrated level of confidence of the model prediction by using stochastic regularization techniques. Such uncertainty is of major interest for model predictions on unseen data and long-tail rare cases, or when the model is trained on limited data. Ours, by contrast, concerns learning from conflicting or ambiguous training data, and estimating the calibrated uncertainty of the input-output relationship in the dataset. Interestingly, Kohl et al. (2018) experimented with Dropout as a comparison to the c-VAE framework in the MUE setting, but found that it achieved only inferior performance. In general, since MUE is independent of model uncertainty, our framework can be used jointly with existing techniques for prediction confidence estimation.
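To make the test-time behaviour concrete, here is a minimal PyTorch sketch of a conditional model with a discrete latent space (module names, sizes, and the network split are our illustrative assumptions, not the paper's exact architecture): a learned categorical prior p(c|x) scores a finite codebook of mode hypotheses, and predictions are decoded from the top-ranked codes together with their probabilities, so no sampling is needed at the testing stage.

```python
import torch
import torch.nn as nn

class DiscreteLatentCVAE(nn.Module):
    """Sketch: conditional model with a K-way discrete latent code; the
    codebook holds one embedding per latent mode hypothesis."""
    def __init__(self, x_dim=64, K=32, code_dim=16, y_dim=64):
        super().__init__()
        self.prior_net = nn.Sequential(nn.Linear(x_dim, 128), nn.ReLU(),
                                       nn.Linear(128, K))   # logits for p(c|x)
        self.codebook = nn.Embedding(K, code_dim)            # discrete latent space
        self.decoder = nn.Sequential(nn.Linear(x_dim + code_dim, 128), nn.ReLU(),
                                     nn.Linear(128, y_dim))

    @torch.no_grad()
    def predict_topk(self, x, k=3):
        probs = self.prior_net(x).softmax(-1)                # categorical p(c|x)
        top_p, top_c = probs.topk(k, dim=-1)                 # modes ranked by probability
        outs = [self.decoder(torch.cat([x, self.codebook(top_c[:, i])], -1))
                for i in range(k)]
        return outs, top_p                                   # predictions + confidences

model = DiscreteLatentCVAE()
preds, confs = model.predict_topk(torch.randn(1, 64), k=3)
```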
This paper introduces a novel conditional generative model for high dimensional data with multimodal output distributions. The proposed method, called modal uncertainty estimation (MUE), is a conditional VAE but with discrete latent representations. This discrete latent space allows the model to better handle multimodal outputs and provide confidence scores for the different modes predicted by the model. These capabilities are applied to the task of segmenting lesions in medical scans.
SP:dd1ac7776d55534c5458d43d1fe39af30386343d
Practical Evaluation of Out-of-Distribution Detection Methods for Image Classification
We reconsider the evaluation of OOD detection methods for image recognition. Although many studies have been conducted to build better OOD detection methods, most of them follow Hendrycks and Gimpel's work for the method of experimental evaluation. While a unified evaluation method is necessary for a fair comparison, there is a question of whether its choice of tasks and datasets reflects real-world applications, and whether the evaluation results generalize to other OOD detection application scenarios. In this paper, we experimentally evaluate the performance of representative OOD detection methods for three scenarios, i.e., irrelevant input detection, novel class detection, and domain shift detection, on various datasets and classification tasks. The results show that differences in scenarios and datasets alter the relative performance among the methods. Our results can also serve as a guide for practitioners in the selection of OOD detection methods.

1 INTRODUCTION.

Despite their high performance on various visual recognition tasks, convolutional neural networks (CNNs) often show unpredictable behaviors on out-of-distribution (OOD) inputs, i.e., those sampled from a distribution different from the training data. For instance, CNNs often classify irrelevant images into one of the known classes with high confidence. A visual recognition system should desirably be equipped with the ability to detect such OOD inputs upon real-world deployment. There are many studies of OOD detection based on diverse motivations and purposes. However, as far as the recent studies targeting visual recognition are concerned, most follow the work of Hendrycks & Gimpel (2017), which provides a formal problem statement of OOD detection and an experimental procedure for evaluating the performance of methods. Employing this procedure, recent studies focus mainly on increasing detection accuracy, where performance is measured using the same datasets. On the one hand, the adoption of this experimental procedure has arguably brought about rapid research progress in a short period. On the other hand, little attention has been paid to how well the employed procedure models real-world problems and applications. These are diverse in purpose and domain, which obviously cannot be covered by a single problem setting with a narrow range of datasets. In this study, to address this issue, we consider multiple, more realistic scenarios for the application of OOD detection, and then experimentally compare the representative methods. To be specific, we consider three scenarios: detection of irrelevant inputs, detection of novel class inputs, and detection of domain shift. The first two scenarios differ in the closeness between ID samples and OOD samples. Unlike the first two, domain shift detection is not precisely OOD detection. Nonetheless, it is the same as the other two in that what we want is to judge whether the model can make a meaningful inference for a novel input. In other words, we can generalize OOD detection to the problem of making this judgment. The three scenarios then naturally fall into the same group of problems, and it becomes natural to consider applying OOD detection methods to the third scenario. It is noteworthy that domain shift detection has been poorly studied in the community.
Despite many demands from practitioners, there is no established method in the context of deep learning for image classification. Based on the above generalization of OOD detection, we propose a meta-approach in which any OOD detection method can be used as its component. For each of these three scenarios, we compare the following methods: the confidence-based baseline (Hendrycks & Gimpel, 2017), MC dropout (Gal & Ghahramani, 2016), ODIN (Liang et al., 2017), cosine similarity (Techapanurak et al., 2019; Hsu et al., 2020), and the Mahalanobis detector (Lee et al., 2018). Domain shift detection is studied in (Elsahar & Gallé, 2019) for natural language processing tasks, where proxy-A distance (PAD) is reported to perform the best; we therefore test it in our experiments. In choosing the compared methods, we follow the argument shared by many recent studies (Shafaei et al., 2019; Techapanurak et al., 2019; Yu & Aizawa, 2019; Yu et al., 2020; Hsu et al., 2020) that OOD detection methods should not assume the availability of explicit OOD samples at training time. Although this may sound obvious given the nature of OOD, some recent methods (e.g., Liang et al. (2017); Lee et al. (2018)) use a certain amount of OOD samples as validation data to determine their hyperparameters. Recent studies (Shafaei et al., 2019; Techapanurak et al., 2019) show that these methods perform poorly when encountering OOD inputs sampled from a distribution different from the assumed one at test time. Thus, for ODIN and the Mahalanobis detector, we employ their variants (Hsu et al., 2020; Lee et al., 2018) that can work without OOD samples. The other compared methods do not need OOD samples. The contributions of this study are summarized as follows. i) Listing three problems that practitioners frequently encounter, we evaluate the existing OOD detection methods on each of them. ii) We show a practical approach to domain shift detection that is applicable to CNNs for image classification. iii) We show an experimental evaluation of representative OOD detection methods on these problems, revealing each method's effectiveness and ineffectiveness in each scenario.

2 PROBLEMS AND METHODS.

2.1 PRACTICAL SCENARIOS OF OOD DETECTION.

We consider image recognition tasks in which a CNN classifies a single image x into one of C known classes. The CNN is trained using pairs of x and its label, where x is sampled according to x ∼ p(x). At test time, it will encounter an unseen input x, which is usually from p(x) but is sometimes from p′(x), a different, unknown distribution. In this study, we consider the following three scenarios.

Detecting Irrelevant Inputs: The new input x does not belong to any of the known classes and is out of concern. Suppose we want to build a smartphone app that recognizes dog breeds. We train a CNN on a dataset containing various dog images, enabling it to perform the task with reasonable accuracy. We then point the smartphone at a sofa and shoot an image, feeding it to our classifier. It could classify the image as a Bull Terrier with high confidence. Naturally, we want to avoid this by detecting the irrelevance of x. Most studies of OOD detection assume this scenario for evaluation.

Detecting Novel Classes: The input x belongs to a novel class, which differs from any of the C known classes, and furthermore, we want our CNN to learn to classify it later, e.g.
, after additional training. For instance, suppose we are building a system that recognizes insects in the wild, with an ambition to make it cover all the insects on earth. Further, suppose an image of one of the endangered (and thus rare) insects is input to the system while it is operating. If we can detect it as a novel class, we would be able to update the system in several ways. The problem is the same as in the first scenario in that we want to detect whether x ∼ p(x) or not. The difference is that x is more similar to samples of the learned classes, or equivalently, p′(x) is closer to p(x), arguably making the detection more difficult. Note that in this study, we do not consider distinguishing whether x is an irrelevant input or a novel class input, for the sake of simplicity. We leave it for future study.

Detecting Domain Shift: The input x belongs to one of the C known classes, but its underlying distribution is p′(x), not p(x). We are especially interested in the case where a distributional shift p(x) → p′(x) occurs either suddenly or gradually while running a system over the long term. Our CNN may or may not generalize beyond this shift to p′(x). Thus, we want to detect when it does not. If we can do this, we can take actions such as re-training the network with new training data (Elsahar & Gallé, 2019). We consider the case where no information is available other than the incoming inputs x. A good example is a surveillance system using a camera deployed outdoors. Let us assume the image quality deteriorates some time after deployment, for instance due to the camera's aging. Then, the latest images will follow a distribution different from that of the training data. Unlike the above two cases, where we have to decide for a single input, we can use multiple inputs; we should, especially when the quality of input images deteriorates gradually over time. This problem has three differences from the above two scenarios. First, the input is a valid sample belonging to a known class, neither an irrelevant sample nor a novel class sample. Second, we are basically interested in the accuracy of our CNN on the latest input(s), not in whether x ∼ p(x) or p′(x). Third, as mentioned above, we can use multiple inputs {x_i}_{i=1,...,n} for the judgment. Some additional remarks on this scenario: assuming a temporal sequence of inputs, the distributional shift is also called concept drift (Gama et al., 2014). It includes several different subproblems, and the one considered here is called virtual concept drift in that terminology. Mathematically, concept drift occurs when p(x, y) changes with time. It is called virtual when p(x) changes while p(y|x) does not. Intuitively, this is the case where the classes (i.e., the concept) remain the same but p(x) changes, requiring the classifier to deal with inputs drawn from p′(x). Then, we are usually interested in predicting whether x lies in a region of the data space for which our classifier is well trained and can correctly classify it. If not, we might want to retrain our classifier using additional data or invoke unsupervised domain adaptation methods (Ganin & Lempitsky, 2015; Tzeng et al., 2017).

2.2 COMPARED METHODS.

We select five representative OOD detection methods that do not use real OOD samples of the kind to be encountered at test time.
Baseline: Max-softmax. Hendrycks & Gimpel (2017) showed that the maximum of the softmax outputs, or confidence, can be used to detect OOD inputs. We use it as the score of an input being in-distribution (ID). We refer to this method as Baseline. It is well known that the confidence can be calibrated using a temperature to better represent classification accuracy (Guo et al., 2017; Li & Hoiem, 2020). We also evaluate this calibrated confidence, referred to as Calib.

MC Dropout. The confidence (i.e., the max-softmax) can also be thought of as a measure of prediction uncertainty, but it captures only aleatoric uncertainty (Hüllermeier & Waegeman, 2019). Bayesian neural networks (BNNs) can additionally take epistemic uncertainty into account, which is theoretically more relevant to OOD detection. MC (Monte-Carlo) dropout (Gal & Ghahramani, 2016) is an approximation of BNNs that is computationally more efficient than an ensemble of networks (Lakshminarayanan et al., 2017). To be specific, using dropout (Srivastava et al., 2014) at test time provides multiple prediction samples, from which the average of the max-softmax values is calculated and used as the ID score.

Cosine Similarity. It was recently shown in Techapanurak et al. (2019); Hsu et al. (2020) that using scaled cosine similarities at the last layer of a CNN, similar to the angular softmax in metric learning, enables accurate OOD detection. To be specific, the method first computes cosine similarities between the feature vector of the final layer and the class centers (or equivalently, the normalized weight vectors for the classes). They are multiplied by a scale and then normalized by softmax to obtain class scores. The scale, which is the inverse temperature, is predicted from the same feature vector. These computations are performed by a single layer replacing the last layer of a standard CNN. The maximum of the cosine similarities (without the scale) gives the ID score. The method is free of hyperparameters for OOD detection. We refer to it as Cosine.

ODIN (with an OOD-sample-free Extension). ODIN was proposed by Liang et al. (2017) to improve Baseline by perturbing an input, x → x + ε · sgn(δx), in the direction δx that maximally increases the max-softmax, and also by temperature scaling. Thus, there are two hyperparameters, the perturbation size ε and the temperature T. In Liang et al. (2017), they are chosen by assuming the availability of explicit OOD samples. Recently, Hsu et al. (2020) proposed to select ε ← argmax_ε Σ y_κ(x + ε · sgn(δx)), where y_κ is the max-softmax and the summation is taken over the ID samples in the validation set. As for the temperature, they set T = 1000. The ID score is given by y_κ(x + ε · sgn(δx)). To distinguish this from the original ODIN, we refer to it as ODIN∗.

Mahalanobis Detector. The above three methods are based on the confidence. Another approach is to formulate the problem as unsupervised anomaly detection. Lee et al. (2018) proposed to model the distribution of an intermediate layer's activations by a Gaussian distribution for each class, with a covariance matrix shared among the classes. Given an input, the Mahalanobis distance with respect to the predicted class is calculated at each layer. A score for OOD is given by the weighted sum of the distances calculated at the different layers. The weights are predicted by logistic regression, which is determined by assuming the availability of OOD samples.
To be free of this assumption, another method is suggested that generates adversarial examples from ID samples and regards them as OOD samples. It is also reported in (Hsu et al., 2020) that setting all the weights to one works reasonably well. We evaluate the latter two methods, which do not need OOD samples. Although the original method optionally uses input perturbation similar to ODIN, we do not use it because our experiments show that its improvement is very small despite its high computational cost.

Effects of Fine-tuning a Pre-trained Network. It is well known that fine-tuning a pre-trained network on a downstream task improves its prediction accuracy, especially when only a small amount of training data is available. It was pointed out in (He et al., 2019) that the improvement is small when there is sufficient training data. Hendrycks et al. (2019) then showed that even in that case, using a pre-trained network helps increase the overall robustness of the inference. This includes improved OOD detection performance, in addition to robustness to adversarial attacks, better calibration of confidence, and robustness to covariate shift. However, their experimental validation was performed only on a single configuration with a few datasets. It remains unclear whether the improvement generalizes to a broader range of purposes and settings that may differ in image size, number of training samples, and ID/OOD combinations.
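As a reference point for the confidence-based and distance-based scores compared above, here is a minimal PyTorch sketch (our own illustration under assumed names and shapes, not the authors' code) of the Baseline/Calib max-softmax score and a Mahalanobis-style score with class means and a shared precision matrix, both returning "higher is more ID-like" values.

```python
import torch

@torch.no_grad()
def max_softmax_score(logits: torch.Tensor, T: float = 1.0) -> torch.Tensor:
    """Baseline (T=1) / Calib (tuned T): max of temperature-scaled softmax."""
    return (logits / T).softmax(dim=-1).max(dim=-1).values

@torch.no_grad()
def mahalanobis_score(feats, class_means, precision):
    """Lee et al. (2018)-style score: negated squared Mahalanobis distance
    to the closest class-conditional Gaussian (shared covariance)."""
    diff = feats.unsqueeze(1) - class_means.unsqueeze(0)        # (N, C, D)
    d2 = torch.einsum('ncd,de,nce->nc', diff, precision, diff)  # (N, C)
    return -d2.min(dim=1).values                                # higher = more ID

# Toy usage with random stand-ins for network outputs and class statistics.
logits = torch.randn(8, 10)
feats, means = torch.randn(8, 32), torch.randn(10, 32)
prec = torch.eye(32)  # inverse of the shared covariance, here identity
print(max_softmax_score(logits, T=1000.0).shape, mahalanobis_score(feats, means, prec).shape)
```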
The paper empirically analyzes the evaluation framework of current OOD detection systems for the image recognition task, specifically the evaluation described in [1] using Max-softmax and calibrated confidence. They motivate the paper by the necessity of having an evaluation of OOD detection that better reflects real-world scenarios. The addressed problem is interesting and valuable for the field, as many of the defined OOD datasets and evaluation metrics may not cover many real-world scenarios. They specifically address three scenarios, inputs that i) are irrelevant to the task, ii) are from novel classes, and iii) are from another domain (domain shift); for the first two scenarios, inputs are only evaluated as unseen classes without distinguishing between them. Based on my understanding of the paper, they compare 5 OOD detection methods from the literature, suggest a few test datasets/scenarios, and conclude that using cosine similarity is consistently favorable, and that confidence-based methods are the choice in domain shift detection scenarios.
SP:07471c50632db15eedbbc63f360a391140c1e094
Group Equivariant Generative Adversarial Networks
1 INTRODUCTION.

Generative visual modeling is an area of active research, time and again finding diverse and creative applications. A prevailing approach is the generative adversarial network (GAN), wherein density estimation is implicitly approximated by a min-max game between two neural networks (Goodfellow et al., 2014). Recent GANs are capable of high-quality natural image synthesis and scale dramatically with increases in data and compute (Brock et al., 2018). However, GANs are prone to instability due to the difficulty of achieving a local equilibrium between the two networks. Frequent failures include one or both networks diverging, or the generator capturing only a few modes of the empirical distribution. Proposed remedies include modifying training objectives (Arjovsky et al., 2017; Jolicoeur-Martineau, 2018), hierarchical methods (Karras et al., 2017), instance selection (Sinha et al., 2019; 2020), latent optimization (Wu et al., 2019), and strongly regularizing one or both networks (Gulrajani et al., 2017; Miyato et al., 2018; Dieng et al., 2019), among others. In practice, one or more of the above techniques are ultimately adapted to specific use cases. Further, limits on data quantity empirically exacerbate training stability issues, most often due to discriminator overfitting. Recent work on GANs for small sample sizes can be roughly divided into transfer learning approaches (Wang et al., 2018; Noguchi & Harada, 2019; Mo et al., 2020; Zhao et al., 2020a) and methods that transform/augment the available training data and provide the discriminator with auxiliary tasks. For example, Chen et al. (2019) propose a multi-task discriminator that additionally predicts the degree by which an input image has been rotated, whereas Zhang et al. (2020); Zhao et al. (2020c) incorporate consistency regularization, where the discriminator is penalized towards similar activations for transformed/augmented real and fake images. However, with consistency regularization and augmentation, network capacity is spent learning equivariance to transformation as opposed to the desired task, and equivariance is not guaranteed. In this work, we consider the problem of training tabula rasa on limited data which possess global and even local symmetries. We begin by noting that GANs ubiquitously use convolutional layers, which exploit the approximate translation invariance and equivariance of image labels and distributions, respectively. Equivariance to geometric transformations is key to understanding image representations (Bietti & Mairal, 2019). Unfortunately, other symmetries (e.g., rotations and reflections) inherent to modalities such as astronomy and medical imaging, where galaxies and cells can be in arbitrary orientations, are not accounted for by standard convolutional layers. To this end, Cohen & Welling (2016) proposed a group-theoretic generalization of convolutional layers (group convolutions) which, in addition to translation, exploit other inherent symmetries and increase the expressive capacity of a network, thereby significantly increasing its sample efficiency in detection (Winkels & Cohen, 2019), classification (Veeling et al., 2018), and segmentation (Chidester et al., 2019).
Importantly, equivariant networks outperform standard CNNs trained with augmentations from the corresponding group (Veeling et al., 2018, Table 1), (Lafarge et al., 2020a, Fig. 7). See Cohen et al. (2019); Esteves (2020) for a formal treatment of equivariant CNNs. Equivariant features may also be constructed via scattering networks consisting of non-trainable wavelet filters, enabling equivariance to diverse symmetries (Mallat, 2012; Bruna & Mallat, 2013; Sifre & Mallat, 2013). Generative scattering networks include Angles & Mallat (2018), where a standard convolutional decoder is optimized to reconstruct images from an embedding generated by a fixed scattering network, and Oyallon et al. (2019), who show preliminary results using a standard convolutional GAN to generate scattering coefficients. We note that while both approaches are promising, they currently yield suboptimal synthesis results not comparable to modern GANs. Capsule networks (Hinton et al., 2011; Sabour et al., 2017) are also equivariant, and emerging work has shown that using a capsule network for the GAN discriminator (Jaiswal et al., 2019; Upadhyay & Schrater, 2018) improves synthesis on toy datasets. However, capsule GANs and generative scattering approaches require complex training strategies and restrictive architectural choices not compatible with recent insights in GAN training, and have not yet been shown to scale to real-world datasets. In this work, we improve the generative modeling of images with transformation-invariant labels by using an inductive bias of symmetry. We replace all convolutions with group-convolutions, thereby admitting a higher degree of weight sharing, which enables increased visual fidelity, especially on limited-sample datasets. To our knowledge, we are the first to use group-equivariant layers in the GAN context and to use symmetry-driven considerations in both generator and discriminator architectures. Our contributions are as follows: 1. We introduce symmetry priors via group-equivariance to generative adversarial networks. 2. We show that recent insights in improving GAN training are fully compatible with group-equivariance, with careful reformulations. 3. We improve class-conditional image synthesis across a diversity of datasets, architectures, loss functions, and regularizations. These improvements are consistent both for symmetric images and even for natural images with a preferred orientation.

2 METHODS.

2.1 PRELIMINARIES.

Groups and group-convolutions. A group is a set endowed with a binary operation satisfying the properties of closure, associativity, identity, and invertibility. A two-dimensional symmetry group is the set of all transformations under which a geometric object is invariant, with the endowed operation of composition. Given a group G and a map Φ : X → Y between two G-sets X and Y, Φ is said to be equivariant iff Φ(g · x) = g · Φ(x), ∀x ∈ X, ∀g ∈ G. Colloquially, an equivariant map implies that transforming an input and applying the map yields the same result as applying the map and then transforming the output. Analogously, invariance requires that Φ(g · x) = Φ(x), ∀x ∈ X, ∀g ∈ G. In deep networks, equivariance to a planar symmetry group can be achieved by either transforming filters (Cohen & Welling, 2016) or feature maps (Dieleman et al., 2016).
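To make the definition Φ(g · x) = g · Φ(x) concrete, the NumPy/SciPy check below (a minimal illustration of the definition, not code from the paper) uses a filter that is itself invariant under 90-degree rotations, so the induced convolution map commutes with rotations of the input.

```python
import numpy as np
from scipy.signal import convolve2d

x = np.random.rand(16, 16)  # a toy image
# A rot90-invariant filter makes plain convolution equivariant to p4 rotations.
psi = np.array([[1., 2., 1.],
                [2., 4., 2.],
                [1., 2., 1.]]) / 16.0

phi = lambda img: convolve2d(img, psi, mode='same')
g = lambda img: np.rot90(img)  # the group action: one 90-degree rotation

print(np.allclose(phi(g(x)), g(phi(x))))  # True: Phi(g.x) == g.Phi(x)
```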
Our work utilizes the plane symmetry groups p4 (all compositions of 90-degree rotations and translations) and p4m (all compositions of 90-degree rotations, reflections, and translations) (Schattschneider, 1978). These groups can be parameterized neatly following Cohen & Welling (2016):

$$g(r, u, v) = \begin{bmatrix} \cos(\frac{r\pi}{2}) & -\sin(\frac{r\pi}{2}) & u \\ \sin(\frac{r\pi}{2}) & \cos(\frac{r\pi}{2}) & v \\ 0 & 0 & 1 \end{bmatrix}; \quad g'(m, r, u, v) = \begin{bmatrix} (-1)^m \cos(\frac{r\pi}{2}) & (-1)^{m+1} \sin(\frac{r\pi}{2}) & u \\ \sin(\frac{r\pi}{2}) & \cos(\frac{r\pi}{2}) & v \\ 0 & 0 & 1 \end{bmatrix}$$

where g(r, u, v) parameterizes p4, g′(m, r, u, v) parameterizes p4m, 0 ≤ r < 4 (the number of 90-degree rotations), m ∈ {0, 1} (the number of reflections), and (u, v) ∈ Z² (integer translations). The group operation is matrix multiplication for both groups. The matrix g(r, u, v) rotates and translates a point (expressed as a homogeneous coordinate vector) in pixel space via left-multiplication. Analogous intuition follows for g′(m, r, u, v). We now briefly define G-equivariant convolutions. We note that formally these are correlations, not convolutions, and that the literature uses the terms interchangeably. A G-convolution between a vector-valued K-channel image f : Z² → R^K and filter ψ : Z² → R^K, with f = (f_1, f_2, ..., f_K) and ψ = (ψ_1, ψ_2, ..., ψ_K), can be expressed as

$$[f \star \psi](g) = \sum_{y \in \mathbb{Z}^2} \sum_{k=1}^{K} f_k(y)\, \psi_k(g^{-1}y).$$

For standard reference, if one considers G to be the translation group on Z², we have g⁻¹y = y − g and recover the standard convolution. After the first layer of a G-CNN, we see that (f ⋆ ψ) is a function on G, necessitating that filter banks also be functions on G. Subsequent G-convolutional layers are therefore defined as

$$[f \star \psi](g) = \sum_{h \in G} \sum_{k=1}^{K} f_k(h)\, \psi_k(g^{-1}h).$$

Finally, for tasks where the output is an image, it is necessary to bring the domain of the feature maps from G back to Z². We can pool the feature map for each filter over the set of transformations, corresponding to average or max pooling over the group of rotations (or roto-reflections, as appropriate).

GAN optimization and stability. As we focus on the limited-data setting, where training instability is exacerbated, we briefly describe the two major stabilizing methods used in all our experiments. We regularize the discriminator using a zero-centered gradient penalty (GP) on the real data, as proposed by Mescheder et al. (2018), of the form

$$R_1 := \frac{\gamma}{2}\, \mathbb{E}_{x \sim P_{\text{real}}}\left[ \|\nabla D(x)\|_2^2 \right],$$

where γ is the regularization weight, x is sampled from the real distribution P_real, and D is the discriminator. This GP has been shown to cause convergence (in toy cases), alleviate catastrophic forgetting (Thanh-Tung & Tran, 2018), and strongly stabilize GAN training. However, empirical work has found that this GP achieves stability at the cost of worsening GAN evaluation scores (Brock et al., 2018). Another widely used technique for GAN stabilization is spectral normalization (Miyato et al., 2018), which constrains the discriminator to be 1-Lipschitz, thereby improving gradient feedback to the generator (Zhou et al., 2019; Chu et al., 2020). With spectral normalization, each layer is rescaled as $W_{SN} = W / \sigma(W)$, where W is the weight matrix of a given layer and σ(W) is its spectral norm. In practice, σ(W) is estimated via a power iteration method, as opposed to computing the full singular value decomposition during each training iteration.
Finally, applying spectral normalization to both generator and discriminator empirically improves training significantly (Zhang et al., 2018).

2.2 GROUP EQUIVARIANT GENERATIVE ADVERSARIAL NETWORKS.

Here, we outline how to induce a symmetry prior into the GAN framework. Implementations are available at https://github.com/neel-dey/equivariant-gans. The literature has developed several techniques for normalization and conditioning of the individual networks, along with unique architectural choices; we extend these developments to the equivariant setting. We start by replacing all convolutional layers with group-convolutional layers, where filters and feature maps are functions on a symmetry group G. Batch normalization moments (Ioffe & Szegedy, 2015) are calculated per group-feature map, as opposed to per spatial feature map. Pointwise nonlinearities preserve equivariance for the groups considered here. Pre-activation residual blocks common to modern GANs are used freely, as the sum of equivariant feature maps on G is also equivariant.

Generator. The generator is illustrated at a high level in Figure 2. We use a fully connected layer to linearly project and reshape the concatenated noise vector z ∼ N(0, I) and class embedding c into spatial feature maps on Z². We then use spectrally-normalized group-convolutions, interspersed with pointwise nonlinearities, and nearest-neighbours upsampling to increase spatial extent. We use upsampling followed by group-convolutions instead of transposed group-convolutions to reduce checkerboard artefacts (Odena et al., 2016). We further use a novel group-equivariant class-conditional batch normalization layer (described below) to normalize and class-condition image generation, while also projecting the latent vector z to each level of the group-convolutional hierarchy. We finally max-pool over the set of transformations to obtain the generated image x.

Discriminator. The group-equivariant discriminator receives an input x, which it maps to a scalar indicating whether it is real or fake. We do this via spectrally normalized group-convolutions, pointwise nonlinearities, and spatial pooling layers to decrease spatial extent. After the final group-convolutional layer, we pool over the group and use global average pooling to obtain an invariant representation at the output. Finally, we condition the discriminator output via the projection method proposed by Miyato & Koyama (2018). Importantly, the equivariance of group-convolutions depends on the convolutional stride. Strided convolutions were commonly used for downsampling in early GANs (Radford et al., 2015). However, stride values must be adjusted to the dataset to preserve equivariance, which makes comparisons to equivalent non-equivariant GAN architectures difficult. We therefore use pooling layers over the plane (commonly used in recent GANs) to downsample in all settings, preserving equivariance and enabling a fair comparison.

Spectral Normalization. As the singular values of a matrix are invariant under compositions of 90-degree rotations, transpositions, and reflections, spectral normalization on a group-weight matrix preserves equivariance, and we use it freely.

Class-conditional Batch Normalization. Conditional batch normalization (Perez et al., 2018) replaces the scale and shift of features with an affine transformation learned from the class label (and optionally from the latent vector as well (Brock et al.
, 2018)) via linear dense layers, and is widely used in generative networks. We propose a group-equivariance-preserving conditional normalization by learning the affine transformation parameters per group-feature map, rather than per spatial feature. As we use fewer group-filters than equivalent non-equivariant GANs, we use fewer dense parameters to learn the conditional scales and shifts.
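To illustrate the filter-transformation approach in code, here is a minimal PyTorch sketch (our own illustrative implementation, not the repository's code) of a first-layer p4 group convolution: the input is correlated with four rotated copies of the filter, yielding a feature map with an extra rotation axis, and rotating the input cyclically shifts that axis while rotating each spatial map.

```python
import torch
import torch.nn.functional as F

def p4_lifting_conv(x, psi):
    """First-layer p4 group convolution: correlate x with the filter
    rotated by r*90 degrees for r = 0..3."""
    # x: (N, C, H, W); psi: (C_out, C, k, k) -> output: (N, C_out, 4, H, W)
    outs = [F.conv2d(x, torch.rot90(psi, r, dims=(2, 3)), padding='same')
            for r in range(4)]
    return torch.stack(outs, dim=2)

x, psi = torch.randn(1, 3, 16, 16), torch.randn(8, 3, 3, 3)
f = p4_lifting_conv(x, psi)

# Equivariance check: a 90-degree input rotation rolls the rotation axis by
# one step and rotates each spatial feature map.
f_rot = p4_lifting_conv(torch.rot90(x, 1, dims=(2, 3)), psi)
expected = torch.rot90(f.roll(1, dims=2), 1, dims=(3, 4))
print(torch.allclose(f_rot, expected, atol=1e-5))  # True
```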
The submission concerns an application of group convolutions (Cohen & Welling, 2016) to the image synthesis setting, where images are produced by the generator of a GAN. The two GAN components are augmented mainly by a straightforward replacement of "regular" convolutions by group convolutions, in addition to some other training tricks of the trade (gradient penalty, spectral normalization). Experiments indicate somewhat lower FID scores on both synthetic and real settings. The method is seen as useful especially for the low data regime case.
SP:74ef7a70748db738244d9e402bbc4a9b43002896
Integrating linguistic knowledge into DNNs: Application to online grooming detection
1 INTRODUCTION.

Online grooming (OG) is a communicative process of entrapment in which an adult lures a minor into taking part in sexual activities online and, at times, offline (Lorenzo-Dus et al., 2016; Chiang & Grant, 2019). Our aim is to detect instances of OG. This is achieved through binary classification of whole conversations into OG (positive class) or neutral (negative class). This classification requires the ability to capture subtleties in the language used by groomers. Corpus Linguistic (CL) analysis provides a detailed characterisation of language in large textual datasets (McEnery & Wilson, 2003; Sinclair, 1991). We argue that, when integrated into ML models, the products of CL analysis may allow a better capture of language subtleties, while simplifying and guiding the learning task. We consider two types of CL products and explore strategies for their integration into several stages of DNNs. Moreover, we show that CL knowledge may help law enforcement in interpreting the ML decision process, towards the production of evidence for potential prosecution. Our text makes heavy use of slang and SMS-style writing, as do many real-world Natural Language Processing (NLP) tasks on chat logs. Text normalisation methods have been proposed to reduce variance in word choice and/or spelling and simplify learning, e.g. (Mansfield et al., 2019) for SMS-style writing. However, they do not account for the final analysis goal and may discard some informative variance, e.g. the use of certain forms of slang possibly indicative of a user category. CL analysis provides the preferred usage of spelling variants or synonyms. We propose to use this domain knowledge to selectively normalise chat logs while preserving the variance that is informative for the classification task. As demonstrated by the CL analysis in (Lorenzo-Dus et al., 2016), the theme and immediate purpose of groomer messages may vary throughout the conversation in order to achieve the overarching goal of entrapping the victim. Groomers use a series of inter-connected "sub-goals", referred to here as OG processes, namely gaining the child's trust, planning activities, building a relationship, isolating the child emotionally and physically from their support network, checking their level of compliance, introducing sexual content, and trying to secure a meeting offline. The language used within these processes is not always sexually explicit, which makes their detection more challenging. However, CL analysis additionally flags some contexts associated with the OG processes, in the form of word collocations (i.e. words that occur within the same window of 7 words) that tend to occur more frequently in, and can therefore be associated with, OG processes. We propose to exploit the relations between the OG processes and their overarching goal of OG to improve the final OG classification. We use the CL-identified context windows to guide the learning of our DNN. Our main contributions are: 1) We explore different strategies for integrating CL knowledge into DNNs. They are applied to two architecture types and demonstrated on OG detection, but may generalise to other NLP applications that involve digital language and/or complex conversational strategies. 2) The principle and several implementations of selectively normalising text by modifying a word embedding in support of classification. 3) The decomposition of conversation analysis into identifying sub-goals.
Our DNN implicitly models the relations between these sub-goals and the conversation's overarching final goal. 4) A new attention mechanism for LSTMs based on the direct stimulation of their input gates, with two proposed implementations. 5) A state-of-the-art (SoTA) and interpretable OG detector. 6) A new corpus for OG detection, to be publicly released on demand, which extends PAN2012 with more conversations and with products of CL analysis.

2 RELATED WORK.

Villatoro-Tello et al. (2012) detected OG chat logs using a DNN to classify binary bag-of-words representations. This simple approach highlights the importance of commonly used words amongst groomers, which we exploit for selective text normalisation. This is emphasised in (Vartapetiance & Gillam, 2014; Hidalgo & Díaz, 2012), where a set of phrases is derived from the important features of a Naïve Bayes classifier to describe common behaviours among groomers. Liu et al. (2017) obtained the current OG detection SoTA using a word embedding for the semantics of important words and an LSTM. Integrating domain knowledge into DNNs is often done with additional losses that assist with sparse and low-quality data. Muralidhar et al. (2018) penalise a DNN's output for violating logical rules w.r.t. the input features. Hu et al. (2018) use the posterior regularisation framework of Ganchev et al. (2010) to encode domain constraints for generative models. A teacher-student architecture in (Hu et al., 2016) incorporates first-order logic rules to create an additional loss for the student network. Other works have integrated prior knowledge into the design of the DNN architecture. In BrainNetCNN (Kawahara et al., 2017), the convolutions of a convolutional neural network (CNN) are defined based on the graph data's locality to account for the brain's connectivity. The training procedure may also integrate priors without modifying the DNN's architecture. Derakhshani et al. (2019) use assisted excitation of CNN neurons in the images' areas of interest, thus providing both localisation and semantic information to the DNN. An attention mechanism was used in a supervised way to focus a DNN on important words in (Nguyen & Nguyen, 2018). We experiment with these various approaches and adapt them to our domain knowledge and DNN architectures. Linguistic knowledge has been integrated into learnt word embeddings in the past. Knowledge in the form of lexicons, which carry a manual categorisation and/or ranking of words, is combined with a learnt word embedding in (Margatina et al., 2019). Three strategies are proposed, namely concatenating the lexicon and embedding features, and using the lexicon features to conditionally select or transform the word embeddings. In our study, we are concerned with a different type of linguistic knowledge. However, our modification of the word embedding (Section 4.1) may also exploit such lexicon knowledge.

3 AUGMENTED PAN2012 DATASET.

PAN2012 (Inches & Crestani, 2012) is a standard corpus for OG detection. It was gathered from Omegle (one-to-one conversations), IRC (technical discussions in groups), and the Perverted Justice (PJ) website (http://perverted-justice.com; chat logs from convicted groomers interacting with trained adult decoys), with 396 groomers and 5700 / 216,121 OG / non-OG conversations. Some non-OG chat logs contain sexual wording, making the OG classification more challenging. Conversations are truncated to 150 messages each, which limits both CL and ML analyses.
To resolve this limitation, we augment the corpus with full OG conversations and the addition of new groomers from PJ, totalling 623 groomers in 6204 OG conversations (the negatives are the same, as they could not be augmented to fuller conversations without access to the original data). Final OG / non-OG conversations total an average (std) of 215 (689) / 13 (23) messages and 1010 (3231) / 94 (489) words, respectively. Statistics on the dataset content are in the sup. materials. PJ data is freely available online and has been widely used in previous social science and NLP studies, so its use does not raise any particular ethical concern. For a debate on its usability, see (Chiang & Grant, 2019; Schneevogt et al., 2018). Our dataset also includes the results of a CL analysis of the new corpus using the method described in (Lorenzo-Dus et al., 2016), which involves heavy use of manual analysis by CL experts. As part of data preparation for CL analysis, word variants are identified, which are either spelling variations (mistaken or intentional, e.g. 'loool' → 'lol') or two terms with the same semantic meaning (e.g. 'not comfy' → 'uncomfortable'). These variants are not specific to OG, but rather reflect digital language, and are therefore valid for other real-world chat logs. The CL analysis also identified the variants that are most used among groomers. The CL products in our dataset include: 1) the set of variants, both general and groomer-preferred; 2) a set of frequent 3-word collocates (not necessarily direct neighbours, but located within a window of 7 words) that are used among many different users; and 3) a manual annotation of 2100 samples of OG processes (there are 7 types of OG processes, as identified in (Lorenzo-Dus et al., 2016), listed in the introduction, and detailed in the sup. materials) that could be associated with 3-word collocates and the context windows that the latter define. These CL products are sensitive data that might be used to help groomers refine their strategies; therefore they will only be shared on request. They are used in Sections 4-5 to train a DNN model, but this model does not require CL analysis to be performed at the testing phase, as it takes only raw text as input.

4 METHODOLOGY.

Overarching vision and general applicability – We integrate two CL priors into DNNs: the word variants and the identification of OG processes. Word variants provide knowledge of shared semantic meaning, which allows variance in the text to be reduced. The knowledge of groomers' preferred variants brings an implicit and selective text normalisation that supports the classification task. It is achieved through a reduction of distances between non-discriminative variants in a word embedding, as sketched below. This selective normalisation is applicable to other classification tasks on real-world chat logs, provided an updated selection of the preferred and discriminative variants. As highlighted in Section 3, the variants reflect digital language and are relevant to different analyses of chat conversations. The selection of discriminative variants is done easily and automatically, following a procedure described in Section 4.1 that uses empirical occurrences in positive and negative conversations. This knowledge integration is applicable to all DNNs that use a word embedding to capture word semantics.
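As an illustration of this principle (our own minimal sketch; the index choices and the merge rule are assumptions, not the paper's exact procedure), selective normalisation can be realised by moving the embedding vector of each non-discriminative variant onto that of its preferred form, while variants whose usage separates OG from non-OG conversations keep their own vectors.

```python
import torch

def selectively_normalise(emb, variant_to_canonical, discriminative):
    """emb: (vocab, dim) word embedding matrix.
    variant_to_canonical: {variant_index: canonical_index} pairs from CL analysis.
    discriminative: variant indices whose usage separates OG from non-OG text
    (e.g. groomer-preferred spellings); their vectors are left untouched."""
    emb = emb.clone()
    for v, c in variant_to_canonical.items():
        if v not in discriminative:  # merge only the uninformative variance
            emb[v] = emb[c]
    return emb

# Hypothetical indices: 'loool' (3) merged onto 'lol' (1); a groomer-preferred
# variant (7) kept distinct because it is discriminative for the task.
E = torch.randn(10, 8)
E_norm = selectively_normalise(E, {3: 1, 7: 2}, discriminative={7})
```

A softer design choice would interpolate each variant's vector towards its canonical form rather than replacing it outright, trading off variance reduction against retained nuance.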
The use of OG processes aids in differentiating between casual conversations involving sexual language and OG conversations with complex strategies and sub-goals (i.e. OG processes). The language associated with OG processes, reflected by the 3-word collocates and the context windows that they define, may be more informative for making this distinction than the simple sexual wording traditionally used. We propose 3 strategies to integrate this knowledge, namely the definition of sub-tasks and two stimulations of DNN attention. They all guide the learning by providing focus on contexts of interest (a valuable complement to attention mechanisms, as demonstrated in our experiments), and by implicitly modelling the relation between sub- and final goals. This CL knowledge integration principle is generally applicable to the analysis of complex conversations, provided an appropriate CL identification of the conversation's sub-goals and of their associated language through context windows. This identification of sub-goals has been the focus of many social science studies. For example, a large body of work has identified strategies for persuasion and manipulation in extreme ideology groups, e.g. (Brindle, 2016; Nouri & Lorenzo-Dus, 2019; Lorenzo-Dus & Nouri, 2020; Saridakis & Mouka, 2020) for radical right hate speech and (Baker et al., 2021) for jihadi radicalisation. This established baseline of knowledge may be integrated into DNNs in multidisciplinary works. The identification of frequent 3-word collocates is automated, as described in (Lorenzo-Dus et al., 2016). The association of their occurrences with the identified sub-goals is the only task that may require additional manual work. Our stimulation of DNN attention may also be used more generally to focus a DNN's attention on a priori known important elements of a training set. Base models – We demonstrate the general applicability of our CL integration strategies by applying them to two DNN architecture types representative of the two NLP standards of recurrent and transformer models. The recurrent DNN of Liu et al. (2017) is the current SoTA for OG classification. It comprises a language model that builds word and sentence embeddings, and an OG classifier with two LSTMs and a linear projection. Our base model #1 is a modified version (Fig. 1 left) with the word embedding provided as input to the OG classifier in place of the sentence embedding. This word embedding will be more directly impacted by our CL integration, and it increases explainability, as will be seen next. It may be replaced by similar embeddings, and we also present results using the pre-trained GloVe (Pennington et al., 2014). Further, to compensate for the loss of the sentence structure modelling previously provided by the sentence embedding, and to account for the longer sequences of inputs into the classifier, we add an unsupervised attention mechanism (Luong et al., 2015) into the classifier. Following the method in (Luong et al., 2015), the hidden states of the last LSTM for all words of the conversation are provided to the attention mechanism, which outputs a conversation embedding of the same size as the LSTM's hidden state, namely 256 (a sketch of this pooling step is given below).
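The following is a rough sketch of the attention pooling added to base model #1, assuming a learned global query scored against the last LSTM's hidden states; the exact scoring variant of Luong et al. (2015) used by the authors may differ, and all names and the 256-dimensional size are illustrative.

```python
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    """Pools the per-word hidden states of the last LSTM into a single
    conversation embedding of the same size (256 here)."""
    def __init__(self, hidden_size=256):
        super().__init__()
        self.query = nn.Parameter(torch.randn(hidden_size))        # learned query vector
        self.proj = nn.Linear(hidden_size, hidden_size, bias=False)

    def forward(self, hidden_states, mask=None):
        # hidden_states: (batch, seq_len, hidden_size) from the last LSTM layer
        scores = torch.einsum('bsh,h->bs', self.proj(hidden_states), self.query)
        if mask is not None:                       # ignore padding positions
            scores = scores.masked_fill(~mask, float('-inf'))
        weights = torch.softmax(scores, dim=1)     # (batch, seq_len) attention weights
        return torch.einsum('bs,bsh->bh', weights, hidden_states)  # (batch, hidden)
```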
XLNet (Yang et al., 2019) is a popular transformer model, a SoTA for many NLP tasks, and therefore a strong baseline for this study. It iteratively refines word embeddings, starting from an initial embedding that captures word semantics similarly to that of Liu et al. (2017), and attaining richer word representations that account for word relationships within a sentence using a positional embedding and self-attention layers. The refined contextualised word embeddings are classified by linear projection. In our application, this projection fails to handle our class imbalance and always outputs the same class, with F-score at 0.392. Providing the contextualised word embeddings to a two-layer LSTM, whose last hidden state is used as a conversation embedding to be classified by the linear projection, solves this issue and forms base model #2 (Fig. 1 right; a minimal sketch is given after the input description below). The combination of a transformer model with an LSTM is not new, see for example (Ma, 2019), and has the advantage of allowing the use of our LSTM-based knowledge integration strategies (see 'Stimulating LSTM input gates' in Section 4.2). Input to the models – The analysis is performed on whole conversations, and the final OG / non-OG classification is obtained for the whole conversation, rather than per message. Messages are separated by the [SEP] token, so that inter-text representations can be modelled. Messages from both users are included with no distinction. For base model #2, the [CLS] token is added at the beginning of conversations following the XLNet standard. Conversations longer than 2,000 words are truncated to retain their end part (12% / 8e-5% of OG / non-OG conversations). All base and CL-augmented DNNs take raw text as input only. The only text preparation prior to the DNN is tokenisation of named entities. We do not apply explicit text normalisation such as that of (Mansfield et al., 2019) as part of text preparation, since the methodological premise of the paper is the design of a hybrid approach where an ML model incorporates its own text normalisation informed by CL knowledge.
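As a rough sketch of base model #2, the following assumes the Hugging Face `transformers` implementation of XLNet; the pretrained checkpoint name, the hidden size, and the head layout are illustrative choices, not the authors' reported configuration.

```python
import torch.nn as nn
from transformers import XLNetModel

class XLNetLSTMClassifier(nn.Module):
    """Contextualised XLNet word embeddings are summarised by a two-layer LSTM
    whose final hidden state is classified by a linear projection."""
    def __init__(self, num_classes=2, lstm_hidden=256):
        super().__init__()
        self.xlnet = XLNetModel.from_pretrained('xlnet-base-cased')
        self.lstm = nn.LSTM(self.xlnet.config.d_model, lstm_hidden,
                            num_layers=2, batch_first=True)
        self.head = nn.Linear(lstm_hidden, num_classes)

    def forward(self, input_ids, attention_mask):
        token_embs = self.xlnet(input_ids=input_ids,
                                attention_mask=attention_mask).last_hidden_state
        _, (h_n, _) = self.lstm(token_embs)  # h_n: (num_layers, batch, hidden)
        return self.head(h_n[-1])            # last layer's final hidden state
```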
This work proposes the approach of integrating priors into a DNN in the form of linguistic sub-models that capture characteristics of OG. The authors use the PAN-12 dataset on sexual predators as their example, exploiting information about linguistic behaviour across the grooming phases. The work then goes on to highlight the augmentations made to baseline DNN models to include these CL characteristics. The authors finally show the impact of these augmentations on classification performance on the PAN-12 dataset.
GraphCGAN: Convolutional Graph Neural Network with Generative Adversarial Networks
1 INTRODUCTION . Graph-based semi-supervised learning (SSL) aims to classify the nodes of a graph when only a small number of nodes are labeled, due to the expensive and time-consuming label collection process. To solve such tasks, various graph neural networks (GNNs) have been proposed that use the idea of convolutional neural networks (CNNs) to implicitly propagate the information of labeled nodes to unlabeled nodes through the linkage between nodes (Kipf & Welling, 2016; Veličković et al., 2017; Hamilton et al., 2017). These convolution-based graph neural networks have achieved superior performance on multiple benchmark datasets in graph-based SSL tasks (Wu et al., 2019). Recently, generative adversarial networks (GANs) (Goodfellow et al., 2014) have shown power in improving performance on image-based SSL problems (Odena, 2016; Salimans et al., 2016; Li et al., 2019b). In semi-GAN (Salimans et al., 2016), the authors converted the M-class classification task into an (M+1)-class problem, where the synthetic (M+1)-th class is generated by the GAN's generator. Later on, Dai et al. (2017) provided a theoretical insight that the generated data are able to boost the performance of the classifier under certain assumptions. Our work is motivated by the semi-GAN. GraphSGAN (Ding et al., 2018) first investigated adversarial learning over graphs, where the graph is embedded into an embedding space and synthetic data are generated in the corresponding space. A multi-layer perceptron (MLP) is trained as the classifier on the embedding vectors. However, to our knowledge, there is still no existing method that combines adversarial learning with convolution-based GNNs on graph-based SSL tasks. In this work, we explore the potential of incorporating convolution-based GNNs and GANs. The challenges of constructing a general framework are three-fold: first, the attributed graph data are non-Euclidean, and their distribution contains information about the graph topology as well as the attributes of nodes. Hence, it is not trivial to construct a generator to model the distribution. Second, even if the generator can model the graph's distribution, it must be trained properly to boost the performance of the classifier; a poor-quality generator would introduce noise into the existing graph and hurt the classifier. Third, many variants of GCN continue to be proposed, so the framework should be built with the flexibility to adapt to different convolution-based GNNs. We construct a novel approach called GraphCGAN to deal with the above challenges. First, to model the distribution of the graph, the generator is built sequentially from two sub-generators: one models the attribute information (node attributes) and the other models the graph topology (adjacency relations of the node). Details can be found in Section 3.1. Second, in GraphCGAN, the generator is trained based on the feature matching technique (Salimans et al., 2016), which minimizes the distance between generated nodes and real nodes in the constructed feature space. This technique has shown good performance in SSL tasks in practice. The details of the construction of the loss functions can be found in Section 3.3. For GCN, the attributes of nodes are aggregated convolutionally by multiple layers, and the representation of the last layer is usually considered as the prediction for the labels.
For variants of GCN, the main differences lie in the strategy of layer aggregation (Hamilton et al., 2017). In our framework, we choose the second-to-last layer of the convolution-based GNN as the feature matching function. Therefore, our framework is easily extended to variants of GCN. More discussion can be found in Section 3.2. 2 PRELIMINARY . We first introduce notation for graphs. Let G = (V, E) denote a graph, where V is the set of nodes with |V| = n and E ⊂ V × V is the set of edges with |E| = m. The adjacency matrix $A \in \mathbb{R}^{|V| \times |V|}$ is defined by $A_{ij} = 1$ if nodes $v_i$ and $v_j$ share an edge, and $A_{ij} = 0$ otherwise. Suppose each node $v_i$ has a d-dimensional feature $x_i \in \mathbb{R}^d$ and a single-valued label $y_i \in \{1, 2, \ldots, M\}$. In the semi-supervised learning setting, there is a disjoint partition of the nodes, $V = V^L \cup V^U$, such that for $v_i \in V^L$ the corresponding label is known, and for $v_j \in V^U$ it is unknown. The distributions of nodes in the labeled set $V^L$ and the unlabeled set $V^U$ are denoted $p_{V^L}$ and $p_{V^U}$, respectively. Semi-supervised learning aims to learn the labels of the unlabeled set $\{y_j \mid v_j \in V^U\}$ given the adjacency matrix A, the feature matrix $X = [x_i]_{v_i \in V}$, and the labels of the labeled set $\{y_i \mid v_i \in V^L\}$. 2.1 CONVOLUTION BASED GRAPH NEURAL NETWORK CLASSIFIER . Based on Laplacian smoothing, convolution-based GNN models propagate node feature information across each node's neighbors in every layer. Specifically, in GCN, the layer-wise propagation rule can be defined as follows: $$H^{(l+1)} = \sigma\left(D^{-1} A H^{(l)} W^{(l)} + b^{(l)}\right), \quad l = 0, 1, \ldots, L-1, \qquad (1)$$ where $W^{(l)}$ and $b^{(l)}$ are the layer-specific trainable weight matrix and bias, respectively, and $\sigma(\cdot)$ is an activation function. $D$ is the diagonal degree matrix with $D_{ii} = \sum_j A_{ij}$; hence $D^{-1}A$ is a normalization of the adjacency matrix A (a minimal code sketch of this step is given at the end of these preliminaries). The initial layer $H^{(0)}$ is the feature matrix X. The final layer $H^{(L)}$, followed by a softmax layer, can be viewed as the prediction of the one-hot representation of the true label y. Recently, many variants of the GCN layer-wise propagation rule have been proposed, including the graph attention network and cluster GCN (Veličković et al., 2017; Chiang et al., 2019), which achieved state-of-the-art performance on many benchmark datasets. 2.2 GENERATIVE ADVERSARIAL NETWORK BASED SEMI-SUPERVISED LEARNING . In semi-GAN, the classifier C and the generator G play a non-cooperative game, where the classifier aims to classify the unlabeled data as well as to distinguish generated data from real data, while the generator attempts to match the features of real and generated data. The objective function of the classifier can therefore be divided into two parts (Salimans et al., 2016). The first part is the supervised loss $$L_{\text{sup}} = \mathbb{E}_{(v,y) \sim p_{V^L}} \log P_C(y \mid v, y \le M),$$ the log probability of the node label given the real nodes. The second part is the unsupervised loss $$L_{\text{un-sup}} = \mathbb{E}_{v \sim p_{V^U}} \log P_C(y \le M \mid v) + \mathbb{E}_{v \sim p_{V^G}} \log P_C(y = M+1 \mid v),$$ the sum of the log probability of the first M classes for real nodes and the log probability of the (M+1)-th class for generated nodes $V^G$. The classifier C can be trained by maximizing the objective $$L_C = L_{\text{sup}} + L_{\text{un-sup}}. \qquad (2)$$
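For reference, here is a minimal PyTorch sketch of the propagation step in Equation 1; the class name, the ReLU choice for σ, and the degree clamping are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One step of Equation 1: H^(l+1) = sigma(D^-1 A H^(l) W^(l) + b^(l))."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)  # holds W^(l) and b^(l)

    def forward(self, A, H):
        # A: (n, n) adjacency matrix, H: (n, in_dim) node representations
        deg = A.sum(dim=1, keepdim=True).clamp(min=1.0)  # D_ii = sum_j A_ij
        return torch.relu(self.linear((A / deg) @ H))    # D^-1 A aggregation, then sigma
```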
For the objective function of the generator, Salimans et al. (2016) found that minimizing the feature matching loss in Equation 3 achieves superior performance in practice: $$L_G = \left\| \mathbb{E}_{v \sim p_{V^U}}[f(v)] - \mathbb{E}_{z \sim p_z(z)}[f(G(z))] \right\|_2^2, \qquad (3)$$ where the feature matching function $f(\cdot)$ maps the input into a feature space, and $z \sim p_z(z)$ is drawn from a given distribution such as the uniform distribution. Furthermore, Dai et al. (2017) provided a theoretical justification that a complementary generator G is able to boost the performance of the classifier C in SSL tasks. 3 FRAMEWORK OF GRAPHCGAN . To combine the aforementioned Laplacian smoothing on graphs and semi-GAN on SSL, we develop the GraphCGAN model, which uses generated nodes to boost the performance of convolution-based GNN models. 3.1 CONSTRUCTION OF GENERATOR FOR GRAPHCGAN . The generator G generates a fake node $v_0$ by generating a feature vector $x_0 \in \mathbb{R}^d$ and an adjacency relation $a_0 \in \mathbb{R}^n$ jointly, where $a_{0,i} = 1$ if the fake node is connected to real node $v_i$, and $a_{0,i} = 0$ otherwise. Therefore, the distribution of the generated node, $p_G(v_0)$, can be expressed by the joint distribution of the corresponding feature and adjacency relation, $p_G(x_0, a_0)$. From the conditional distribution formula, the joint distribution can be written as $p_G(x_0, a_0) = p_{G_1}(x_0)\, p_{G_2}(a_0 \mid x_0)$. We use sub-generators $G_1$ and $G_2$ to generate the fake feature $x_0$ and $a_0 \mid x_0$, respectively. In practice, $a_0 \mid x_0$ can be modeled by $G_2(z; x_0) = G_2(z; G_1(z))$, where the adjacency relation $a_0$ is constructed by sub-generator $G_2$ given the input $x_0$. The distribution of the generated node can be denoted by $$p_G(v_0) = p_G(x_0, a_0) = p_{G_1}(x_0)\, p_{G_2}(a_0 \mid x_0) = p(G_1(z))\, p(G_2(z; G_1(z))) =: p(G(z)). \qquad (4)$$ If B nodes $(v_{0,1}, v_{0,2}, \ldots, v_{0,B})$ are generated, the generated feature matrix is denoted $X_0 = (x_{0,1}^T, x_{0,2}^T, \ldots, x_{0,B}^T)^T$ and the generated adjacency matrix takes the form $A_0 = (a_{0,1}^T, a_{0,2}^T, \ldots, a_{0,B}^T)^T$. Hence, the combined adjacency matrix can be written as $$\tilde{A} = \begin{bmatrix} A & A_0^T \\ A_0 & I_B \end{bmatrix} \in \mathbb{R}^{(n+B) \times (n+B)}, \qquad (5)$$ and the combined feature matrix is $$\tilde{X} = \begin{bmatrix} X \\ X_0 \end{bmatrix} \in \mathbb{R}^{(n+B) \times d}. \qquad (6)$$ The diagonal degree matrix $\tilde{D} \in \mathbb{R}^{(n+B) \times (n+B)}$ can be written as $\begin{bmatrix} D_* & 0 \\ 0 & D_B \end{bmatrix}$, where $D_* \in \mathbb{R}^{n \times n}$ with $D_{*,ii} = \sum_j A_{ij} + \sum_b A_{0,bi}$, and $D_B \in \mathbb{R}^{B \times B}$ with $D_{B,bb} = \sum_j A_{0,bj} + 1$. 3.2 ANALYSIS OF CLASSIFIER FOR GRAPHCGAN . In GraphCGAN, we adopt a convolution-based GNN, such as GCN, GraphSAGE (Hamilton et al., 2017) or GAT (Veličković et al., 2017), as the classifier. The classifier is applied to the enlarged graph $\tilde{G} = [\tilde{X}, \tilde{A}]$ to obtain the prediction $\tilde{y}$ for the nodes $V \cup V^G$. Specifically, taking the layer-wise propagation of GCN (Equation 1) as the classifier in GraphCGAN, the propagation rule becomes $$\tilde{H}^{(l+1)} = \sigma\left(\tilde{D}^{-1} \tilde{A} \tilde{H}^{(l)} W^{(l)} + \tilde{b}^{(l)}\right) = \sigma\left( \begin{bmatrix} D_*^{-1} & 0 \\ 0 & D_B^{-1} \end{bmatrix} \begin{bmatrix} A & A_0^T \\ A_0 & I_B \end{bmatrix} \begin{bmatrix} H_*^{(l)} \\ H_0^{(l)} \end{bmatrix} W^{(l)} + \begin{bmatrix} b^{(l)} \\ b_B^{(l)} \end{bmatrix} \right) = \sigma\left( \begin{bmatrix} D_*^{-1} A H_*^{(l)} + D_*^{-1} A_0^T H_0^{(l)} \\ D_B^{-1} A_0 H_*^{(l)} + D_B^{-1} H_0^{(l)} \end{bmatrix} W^{(l)} + \begin{bmatrix} b^{(l)} \\ b_B^{(l)} \end{bmatrix} \right) = \sigma\left( \begin{bmatrix} D_*^{-1} A H_*^{(l)} W^{(l)} + b_*^{(l)} \\ \left(D_B^{-1} A_0 H_*^{(l)} + D_B^{-1} H_0^{(l)}\right) W^{(l)} + b_B^{(l)} \end{bmatrix} \right) =: \begin{bmatrix} H_*^{(l+1)} \\ H_0^{(l+1)} \end{bmatrix}, \qquad (7)$$ where the first layer is chosen as the enlarged feature matrix $\tilde{H}^{(0)} = \tilde{X}$, the weight matrix $W^{(l)}$ is the same as in Equation 1, and the bias vector $\tilde{b}^{(l)}$ has dimension $(n+B)$ and is written as $[b^{(l)T}, b_B^{(l)T}]^T$. We denote $b_*^{(l)} = D_*^{-1} A_0^T H_0^{(l)} W^{(l)} + b^{(l)}$ to make the format clear.
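Below is a minimal sketch of the two-stage generator of Section 3.1 and the enlarged graph of Equations 5-6. The MLP architectures, the sigmoid relaxation of the binary adjacency relation $a_0$, and all names are assumptions for illustration; in practice $a_0$ would need to be discretised (e.g., by thresholding or sampling) to yield the 0/1 relation the paper defines.

```python
import torch
import torch.nn as nn

class GraphGenerator(nn.Module):
    """Two-stage generator: G1 maps noise z to fake features x0, and G2 maps
    (z, x0) to adjacency scores a0 against the n real nodes."""
    def __init__(self, z_dim, feat_dim, n_nodes, hidden=128):
        super().__init__()
        self.g1 = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                nn.Linear(hidden, feat_dim))
        self.g2 = nn.Sequential(nn.Linear(z_dim + feat_dim, hidden), nn.ReLU(),
                                nn.Linear(hidden, n_nodes), nn.Sigmoid())

    def forward(self, z):
        x0 = self.g1(z)                           # fake node features
        a0 = self.g2(torch.cat([z, x0], dim=-1))  # relaxed edge scores to real nodes
        return x0, a0

def combine_graph(A, X, a0, x0):
    """Build the enlarged graph of Equations 5-6 for B generated nodes.
    A: (n, n), X: (n, d), a0: (B, n), x0: (B, d)."""
    B = a0.shape[0]
    eye = torch.eye(B, device=A.device)  # I_B block for the generated nodes
    A_tilde = torch.cat([torch.cat([A, a0.t()], dim=1),
                         torch.cat([a0, eye], dim=1)], dim=0)   # (n+B, n+B)
    X_tilde = torch.cat([X, x0], dim=0)                         # (n+B, d)
    return A_tilde, X_tilde
```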
From Equation 7, the layer propagation for the real nodes (the first n rows) follows the same format as the GCN layer propagation in Equation 1. As a special case, for the zero generator ($A_0 = 0$ or $X_0 = 0$), the performance of the classifier on $V \cup V^G$ would be the same as that of the original classifier on V. For the last layer $\tilde{H}^{(L)} \in \mathbb{R}^{(n+B) \times M}$, we adopt the strategy of Salimans et al. (2016) to obtain the (M+1)-class label $\tilde{y}$ by $$\tilde{y} = \mathrm{softmax}\left( \tilde{H}^{(L)} \,\|\, \mathbf{0}_{(n+B) \times 1} \right), \qquad (8)$$ where $\|$ denotes concatenation and $\mathbf{0}_{(n+B) \times 1} \in \mathbb{R}^{(n+B) \times 1}$ is a zero column vector. The loss function for the classifier in GraphCGAN follows the same format as Equation 2.
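The (M+1)-class output of Equation 8 and the feature-matching loss of Equation 3 each reduce to a few lines; this sketch assumes $f(\cdot)$ is the second-to-last GNN layer, as chosen in Section 3.2, and the function names are illustrative.

```python
import torch

def m_plus_one_logits(H_L):
    """Equation 8: append a fixed zero logit as the (M+1)-th 'fake' class, so the
    softmax over M+1 classes is determined by the M real-class logits alone."""
    zeros = torch.zeros(H_L.shape[0], 1, device=H_L.device)
    return torch.cat([H_L, zeros], dim=1)  # softmax applied downstream

def feature_matching_loss(f_real, f_fake):
    """Equation 3: squared L2 distance between the mean features of real
    unlabeled nodes and of generated nodes."""
    return (f_real.mean(dim=0) - f_fake.mean(dim=0)).pow(2).sum()
```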
The paper presents a method to combine graph convolutional neural networks (GCNs) with generative adversarial networks (GANs). The authors focus on the problem of semi-supervised learning on graphs and propose an end-to-end framework in which the generative model is followed by direct convolutions on the graph nodes. Experiments are conducted on standard benchmark datasets, and the proposed method, GraphCGAN, is compared against several state-of-the-art approaches.
Multi-Time Attention Networks for Irregularly Sampled Time Series
1 INTRODUCTION . Irregularly sampled time series occur in application domains including healthcare, climate science, ecology, astronomy, biology and others. It is well understood that irregular sampling poses a significant challenge to machine learning models, which typically assume fully-observed, fixed-size feature representations (Marlin et al., 2012; Yadav et al., 2018). While recurrent neural networks (RNNs) have been widely used to model such data because of their ability to handle variable-length sequences, basic RNNs assume regular spacing between observation times as well as alignment of the time points where observations occur for different variables (i.e., fully-observed vectors). In practice, both of these assumptions can fail to hold for real-world sparse and irregularly observed time series. To respond to these challenges, there has been significant progress over the last decade on building and adapting machine learning models that can better capture the structure of irregularly sampled multivariate time series (Li & Marlin, 2015; 2016; Lipton et al., 2016; Futoma et al., 2017; Che et al., 2018; Shukla & Marlin, 2019; Rubanova et al., 2019). In this work, we introduce a new model for multivariate, sparse and irregularly sampled time series that we refer to as Multi-Time Attention networks, or mTANs. mTANs are fundamentally continuous-time, interpolation-based models. Their primary innovations are the inclusion of a learned continuous-time embedding mechanism coupled with a time attention mechanism that replaces the use of a fixed similarity kernel when forming representations from continuous-time inputs. This gives mTANs more representational flexibility than previous interpolation-based models (Shukla & Marlin, 2019). Our approach re-represents an irregularly sampled time series at a fixed set of reference points. The proposed time attention mechanism uses reference time points as queries and the observed time points as keys. We propose an encoder-decoder framework for end-to-end learning using an mTAN module to interface with given multivariate, sparse and irregularly sampled time series inputs. The encoder takes the irregularly sampled time series as input and produces a fixed-length latent representation over a set of reference points, while the decoder uses the latent representations to produce reconstructions conditioned on the set of observed time points. Learning uses established methods for variational autoencoders (Rezende et al., 2014; Kingma & Welling, 2014). (An implementation is available at https://github.com/reml-lab/mTAN.) The main contributions of the mTAN model framework are: (1) It provides a flexible approach to modeling multivariate, sparse and irregularly sampled time series data (including irregularly sampled time series of partially observed vectors) by leveraging a time attention mechanism to learn temporal similarity from data instead of using fixed kernels. (2) It uses a temporally distributed latent representation to better capture local structure in time series data. (3) It provides interpolation and classification performance that is as good as or better than current state-of-the-art methods, while providing significantly reduced training times. 2 RELATED WORK . An irregularly sampled time series is a time series with irregular time intervals between observations.
In the multivariate setting , there can also be a lack of alignment across different variables within the same multivariate time series . Finally , when gaps between observation times are large , the time series is also considered to be sparse . Such data occur in electronic health records ( Marlin et al. , 2012 ; Yadav et al. , 2018 ) , climate science ( Schulz & Stattegger , 1997 ) , ecology ( Clark & Bjørnstad , 2004 ) , biology ( Ruf , 1999 ) , and astronomy ( Scargle , 1982 ) . It is well understood that such data cause significant issues for standard supervised machine learning models that typically assume fully observed , fixed-size feature representations ( Marlin et al. , 2012 ) . A basic approach to dealing with irregular sampling is fixed temporal discretization . For example , Marlin et al . ( 2012 ) and Lipton et al . ( 2016 ) discretize continuous-time observations into hour-long bins . This has the advantage of simplicity , but requires ad-hoc handling of bins with more than one observation and results in missing data when bins are empty . The alternative to temporal discretization is to construct models with the ability to directly use an irregularly sampled time series as input . Che et al . ( 2018 ) present several methods based on gated recurrent unit networks ( GRUs , Chung et al . ( 2014 ) ) , including an approach that takes as input a sequence consisting of observed values , missing data indicators , and time intervals since the last observation . Pham et al . ( 2017 ) proposed to capture time irregularity by modifying the forget gate of an LSTM ( Hochreiter & Schmidhuber , 1997 ) , while Neil et al . ( 2016 ) introduced a new time gate that regulates access to the hidden and cell state of the LSTM . While these approaches allow the network to handle event-based sequences with irregularly spaced vector-valued observations , they do not support learning directly from vectors that are partially observed , which commonly occurs in the multivariate setting because of lack of alignment of observation times across different variables . Another line of work has looked at using observations from the future as well as from the past for interpolation . Yoon et al . ( 2019 ) and Yoon et al . ( 2018 ) presented an approach based on the multi-directional RNN ( M-RNN ) that can leverage observations from the relative past and future of a given time point . Shukla & Marlin ( 2019 ) proposed the interpolation-prediction network framework , consisting of several semi-parametric RBF interpolation layers that interpolate multivariate , sparse , and irregularly sampled input time series against a set of reference time points while taking into account all observed data in a time series . Horn et al . ( 2020 ) proposed a set function-based approach for classifying time-series with irregularly sampled and unaligned observation . Chen et al . ( 2018 ) proposed a variational auto-encoder model ( Kingma & Welling , 2014 ; Rezende et al. , 2014 ) for continuous time data based on the use of a neural network decoder combined with a latent ordinary differential equation ( ODE ) model . They model time series data via a latent continuous-time function that is defined via a neural network representation of its gradient field . Building on this , Rubanova et al . ( 2019 ) proposed a latent ODE model that uses an ODE-RNN model as the encoder . ODE-RNNs use neural ODEs to model the hidden state dynamics and an RNN to update the hidden state in the presence of a new observation . De Brouwer et al . 
( 2019 ) proposed GRU-ODE-Bayes , a continuous-time version of the Gated Recurrent Unit ( Chung et al. , 2014 ) . Instead of the encoder-decoder architecture where the ODE is decoupled from the input processing , GRU-ODE-Bayes provides a tighter integration by interleaving the ODE and the input processing steps . Several recent approaches have also used attention mechanisms to model irregularly sampled time series ( Song et al. , 2018 ; Tan et al. , 2020 ; Zhang et al. , 2019 ) as well as medical concepts ( Peng et al. , 2019 ; Cai et al. , 2018 ) . Most of these approaches are similar to Vaswani et al . ( 2017 ) where they replace the positional encoding with an encoding of time and model sequences using self-attention . However , instead of adding the time encoding to the input representation as in Vaswani et al . ( 2017 ) , they concatenate it with the input representation . These methods use a fixed time encoding similar to the positional encoding of Vaswani et al . ( 2017 ) . Xu et al . ( 2019 ) learn a functional time representation and concatenate it with the input event embedding to model time-event interactions . Like Xu et al . ( 2019 ) and Kazemi et al . ( 2019 ) , our proposed method learns a time representation . However , instead of concatenating it with the input embedding , our model learns to attend to observations at different time points by computing a similarity weighting using only the time embedding . Our proposed model uses the time embedding as both the queries and keys in the attention formulation . It learns an interpolation over the query time points by attending to the observed values at key time points . Our proposed method is thus similar to kernel-based interpolation , but learning the time attention based similarity kernel gives our model more flexibility compared to methods like that of Shukla & Marlin ( 2019 ) that use similarity kernels with fixed functional forms . Another important difference relative to many of these previous methods is that our proposed approach attends only to the observed data dimensions at each time point and hence does not require a separate imputation step to handle vector valued observations with an arbitrary collection of dimensions missing at any given time point . 3 THE MULTI-TIME ATTENTION MODULE . In this section , we present the proposed Multi-Time Attention Module ( mTAN ) . The role of this module is to re-represent a sparse and irregularly sampled time series in a fixed-dimensional space . This module uses multiple continuous-time embeddings and attention-based interpolation . We begin by presenting notation followed by the time embedding and attention components . Notation : In the case of a supervised learning task , we let D = { ( sn , yn ) |n = 1 , ... , N } represent a data set containing N data cases . An individual data case consists of a single target value yn ( discrete for classification ) , as well as a D-dimensional , sparse and irregularly sampled multivariate time series sn . Different dimensions d of the multivariate time series can have observations at different times , as well as different total numbers of observations Ldn . Thus , we represent time series d for data case n as a tuple sdn = ( tdn , xdn ) where tdn = [ t1dn , ... , tLdndn ] is the list of time points at which observations are defined and xdn = [ x1dn , ... , xLdndn ] is the corresponding list of observed values . In the case of an unsupervised task such as interpolation , each data case consists of a multivariate time series sn only . 
We drop the data case index n for brevity when the context is clear. Time Embedding: The time attention module is based on embedding continuous time points into a vector space. We generalize the notion of the positional encoding used in transformer-based models to continuous time. Time attention networks simultaneously leverage H embedding functions $\phi_h(t)$, each outputting a representation of size $d_r$. Dimension i of embedding h is defined as follows: $$\phi_h(t)[i] = \begin{cases} \omega_{0h} \cdot t + \alpha_{0h}, & \text{if } i = 0 \\ \sin(\omega_{ih} \cdot t + \alpha_{ih}), & \text{if } 0 < i < d_r \end{cases} \qquad (1)$$ where the $\omega_{ih}$'s and $\alpha_{ih}$'s are learnable parameters. The periodic terms can capture periodicity in time series data; in this case, $\omega_{ih}$ and $\alpha_{ih}$ represent the frequency and phase of the sine function. The linear term, on the other hand, can capture non-periodic patterns dependent on the progression of time. For a given difference $\Delta$, $\phi_h(t + \Delta)$ can be represented as a linear function of $\phi_h(t)$. Learning the periodic time embedding functions is equivalent to using a one-layer fully connected network with a sine non-linearity to map the time values into a higher-dimensional space. By contrast, the positional encoding used in transformer models is defined only for discrete positions. We note that our time embedding functions subsume positional encodings when evaluated at discrete positions. Multi-Time Attention: The time embedding component described above takes a continuous time point and embeds it into H different $d_r$-dimensional spaces. In this section, we describe how we leverage time embeddings to produce a continuous-time embedding module for sparse and irregularly sampled time series. This multi-time attention embedding module mTAN(t, s) takes as input a query time point t and a set of keys and values in the form of a D-dimensional multivariate sparse and irregularly sampled time series s (as defined in the notation section above), and returns a J-dimensional embedding at time t. This process leverages a continuous-time attention mechanism applied to the H time embeddings. The complete computation is described below: $$\mathrm{mTAN}(t, s)[j] = \sum_{h=1}^{H} \sum_{d=1}^{D} \hat{x}_{hd}(t, s) \cdot U_{hdj} \qquad (2)$$ $$\hat{x}_{hd}(t, s) = \sum_{i=1}^{L_d} \kappa_h(t, t_{id})\, x_{id} \qquad (3)$$ $$\kappa_h(t, t_{id}) = \frac{\exp\left( \phi_h(t)\, \mathbf{w} \mathbf{v}^T \phi_h(t_{id})^T / \sqrt{d_k} \right)}{\sum_{i'=1}^{L_d} \exp\left( \phi_h(t)\, \mathbf{w} \mathbf{v}^T \phi_h(t_{i'd})^T / \sqrt{d_k} \right)} \qquad (4)$$ As shown in Equation 2, dimension j of the mTAN embedding, mTAN(t, s)[j], is given by a linear combination of intermediate univariate continuous-time functions $\hat{x}_{hd}(t, s)$. There is one such function defined for each input data dimension d and each time embedding h. The parameters $U_{hdj}$ are learnable linear-combination weights. As shown in Equation 3, the structure of the intermediate continuous-time function $\hat{x}_{hd}(t, s)$ is essentially a kernel smoother applied to the d-th dimension of the time series. However, the interpolation weights $\kappa_h(t, t_{id})$ are defined by a time attention mechanism that leverages time embeddings, as shown in Equation 4. As we can see, the same time embedding function $\phi_h(t)$ is applied for all data dimensions. The form of the attention mechanism is a softmax function over the observed time points $t_{id}$ for dimension d. The activation within the softmax is a scaled inner product between the time embedding $\phi_h(t)$ of the query time point t and the time embedding $\phi_h(t_{id})$ of the observed time point, the key. The parameters $\mathbf{w}$ and $\mathbf{v}$ are each $d_r \times d_k$ matrices, where $d_k \le d_r$.
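A simplified single-head, single-dimension sketch of Equations 1, 3 and 4 follows; batching, the loop over the H heads, and the final output mixing with U in Equation 2 are omitted, and the tensor shapes and function names are illustrative assumptions.

```python
import math
import torch
import torch.nn as nn

class TimeEmbedding(nn.Module):
    """Equation 1: dimension 0 is linear in t, dimensions 1..d_r-1 are learned sinusoids."""
    def __init__(self, d_r):
        super().__init__()
        self.omega = nn.Parameter(torch.randn(d_r))  # one slope plus d_r-1 frequencies
        self.alpha = nn.Parameter(torch.randn(d_r))  # one offset plus d_r-1 phases

    def forward(self, t):
        # t: tensor of time values, any shape; output gains a trailing dim of size d_r
        z = self.omega * t.unsqueeze(-1) + self.alpha
        return torch.cat([z[..., :1], torch.sin(z[..., 1:])], dim=-1)

def mtan_interpolate(phi_q, phi_k, values, w, v, observed):
    """Equations 3-4 for a single head h and data dimension d.
    phi_q: (R, d_r) embeddings of query times; phi_k: (L, d_r) of observed times;
    values: (L,) observed x_id; w, v: (d_r, d_k) projections; observed: (L,) bool mask."""
    d_k = w.shape[1]
    scores = (phi_q @ w) @ (phi_k @ v).t() / math.sqrt(d_k)  # (R, L) scaled inner products
    scores = scores.masked_fill(~observed, float('-inf'))    # attend only to observed points
    kappa = torch.softmax(scores, dim=-1)                    # interpolation weights, Eq. 4
    return kappa @ values                                    # smoothed values x_hat, Eq. 3
```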
We use a scaling factor $1/\sqrt{d_k}$ to normalize the dot product, counteracting the growth of its magnitude as the dimension $d_k$ increases. Learning the time embeddings provides our model with the flexibility to learn complex temporal kernel functions $\kappa_h(t, t')$. The use of multiple simultaneous time embeddings $\phi_h(t)$ and a final linear combination across time embedding dimensions and data dimensions means that the final output representation function mTAN(t, s) is extremely flexible. Different input dimensions can leverage different time embeddings via learned sparsity patterns in the parameter tensor U. Information from different data dimensions can also be mixed together to create compact reduced-dimensional representations. We note that all of the required computations can be parallelized using masking variables to deal with unobserved dimensions, allowing for efficient implementation on a GPU. Discretization: Since the mTAN module defines a continuous function of t given s, it can not be directly incorporated into neural network architectures that expect inputs in the form of fixed-dimensional vectors or discrete sequences. However, the mTAN module can easily be adapted to produce such an output representation by materializing its output at a set of reference time points $\mathbf{r} = [r_1, \ldots, r_K]$. In some cases, we may have a fixed set of such points; in other cases, the set of reference time points may need to depend on s itself. In particular, we define the auxiliary function $\rho(s)$ to return the set of time points at which there is an observation on any dimension of s. Given a collection of reference time points $\mathbf{r}$, we define the discretized mTAN module $\mathrm{mTAND}(\mathbf{r}, s)$ as $\mathrm{mTAND}(\mathbf{r}, s)[i] = \mathrm{mTAN}(r_i, s)$. This module takes as input the set of reference time points $\mathbf{r}$ and the time series s, and outputs a sequence of mTAN embeddings of length $|\mathbf{r}|$, each of dimension J. The architecture of the mTAND module is shown in Figure 1. The mTAND module can be used to interface sparse and irregularly sampled multivariate time series data with any deep neural network layer type, including fully-connected, recurrent, and convolutional layers. In the next section, we describe the construction of a temporal encoder-decoder architecture leveraging the mTAND module, which can be applied to both classification and interpolation tasks.
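Discretization then amounts to evaluating the continuous module at the reference points. The sketch below assumes an `mtan(t, s)` function implementing Equation 2 and a series represented as a list of (times, values) pairs, one per dimension; both are illustrative assumptions.

```python
import torch

def rho(series):
    """Auxiliary rho(s): the union of time points observed on any dimension of s."""
    return sorted({t for times, _ in series for t in times})

def mtand(reference_times, series):
    """mTAND(r, s)[i] = mTAN(r_i, s): materialise the continuous embedding at each
    reference point, giving a (|r|, J) sequence for downstream network layers."""
    return torch.stack([mtan(t, series) for t in reference_times], dim=0)
```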
This paper proposes a novel approach that learns an embedding of continuous time values and uses an attention mechanism to produce a fixed-length representation of a time series containing a variable number of observations. In particular, it leverages the mTAN module in an encoder-decoder framework for both unsupervised and supervised learning. The main contribution of this paper is the introduction of Multi-Time Attention Networks, which learn a time representation and learn to attend to observations at different time points by computing a similarity weighting from the learned time embedding. Empirical studies are performed to show the superiority of the proposed mTANs over several baseline approaches on unsupervised and supervised learning tasks.
Capturing Label Characteristics in VAEs
1 INTRODUCTION . Learning the characteristic factors of perceptual observations has long been desired for effective machine intelligence (Brooks, 1991; Bengio et al., 2013; Hinton & Salakhutdinov, 2006; Tenenbaum, 1998). In particular, the ability to learn meaningful factors—capturing human-understandable characteristics from data—has been of interest from the perspective of human-like learning (Tenenbaum & Freeman, 2000; Lake et al., 2015) and improving decision making and generalization across tasks (Bengio et al., 2013; Tenenbaum & Freeman, 2000). At its heart, learning meaningful representations of data allows one to not only make predictions, but critically also to manipulate factors of a datapoint. For example, we might want to manipulate the age of a person in an image. Such manipulations allow for the expression of causal effects between the meaning of factors and their corresponding realizations in the data. They can be categorized into conditional generation—the ability to construct whole exemplar data instances with characteristics dictated by constraining relevant factors—and intervention—the ability to manipulate just particular factors for a given data point, and subsequently affect only the associated characteristics. A particularly flexible framework within which to explore the learning of meaningful representations are variational autoencoders (VAEs), a class of deep generative models where representations of data are captured in the underlying latent variables. A variety of methods have been proposed for inducing meaningful factors in this framework (Kim & Mnih, 2018; Mathieu et al., 2019; Mao et al., 2019; Kingma et al., 2014; Siddharth et al., 2017; Vedantam et al., 2018), and it has been argued that the most effective generally exploit available labels to (partially) supervise the training process (Locatello et al., 2019). Such approaches aim to associate certain factors of the representation (or equivalently factors of the generative model) with the labels, such that the former encapsulate the latter—providing a mechanism for manipulation via targeted adjustments of relevant factors. Prior approaches have looked to achieve this by directly associating certain latent variables with labels (Kingma et al., 2014; Siddharth et al., 2017; Maaløe et al., 2016). Originally motivated by the desiderata of semi-supervised classification, each label is given a corresponding latent variable of the same type (e.g. categorical), whose value is fixed to that of the label when the label is observed and imputed by the encoder when it is not. Though natural, we argue that this assumption is not just unnecessary but actively harmful from a representation-learning perspective, particularly in the context of performing manipulations. To allow manipulations, we want to learn latent factors that capture the characteristic information associated with a label, which is typically much richer than just the label value itself. For example, there are various visual characteristics of people's faces associated with the label "young," but simply knowing the label is insufficient to reconstruct these characteristics for any particular instance. Learning a meaningful representation that captures these characteristics, and isolates them from others, requires encoding more than just the label value itself, as illustrated in Figure 1.
The key idea of our work is to use labels to help capture and isolate this related characteristic information in a VAE's representation. We do this by exploiting the interplay between the labels and inputs to capture more information than the labels alone convey; information that will be lost (or at least entangled) if we directly encode the label itself. Specifically, we introduce the characteristic capturing VAE (CCVAE) framework, which employs a novel VAE formulation that captures label characteristics explicitly in the latent space. For each label, we introduce a set of characteristic latents that are induced into capturing the characteristic information associated with that label. By coupling this with a principled variational objective and carefully structuring the characteristic-latent and label variables, we show that CCVAEs successfully capture meaningful representations, enabling better performance on manipulation tasks while matching previous approaches on prediction tasks. In particular, they permit certain manipulation tasks that can not be performed with conventional approaches, such as manipulating characteristics without changing the labels themselves and producing multiple distinct samples consistent with the desired intervention. We summarize our contributions as follows: i) showing how labels can be used to capture and isolate rich characteristic information; ii) formulating CCVAEs, a novel model class and objective for supervised and semi-supervised learning in VAEs that allows this information to be captured effectively; iii) demonstrating CCVAEs' ability to successfully learn meaningful representations in practice. 2 BACKGROUND . VAEs (Kingma & Welling, 2013; Rezende et al., 2014) are a powerful and flexible class of model that combine the unsupervised representation-learning capabilities of deep autoencoders (Hinton & Zemel, 1994) with generative latent-variable models—a popular tool for capturing factored low-dimensional representations of higher-dimensional observations. In contrast to deep autoencoders, generative models capture representations of data not as distinct values corresponding to observations, but rather as distributions of values. A generative model defines a joint distribution over observed data x and latent variables z as $p_\theta(x, z) = p(z)\, p_\theta(x \mid z)$. Given a model, learning representations of data can be viewed as performing inference—learning the posterior distribution $p_\theta(z \mid x)$ that constructs the distribution of latent values for a given observation. VAEs employ amortized variational inference (VI) (Wainwright & Jordan, 2008; Kingma & Welling, 2013) using the encoder and decoder of an autoencoder to transform this setup by i) taking the model likelihood $p_\theta(x \mid z)$ to be parameterized by a neural network using the decoder, and ii) constructing an amortized variational approximation $q_\phi(z \mid x)$ to the (intractable) posterior $p_\theta(z \mid x)$ using the encoder. The variational approximation of the posterior enables effective estimation of the objective—maximizing the marginal likelihood—through importance sampling. The objective is obtained by invoking Jensen's inequality to derive the evidence lower bound (ELBO) of the model, which is given as: $$\log p_\theta(x) = \log \mathbb{E}_{q_\phi(z \mid x)}\left[ \frac{p_\theta(z, x)}{q_\phi(z \mid x)} \right] \ge \mathbb{E}_{q_\phi(z \mid x)}\left[ \log \frac{p_\theta(z, x)}{q_\phi(z \mid x)} \right] \equiv \mathcal{L}(x; \phi, \theta). \qquad (1)$$
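For concreteness, here is a single-sample Monte Carlo estimator of the bound in Equation 1, assuming the encoder, decoder and prior return `torch.distributions` objects; the function and argument names are illustrative, not the paper's implementation.

```python
import torch

def elbo_estimate(x, encoder, decoder, prior):
    """Single-sample estimate of Equation 1, using the identity
    log p(z,x)/q(z|x) = log p(x|z) + log p(z) - log q(z|x)."""
    q = encoder(x)                                # variational posterior q_phi(z|x)
    z = q.rsample()                               # reparameterised sample: gradients flow to phi
    log_px_z = decoder(z).log_prob(x).sum(-1)     # reconstruction term log p_theta(x|z)
    log_pz = prior.log_prob(z).sum(-1)            # prior density log p(z)
    log_qz_x = q.log_prob(z).sum(-1)              # posterior density log q_phi(z|x)
    return (log_px_z + log_pz - log_qz_x).mean()
```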
Given observations $\mathcal{D} = \{x_1, \ldots, x_N\}$, taken to be realizations of random variables generated from an unknown distribution $p_\mathcal{D}(x)$, the overall objective is $\frac{1}{N} \sum_n \mathcal{L}(x_n; \theta, \phi)$. Hierarchical VAEs (Sønderby et al., 2016) impose a hierarchy of latent variables, improving the flexibility of the approximate posterior; however, we do not consider these models in this work. Semi-supervised VAEs (SSVAEs) (Kingma et al., 2014; Maaløe et al., 2016; Siddharth et al., 2017) consider the setting where a subset of data $\mathcal{S} \subset \mathcal{D}$ is assumed to also have corresponding labels y. Denoting the (unlabeled) data as $\mathcal{U} = \mathcal{D} \setminus \mathcal{S}$, the log-marginal likelihood is decomposed as $$\log p(\mathcal{D}) = \sum_{(x,y) \in \mathcal{S}} \log p_\theta(x, y) + \sum_{x \in \mathcal{U}} \log p_\theta(x),$$ where the individual log-likelihoods are lower bounded by their ELBOs. Standard practice is then to treat y as a latent variable to marginalize over whenever the label is not provided. More specifically, most approaches consider splitting the latent space as $z = \{z_y, z_{\setminus y}\}$ and then directly fixing $z_y = y$ whenever the label is provided, such that each dimension of $z_y$ explicitly represents a predicted value of a label, with this value known exactly only for the labeled datapoints. Much of the original motivation for this (Kingma et al., 2014) was based around performing semi-supervised classification of the labels, with the encoder being used to impute the values of $z_y$ for the unlabeled datapoints. However, the framework is also regularly used as a basis for learning meaningful representations and performing manipulations, exploiting the presence of the decoder to generate new datapoints after intervening on the labels via changes to $z_y$. Our focus lies on the latter, for which we show this standard formulation leads to serious pathologies. Our primary goal is not to improve the fidelity of generations, but instead to demonstrate how label information can be used to structure the latent space such that it encapsulates and disentangles the characteristics associated with the labels. 3 RETHINKING SUPERVISION . As we explained in the last section, the de facto assumption for most approaches to supervision in VAEs is that the labels correspond to a partially observed augmentation of the latent space, $z_y$. However, this can cause a number of issues if we want the latent space to encapsulate not just the labels themselves, but also the characteristics associated with these labels; for example, encapsulating the youthful characteristics of a face, not just the fact that it is a "young" face. At an abstract level, such an approach fails to capture the relationship between the inputs and labels: it fails to isolate the characteristic information associated with each label from the other information required to reconstruct data. More specifically, it fails to deal with the following issues. Firstly, the information in a datapoint associated with a label is richer than that stored by the (typically categorical) label itself. That is not to say such information is absent when we impose $z_y = y$, but here it is entangled with the other latent variables $z_{\setminus y}$, which simultaneously contain the associated information for all the labels. Moreover, when y is categorical, it can be difficult to ensure that the VAE actually uses $z_y$, rather than just capturing information relevant to reconstruction in the higher-capacity, continuous $z_{\setminus y}$. Overcoming this is challenging and generally requires additional heuristics and hyper-parameters.
Second, we may wish to manipulate characteristics without fully changing the categorical label itself; for example, making a CelebA image depict more or less 'smiling' without fully changing its "smile" label. Here we do not know how to manipulate the latents to achieve this desired effect: we can only perform the binary operation of changing the relevant variable in $z_y$. Also, we often wish to keep a level of diversity when carrying out conditional generation and, in particular, interventions. For example, if we want to add a smile, there is no single correct answer for how the smile would look, but taking $z_y = \text{"smile"}$ only allows for a single point estimate of the change. Finally, taking the labels to be explicit latent variables can cause a mismatch between the VAE prior p(z) and the pushforward distribution of the data to the latent space, $q(z) = \mathbb{E}_{p_\mathcal{D}(x)}[q_\phi(z \mid x)]$. During training, latents are effectively generated according to q(z), but once learned, p(z) is used to make generations; variation between the two effectively corresponds to a train-test mismatch. As there is a ground-truth data distribution over the labels (which are typically not independent), taking the latents as the labels themselves implies that there will be a ground-truth $q(z_y)$. However, as this is not generally known a priori, we will inevitably end up with a mismatch. What do we want from supervision? Given these issues, it is natural to ask whether having latents directly correspond to labels is actually necessary. To answer this, we need to think about exactly what it is we are hoping to achieve through the supervision itself. Along with uses of VAEs more generally, the three most prevalent tasks are: a) Classification, predicting the labels of inputs where these are not known a priori; b) Conditional Generation, generating new examples conditioned on those examples conforming to certain desired labels; and c) Intervention, manipulating certain desired characteristics of a data point before reconstructing it. Inspecting these tasks, we see that for classification we need a classifier from z to y, for conditional generation we need a mechanism for sampling z given y, and for interventions we need to know how to manipulate z to bring about a desired change. None of these require the labels to directly correspond to latent variables. Moreover, as we previously explained, this assumption can be actively harmful, for instance by restricting the range of interventions that can be performed.
The paper proposes to re-think the fashion of using label information in the VAE framework. The authors propose to disentangle information about the label (or, more generally, the context) in a "hard-coded" manner, namely, by using a separate set of variables for the label (context). The paper is written in a lucid manner, and the presented results are sound.
Regioned Episodic Reinforcement Learning
Goal-oriented reinforcement learning algorithms are often good at exploration , not exploitation , while episodic algorithms excel at exploitation , not exploration . As a result , neither of these approaches alone can lead to a sample-efficient algorithm in complex environments with high dimensional state space and delayed rewards . Motivated by these observations and shortcomings , in this paper , we introduce Regioned Episodic Reinforcement Learning ( RERL ) that combines the episodic and goal-oriented learning strengths and leads to a more sample efficient and effective algorithm . RERL achieves this by decomposing the space into several sub-space regions and constructing regions that lead to more effective exploration and high values trajectories . Extensive experiments on various benchmark tasks show that RERL outperforms existing methods in terms of sample efficiency and final rewards . 1 INTRODUCTION . Despite its notable success , the application of reinforcement learning ( RL ) still suffers from sample efficiency in real-world applications . To achieve human-level performance , episodic RL ( Pritzel et al. , 2017 ; Lee et al. , 2019 ) is proposed to construct episodic memory , enabling the agent to assimilate new experiences and act upon them rapidly . While episodic algorithms work well for tasks where it is easy to collect valuable trajectories and easy to design dense reward functions , both of these requirements become roadblocks when applying to complex environments with sparse reward . Goal-oriented RL ( Andrychowicz et al. , 2017 ; Paul et al. , 2019 ) decomposes the task into several goal-conditioned tasks , where the intrinsic reward is defined as the success probability of reaching each goal by the current policy and the ability to guide the agent to finally reach the target state . These methods intend to explore more unique trajectories and use all trajectories in the training procedure , which may involve unrelated ones and result in inefficient exploitation . In this paper , we propose a novel framework that can combine the strengths of episodic and goal-oriented algorithms and thus can efficiently explore and rapidly exploit high-value trajectories . The inefficient learning of deep RL has several plausible explanations . In this work , we focus on addressing these challenges : ( C1 ) Environments with a sparse reward signal can be difficult to learn , as there may be very few instances where the reward is non-zero . Goal-oriented RL can mitigate this issue by building intrinsic reward signals ( Ren et al. , 2019 ) , but suffer from the difficulty of generating appropriate goals from high-dimensional space . ( C2 ) Training goal-oriented RL models using all historical trajectories rather than selected ones would involve unrelated trajectories in training . The training process of goal generation algorithms could be unstable and inefficient ( Kumar et al. , 2019 ) , as data distribution shifts when the goal changes . It can be fairly efficient if updates happen only with highly related trajectories . ( C3 ) Redundant exploration is another issue that limits the performance as it is inefficient for the agent to explore the same areas repeatedly ( Ostrovski et al. , 2017 ) . Instead , it would be much more sensible for agents to learn to divide the task into several sub-tasks to avoid redundant exploration . 
In this paper , we propose Regioned Episodic Reinforcement Learning ( RERL ) , which tackles the limitations of deep RL listed above and demonstrates dramatic improvements in a wide range of environments . Our work is , in part , inspired by studies on psychology and cognitive neuroscience ( Lengyel & Dayan , 2008 ; Manns et al. , 2003 ) , which discovers that when we observe an event , we scan through our corresponding memory storing this kind of events and seek experiences related to this one . Our agent regionalizes the historical trajectories into several region-based memories∗ . At each timestep , the region controller will evaluate each region and select one for further exploration and exploitation . Each memory binds a specific goal and a series of goal-oriented trajectories and uses a value-based look-up to retrieve highly related and high-quality trajectories when updating the value function . We adopt hindsight ( i.e. , the goal state is always generated from visited states in the memory ) and diversity ( i.e. , goal state should be distant from previous goal states in other memories ) constraints in goal generation for goal reachability and agent exploration . This architecture conveys several benefits : ( 1 ) We can automatically construct region-based memory by goal-oriented exploration , where trajectories guided by the same goal share one memory ( see Section 3.1 ) . ( 2 ) Within each memory , we alleviate the high-dimensional issue ( C1 ) by enforcing that the goal space is a set of visited states ( see Section 3.2 ) . ( 3 ) In order to improve efficiency in exploitation ( C2 ) , our architecture stabilizes training using trajectories within the memory instead of randomly selected transitions ( see Section 3.3 for details ) . ( 4 ) Our algorithm takes previous goals in other memories when generating a goal in current memory . Specifically , we propose the diversity constraint to encourage the agent to explore unknown states ( see Section 3.2 ) , which aims at improving exploration efficiency ( C3 ) . The contributions of this paper are as follows : ( 1 ) We introduce RERL , a novel framework that combines the strengths of episodic RL and goal-oriented RL for efficient exploration and exploitation . ( 2 ) We propose hindsight and diversity constraints in goal generation , which allows the agents to construct and update the regioned memories automatically . ( 3 ) We evaluate RERL in challenging robotic environments and show that our method can naturally handle sparse reward environments without any additional prior knowledge and manually modified reward function . RERL can be closely incorporated with various policy networks such as deep deterministic policy gradient ( DDPG ( Lillicrap et al. , 2015 ) ) and proximal policy optimization ( PPO ( Schulman et al. , 2017 ) ) . Further , ablation studies demonstrate that our exploration strategy is robust across a wide set of hyper-parameters . 2 PRELIMINARIES . In RL ( Sutton & Barto , 2018 ) , the goal of an agent is to maximize its expected cumulative reward by interacting with a given environment . The RL problem can be formulated as a Markov Decision Process ( MDP ) by a tuple ( S , A , P , r , γ ) , where S is the state space , A is the action space , P : S × A → ∆ ( S ) is the state transition probability distribution , r : S × A → [ 0 , 1 ] is the reward function , and γ ∈ [ 0 , 1 ) is the discount factor for future rewards . 
Our objective is to find a stochastic policy $\pi : S \times A \to [0, 1)$ that maximizes the expected cumulative reward $R_t = \sum_{k=0}^{T} \gamma^k r_{t+k}$ within the MDP, where T is the episode length. In the finite-horizon setting, the state-action value function $Q^\pi(s, a) = \mathbb{E}[R_t \mid s_t = s, a]$ is the expected return for executing action a in state s and following $\pi$ afterward. The value function can be defined as $$V^\pi(s) := \mathbb{E}\left[ \sum_{k=0}^{T} \gamma^k r_{t+k}(s_t, a_t) \,\middle|\, s_t = s, \pi \right], \quad \forall s \in S, \qquad (1)$$ and the goal of the agent is to maximize the expected return of each state $s_t$. Deep Q Network (DQN, Mnih et al. (2015)) utilizes an off-policy learning strategy, which samples $(s_t, a_t, r_t, s_{t+1})$ tuples from a replay buffer for training. It is a typical parametric RL method and suffers from sample inefficiency due to slow gradient-based updates. The key idea of episodic RL is to store good past experiences in a tabular-based non-parametric memory and rapidly latch onto past successful policies when encountering similar states, instead of waiting for many optimization steps. However, in environments with sparse rewards, there may be very few instances where the reward is non-zero, making it difficult for an agent to find good past experiences. In order to address this issue, goal-oriented RL is proposed. In the goal-conditioned setting that we use here, the policy and the reward are also conditioned on a goal $g \in G$ (Schaul et al., 2015). The distance function d (used to define goal completion and to generate a sparse reward upon completion of the goal) may be exposed as a shaped intrinsic reward without any additional domain knowledge: $r(s_t, a_t \mid g) = 1$ if $d(\phi(\cdot \mid s_{t+1}), g) \le \delta$, and $r(s_t, a_t \mid g) = -d(\phi(\cdot \mid s_{t+1}), g)$ otherwise, where $\phi : S \to G$ is a known and tractable mapping.

∗ The common idea our method shares with neuroscience is utilizing highly related information to promote learning efficiency; the difference is that memories are regioned according to the generated goals in this paper, and according to kinds of events in cognitive neuroscience.

Algorithm 1 (Framework of RERL)
1: repeat
2: Select a region together with its region-based memory.
3: Generate goals for exploration with goal-oriented RL.
4: Interact with the environment.
5: Store historical trajectories into the memory.
6: Update value estimation for exploitation with episodic RL.
7: until the Q function converges.

While we expect the cooperation of goal generation and the distance function themselves to lead the agent to the final state (global optimum), in practice we need to consider that there exist local optima due to the state space structure or transition dynamics (Trott et al., 2019). Once we can generate an appropriate goal g and anti-goal $\bar{g}$, we are able to redefine the intrinsic reward function as: $$r(s_t, a_t \mid g, \bar{g}) := \begin{cases} 1, & d(\phi(\cdot \mid s_{t+1}), g) \le \delta \\ \min\left[ 0, -d(\phi(\cdot \mid s_{t+1}), g) + d(\phi(\cdot \mid s_{t+1}), \bar{g}) \right], & \text{otherwise} \end{cases} \qquad (2)$$ where $s_{t+1} \sim P(\cdot \mid s_t, a_t)$ denotes the next state; $\phi : S \to G$ is the extended joint generation for both the goal and the anti-goal; $\bar{g} \in G$ is the anti-goal and acts as a state that the agent should avoid, which prevents the policy from getting stuck at a local optimum and enables the agent to learn to reach the goal location quickly (Trott et al., 2019); and $\delta$ is a given threshold indicating whether the goal is considered to be reached (Plappert et al., 2018).
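As a concrete reading of Equation 2, here is a small sketch assuming Euclidean distance for d and a vector-valued output for $\phi$; the function and argument names are illustrative, not the paper's implementation.

```python
import numpy as np

def intrinsic_reward(next_repr, goal, anti_goal, delta):
    """Equation 2: +1 once within delta of the goal, otherwise a non-positive
    shaping term rewarding states closer to the goal than to the anti-goal.
    next_repr stands in for phi(.|s_{t+1}); Euclidean distance is assumed for d."""
    dist_goal = np.linalg.norm(next_repr - goal)
    if dist_goal <= delta:
        return 1.0                         # goal reached
    dist_anti = np.linalg.norm(next_repr - anti_goal)
    return min(0.0, -dist_goal + dist_anti)  # never positive away from the goal
```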
To make use of r(s_t, a_t | g, ḡ) in practice, we require a method to dynamically estimate the local optima that frustrate learning, without relying on domain expertise or hand-picked estimates. The idea of the universal value function (Schaul et al., 2015) is to use a single universal function approximator to represent a large number of value functions. In the goal-oriented scenario, the value function conditioned on any given goal g and anti-goal ḡ can be defined as

V^π(s, g, ḡ) := E_{a_t ∼ π(·|s_t, g, ḡ), s_{t+1} ∼ P(·|s_t, a_t)} [ ∑_{t=1}^{T} γ^t r(s_t, a_t | g, ḡ) | s_t = s ]. (3)

Let X := {x | x = (s, g, ḡ)} denote the joint set over state and goal spaces. Specifically, we define x* ∈ X over the initial state s_0 ∈ S, initial goal g* ∈ G, and initial anti-goal ḡ* ∈ G. At the start of every goal-oriented task (Plappert et al., 2018), an initial-terminal state pair is drawn from the task distribution. In this paper, we regard the terminal state as the original goal g* and set the original anti-goal ḡ* to the initial state, to encourage the agent to explore at the beginning. In this setting, the agent tries to find a policy π that maximizes the expectation of the discounted cumulative reward V^π(x*). From the comparison of Eqs. (1) and (3), one can see that the critical point for goal-oriented RL is to generate appropriate goals. However, as stated in (Ren et al., 2019), in goal-oriented RL the value function V^π(x) is optimized with respect to a shifting goal-conditioned task distribution, which makes learning unstable. This issue requires RL algorithms to rapidly obtain value estimates under the current goal-conditioned tasks, which is the strength of episodic RL. For convenience, we replace all (s, g, ḡ) tuples with x in the following.
This paper presents a new algorithm called Regioned Episodic Reinforcement Learning (RERL), which combines ideas from episodic memory with automatic sub-goal creation, or "goal-oriented" RL. The method works by dividing the state space into regions, where a different goal identifies each region. Then, using an episodic memory technique, the agent is able to learn about new experiences in a sample-efficient way. This allows the agent to explore effectively and learn a good policy quickly in problems with sparse rewards. The paper provides some theoretical justification for the new algorithm and empirical results that demonstrate its effectiveness.
Graph Representation Learning for Multi-Task Settings: a Meta-Learning Approach
1 INTRODUCTION

Figure 1: Performance drop when transferring node embeddings between tasks on (a) Node Classification (NC), (b) Graph Classification (GC), and (c) Link Prediction (LP) on the ENZYMES dataset. On the horizontal axis, "x -> y" indicates that the embeddings obtained from a model trained on task x are used to train a network for task y. (The original figure compares original vs. transferred embeddings; the drops in accuracy/ROC AUC are 13.21% and 14.52% for NC, 21.29% and 10.82% for GC, and 5.89% and 4.43% for LP.)

Graph Neural Networks (GNNs) are deep learning models that operate on graph-structured data, and have become one of the main topics of the deep learning research community. Part of their success is due to strong empirical performance on many graph-related tasks. Three tasks in particular, with many practical applications, have received the most attention: graph classification, node classification, and link prediction. GNNs are centered around the concept of node representation learning, and typically follow the same architectural pattern with an encoder-decoder structure (Hamilton et al., 2017; Chami et al., 2020; Wu et al., 2020). The encoder produces node embeddings (low-dimensional vectors capturing relevant structural and feature-related information about each node), while the decoder uses the embeddings to carry out the desired downstream task. The model is then trained in an end-to-end manner, giving rise to highly specialized node embeddings. While this can lead to state-of-the-art performance, it also affects the generalization and reusability of the embeddings. In fact, taking the encoder from a GNN trained on a given task and using its node embeddings to train a decoder for a different task leads to substantial performance loss, as shown in Figure 1. The low transferability of node embeddings requires the use of one specialized encoder and one specialized decoder for each considered task. However, many practical machine learning applications operate in resource-constrained environments where being able to share part of the model architecture between tasks is of great importance. Furthermore, the training signal from multiple related tasks can lead to better generalization. Nevertheless, making sure tasks do not negatively interfere with each other is not trivial (Standley et al., 2020). The problem of learning models that can perform multiple tasks is known as Multi-Task Learning (MTL), and is an open area of research attracting many researchers in the deep learning community (Vandenhende et al., 2020). MTL on graphs has not received much attention, and no single model capable of performing the three most common graph-related tasks has yet been proposed. In fact, we notice that training a multi-head model with the classical procedure, i.e., by performing multiple tasks concurrently on each graph and updating the parameters with some form of gradient descent to minimize the sum of the single-task losses, can lead to a performance loss with respect to single-task models. Thus, we propose a novel optimization-based meta-learning (Finn et al., 2017) procedure with a focus on representation learning that can generate node embeddings that generalize across tasks.
Our proposed meta-learning procedure produces task-generalizing node embeddings by aiming not for a setting of the parameters that can perform multiple tasks concurrently (like a classical method would do), nor for a setting that allows fast multi-task adaptation (like traditional meta-learning), but for a setting that can easily be adapted to perform each of the tasks singularly. In fact, our meta-learning procedure aims at a setting of the parameters where a few steps of gradient descent on a given task can lead to good performance on that task, hence removing the burden of directly learning to solve multiple tasks concurrently. We summarize our contributions as follows:
• We propose a novel method for learning representations that can generalize to multiple tasks. We apply it to the challenging setting of graph MTL, and show that a GNN trained with our method produces higher quality node embeddings with respect to classical end-to-end training procedures. Our method is based on meta-learning and is model-agnostic and task-agnostic, which makes it easily applicable to a wide range of multi-task domains.
• To the best of our knowledge, we are the first to propose a GNN model generating a single set of node embeddings that can be used to perform the three most common graph-related tasks (graph classification, node classification, and link prediction). In particular, our embeddings lead to comparable or higher performance with respect to single-task models even when used as input to a simple linear classifier.
• We show that the episodic training strategy at the base of our proposed meta-learning procedure leads to better node embeddings even for models trained on a single task. This unexpected finding provides interesting directions that we believe can be useful to the whole deep representation learning community.

2 RELATED WORK

GNNs, MTL, and meta-learning are very active areas of research. We highlight works that are at the intersections of these subjects, and point the interested reader to comprehensive reviews of each field. To the best of our knowledge, there is no work using meta-learning for graph MTL, or proposing a GNN performing graph classification, node classification, and link prediction concurrently.
Graph Neural Networks. GNNs have a long history (Scarselli et al., 2009), but in the past few years the field has grown exponentially; we refer the reader to Chami et al. (2020); Wu et al. (2020) for a thorough review of the field. The first popular GNN approaches were based on filters in the graph spectral domain (Bronstein et al., 2017) and presented many challenges, including high computational complexity. Defferrard et al. (2016) introduced ChebNet, which uses Chebyshev polynomials to produce localized and efficient filters in the graph spectral domain. Graph Convolutional Networks (Kipf & Welling, 2017) then introduced a localized first-order approximation of spectral graph convolutions, which was later extended to include attention mechanisms (Veličković et al., 2018). Recently, Xu et al. (2019) provided theoretical proofs for the expressivity of GNNs.
Multi-Task Learning. Works at the intersection of MTL and GNNs have mostly focused on multi-head architectures. These models are all composed of a series of GNN layers followed by multiple heads that perform the desired downstream tasks. In this category, Montanari et al. (2019) propose a model for the prediction of physico-chemical properties.
Holtz et al. (2019) and Xie et al. (2020) propose multi-task models for concurrently performing node and graph classification. Finally, Avelar et al. (2019) introduce a multi-head GNN for learning multiple graph centrality measures, and Li & Ji (2019) propose an MTL method for the extraction of multiple biomedical relations. The work by Haonan et al. (2019) introduces a model that can be trained for several tasks singularly; hence, unlike the previously mentioned approaches and our proposed method, it cannot perform multiple tasks concurrently. There are also some works that use GNNs as a tool for MTL: Liu et al. (2019b) use GNNs to allow communication between tasks, while Zhang et al. (2018) use GNNs to estimate the test error of an MTL model. We further mention the work by Wang et al. (2020), which considers the task of generating "general" node embeddings; however, their method is not based on GNNs, does not consider node attributes (unlike our method), and is not focused on the three most common graph-related tasks, which we consider. For an exhaustive review of deep MTL techniques we refer the reader to Vandenhende et al. (2020).
Meta-Learning. Meta-learning consists in learning to learn. Many methods have been proposed (see the review by Hospedales et al. (2020)), especially in the area of few-shot learning. Garcia & Bruna (2018) frame the few-shot learning problem with a partially observed graphical model and use GNNs as an inference algorithm. Liu et al. (2019a) use GNNs to propagate messages between class prototypes and improve existing few-shot learning methods, while Suo et al. (2020) use GNNs to introduce domain knowledge in the form of graphs. There are also several works that use meta-learning to train GNNs in few-shot learning scenarios with applications to node classification (Zhou et al., 2019; Yao et al., 2020), edge labelling (Kim et al., 2019), link prediction (Alet et al., 2019; Bose et al., 2019), and graph regression (Nguyen et al., 2020). Finally, other combinations of meta-learning and GNNs involve adversarial attacks on GNN models (Zügner & Günnemann, 2019) and active learning (Madhawa & Murata, 2020).

3 PRELIMINARIES

3.1 GRAPH NEURAL NETWORKS

Many popular state-of-the-art GNN models follow the message-passing paradigm (Gilmer et al., 2017). Let us represent a graph G = (A, X) with an adjacency matrix A ∈ {0, 1}^{n×n} and a node feature matrix X ∈ R^{n×d}, where the v-th row X_v represents the d-dimensional feature vector of node v. Let H^(ℓ) ∈ R^{n×d′} be the matrix containing the node representations at layer ℓ. A message-passing layer updates the representation of every node v as follows:

msg_v^(ℓ) = AGGREGATE({ H_u^(ℓ) : ∀u ∈ N_v }),
H_v^(ℓ+1) = UPDATE(H_v^(ℓ), msg_v^(ℓ)),

where H^(0) = X, N_v is the set of neighbours of node v, AGGREGATE is a permutation-invariant function, and UPDATE is usually a neural network. After L message-passing layers, the final node embeddings H^(L) are used to perform a given task, and the network is trained end-to-end.

3.2 MODEL-AGNOSTIC META-LEARNING AND ANIL

MAML (Model-Agnostic Meta-Learning) is an optimization-based meta-learning strategy proposed by Finn et al. (2017). Let f_θ be a deep learning model, where θ represents its parameters.
Let p(E) be a distribution over episodes¹, where an episode E_i ∼ p(E) is defined as a tuple containing a loss function, a support set, and a target set: E_i = (L_{E_i}(·), S_{E_i}, T_{E_i}), where support and target sets are simply sets of labelled examples. MAML's goal is to find a value of θ that can quickly, i.e., in a few steps of gradient descent, be adapted to new episodes. This is done with a nested-loop optimization procedure: an inner loop adapts the parameters to the support set of an episode by performing some steps of gradient descent, and an outer loop updates the initial parameters aiming at a setting that allows fast adaptation. Formally, by defining θ′_i(t) as the parameters after t adaptation steps on the support set of episode E_i, we can express the computations in the inner loop as

θ′_i(t) = θ′_i(t−1) − α ∇_{θ′_i(t−1)} L_{E_i}(f_{θ′_i(t−1)}, S_{E_i}), with θ′_i(0) = θ,

where L_{E_i}(f_{θ′_i(t−1)}, S_{E_i}) indicates the loss over the support set S_{E_i} of the model with parameters θ′_i(t−1), and α is the learning rate. The meta-objective that the outer loop tries to minimize is defined as L_meta = ∑_{E_i ∼ p(E)} L_{E_i}(f_{θ′_i(t)}, T_{E_i}), which leads to the following parameter update²:

θ = θ − β ∇_θ L_meta = θ − β ∇_θ ∑_{E_i ∼ p(E)} L_{E_i}(f_{θ′_i(t)}, T_{E_i}).

Raghu et al. (2020) showed that feature reuse is the dominant factor in MAML: in the adaptation loop, only the last layer(s) in the network are updated, while the first layer(s) remain almost unchanged. The authors then propose ANIL (Almost No Inner Loop), where they split the parameters in two sets: one that is used for adaptation in the inner loop, and one that is only updated in the outer loop. This simplification leads to computational improvements while maintaining performance.

¹The meta-learning literature usually derives episodes from tasks (i.e., tuples containing a dataset and a loss function). We focus on episodes to avoid using the term task for both an MTL task and a meta-learning task.
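As a concrete illustration of the nested loops above, here is a condensed MAML outer step, made runnable under one simplifying assumption: the model is expressed functionally, so `loss_fn(params, batch)` evaluates the network with an explicit parameter list. The ANIL variant would simply restrict the inner-loop update to the head parameters. This is a sketch of the generic algorithm, not the authors' exact training code.

```python
import torch

def maml_outer_step(theta, episodes, loss_fn, alpha=0.01, beta=1e-3, steps=1):
    """theta: list of tensors with requires_grad=True.
    episodes: iterable of (support_batch, target_batch) pairs."""
    meta_loss = 0.0
    for support, target in episodes:
        adapted = list(theta)                             # theta_i'(0) = theta
        for _ in range(steps):                            # inner loop: adapt on support
            loss = loss_fn(adapted, support)
            grads = torch.autograd.grad(loss, adapted, create_graph=True)
            adapted = [p - alpha * g for p, g in zip(adapted, grads)]
        meta_loss = meta_loss + loss_fn(adapted, target)  # evaluate on target set
    grads = torch.autograd.grad(meta_loss, theta)         # outer loop: differentiate
    with torch.no_grad():                                 # through the adaptation
        for p, g in zip(theta, grads):
            p -= beta * g
```

Because `create_graph=True` keeps the adaptation steps in the computation graph, the outer gradient flows through the inner updates, which is exactly what distinguishes MAML from joint multi-task training.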
The manuscript proposes SAME, a model based on GNNs and meta-learning for learning multi-task node embeddings. Unlike the multi-task learning setting, SAME aims at learning to quickly adapt to multiple tasks. Two model variants, iSAME and eSAME, are proposed based on different settings of the inner/outer loop of the parameter update. Experiments on several datasets demonstrate the good performance of SAME.
Semi-Supervised Audio Representation Learning for Modeling Beehive Strengths
Honey bees are critical to our ecosystem and food security as pollinators, contributing 35% of our global agricultural yield (Klein et al., 2007). In spite of their importance, beekeeping is exclusively dependent on human labor and experience-derived heuristics, while requiring frequent human checkups to ensure the colony is healthy, which can disrupt the colony. Increasingly, pollinator populations are declining due to threats from climate change, pests, and environmental toxicity, making their management more critical than ever in order to ensure sustained global food security. To start addressing this pressing challenge, we developed an integrated hardware sensing system for beehive monitoring through audio and environment measurements, and a hierarchical semi-supervised deep learning model, composed of an audio modeling module and a predictor, to model the strength of beehives. The model is trained jointly on audio reconstruction and prediction losses based on human inspections, in order to model both low-level audio features and circadian temporal dynamics. We show that this model performs well despite limited labels, and can learn an audio embedding that is useful for characterizing different sound profiles of beehives. To our knowledge, this is the first instance of applying audio-based deep learning to model beehives and population size in an observational setting across a large number of hives.

1 INTRODUCTION

Pollinators are one of the most fundamental parts of crop production worldwide (Klein et al., 2007). Without honey bee pollinators, there would be a substantial decrease in both the diversity and yield of our crops, which include most common produce (van der Sluijs & Vaage, 2016). As a model organism, bees are also often studied through controlled behavioral experiments, as they exhibit complex responses to many environmental factors, many of which are yet to be fully understood. A colony of bees coordinates its efforts to maintain overall health, with different types of bees tasked for various purposes. One of the signature modalities for characterizing bee behavior is the buzzing frequencies emitted through the vibration of the wings, which can correlate with various properties of the surroundings, including temperature, potentially allowing for a descriptive 'image' of the hive in terms of strength (Howard et al., 2013; Ruttner, 1988). However, despite what is known about honey bee behavior and their importance in agriculture and natural diversity, there remains a substantial gap between controlled academic studies and the field practices carried out (López-Uribe & Simone-Finstrom, 2019). In particular, beekeepers use their long-tenured experience to derive heuristics for maintaining colonies, which necessitates frequent visual inspections of each frame of every box, many of which make up a single hive. During each inspection, beekeepers visually examine each frame and note any deformities, changes in colony size, amount of stored food, and amount of brood maintained by the bees. This process is labor intensive, limiting the number of hives that can be managed effectively. As growing risk factors make human inspection more difficult at scale, computational methods are needed for tracking changing hive dynamics on a faster timescale and allowing for scalable management.
With modern sensing hardware that can record data for months, and scalable modeling with state-of-the-art tools in machine learning, we can start tackling some of the challenges facing the management of our pollinators, a key player in ensuring food security for the future.

2 BACKGROUND AND RELATED WORKS

Our work falls broadly within applied machine learning for computational ethology, where automated data collection methods and machine learning models are developed to monitor and characterize biological species in natural or controlled settings (Anderson & Perona, 2014). In the context of honey bees, while there has been substantial work characterizing bee behavior through controlled audio, image, and video data collection with classical signal processing methods, there has not been a large-scale effort studying how current techniques in deep learning can be applied at scale to the remote monitoring of beehives in the field. Part of the challenge lies in data collection. Visual sensing within beehives is nearly impossible given the current design of boxes used to house bees. These boxes are heavily confined, with narrow spaces between many stacked frames for bees to hatch, rear brood, and store food. This makes it difficult to position cameras to capture complete data without a redesign of existing boxes. Environment sensors, however, can capture information localized to a larger region, such as temperature and humidity. Sound, likewise, can travel across many stacked boxes, which are typically made from wood and have good acoustics. Previous works have explored the possibility of characterizing colony status with audio in highly stereotyped events, such as extremely diseased vs. healthy beehives (Robles-Guerrero et al., 2017) or swarming (Krzywoszyja et al., 2018; Ramsey et al., 2020), where the old queen leaves with a large portion of the original colony. However, we have not seen work that attempts to characterize more sensitive measurements, such as the population of beehives, based on audio. We were inspired by these works and the latest advances in hardware sensing and deep learning audio models to collect audio data in a longitudinal setting across many months for a large number of managed hives, and to attempt to characterize some of the standard hive inspection items through machine learning. While audio makes it possible to capture a more complete picture of the inside of a hive, there are still challenges related to data semantics in the context of annotations. Image and video data can be readily processed and labeled post-collection if the objects of interest are recognizable. With honey bees, however, the sound properties captured by microphones are extremely difficult to discriminate, even for experts, because the sound is not semantically meaningful, and microphone sensitivity deviations across sensors make it difficult to compare data across different hives. Thus, it is not possible to retrospectively assign labels to data, making human inspections during data collection the only source of annotations. As beekeepers cannot inspect a hive frequently, due to the large number of hives managed and the potential disturbance caused to the hive, the task becomes few-shot learning.
In low-shot learning for audio, various works have highlighted the usefulness of semi-supervised or unsupervised objectives and/or learning an embedding of audio data, mostly for the purpose of sound classification or speech recognition (Jansen et al., 2020; Lu et al., 2019). These models typically capture semantic differences between different sound sources. We were inspired by the audio classification work with semi-supervised or contrastive-learning objectives to build an architecture that could model our audio and learn an embedding without relying only on task-specific supervision. Unlike the audio datasets used in prior works, longitudinal data is unlikely to discretize into distinct groups, due to the slower, continuously shifting dynamics over the course of weeks. Therefore, we assume that unlike current audio datasets, which contain audio from distinct classes that can be clustered into sub-types, our data more likely occupy a smooth latent space, due to the slow progression in time of changing properties, such as the transition between healthy and low-severity disease, or changes in the size of the population, as bee colonies increase by only around one frame per week during periods of colony growth (Russell et al., 2013; Sakagami & Fukuda, 1968).

3 METHODS

Hive Setup. Each hive is composed of multiple 10-frame standard Langstroth boxes stacked on top of one another, with the internal sensor located on the center frame of the bottom-most box, and the external sensor on the outside side wall of the box. This sensor placement is based on prior knowledge that bees tend to collect near the bottom box first prior to moving up the tower (Winston, 1987). Due to difficulties in obtaining data that would span the spectrum of different colony sizes without intervention in a timely manner, we set up hives of varying sizes in the beginning to capture a range of populations. This allowed our dataset to span a range of frame counts, from 1 to 23 for bee frames and 0 to 11 for brood frames. Aside from these manipulations, all other observed effects, such as progression of disease states, are of natural causes free from human intervention.

3.1 DATA COLLECTION

Sensor Data. Given prior works that showed the possibility of characterizing honey bee colonies through audio, we developed battery-powered sensor bars that can be fitted to a single frame of a standard Langstroth bee box. Each sensor is designed for longitudinal data collection over the span of many weeks on a single charge. The sensor records sub-sampled data every 15 minutes, at all hours of the day. Each multi-modal data sample comprises a one-minute audio sample and point estimates of the temperature, humidity, and pressure, both inside and outside the box (Fig. 1). For the purpose of the daily-snapshot model described in this work, we use data from all days with 96 samples collected. In sum, we have collected ∼1000 total days of data across 26 hives, with up to 180 corresponding human inspection entries. These inspection entries captured information related to hive status, which for our purposes are frames of bees, frames of brood, disease status, and disease severity.
Inspections. We used data from one inspector for all collected data used in this work, in order to increase annotation consistency.
The inspector performed an observation of each hive roughly once per week, during which they visually examined each individual frame in all boxes for that hive. The hives are placed 2 meters apart from one another, such that cross-contamination of audio is unlikely, given that the sensor is isolated to within each stack of boxes. For frame labels, the inspector visually examines each frame to determine whether that frame is at least 60% covered, in which case it is added to the total frame count. We prevent overcrowding on each frame by introducing empty frames whenever necessary, such that each frame is covered at most up to 90%, as is common practice. This allows us to place a lower bound on the error range of our inspections at around ±20%. During the same inspection, the inspector also checks for the presence of any diseases and their severity. Severity is scored as none, low, moderate, or severe, where low corresponds to a single observation of diseased bees, moderate to several observations of disease, and severe to prevalent signs of disease.

4 GENERATIVE-PREDICTION NETWORK

Given the difficulty of collecting ground truths due to the nature of our data, we set out to develop a semi-supervised model and leverage our large number of audio samples. Additionally, because bees leave and return to the hive over the course of the day, data from one full-day circadian cycle must be used for predictions in order to model same-day variations. Therefore, we developed a model trained on hierarchical objectives to allow for modeling both low-level audio features on a minute-long basis, as well as any complex temporal dynamics within a given day. We do not consider a longer time horizon for this work, as the focus is on modeling a snapshot of the hive's current state, not where it will be in the future. Given prior works characterizing beehive sound profiles in lab settings, we know that local audio features are critical, as audio strength along certain known frequencies correlates with different behaviors and types of bees, which could potentially allow for discerning population sizes and disease statuses.
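To make the joint objective concrete, a minimal sketch of the training loss could look as follows, assuming an audio auto-encoder and an MLP predictor as the two modules. The module interfaces, the use of mean-squared error, and the weighting term `lam` are our illustrative assumptions; the paper's exact losses and architecture may differ.

```python
import torch
import torch.nn.functional as F

def joint_loss(encoder, decoder, predictor, spectrograms, env, labels=None, lam=1.0):
    """Hierarchical semi-supervised objective: every sample contributes an
    audio reconstruction term; days with a matching human inspection also
    contribute a supervised prediction term on hive-strength targets."""
    z = encoder(spectrograms)                       # latent audio embedding
    loss = F.mse_loss(decoder(z), spectrograms)     # unsupervised reconstruction
    if labels is not None:                          # only for inspected days
        pred = predictor(torch.cat([z, env], dim=-1))
        loss = loss + lam * F.mse_loss(pred, labels)
    return loss
```

The reconstruction term lets all ∼1000 days of unlabeled audio shape the embedding, while the prediction term is only active on the far fewer days with inspection entries.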
The paper presents a semi-supervised model to predict the vitality of beehives. The inputs of the model are sensor data (audio on one hand, and environmental measurements such as temperature and humidity on the other). The objective is to simultaneously predict three values of interest: the frame counts of the beehives, potential diseases, and their severity. The architecture is composed of two modules: an auto-encoder in charge of embedding the audio spectrogram in a low-dimensional latent space, and an MLP to predict the outputs from the latent spectrogram and the environmental data. The paper presents results of the proposed architecture on a small dataset, an ablation study to show the benefits of the auto-encoder module and the role of the environmental data, and a latent space analysis to understand the ability of the model to capture relevant audio information linked to the diseases.
Learning Two-Time-Scale Representations For Large Scale Recommendations
1 INTRODUCTION

A hypothetical user's interactions with a recommendation system give diminishing returns in terms of their information value for understanding the user. An active user who has many historical interactions is typically well understood by the recommender, and each new interaction contributes relatively little new information. In contrast, for an inactive or new user, every additional interaction provides valuable information for understanding this user. Therefore, the representations for active and inactive users should be updated differently when a new interaction occurs. Figure 1 illustrates this information-diminishing phenomenon, where the amount of change in user embedding from φ_t to φ_{t+1} due to an additional interaction is decaying. One can select a particular threshold t* on the number of interactions, above which users are categorized as active, and below which as inactive. Roughly, active users' embeddings evolve slowly as a function of the number of interactions, while inactive users' embeddings evolve fast; hence, a two-time-scale embedding evolution. Apart from the time-scale difference in temporal dynamics, the simultaneous presence of active and inactive users also presents other modeling and computational challenges. On the one hand, active users lead to long sequences of interactions and high-degree nodes in the user-item interaction graph. Existing sequence models, such as RNNs, have limitations when dealing with long-range sequences, due to the difficulty of gradient propagation. Moreover, graph neural network-based models become computationally inefficient due to the intensive message-passing operations through the high-degree nodes introduced by active users. On the other hand, predicting the preferences of inactive or new users (also known as the cold-start problem) is a challenging few-shot learning problem, where a decision needs to be made given only a small number of observations. To address the various challenges imposed by the presence of the two types of users, we leverage their different dynamics and propose (i) a two-time-scale (2TS) model and (ii) a two-stage training algorithm.
2TS model. Based on the number of observed interactions, we partition the users into two sets: active and inactive users. Our 2TS model (Fig. 1) updates the embeddings of active users and inactive users with two RNNs with independent parameters, in order to respect the two-time-scale nature. Moreover, the initial embeddings of inactive users are represented by a common embedding ψ, which is shared across all inactive users. Therefore, the overall model for inactive users is inductive, in the sense that the learned model can be applied to unseen users. In contrast, the initial embedding of each active user is a user-specific embedding φ_u, also called a transductive embedding. Such embeddings are very expressive and can better express users with a long history.
Two-stage training. In stage 1, we first learn transductive user embeddings φ_u and transductive item embeddings x_i using a classical collaborative filtering method. Then we fix these embeddings, and in stage 2 we learn the parameters of the two RNNs and a common initialization ψ for inactive users. Notably, the transductive embeddings for inactive users are abandoned in stage 2; only those of active users are ultimately used in the 2TS model.
Besides, for active users we do not use all interaction data to learn the RNN, since their transductive embeddings have already encoded the information in their history. We only use a small number of the last clicked items to learn the adaptation for active users, which improves the efficiency of the training process. The proposed 2TS model and the two-stage training algorithm lead to several advantages:
• Bias-variance trade-off. The differential use of transductive and inductive embeddings for the two RNN models allows 2TS to achieve a good overall bias-variance trade-off. We theoretically analyze this trade-off in Section 2 through the lens of the learning-to-learn paradigm for designing online learning (or adaptation) algorithms. Our theory shows that there exists an optimal threshold for splitting users to achieve the best overall excessive risk.
• Encoding long-range sequences. The transductive embeddings φ_u for active users are user-specific vectors, so they can memorize the user's long-range history during training without suffering from the difficulty of gradient propagation. The RNN on top of these transductive embeddings is only used for adaptation to recently engaged new items.
• Computational efficiency. The efficiency of our method on large-scale problems mainly comes from two designs in the algorithm. First, stage 1 learns the transductive embeddings of active users and items, which contain a large number of parameters. However, it is fast, since it does not involve any deep neural components and the loss is simply a convex function. Second, stage 2 only learns the RNNs, which contain a small number of parameters, and the RNN for active users is only trained on a few last-engaged items, which cuts off the long sequences. Experimentally, our method proves to be much more efficient than the baselines on large-scale datasets.
We summarize the contributions of this paper as follows:
• To explain the intuition and motivation of the 2TS model, we provide a theoretical analysis in a simplified setting, which rigorously argues the need for differential use of transductive and inductive embeddings for active and inactive users (Section 2).
• Motivated by the analysis, we design the 2TS model and a two-stage training method for practical use (Section 3). The proposed method is applied to two large-scale benchmark datasets and compared comprehensively to various baseline models spanning a diverse set of categories, which shows that our method is advantageous in terms of both accuracy and efficiency (Section 5).

2 THEORETICAL MOTIVATION: WHY TWO-TIME-SCALE MODELS?

We first present the motivation for designing the 2TS model, through the lens of online learning and stochastic optimization. Our analysis quantitatively reveals that (i) the embeddings of active and inactive users evolve on different time scales, and (ii) using two different online learning algorithms for active and inactive users respectively can lead to a better overall estimation of user embeddings. Our analysis is carried out in a learning-to-learn setting, where online learning algorithms need to be designed to tackle a family of tasks for estimating the embedding vector of a user. Though this idealized setting cannot cover all aspects of real-world recommendation problems, it leads to clear insights into the 2TS behavior of active and inactive users and the benefits of using two different online algorithms for these two respective user groups.
These insights also motivate our practical implementation of the 2TS model using deep learning in Section 3.

2.1 SETTING: LEARNING-TO-LEARN

Our setting consists of three components: the estimation task for an individual user, the distribution of tasks for a family of users, and the online algorithms which we want to design.
Individual Task. We associate each user u with a ground-truth embedding φ*_µ ∈ R^d, which can be thought of as a vector representation of the user's preference. This embedding is defined by a distribution µ over the user's clicks over items (x, y), where x ∈ X represents the item embedding, which we assume is bounded, i.e., ‖x‖₂ ≤ B_x, and y ∈ {0, 1} indicates whether the item is clicked. They follow the user-specific distribution (x, y) ∼ µ. More specifically, the ground-truth user embedding is defined as the minimizer of the expected risk according to a regularized logistic loss:

φ*_µ := argmin_{φ∈Φ} R_µ(φ), where R_µ(φ) := E_{(x,y)∼µ} ℓ(φ, x, y), and
ℓ(φ, x, y) := −y xᵀφ + log(1 + exp(xᵀφ)) + (c/2)‖φ‖₂², (1)

where c > 0 is some regularization constant and Φ := {φ ∈ R^d : ‖φ‖₂ ≤ B_φ}. Typically, we do not have access to the distribution µ, but to a sampled set of T observations z_[T] := {(x₁, y₁), ..., (x_T, y_T)} ∼ µ^T, which can be used as training samples to obtain an initial estimate φ(z_[T]) of the user embedding. We assume φ(z_[T]) is obtained by applying stochastic gradient descent (SGD) to the loss ℓ over z_[T] in this section, but this training stage is not limited to the SGD algorithm. It is expected that with more samples the estimate will be closer to the ground truth φ*_µ. The estimate φ(z_[T]) models the offline training stage of a recommendation system.
A Distribution of Tasks. We consider a distribution of users, by assuming that the user-specific distribution µ is sampled from a meta-distribution µ ∼ p_u. Furthermore, the number of observed interactions for a user, denoted by T ∼ p_T^α, is a random variable that follows a power-law distribution with density p(T) ∝ (T + 1)^{−α}. The power-law distribution models the fact that there will be many users with very few interactions, and very few users with many interactions.¹ A key assumption of our model is that the variance of the ground-truth user embeddings is small. That is,

Var_m = E_{µ∼p_u} ‖φ*_µ − m‖₂² ≤ r, where m = E_{µ∼p_u} φ*_µ. (2)

The assumption of small variance is critical in the sense that it allows us to aggregate information from inactive users to obtain better estimates of their user embeddings.
Online Algorithm Design Problem. Our goal is to design online adaptation algorithms for this distribution of users, such that the overall excessive risk of the online adaptation across all users is small. This models the online sequential recommendation stage, where new user-item click information is incorporated after system deployment. Note that we are not restricted to designing a single online learning algorithm for all users. In fact, we will show later that 2 online learning algorithms can actually lead to a better risk bound than 1 online learning algorithm. In this algorithm design problem, each user µ corresponds to an online learning task, where items arrive sequentially.
Starting from an initial embedding φ¹_µ, an online algorithm updates the user embedding whenever it observes a new user-item interaction (x_t, y_t) ∼ µ:

φ^{t+1}_µ ← Update(φ^t_µ, x_t, y_t), (3)

and then it applies φ^{t+1}_µ to the next item x_{t+1} and suffers the loss ℓ(φ^{t+1}_µ, x_{t+1}, y_{t+1}) from Eq. (1). The excessive risk of this online algorithm after encountering N sequential samples is

(1/N) ∑_{t=1}^{N} [ ℓ(φ^t_µ, x_t, y_t) − ℓ(φ*_µ, x_t, y_t) ].

The design problem involves (1) the initialization of the algorithm and (2) the update step of the algorithm. One obvious choice is to use the user-specific embedding φ(z_[T]) estimated in the offline training phase as the initial embedding φ¹_µ for the online algorithm. However, is this estimate the best choice for the initial embedding φ¹_µ = φ(z_[T])? Is there a better choice? How should the initialization depend on T? We answer these questions in the next subsection.

¹This assumption is sensible, since Fig. 5 shows that T approximately follows a power law in real-world datasets.
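As an illustration, for the regularized logistic loss of Eq. (1) the Update step of Eq. (3) could be instantiated as one step of SGD, whose gradient has the closed form (σ(xᵀφ) − y)x + cφ with σ the sigmoid. The sketch below is our own; in the 2TS model, φ would be initialized from the user-specific transductive embedding for an active user, or from the shared embedding ψ for an inactive one.

```python
import numpy as np

def online_update(phi, x, y, lr=0.1, c=0.01):
    """One Update step (Eq. 3): SGD on the regularized logistic loss of
    Eq. (1) for a single observed interaction (x, y)."""
    sigma = 1.0 / (1.0 + np.exp(-x @ phi))   # predicted click probability
    grad = (sigma - y) * x + c * phi         # gradient of Eq. (1) at (x, y)
    return phi - lr * grad
```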
The paper considers the sequential recommendation problem. The proposed method essentially combines the following two ideas: (i) two-stage learning: using conventional CF to pretrain user/item embeddings, and feeding them (held fixed) into the second-stage learning; (ii) two-time-scale: using two RNNs to model active users and inactive users respectively.
You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling
1 INTRODUCTION

The Transformer model (Vaswani et al., 2017) is incredibly effective across a diverse set of natural language processing (NLP) applications, including machine translation (Vaswani et al., 2017), language inference (Devlin et al., 2018), and paraphrasing (Raffel et al., 2019). Transformer-based models such as BERT (Devlin et al., 2018) are pretrained in an unsupervised manner and later finetuned on different downstream tasks, often providing state-of-the-art performance on standard benchmarks. While such models have strong empirical performance, their computational and memory requirements remain quite high. Consequently, in the NLP setting, most current models place constraints on the sequence length; e.g., BERT and other transformer-based language models (Yang et al., 2019; Liu et al., 2019) limit the sentence length to at most 512. Multi-head self-attention is central to Transformer-based models and provides a flexible global receptive field to exchange information among input tokens. While self-attention provides immense benefits, it is also a key bottleneck in training with long sequences. In particular, the output of self-attention is a combination of all tokens, where the coefficients are determined by the similarities among tokens. While this is empirically beneficial, it involves a sizable resource footprint. For sequence length n, this leads to O(n²) complexity in both time and memory to compute pairwise similarities among all input tokens. This quadratic cost is a roadblock to attaining the potential benefits realizable in various applications by capturing long-term context dependencies. As we will discuss in more detail later, the foregoing issue is a major thrust of several recent and ongoing efforts focused on mitigating the sizable resource requirements of such models. Our work is inspired by ideas of importance sampling via hashing-based sampling techniques (Spring & Shrivastava, 2017; Charikar & Siminelakis, 2017). We propose a Bernoulli-based sampling scheme to approximate self-attention, scaling linearly with the input sequence length. We achieve this by viewing self-attention as a sum of individual tokens associated with Bernoulli random variables whose success probability is determined by the similarities among tokens. In principle, we can sample all Bernoulli random variables at once with a single hash (although in practice, this number may be a small constant to lower the approximation variance). This leads to an efficient sampling scheme to estimate self-attention, which relies on specific modifications of hashing-based importance sampling (based on feasibility of deployment on GPU architectures). The resulting strategy (You Only Sample Almost Once, YOSO-Attention) is far more amenable to an efficient and backpropagation-friendly implementation, and has a favorable empirical performance profile on natural language modeling tasks. We evaluate our proposed algorithm on the GLUE benchmark (Wang et al., 2019) with 512 sequence length, as well as on long-sequence language model pretraining, where we see promising results with speed-ups and memory savings.

2 BACKGROUND: SELF-ATTENTION

Self-Attention.
Self-attention is a scaled dot-product attention mechanism for capturing token dependencies in the input sequence, and can be defined as

A(Q, K, V) = softmax( (Q W_Q)(K W_K)ᵀ / √d_h ) V W_V = D_P exp(P) V W_V, (1)

where P = (Q W_Q)(K W_K)ᵀ / √d_h, and Q, K, V ∈ R^{n×d} are embedding matrices from the input sequence, called queries, keys, and values respectively. Here, n is the input sequence length, d is the embedding dimension of each token, W_Q, W_K, W_V ∈ R^{d×d_h} are learned parameter matrices, d_h is the dimension of the hidden embedding, and D_P is an n × n diagonal matrix which normalizes each row of the exp(P) matrix such that the row entries sum up to 1. For simplicity, we overload the notation for Q, K, V to denote Q W_Q, K W_K, V W_V in our presentation.
Multi-Head Self-Attention. Multi-head self-attention in Transformers runs the scaled dot-product attention multiple times, and the attention outputs are concatenated to help the model capture information from multiple representation subspaces (Vaswani et al., 2017). Multi-head self-attention can be formally written as

MultiHead(Q, K, V) = Concat(A₁(Q, K, V), ..., A_h(Q, K, V)) W, (2)

where h is the number of heads and A_i, i = 1, ..., h are heads with different parameter matrices.
Self-Attention Bottleneck. A key bottleneck in self-attention is computing the softmax matrix, softmax(P), which requires calculating all pairwise input token similarities. To reduce this cost, we seek to approximate the softmax matrix by viewing self-attention for each query as an expectation over a softmax distribution and computing the approximated self-attention with an efficient sampling mechanism. In the following sections, we first review LSH-based importance sampling and then propose Bernoulli sampling with LSH to estimate self-attention efficiently.

3 IMPORTANCE SAMPLING VIA LOCALITY SENSITIVE HASHING

Importance sampling (Press et al., 2007) helps approximate properties of a target distribution by a weighted average of random draws from another distribution. It is known (Press et al., 2007) that importance sampling can be directly used for the softmax distribution by drawing samples from a uniform distribution, which avoids the harder problem of sampling from the softmax distribution directly. But this leads to a high-variance estimate, since the softmax distribution is usually concentrated in a small region. When using this idea for softmax matrix approximation in self-attention in particular, the variance tends to grow with the input sequence length. Before proceeding, we summarize an interesting importance sampling method for low-variance estimators: importance sampling via LSH from (Charikar & Siminelakis, 2017; Spring & Shrivastava, 2017).
LSH-based Importance Sampling. Consider the case when the angular distance between a key and a query is small. In this case, the similarity (between the key and the query) as well as the softmax probability will be large. When viewed through the lens of nearest neighbor retrieval, the above property coincides with a large collision probability for high-similarity key-query pairs, assuming that the neighbor retrieval is implemented via LSH. Motivated by this link between the softmax probability p and the LSH collision probability q, Spring & Shrivastava (2017) and Charikar & Siminelakis (2017) suggest using LSH as an efficient sampler for low-variance softmax estimators.
(a) Spring & Shrivastava (2017) propose approximating softmax by sampling a set S, a collection of neighboring keys for each query formed by the union of colliding keys using m hash tables. The estimator is computed as

|S|⁻¹ ∑_{i∈S} ( p(q, k_i) / q(q, k_i) ) v_i,

where q is a query vector, k_i, v_i are key and value vectors in the sampled set S, and p(·,·) and q(·,·) are the softmax probability and collision probability of given pairs. The procedure is equivalent to performing importance sampling without replacement, which involves a dependency among the samples. Deduplication (avoiding double counting) requires memory to store keys in each hash table and runtime to deduplicate keys for each query. If the size of the hash buckets is skewed, the GPU memory needs depend on the size of the hash bucket and the runtime depends on the size of S.
(b) Charikar & Siminelakis (2017) proposed a Hash-based Estimator to simulate a proposal distribution for importance sampling via LSH, which can be easily applied in the context of softmax. For each hash table, a key is uniformly selected from the bucket that the query is hashed to, simulating a draw from a proposal distribution. The estimate is computed as

m⁻¹ ∑_{i=1}^{m} ( p(q, k_i) |H_i(q)| / q(q, k_i) ) v_i,

where |H_i(q)| denotes the size of the hash bucket in the i-th hash table to which q is hashed. This simulates m samples drawn with replacement from the proposal distribution. However, the probability of one key being sampled depends not only on (a) the angular distance to the query but also on (b) the number of keys within the hash bucket, leading to a sampling dependency among all keys. Further, using it for self-attention causes a dependence between the sparsity of the softmax matrix and the number of hashes used. Specifically, the number of tokens that each query can attend to is bounded by the number of hashes: the procedure samples at most one key for each hash table, and so it adds one additional nonzero to the softmax matrix, at most.
Remark 1. While LSH-based importance sampling exploits the agreement between high probability p(·,·) and high collision probability q(·,·), the alignment is not perfect. Samples from the proposal distribution must be reweighted to compensate for the difference. Further, for different queries, the likelihood ratios between the softmax distribution and the proposal distribution w.r.t. a single key are different. Therefore, the reweighing has to be done during querying. Although maintaining hash tables for storing keys is not a major problem in general, the high memory cost for hash tables and the computation time for reweighing would influence efficiency when applied to self-attention.

4 YOSO-ATTENTION

We start from LSH-based importance sampling and seek to address some of the aforementioned issues when it is deployed for approximating self-attention. Instead of using LSH to simulate sampling from a proposal distribution over tokens, we view attention as a sum of tokens associated with Bernoulli random variables. This modification relates better to LSH and less to LSH-based importance sampling: the probability of one query colliding with a key is not based on other keys. This strategy helps avoid the sampling dependency issue in LSH-based importance sampling and offers an opportunity to develop a strategy more amenable to GPUs.
Remark 2.
We assume that the input keys and queries of self-attention are unit length, to unify the dot-product similarity in self-attention and the cosine similarity in LSH. This is simple to achieve using Neyshabur & Srebro (2015): a temperature variable τ is used to bound the squared ℓ₂ norm of all queries and keys and to construct new unit-length keys and queries while preserving their pairwise similarities. We can then work with the softmax matrix in the angular distance metric and derive our algorithm.
Self-Attention via Bernoulli Sampling. We aim to approximate self-attention, which uses a softmax matrix to capture the context dependency among tokens via their pairwise similarities. If we can represent this context dependency directly using a collision probability q(·,·), then the challenges discussed for importance sampling can be resolved. The coincidence of the softmax probability p(·,·) and the LSH collision probability q(·,·) makes q(·,·) a sensible starting point for approximating self-attention. Specifically, to model dependency based on similarity, the collision probability aligns well with the exponential function in softmax on the domain of interest [−1, 1], as shown in Figure 1: both functions have positive zeroth-, first-, and second-order derivatives. Note that (a) a positive zeroth-order derivative indicates that the dependency is positive, (b) a positive first-order derivative ensures that the dependency based on similarity is monotonic, and (c) a positive second-order derivative means that low similarity corresponds to almost no dependency. This leads us to hypothesize that a collision-based self-attention may be as effective as softmax-based self-attention. It can be formulated as

∑_{i=1}^{n} B_i(q, k_i) v_i, (3)

where B_i(q, k_i) is a Bernoulli random variable whose success probability is given by the collision probability of q with the key k_i; hence, it is determined by the similarity between q and k_i. In a single hash, each B_i(q, k_i) generates a realization that determines whether the corresponding token will be part of the attention output. Conceptually, when sampling from the softmax distribution, only one token is sampled as the attention output. In contrast, Bernoulli sampling determines whether each individual token is a part of the attention output. In principle, to determine the context dependency among tokens, you only need to sample once (YOSO) using a single hash to generate realizations of all Bernoulli random variables B_i(q, k_i), i = 1, ..., n. Specifically, when keys are hashed to a hash table using a single hash, the realization of B_i(q, k_i) for each query q will be 1 if q collides with k_i, and 0 otherwise. To our knowledge, using LSH collision probability to replace softmax dependencies in self-attention has not been studied before.
YOSO-Attention. By replacing the softmax dependency with Bernoulli random variables and using LSH as an efficient sampler to estimate the success probabilities, we obtain an efficient self-attention (YOSO-Attention) that approximates softmax-based self-attention:

YOSO(Q, K, V) = B(Q, K) V; E[YOSO(Q, K, V)] = (1 − arccos(Q Kᵀ)/π)^τ V, (4)

where B(Q, K) is the Bernoulli sampling matrix using m hashes,

B(Q, K)_{i,j} = (1/m) ∑_{k=1}^{m} 1_{f_k(Q_{i,:}) = f_k(K_{j,:})}, (5)

where f_k, k = 1, ..., m are hash functions.
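The success probability in Eqs. (4)-(5) is the standard collision probability of τ concatenated random-hyperplane (SimHash) bits. The snippet below, our own illustration, compares the analytic probability against an empirical collision rate for unit-length vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

def collision_prob(q, k, tau):
    """Analytic success probability of B(q, k): (1 - arccos(q.k)/pi)^tau."""
    sim = np.clip(q @ k, -1.0, 1.0)          # q, k assumed unit length
    return (1.0 - np.arccos(sim) / np.pi) ** tau

def empirical_collision(q, k, tau, trials=20000):
    hits = 0
    for _ in range(trials):
        W = rng.standard_normal((tau, q.shape[0]))   # tau random hyperplanes
        hits += np.all((W @ q > 0) == (W @ k > 0))   # all tau bits agree
    return hits / trials
```

For example, for two unit vectors at a 60° angle, both quantities come out near (1 − 1/3)^τ, illustrating how τ controls the decay of the attention weight with angular distance.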
Normalizing Attention. In standard self-attention, each row of the softmax matrix is normalized so that the dependencies sum up to 1. Above, we discussed how the pairwise query-key dependencies can be estimated using Bernoulli sampling; we now present how to normalize the dependencies in our method, as in standard self-attention. We could first estimate the dependencies and then normalize them by the sum of estimated dependencies, B(Q, K)1, where 1 is the all-ones vector; B(Q, K)1 can be computed via Eq. (4) by plugging 1 in for V. To make the estimation of self-attention more efficient, we instead adopt an ℓ₂ normalization of the attention output, similar to Levy et al. (2015), who use ℓ₂ normalization for word embeddings. Attention outputs are then invariant to the scaling B(Q, K)1 under ℓ₂ normalization. Therefore, we have

N-YOSO(Q, K, V) = ℓ₂(B(Q, K) V). (6)

Empirically, we show that the ℓ₂ normalization does not affect the performance of our method, as expected (see Figure 3).
LSH-based Bernoulli Sampling. We now discuss how to implement the procedure of using Bernoulli sampling to approximate self-attention. While a standard LSH procedure could be used, maintaining hash tables that store keys is inefficient on a GPU: the GPU memory size required for the hash tables cannot be predetermined, and the workload might be skewed due to skewed bucket sizes. To tackle this issue, we propose LSH-based Bernoulli sampling, which only saves the summation of values corresponding to hashed keys instead of storing a collection of hashed keys. An overview of our algorithm is shown in Figure 2. To compute Y = B(Q, K)V, the procedure proceeds as follows. For each k ∈ [1, ..., m], we sample a hash function f_k and create a hash table H^k ∈ R^{2^τ × d} representing 2^τ d-dimensional buckets. For each key K_{j,:}, we add the value V_{j,:} to the bucket whose index is the hash code f_k(K_{j,:}), denoted H^k_{f_k(K_{j,:})}:

H^k_{f_k(K_{j,:})} ← H^k_{f_k(K_{j,:})} + V_{j,:}. (7)

Note that the size of H^k is O(2^τ d) and is independent of which buckets keys are hashed to. With all keys processed for k ∈ [1, ..., m], for each query Q_{i,:} we maintain an output vector Y_{i,:} initialized to 0. We then look up the bucket in H^k indexed by f_k(Q_{i,:}) for k ∈ [1, ..., m] and add the corresponding bucket contents to the output vector Y_{i,:}:

Y_{i,:} ← Y_{i,:} + H^k_{f_k(Q_{i,:}),:}. (8)

Therefore, each final output Y_{i,:} can be computed as

Y_{i,:} = ∑_{k=1}^{m} ∑_{j=1}^{n} 1_{f_k(Q_{i,:}) = f_k(K_{j,:})} V_{j,:} = ∑_{j=1}^{n} B(Q, K)_{i,j} V_{j,:}. (9)

Remark 3. The memory and time complexity of this algorithm are O(m 2^τ d) and O(nmd), respectively. In addition, both time and memory are independent of the sizes of the hash buckets. Further, we can improve the memory complexity to O(m 2^τ) by reusing the hash table and processing a few dimensions at a time, without increasing the time complexity. The constant τ is small, as it controls the decay rate of the attention weight with respect to the angular distance between query and key, and it can be chosen as a function of log₂(n). In our experiments, τ is set to log₂(n).
Speed-up. While not essential, we find that a fast random projection for computing the LSH hash codes is beneficial, since this step takes a large portion of the overall runtime. As suggested by Andoni et al. (2015), we use an approximated random projection to reduce the time complexity to O(nmτ log₂(d)), allowing fast computation of hash codes.
Backpropagation through YOSO-Attention.
For training, we also need the backward pass of YOSO-Attention. Here we discuss this last component, which enables efficient end-to-end training. For backpropagation, the gradient of the loss $L$ w.r.t. $V$ can be estimated analogously to Eq. 4:
$$\nabla_V L = \left(\left(1 - \frac{\arccos(QK^{T})}{\pi}\right)^{\tau}\right)^{T} (\nabla_{\mathrm{YOSO}} L) \approx B(K, Q)\, (\nabla_{\mathrm{YOSO}} L) \qquad (10)$$
The gradients of $L$ w.r.t. $Q$ and $K$ are similar, so we only provide the expression for $Q$:
$$\nabla_Q L = \left(\big((\nabla_{\mathrm{YOSO}} L)\, V^{T}\big) \odot \left(\tau \left(1 - \frac{\arccos(QK^{T})}{\pi}\right)^{\tau - 1}\right) \oslash \left(\pi \sqrt{1 - (QK^{T})^{2}}\right)\right) K \qquad (11)$$
where $\oslash$ and $\odot$ denote element-wise division and multiplication, respectively. The problem with the true gradient is that it goes to infinity as the alignment score between the query and the key approaches 1, which might lead to divergence. To avoid this numerical issue, we use a lower bound of the actual derivative of the collision probability, $\left[\big((\nabla_{\mathrm{YOSO}} L)\, V^{T}\big) \odot \frac{\tau}{2}\left(1 - \frac{\arccos(QK^{T})}{\pi}\right)^{\tau}\right] K$ (see Figure 1), which can be estimated efficiently via a variation of LSH-based Bernoulli sampling. Specifically, the approximation decomposes into a sum of $d$ LSH-based Bernoulli samplings:
$$(\widehat{\nabla_Q L})_{i,:} = \sum_{l=1}^{d} (\nabla_{\mathrm{YOSO}} L)_{i,l} \sum_{j=1}^{n} B(Q, K)_{i,j} \left(V_{j,l}\, \frac{\tau}{2}\, K_{j,:}\right) \qquad (12)$$
Therefore, following LSH-based Bernoulli sampling, the memory complexity is $O(m 2^{\tau} d^{2})$ and the time complexity is $O(nmd^{2})$. The $d^{2}$ term in memory can be eliminated by reusing the same hash tables $d^{2}$ times without increasing the runtime, which improves the memory complexity to $O(m 2^{\tau})$. The overall complexity of our method and a comparison to standard self-attention are summarized in Table 1. Further, to address the quadratic dependence on $d$, we discuss in the Appendix a scheme that estimates the same quantity with cost linear in $d$.
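The divergence and its remedy are easy to visualize numerically. The snippet below (ours) compares the true derivative of the collision probability with the bounded surrogate, which Figure 1 of the paper presents as a lower bound:

```python
import numpy as np

tau = 8
x = np.linspace(-0.999, 0.999, 5)               # query-key alignment scores
p = (1 - np.arccos(x) / np.pi) ** tau           # collision probability
true_grad = tau * (1 - np.arccos(x) / np.pi) ** (tau - 1) / (np.pi * np.sqrt(1 - x ** 2))
surrogate = (tau / 2) * (1 - np.arccos(x) / np.pi) ** tau
print(true_grad)   # blows up as x -> 1 (the arccos derivative diverges)
print(surrogate)   # stays bounded (<= tau/2); used in place of the true derivative
```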
The paper proposes to replace the weighted average of the values in standard self-attention with an average of sampled values whose expectation is close to the result of self-attention. In particular, the authors associate with each query-key pair a Bernoulli random variable whose expected value is close to the exponential of the dot-product. Sampling these variables and averaging the values per query is implemented efficiently using locality-sensitive hashing.
SP:2749a34e8528dfd4fcc733f9b9f175fcacbcb223
D2P-Fed: Differentially Private Federated Learning with Efficient Communication
1 INTRODUCTION. Federated learning (FL) is a popular machine learning paradigm that allows a central server to train models over decentralized data sources. In federated learning, each client performs training locally on its data source and only communicates the model change to the server, which then updates the global model based on the aggregated local updates. Since the data stays local, FL can provide better privacy protection than traditional centralized learning. However, FL faces two main challenges: (1) FL lacks a rigorous privacy guarantee (e.g., differential privacy (DP)) and, indeed, has been shown to be vulnerable to various inference attacks (Nasr et al., 2019; Pustozerova & Mayer; Xie et al., 2019); (2) FL incurs considerable communication costs. In many potential applications of FL, such as mobile devices, these two challenges are present simultaneously. However, privacy and communication efficiency have mostly been studied independently in the past. Regarding privacy, existing work has applied a gold-standard privacy notion, differential privacy (DP), to FL, ensuring that the server can hardly determine the participation of any client by observing their updates (Geyer et al., 2017). To achieve DP, each client needs to inject noise into its local updates; as a side effect, the performance of the trained model inevitably degrades. To improve model utility, secure multiparty computation (SMC) has been used in tandem with DP to reduce the noise (Jayaraman et al., 2018; Truex et al., 2019). The key idea is to prevent the server from observing the individual updates, making only the aggregate accessible, and thus to move from local DP to central DP. However, SMC introduces extra communication overhead for each client. There has been extensive research on improving the communication efficiency of FL while ignoring the privacy aspect (Tsitsiklis & Luo, 1987; Balcan et al., 2012; Zhang et al., 2013; Arjevani & Shamir, 2015; Chen et al., 2016). However, these communication reduction methods are either incompatible with the existing DP mechanisms or break the DP guarantees when combined with SMC. The only existing work that tries to reconcile DP and communication efficiency in FL is cpSGD (Agarwal et al., 2018). The authors leverage the Binomial mechanism, which adds Binomial noise to local updates to ensure differential privacy. The discrete nature of Binomial noise allows it to be transmitted efficiently. However, cpSGD faces several limitations in real-world applications. First, with Binomial noise, the output of a learning algorithm has different supports on different input datasets; as a result, Binomial noise can only guarantee approximate DP, where the participation of a client can be completely exposed with nonzero probability. Also, there is no tight composition theorem for DP with Binomial noise, and the resulting privacy budget skyrockets in a multi-round FL protocol. Hence, the Binomial mechanism cannot produce a useful model with a reasonable privacy budget on complex tasks. Last but not least, the Binomial mechanism involves several mutually constrained hyper-parameters, and its privacy formula is extremely complicated, which makes hyper-parameter tuning difficult.
In this paper, we propose discrete Gaussian based differentially private federated learning (D2P-FED), an alternative technique for reducing communication costs while maintaining differential privacy in FL. Our key idea is to leverage the discrete Gaussian mechanism in FL, which adds discrete Gaussian noise to client updates. We show that the discrete Gaussian mechanism satisfies Rényi DP, which provides better composability. We employ secure aggregation along with the discrete Gaussian mechanism to lower the noise and establish the privacy guarantee for this hybrid privacy protection approach. To save communication cost, we integrate stochastic quantization and random rotation into the protocol. We then cast FL as a general distributed mean estimation problem and analyze the utility of the overall protocol. Our theoretical analysis sheds light on the superiority of D2P-FED over cpSGD. Our experiments show that D2P-FED achieves state-of-the-art performance in managing the trade-off among privacy, utility, and communication. 2 RELATED WORK. How to reduce the communication cost in traditional distributed learning settings is well studied (Tsitsiklis & Luo, 1987; Balcan et al., 2012; Zhang et al., 2013; Arjevani & Shamir, 2015; Chen et al., 2016). However, most of these approaches either require communication between the workers or are designed for specific learning tasks, so they cannot be applied directly to general-purpose FL. The most relevant work is Suresh et al. (2017), which proposed to use stochastic quantization to save communication cost and random rotation to lower the mean squared error of the estimated mean. We follow their approach to improve the communication efficiency and model utility of D2P-FED. Nevertheless, our work differs from theirs in that we also study how to ensure DP for rotated and quantized data transmission, and we prove a convergence result for the learning algorithm with both communication reduction and privacy protection in place. On the other hand, differentially private FL has undergone rapid development during the past few years (Geyer et al., 2017; McMahan et al., 2017; Jayaraman et al., 2018). However, these methods mainly focus on improving utility under a small privacy budget and ignore the issue of communication cost. In particular, we adopt a hybrid approach similar to Truex et al. (2019), which combines SMC with DP to reduce the noise. SMC ensures that the central server only sees the aggregated update, not the individual ones from clients; as a result, the noise added by each client can be reduced by a factor equal to the number of clients participating in one round. Our work differs from theirs in that we inject discrete Gaussian noise into local updates instead of continuous Gaussian noise. This allows us to use secure aggregation (Bonawitz et al., 2017), which is much cheaper than the threshold homomorphic encryption used by Truex et al. (2019). We further study the interaction between discrete Gaussian noise and secure aggregation, as well as their effects on learning convergence. We identify cpSGD (Agarwal et al., 2018) as the work most comparable to D2P-FED. Like D2P-FED, cpSGD aims to improve both communication cost and utility under rigorous privacy guarantees. However, cpSGD suffers from the three main defects discussed in Section 1.
This paper proposes to use the discrete Gaussian mechanism to mitigate these issues in cpSGD. 3 BACKGROUND AND NOTATION. In this section, we provide an overview of FL and DP and establish notation. We use bold lower-case letters (e.g., a, b, c) to denote vectors and bold upper-case letters (e.g., A, B, C) for matrices. We denote $\{1, \dots, n\}$ by $[n]$. FL Overview. In an FL system, there are one server and $n$ clients $C_i$, $i \in [n]$. The server holds a global model of dimension $d$. Each client holds (IID or non-IID) samples drawn from some unknown distribution $\mathcal{D}$. The goal is to learn the global model $w \in \mathbb{R}^d$ that minimizes some loss function $L(w, \mathcal{D})$. To achieve this, the system runs a $T$-round FL protocol. The server initializes the global model with $w_0$. In round $t \in [T]$, the server randomly sub-samples $\gamma n$ clients from $[n]$ with sub-sampling rate $\gamma$ and broadcasts the global model $w_{t-1}$ to the chosen clients. Each chosen client $C_i$ then runs a local optimizer (e.g., SGD, Adam, or RMSprop), computes the difference between the locally optimized model $w_t^{(i)}$ and the global model $w_{t-1}$, i.e., $g_t^{(i)} = w_t^{(i)} - w_{t-1}$, and uploads $g_t^{(i)}$ to the server. The server averages the differences and updates the global model: $w_t = w_{t-1} + \frac{1}{k}\sum_i g_t^{(i)}$. Communication in FL. The clients in FL are often edge devices, where the upload bandwidth is fairly limited; therefore, communication efficiency is of utmost importance to FL. Let $\pi$ denote a communication protocol. We denote the per-round communication cost by $\mathcal{C}(\pi, g^{[n]})$. To lower the communication cost, the difference vectors are typically compressed before being sent to the server. The compression degrades model performance, and we measure this loss via the mean squared error. Specifically, letting $\bar{g} = \frac{1}{n}\sum_{i=1}^{n} g^{(i)}$ denote the actual mean of the difference vectors and $\tilde{g}$ denote the server's estimate of that mean under some protocol such as D2P-FED, we measure the performance loss by $\mathcal{E}(\pi, g^{[n]}) = \mathbb{E}[\|\tilde{g} - \bar{g}\|^2]$, i.e., the mean squared error between the estimated and the actual mean. This mean squared error is directly related to the convergence rate of FL (Agarwal et al., 2018). Threat Model & Differential Privacy. We assume that the server is honest-but-curious. Namely, the server follows the protocol honestly, under law enforcement or reputation pressure, but is curious to learn the client-side data from the legitimate client-side messages. In the FL context, the server wants to extract information about client-side data by studying the received local updates without deviating from the protocol. This attack, widely known as the inference attack (Shokri et al., 2017; Yeom et al., 2018; Nasr et al., 2019), can be effectively mitigated using a canonical privacy notion, namely differential privacy (DP). Intuitively, DP in the context of ML ensures that the trained model is nearly the same regardless of the participation of any single client. Definition 1 ($(\epsilon, \delta)$-DP). A randomized algorithm $f: \mathcal{D} \to \mathcal{R}$ is $(\epsilon, \delta)$-differentially private if for every pair of neighboring datasets $D$ and $D'$ that differ in only one datapoint, and every possible (measurable) output set $E$, the following inequality holds: $P[f(D) \in E] \le e^{\epsilon}\, P[f(D') \in E] + \delta$. $(\epsilon, \delta)$-DP has been used as the privacy notion in most existing works on privacy-preserving FL.
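As a rough illustration of one round of such a protocol (our sketch, not the full D2P-FED pipeline: rotation and secure aggregation are omitted, and the naive truncated sampler below stands in for an exact discrete Gaussian sampler such as that of Canonne et al., 2020):

```python
import numpy as np

def discrete_gaussian(sigma, size, rng, trunc=None):
    # Naive sampler for P[X = x] proportional to exp(-x^2 / (2 sigma^2)),
    # x integer, truncated to a wide range. For illustration only.
    t = trunc or int(10 * sigma) + 1
    support = np.arange(-t, t + 1)
    p = np.exp(-support.astype(float) ** 2 / (2 * sigma ** 2))
    return rng.choice(support, size=size, p=p / p.sum())

def fl_round(w_global, local_updates, sigma, scale, rng):
    """One simplified round: each client quantizes its update g to integers
    with step `scale`, adds discrete Gaussian noise, and the server averages.
    In the real protocol the server would only see the secure-aggregated sum."""
    noisy = [np.round(g / scale).astype(int) + discrete_gaussian(sigma, g.shape, rng)
             for g in local_updates]
    return w_global + scale * np.mean(noisy, axis=0)
```

The discreteness is what makes the noise both transmittable at low bit-width and compatible with integer-based secure aggregation.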
However, in this paper we consider a generalization of DP, Rényi differential privacy (RDP), which is strictly stronger than $(\epsilon, \delta)$-DP for $\delta > 0$ and allows tighter analysis when composing multiple mechanisms. This second point is particularly appealing, as FL typically comprises many rounds, yet existing works suffer from skyrocketing privacy budgets in multi-round learning. Definition 2 ($(\alpha, \epsilon)$-RDP). For two probability distributions $P$ and $Q$ with the same support, the Rényi divergence of order $\alpha > 1$ is defined by $D_\alpha(P \| Q) \triangleq \frac{1}{\alpha - 1} \log \mathbb{E}_{x \sim Q}\left(\frac{P(x)}{Q(x)}\right)^{\alpha}$. A randomized mechanism $f: \mathcal{D} \to \mathcal{R}$ is $(\alpha, \epsilon)$-RDP if for any neighboring datasets $D, D' \in \mathcal{D}$ it holds that $D_\alpha(f(D) \| f(D')) \le \epsilon$. The intuition behind RDP is the same as for other variants of differential privacy, namely that similar inputs should yield similar output distributions; under RDP the similarity is measured by the Rényi divergence. RDP can also be converted to $(\epsilon, \delta)$-DP using the following transformation. Lemma 1 (RDP-DP conversion (Mironov (2017))). If $\mathcal{M}$ obeys $(\alpha, \epsilon)$-RDP, then $\mathcal{M}$ obeys $(\epsilon + \log(1/\delta)/(\alpha - 1), \delta)$-DP for all $0 < \delta < 1$. RDP enjoys an operationally convenient and quantitatively accurate way of tracking cumulative privacy loss when composing multiple mechanisms (Lemma 2) or when combined with subsampling (Wang et al., 2018). As a result, RDP is particularly suitable for the context of ML. Lemma 2 (Adaptive composition of RDP (Mironov (2017))). If a (randomized) mechanism $\mathcal{M}_1$ obeys $(\alpha, \epsilon_1)$-RDP and $\mathcal{M}_2$ obeys $(\alpha, \epsilon_2)$-RDP, then their composition obeys $(\alpha, \epsilon_1 + \epsilon_2)$-RDP.
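A small numeric illustration of how Lemmas 1 and 2 are used together (our sketch; in practice one would also minimize the resulting $\epsilon$ over the order $\alpha$):

```python
import math

def rdp_to_dp(alpha, eps_rdp, delta):
    # Lemma 1 (Mironov, 2017): (alpha, eps)-RDP implies
    # (eps + log(1/delta) / (alpha - 1), delta)-DP.
    return eps_rdp + math.log(1.0 / delta) / (alpha - 1)

# Lemma 2: RDP composes additively at a fixed order alpha, so T rounds of a
# mechanism that is (alpha, eps_round)-RDP are (alpha, T * eps_round)-RDP.
alpha, eps_round, T, delta = 8.0, 0.01, 100, 1e-5
eps_total_rdp = T * eps_round
print(rdp_to_dp(alpha, eps_total_rdp, delta))   # final (eps, delta)-DP guarantee
```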
The paper proposes the discrete Gaussian based differentially private federated learning algorithm to achieve both differential privacy and communication efficiency in federated learning. In particular, it adds discrete Gaussian noise into client updates and uses secure aggregation to prevent the server from observing the individual updates. The algorithm satisfies RDP and has lower communication cost compared to the previous method cpSGD.
SP:f17c1ecc9bb74a6c267c54a8863d0fcd336f4fdf
Trusted Multi-View Classification
1 INTRODUCTION. Multi-view data, typically associated with multiple modalities or multiple types of features, is common in real-world scenarios. State-of-the-art multi-view learning methods achieve tremendous success across a wide range of real-world applications. However, this success typically relies on complex models (Wang et al., 2015a; Tian et al., 2019; Bachman et al., 2019; Zhang et al., 2019; Hassani & Khasahmadi, 2020), which tend to integrate multi-view information with deep neural networks. Although these models can provide accurate classification results, they are prone to yielding unreliable predictions, particularly when presented with views that are not well represented (e.g., information from abnormal sensors). Consequently, their deployment in safety-critical applications (e.g., computer-aided diagnosis or autonomous driving) is limited. This has inspired us to introduce a new paradigm for multi-view classification that produces trusted decisions. For multi-view learning, traditional algorithms generally assume an equal value for different views or assign/learn a fixed weight for each view. The underlying assumption is that the quality or importance of these views is basically stable across all samples. In practice, the quality of a view often varies from sample to sample, and a well-designed model should be aware of this and adapt accordingly. For example, in multi-modal medical diagnosis (Perrin et al., 2009; Sui et al., 2018), a magnetic resonance (MR) image may be sufficient for one subject, while a positron emission tomography (PET) image may be required for another. Therefore, the decision should be well explained with respect to the multi-view inputs. Typically, we not only need to know the classification result, but should also be able to answer "How confident is the decision?" and "Why is the confidence so high/low for the decision?". To this end, the model should provide an accurate uncertainty for the prediction of each sample, and even for each individual view of each sample. Uncertainty-based algorithms can be roughly divided into two main categories, i.e., Bayesian and non-Bayesian approaches. Traditional Bayesian approaches estimate uncertainty by inferring a posterior distribution over the parameters (MacKay, 1992a; Bernardo & Smith, 2009; Neal, 2012). A variety of Bayesian methods have been developed, including Laplace approximation (MacKay, 1992b), Markov Chain Monte Carlo (MCMC) (Neal, 2012), and variational techniques (Graves, 2011; Ranganath et al., 2014; Blundell et al., 2015). However, compared with ordinary neural networks, these methods are computationally expensive due to the doubling of model parameters and difficulty of convergence. A recent algorithm (Gal & Ghahramani, 2016) estimates uncertainty by applying dropout (Srivastava et al., 2014) in the testing phase, thereby reducing the computational cost. Several non-Bayesian algorithms have also been proposed, including deep ensembles (Lakshminarayanan et al., 2017), evidential deep learning (Sensoy et al., 2018), and deterministic uncertainty estimation (van Amersfoort et al., 2020). Unfortunately, all of these methods focus on estimating uncertainty for single-view data, despite the fact that fusing multiple views through uncertainty can improve both performance and reliability.
In this paper, we propose a new multi-view classification algorithm aiming to elegantly integrate multi-view information for trusted decision making (shown in Fig. 1(a)). Our model combines different views at an evidence level instead of the feature or output level as done previously, which produces a stable and reasonable uncertainty estimation and thus promotes both classification reliability and robustness. The Dirichlet distribution is used to model the distribution of the class probabilities, parameterized with evidence from different views and integrated with the Dempster-Shafer theory. In summary, the specific contributions of this paper are: (1) We propose a novel multi-view classification model aiming to provide trusted and interpretable (according to the uncertainty of each view) decisions in an effective and efficient way (without any additional computations or neural network changes), which introduces a new paradigm in multi-view classification. (2) The proposed model is a unified framework for promising sample-adaptive multi-view integration, which integrates multi-view information at an evidence level with the Dempster-Shafer theory in an optimizable (learnable) way. (3) The uncertainty of each view is accurately estimated, enabling our model to improve classification reliability and robustness. (4) We conduct extensive experiments which validate the superior accuracy, robustness, and reliability of our model, thanks to the promising uncertainty estimation and multi-view integration strategy. 2 RELATED WORK. Uncertainty-based Learning. Deep neural networks have achieved great success in various tasks. However, since most deep models are essentially deterministic functions, the uncertainty of the model cannot be obtained. Bayesian neural networks (BNNs) (Denker & LeCun, 1991; MacKay, 1992b; Neal, 2012) endow deep models with uncertainty by replacing the deterministic weight parameters with distributions. Since BNNs struggle with inference and usually come with prohibitive computational costs, a more scalable and practical approach, MC-dropout (Gal & Ghahramani, 2016), was proposed; in this model, inference is completed by performing dropout sampling of the weights during training and testing. Ensemble-based methods (Lakshminarayanan et al., 2017) train and integrate multiple deep networks and also achieve promising performance. Instead of indirectly modeling uncertainty through network weights, the algorithm of Sensoy et al. (2018) introduces subjective logic theory to model uncertainty directly, without ensembling or Monte Carlo sampling. Building upon RBF networks, the distance between test samples and prototypes can be used as a proxy for deterministic uncertainty (van Amersfoort et al., 2020). Benefiting from the learned weights of different tasks under homoscedastic uncertainty learning, Kendall et al. (2018) achieve impressive performance in multi-task learning. Multi-View Learning. Learning on data with multiple views has proven effective in a variety of tasks. [Figure 1 caption (start truncated): ... networks (1). The obtained evidence parameterizes the Dirichlet distribution (2) to induce the classification probability and uncertainty (3). The overall uncertainty and classification probability are inferred by combining the beliefs of multiple views based on the DST (4). The combination rule and an example are shown in Definition 4 and (b), respectively. Given two sets of beliefs (blue and green blocks), we recombine the compatible parts of the two sets (brown blocks) and ignore the mutually exclusive parts (white blocks) of the two sets to obtain the combined beliefs.] CCA-based multi-view models (Hotelling, 1992; Akaho, 2006; Wang, 2007; Andrew et al., 2013; Wang et al., 2015a; 2016) are representative ones that have been widely used in multi-view representation learning. These models essentially seek a common representation by maximizing the correlation between different views. Considering common and exclusive information, hierarchical multi-modal metric learning (HM3L) (Zhang et al., 2017) explicitly learns shared multi-view and view-specific metrics, while AE2-Nets (Zhang et al., 2019) implicitly learn a complete (view-specific and shared multi-view) representation for classification. Recently, methods based on contrastive learning (Tian et al., 2019; Bachman et al., 2019; Chen et al., 2020; Hassani & Khasahmadi, 2020) have also achieved good performance. Due to its effectiveness, multi-view learning has been widely used in various applications (Kiela et al., 2018; Bian et al., 2017; Kiela et al., 2019; Wang et al., 2020). Dempster-Shafer Evidence Theory (DST). DST, a theory of belief functions, was first proposed by Dempster (Dempster, 1967) and is a generalization of Bayesian theory to subjective probabilities (Dempster, 1968). It was later developed into a general framework for modeling epistemic uncertainty (Shafer, 1976). In contrast to Bayesian neural networks, which obtain uncertainty indirectly through multiple stochastic samplings of the weight parameters, DST models uncertainty directly. DST allows beliefs from different sources to be combined with various fusion operators to obtain a new belief that considers all available evidence (Sentz et al., 2002; Jøsang & Hankin, 2012). When faced with beliefs from different sources, Dempster's rule of combination fuses their shared parts and ignores conflicting beliefs through a normalization factor; a more specific implementation will be discussed later, and a sketch follows below. 3 TRUSTED MULTI-VIEW CLASSIFICATION. It has been shown that using the softmax output as confidence for predictions often leads to high confidence values even for erroneous predictions, since the largest softmax output is used for the final prediction (Moon et al., 2020; van Amersfoort et al., 2020). Therefore, we introduce an evidence-based uncertainty estimation technique which can provide more accurate uncertainty and allows us to flexibly integrate multiple views for trusted decision making. 3.1 UNCERTAINTY AND THE THEORY OF EVIDENCE. In this subsection, we elaborate on evidential deep learning for quantifying the classification uncertainty of each of the multiple views, which simultaneously models the probability of each class and the overall uncertainty of the current prediction. In the context of multi-class classification, subjective logic (SL) (Jøsang, 2018) associates the parameters of the Dirichlet distribution (Definition A.1 in the Appendix) with the belief distribution, where the Dirichlet distribution can be considered the conjugate prior of the categorical distribution (Bishop, 2006). Accordingly, we need to determine the concentration parameters, which are closely related to the uncertainty.
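For intuition, the reduced form of Dempster's rule for opinions consisting of per-class belief masses plus an overall uncertainty mass can be sketched as follows (our reading of the rule described above; the paper's exact formulation is its Definition 4, so treat the normalization details as an assumption):

```python
import numpy as np

def ds_combine(b1, u1, b2, u2):
    """Combine two opinions (belief masses b, uncertainty u): compatible parts
    are fused, and the conflicting mass C is normalized away."""
    conflict = np.sum(np.outer(b1, b2)) - np.sum(b1 * b2)   # sum over i != j
    s = 1.0 - conflict
    b = (b1 * b2 + b1 * u2 + b2 * u1) / s
    u = u1 * u2 / s
    return b, u

# Two views: one confident about class 0, one almost totally uncertain.
b, u = ds_combine(np.array([0.8, 0.1, 0.05]), 0.05,
                  np.array([0.1, 0.1, 0.1]), 0.7)
print(b, u, b.sum() + u)   # combined masses still sum to one
```

Note how the nearly uncertain view barely perturbs the confident one, which is exactly the sample-adaptive behavior the paper is after.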
We elaborate on subjective logic (Jøsang, 2018), which defines a theoretical framework for obtaining the probabilities (belief masses) of the different classes and the overall uncertainty (uncertainty mass) of the multi-class classification problem based on the evidence collected from the data. Note that evidence refers to metrics collected from the input in support of the classification (step 1 in Fig. 1(a)) and is closely related to the concentration parameters of the Dirichlet distribution. Specifically, for a $K$-class classification problem, subjective logic assigns a belief mass to each class label and an overall uncertainty mass to the whole frame based on the evidence. Accordingly, for the $v$th view, the $K+1$ mass values are all non-negative and sum to one:
$$u^v + \sum_{k=1}^{K} b_k^v = 1, \qquad (1)$$
where $u^v \ge 0$ and $b_k^v \ge 0$ indicate the overall uncertainty and the probability of the $k$th class, respectively. For the $v$th view, subjective logic connects the evidence $e^v = [e_1^v, \dots, e_K^v]$ to the parameters of the Dirichlet distribution $\alpha^v = [\alpha_1^v, \dots, \alpha_K^v]$ (step 2 in Fig. 1(a)). Specifically, the parameter $\alpha_k^v$ of the Dirichlet distribution is induced from $e_k^v$, i.e., $\alpha_k^v = e_k^v + 1$. Then the belief mass $b_k^v$ and the uncertainty $u^v$ (step 3 in Fig. 1(a)) are computed as
$$b_k^v = \frac{e_k^v}{S^v} = \frac{\alpha_k^v - 1}{S^v} \quad \text{and} \quad u^v = \frac{K}{S^v}, \qquad (2)$$
where $S^v = \sum_{i=1}^{K} (e_i^v + 1) = \sum_{i=1}^{K} \alpha_i^v$ is the Dirichlet strength. Eq. 2 describes the phenomenon that the more evidence is observed for the $k$th category, the greater the probability assigned to the $k$th class; correspondingly, the less total evidence is observed, the greater the total uncertainty. The belief assignment can be considered a subjective opinion. Given an opinion, the mean of the corresponding Dirichlet distribution $\hat{p}^v$ for the class probability $\hat{p}_k^v$ is computed as $\hat{p}_k^v = \frac{\alpha_k^v}{S^v}$ (Frigyik et al., 2010). Differences from traditional deep-neural-network classifiers. First, the output of a traditional neural network classifier can be considered a point on a simplex, whereas the Dirichlet distribution parameterizes the density of all such probability assignments on the simplex. Therefore, with the Dirichlet distribution, SL models the second-order probability and uncertainty of the output. Second, the softmax function is widely used in the last layer of traditional neural network classifiers; however, using the softmax output as the confidence often leads to over-confidence. Our model avoids this problem through the overall uncertainty mass introduced by SL. Existing methods (Gal & Ghahramani, 2016; Lakshminarayanan et al., 2017) usually require additional computation during inference to output uncertainty. Since their uncertainty is obtained only at the inference stage, it is difficult to seamlessly train a model with high accuracy, robustness, and reasonable uncertainty in a unified framework. Accordingly, the limitations underlying existing algorithms (e.g., the inability to obtain uncertainty directly) also restrict their extension to trusted multi-view classification. For clarity, we provide typical examples under a three-class classification task to illustrate the above formulation. Assume that $e = \langle 40, 1, 1 \rangle$, so that $\alpha = \langle 41, 2, 2 \rangle$. The corresponding Dirichlet distribution, shown in Fig. 2(a), yields a sharp distribution centered at the top of the standard 2-simplex.
This indicates that sufficient evidence has been observed to ensure an accurate classification. In contrast, assume the evidence is $e = \langle 0.0001, 0.0001, 0.0001 \rangle$, i.e., very little evidence for classification. Accordingly, we obtain the Dirichlet distribution parameters $\alpha = \langle 1.0001, 1.0001, 1.0001 \rangle$ and the uncertainty mass $u \approx 1$. As shown in Fig. 2(b), in this case the evidence induces quite a flat distribution over the simplex. Finally, when $e = \langle 5, 5, 5 \rangle$, the uncertainty is also high, as shown in Fig. 2(c), even though the overall uncertainty is reduced compared to the second case. As shown in Fig. 2(d), based on the subjective logic theory (Eq. 1 and Eq. 2) we can convert a Dirichlet distribution into a standard 3-simplex (a regular tetrahedron with vertices $(1,0,0,0)$, $(0,1,0,0)$, $(0,0,1,0)$ and $(0,0,0,1)$ in $\mathbb{R}^4$), where the point $M$ in the simplex corresponding to $\{\{b_k\}_{k=1}^{3}, u\}$ indicates an opinion. Accordingly, the expected value $\hat{p}$ of the Dirichlet distribution is the projection of $M$ onto the bottom face.
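These examples are easy to reproduce from Eqs. 1-2. The following sketch (ours) maps evidence to an opinion and prints the uncertainty mass for the three cases above:

```python
import numpy as np

def opinion_from_evidence(e):
    # Eqs. (1)-(2): alpha_k = e_k + 1, S = sum(alpha),
    # belief b_k = e_k / S, uncertainty u = K / S, expected prob = alpha / S.
    e = np.asarray(e, dtype=float)
    alpha = e + 1.0
    S = alpha.sum()
    return e / S, len(e) / S, alpha / S   # (belief, uncertainty, p_hat)

for e in ([40, 1, 1], [1e-4, 1e-4, 1e-4], [5, 5, 5]):
    b, u, p = opinion_from_evidence(e)
    print(e, "u =", round(u, 3))
# [40, 1, 1]         -> u ~ 0.067  (abundant evidence: confident)
# [1e-4, 1e-4, 1e-4] -> u ~ 1.0    (no evidence: maximal uncertainty)
# [5, 5, 5]          -> u ~ 0.167  (conflicting evidence: still uncertain)
```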
This paper proposes a reliable multi-view classification mechanism equipped with uncertainty, called Trusted Multi-View Classification. The goal is to dynamically assess the quality of different views for different samples to provide reliable uncertainty estimation. The idea is clear and well-motivated. The authors perform empirical studies on diverse datasets to conclude that the proposed algorithm is effective, robust and reliable.
SP:20a4cfac4c8e66208f4a4bd6b2ceeb3c8cabac3a
Secure Byzantine-Robust Machine Learning
1 INTRODUCTION. Recent years have witnessed fast growth of successful machine learning applications based on data collected from decentralized user devices. Unfortunately, however, most of the societally important machine learning models today do not have their utility, control, and privacy aligned with the data ownership of the participants. This issue can be partially attributed to a fundamental conflict between the two leading paradigms: traditional centralized training of models on one hand, and decentralized/collaborative training schemes on the other. While centralized training violates the privacy rights of participating users, existing alternative training schemes are typically not robust: malicious participants can sabotage the training system by intentionally feeding it wrong data, known as data poisoning. In this paper, we tackle this problem and propose a novel distributed training framework which offers both privacy and robustness. When applied to datasets containing personal data, the use of privacy-preserving techniques is currently required under regulations such as the General Data Protection Regulation (GDPR) or the Health Insurance Portability and Accountability Act (HIPAA). The idea of training models on decentralized datasets and incrementally aggregating model updates via a central server motivates the federated learning paradigm (McMahan et al., 2016). However, the averaging in federated learning, when viewed as a multi-party computation (MPC), does not preserve input privacy, because the server observes the models directly. Input privacy requires that each party learn nothing more than the output of the computation, which in this paradigm means the aggregated model updates. To solve this problem, secure aggregation rules as proposed in (Bonawitz et al., 2017) achieve guaranteed input privacy. Such secure aggregation rules have recently found wider industry adoption, e.g., by Google on Android phones (Bonawitz et al., 2019; Ramage & Mazzocchi, 2020), where input privacy guarantees can offer, e.g., efficiency and exactness benefits compared to differential privacy (the two can also be combined). The concept of Byzantine robustness has received considerable attention in the past few years for practical applications, as a way to make the training process robust to malicious actors. A Byzantine participant or worker can behave arbitrarily maliciously, e.g., sending arbitrary updates to the server. This poses a great challenge to the most widely used aggregation rules, e.g., simple averaging, since a single Byzantine worker can compromise the result of the aggregation. A number of Byzantine-robust aggregation rules have been proposed recently (Blanchard et al., 2017; Muñoz-González et al., 2017; Alistarh et al., 2018; Mhamdi et al., 2018; Yin et al., 2018; Muñoz-González et al., 2019) and can be used as building blocks for our proposed technique. Achieving both input privacy and Byzantine robustness, however, has remained elusive so far, with Bagdasaryan et al. (2020) stating that robust rules "... are incompatible with secure aggregation". We here prove that this is not the case. Closest to our approach is (Pillutla et al., 2019), which tolerates data poisoning but does not offer Byzantine robustness. Prio (Corrigan-Gibbs & Boneh, 2017) is a private and robust aggregation system relying on secret-shared non-interactive proofs (SNIP).
While their setting is similar to ours, the robustness they offer is limited to checking the range of the input. Besides, the encoding for SNIP has to be affine-aggregable and is expensive for clients to compute. In this paper, we propose a secure aggregation framework with the help of two non-colluding honest-but-curious servers. This framework also tolerates server-worker collusion. In addition, we combine robustness and privacy at the cost of leaking only worker similarity information, which is marginal for high-dimensional neural networks. Note that our focus is not to develop new defenses against state-of-the-art attacks, e.g., (Baruch et al., 2019; Xie et al., 2019b). Instead, we focus on making arbitrary current and future distance-based robust aggregation rules (e.g., Krum by Mhamdi et al. (2018), RFA by Pillutla et al. (2019)) compatible with secure aggregation. Main contributions. We propose a novel distributed training framework which is • Privacy-preserving: our method keeps the input data of each user secure against any other user and against our honest-but-curious servers. • Byzantine robust: our method offers Byzantine robustness and allows the incorporation of existing robust aggregation rules, e.g., (Blanchard et al., 2017; Alistarh et al., 2018). The results are exact, i.e., identical to those of the non-private robust methods. • Fault tolerant and easy to use: our method natively supports workers dropping out of or newly joining the training process. It is also easy to implement and to understand for users. • Efficient and scalable: the computation and communication overhead of our method is negligible (less than a factor of 2) compared to non-private methods. Scalability, in terms of cost including setup and communication, is linear in the number of workers. 2 PROBLEM SETUP, PRIVACY, AND ROBUSTNESS. We consider a distributed setup of $n$ user devices, which we call workers, aided by two additional servers. Each worker $i$ has its own private part of the training dataset. The workers want to collaboratively train a public model benefitting from the joint training data of all participants. In every training step, each worker computes its own private model update (e.g., a gradient based on its own data), denoted by the vector $x_i$. The aggregation protocol aims to compute the sum $z = \sum_{i=1}^{n} x_i$ (or a robust version of this aggregation), which is then used to update a public model. While the result $z$ is public in all cases, the protocol must keep each $x_i$ private from any adversary or other workers. Security model. We consider honest-but-curious servers which do not collude with each other but may collude with malicious workers. An honest-but-curious server follows the protocol but may try to inspect all messages. We also assume that all communication channels are secure. We guarantee the strong notion of input privacy, which means that the servers and workers learn nothing more about each other than what can be inferred from the public output of the aggregation, $z$. Byzantine robustness model. We adopt the standard Byzantine worker model, which assumes that workers can send arbitrary adversarial messages trying to compromise the process. We assume that a fraction of up to $\alpha$ ($< 0.5$) of the workers are Byzantine, i.e., malicious and not following the protocol. Additive secret sharing. Secret sharing is a way to split a secret into multiple parts such that no single part leaks the secret.
Formally, suppose a scalar $a$ is a secret and the secret holder shares it with $k$ parties through secret-shared values $\langle a \rangle$. In this paper, we only consider additive secret-sharing, where $\langle a \rangle$ denotes the set $\{a_p\}_{p=1}^{k}$ satisfying $a = \sum_{p=1}^{k} a_p$, with $a_p$ held by party $p$. Crucially, it must not be possible to reconstruct $a$ from any individual $a_p$. For vectors such as $x$, the secret-shared values $\langle x \rangle$ are simply component-wise scalar secret-shared values. Two-server setting. We assume there are two non-colluding servers: a model server (S1) and a worker server (S2). S1 holds the output of each aggregation, and thus also the machine learning model, which is public to all workers. S2 holds intermediate values used to perform the Byzantine-robust aggregation. Another key assumption is that the servers have no incentive to collude with workers, perhaps enforced via a huge potential penalty if exposed. It is realistic to assume that the communication link between the two servers S1 and S2 is faster than the individual links to the workers. To perform robust aggregation, the servers need access to a sufficient number of Beaver's triples. These are data-independent values required to implement secure multiplication in MPC on both servers, and they can be precomputed beforehand. For completeness, the classic algorithm for multiplication is given in Appendix B.1. Byzantine-robust aggregation oracles. Most existing robust aggregation algorithms rely on distance measures to identify potential adversarial behavior (Blanchard et al., 2017; Yin et al., 2018; Mhamdi et al., 2018; Li et al., 2019; Ghosh et al., 2019). All such distance-based aggregation rules can be directly incorporated into our proposed scheme, making them secure. While many of the aforementioned papers assume that the workers have i.i.d. datasets, our protocol is oblivious to the distribution of the data across the workers. In particular, our protocol also works with schemes such as (Li et al., 2019; Ghosh et al., 2019; He et al., 2020) designed for non-i.i.d. data. 3 SECURE AGGREGATION PROTOCOL: TWO-SERVER MODEL. Each worker first splits its private vector $x_i$ into two additive secret shares and transmits one to each server, ensuring that neither server can reconstruct the original vector on its own. The two servers then execute our secure aggregation protocol. At the level of the servers, the protocol is a two-party computation (2PC). In the case of non-robust aggregation, the servers simply add all shares (we present this case in detail in Algorithm 1). In the robust case, which is our main interest here, the two servers exactly emulate an existing Byzantine-robust aggregation rule, at the cost of revealing only the distances between worker gradients to the server (the robust algorithm is presented in Algorithm 2). Finally, the resulting aggregated output vector $z$ is sent back to all workers and applied as the update to the public machine learning model. 3.1 NON-ROBUST SECURE AGGREGATION. In each round, Algorithm 1 consists of two stages (a code sketch of both follows after this list): • WorkerSecretSharing (Figure 1a): each worker $i$ randomly splits its private input $x_i$ into two additive secret shares $x_i = x_i^{(1)} + x_i^{(2)}$. This can be done, e.g., by sampling a large noise value $\xi_i$ and using $(x_i \pm \xi_i)/2$ as the shares. Worker $i$ sends $x_i^{(1)}$ to S1 and $x_i^{(2)}$ to S2. We write $\langle x_i \rangle$ for the two secret-shared values distributed over the two servers.
• AggregationAndUpdate (Figure 1c): given binary weights $\{p_i\}_{i=1}^{n}$, each server locally computes its share of $\langle \sum_{i=1}^{n} p_i x_i \rangle$. Then S2 sends its share $\langle \sum_{i=1}^{n} p_i x_i \rangle^{(2)}$ to S1, so that S1 can compute $z = \sum_{i=1}^{n} p_i x_i$. S1 updates the public model with $z$. Our secure aggregation protocol is extremely simple and, as we discuss later, has very low communication overhead, does not require heavy cryptographic primitives, gives strong input privacy, is compatible with differential privacy, and is robust to worker dropouts and failures. We believe this makes our protocol especially attractive for federated learning applications. We now argue correctness and privacy. It is clear that the output $z$ of the above protocol satisfies $z = \sum_{i=1}^{n} p_i x_i$, ensuring that all workers compute the right update. For the privacy guarantees, we track the values stored by each of the servers and workers: • S1: its own secret shares $\{x_i^{(1)}\}_{i=1}^{n}$ and the sum of the other shares, $\langle \sum_{i=1}^{n} p_i x_i \rangle^{(2)}$. • S2: its own secret shares $\{x_i^{(2)}\}_{i=1}^{n}$. • Worker $i$: $x_i$ and $z = \sum_{i=1}^{n} p_i x_i$. Clearly, the workers have no information other than the aggregate $z$ and their own data. S2 only has secret shares, which on their own leak no information about any data; hence, perhaps surprisingly, S2 learns nothing in this process. S1 has its own secret shares and also the sum of the other shares. If $n = 1$, then $z = x_i$, and hence S1 is allowed to learn everything. If $n > 1$, then S1 cannot recover information about any individual secret share $x_i^{(2)}$ from their sum. Thus, S1 learns $z$ and nothing else.
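A minimal sketch of the two stages of Algorithm 1 (ours; Gaussian masking stands in for the uniform randomness over a finite ring that a production MPC implementation would use, so the hiding here is only statistical):

```python
import numpy as np

def secure_sum(worker_inputs, weights, rng):
    """Workers secret-share their inputs to S1 and S2; each server sums its
    shares locally; S2 sends its single share of the sum to S1, which
    reconstructs only the aggregate z."""
    s1_shares, s2_shares = [], []
    for x in worker_inputs:                      # WorkerSecretSharing
        xi = rng.standard_normal(x.shape) * 1e3  # large masking noise
        s1_shares.append((x + xi) / 2)           # sent to S1
        s2_shares.append((x - xi) / 2)           # sent to S2
    # AggregationAndUpdate: local weighted sums, then one message S2 -> S1.
    s1_sum = sum(p * s for p, s in zip(weights, s1_shares))
    s2_sum = sum(p * s for p, s in zip(weights, s2_shares))
    return s1_sum + s2_sum                       # z = sum_i p_i x_i

rng = np.random.default_rng(1)
xs = [rng.standard_normal(4) for _ in range(3)]
z = secure_sum(xs, [1, 1, 1], rng)
assert np.allclose(z, np.sum(xs, axis=0))        # exact reconstruction of z only
```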
This work proposes a method to robustly (<.5 adversarial workers) aggregate model updates using two non-colluding servers. The proposed method scales well with the number of workers and is compatible with local DP and different robust aggregation protocols. Especially the scalability is a big improvement compared to previous methods. The authors discuss related work that relies on public key infrastructure and requires pairwise secrets between clients. One big advantage of the proposed protocol is that there is no communication between the workers.
SP:697a56b8f9152e50ee683f5a1b59bc272b01c4db
Deep Reinforcement Learning For Wireless Scheduling with Multiclass Services
In this paper, we investigate the problem of scheduling and resource allocation over a time-varying set of clients with heterogeneous demands. This problem arises when service providers need to serve traffic generated by users with different classes of requirements. We thus have to allocate bandwidth resources over time to efficiently satisfy these demands within a limited time horizon. This is a highly intricate problem, and solutions may involve tools stemming from diverse fields such as combinatorics and optimization. Recent work has successfully proposed Deep Reinforcement Learning (DRL) solutions, although not yet for heterogeneous user traffic. We propose a deep deterministic policy gradient algorithm combining state-of-the-art techniques, namely Distributional RL and Deep Sets, to train a model for heterogeneous traffic scheduling. We test on diverse scenarios with different time-dependence dynamics, user requirements, and available resources, demonstrating consistent results. We evaluate the algorithm in a wireless communication setting and show significant gains against state-of-the-art conventional algorithms from combinatorics and optimization (e.g., Knapsack, Integer Linear Programming, Frank-Wolfe). 1 INTRODUCTION. User scheduling (i.e., which user is served when) and the associated resource allocation (i.e., which and how many resources are assigned to scheduled users) are two long-standing fundamental problems in communications, which have recently attracted vivid attention in the context of next-generation communication systems (5G and beyond). The main reason is the heterogeneity of user traffic and the diverse Quality of Service (QoS) requirements of the users. The goal of this paper is to design a scheduler and resource assigner which takes as input the specific constraints of the traffic/service class each user belongs to, in order to maximize the number of satisfied users. This problem is hard to solve, since we face at least two main technical challenges: (i) except for some special cases, there is no simple closed-form expression for the problem, and a fortiori for its solution; (ii) the algorithm solving the problem has to be scalable with the number of users. Current solutions rely on combinatorial approaches or suboptimal solutions, which seem to work satisfactorily in specific scenarios but fail to perform well when the number of active users is large. This motivates the quest for alternative solutions; we propose to resort to Deep Reinforcement Learning (DRL) to tackle this problem. In the context of DRL, we combine several ingredients to solve this challenging problem. In particular, we leverage the theory of Deep Sets to design permutation-equivariant and -invariant models, which solves the scalability issue: the number of users can be increased without having to increase the number of parameters. We also stabilize the learning process by adding the distributional dimension in a new way, marrying it with Dueling Networks to "center the losses". Finally, we compare the proposed DRL-based algorithm with conventional solutions based on combinatorial or suboptimal optimization approaches. Our experiments and simulation results clearly show that our DRL method significantly outperforms conventional state-of-the-art algorithms. 2 RELATED WORK.
The scheduling problem is a well-known problem appearing in various fields, and as technologies progress and more people take advantage of new services, scheduling them efficiently becomes more intricate. This is exactly the case in wireless communication systems. Researchers are resorting to new methods, such as deep reinforcement learning, which have shown impressive results (Mnih et al., 2015; Silver et al., 2016). For example, Chinchali et al. (2018) perform scheduling at a cellular level using Deep Reinforcement Learning (DRL). Ideas using DRL in a distributed way to perform dynamic power allocation have also appeared (Naparstek & Cohen, 2018; Nasir & Guo, 2019). Nevertheless, to the best of our knowledge, the problem of scheduling traffic of users with heterogeneous performance requirements has not been appropriately addressed. To solve this hard problem, one can resort to distributional reinforcement learning, introduced in Jaquette (1973) and followed by (Dabney et al., 2018a;b), in order to obtain richer representations of the environment and better solutions. Techniques like noisy networks for better exploration (Fortunato et al., 2018) and architectures like dueling networks (Wang et al., 2016) have also greatly improved the stability of trained models. Finally, the ideas of Zaheer et al. (2017) simplify and improve neural network models when permutation invariance properties apply. We combine these ideas with a deep deterministic policy gradient method (Lillicrap et al., 2016) to reach a very efficient scheduling algorithm. 3 THE SCHEDULING AND RESOURCE ALLOCATION PROBLEM. 3.1 THE PROBLEM. The problem we consider involves a set of randomly arriving users that communicate wirelessly with a base station (service provider); users require that their traffic be served according to the quality of service (QoS) requirements imposed by the service class they belong to. We consider the case where users belong to different service classes with heterogeneous requirements. Each class specifies the amount of data to be delivered, the maximum tolerable latency, and the "importance/priority" of the user. A centralized scheduler (at the base station) takes as input, at each time step, this time-varying set of users belonging to different service classes, and has to decide how to allocate its limited per-time-step resources in order to maximize the long-term "importance"-weighted sum of satisfied users. A user is considered satisfied when it successfully receives its data within the maximum tolerable latency specified by its service class. The hard problem of scheduling and resource allocation, which is combinatorial by nature, is exacerbated by wireless communication, which brings additional uncertainty due to time-varying random connection quality. The scheduler that assigns resources cannot exclude the possibility of a bad connection (low channel quality), which renders a data transmission unsuccessful. To mitigate this effect, some protocols make use of channel state information (CSI) at the transmitter, i.e., the base station/scheduler knows the channel quality in advance and adapts the allocated resources to the instantaneous channel conditions.
We consider here two extreme cases of channel knowledge: (i) full-CSI, in which perfect (instantaneous, error-free) CSI is provided to the scheduler, enabling accurate estimation of the exact resources each user needs; and (ii) no-CSI, in which the scheduler is agnostic to the channel quality. In case of unsuccessful/erroneous data reception, we employ a simple retransmission protocol (HARQ type I). A widely used way to model the channel dynamics is to make the wireless channel quality depend on the distance of the user from the base station and evolve in a Markovian way from the channel realization at the previous time step. The mathematical description of the traffic generator model and the channel dynamics is provided in Appendix A. To better understand the problem, we draw the following analogy. Imagine a server with a water pitcher that is refilled at every time step and has to be distributed across a set of people. Every person has a glass and leaves satisfied only if their glass is filled (or overfilled) at some time instant before a certain maximum waiting time. As mentioned before, we consider a retransmission protocol (HARQ type I), which in our analogy means that the server cannot fill a glass over multiple trials; if a glass is not filled completely at a time step, it is emptied and the server has to retry. The wireless communication setting brings the additional complication that the sizes of the glasses are not fixed but fluctuate (due to the randomness of each user's connection quality). In the full-CSI case, the server knows at every time step the sizes of the glasses and therefore the exact amount of resources required. With no-CSI, the server can only roughly estimate the size, mainly using the amount of data requested and the distance between user and base station. The problem can be modeled as a Markov Decision Process (MDP) (Bellman, 1957) $(\mathcal{S}, \mathcal{A}, R, P, \gamma)$, where $\mathcal{S}$ is the state space of the environment (described in detail in Appendix A.2) and $\mathcal{A}$ is the action space (in our case, the set of all feasible allocations). After action $a_t \in \mathcal{A}$ at state $s_t \in \mathcal{S}$, a reward $r_t \sim R(\cdot|s_t, a_t)$ is obtained and the next state follows $s_{t+1} \sim P(\cdot|s_t, a_t)$. The discount factor is $\gamma \in [0, 1)$. Under a fixed policy $\pi: \mathcal{S} \to \mathcal{A}$, the return is the random variable $Z_t^\pi = \sum_{i=0}^{\infty} \gamma^{i} r_{t+i}$, representing the discounted sum of rewards when a trajectory of states is followed under policy $\pi$. The agent (scheduler) ideally aims to find the optimal policy $\pi^{\star}$ maximizing the mean return $\mathbb{E}_\pi[Z^\pi]$. (The only discrepancy is that the scheduler ideally aims to maximize the undiscounted sum of rewards, i.e., the case $\gamma = 1$, rather than the discounted one.) More rigorously, only in the full-CSI case are the states $s_t$ fully observed; for no-CSI, the channel qualities are unknown to the agent and the observation $o_t \subset s_t$ is only a part of the state, leading to a Partially Observable MDP (POMDP) (Åström, 1965). One way to reduce a POMDP to an MDP is to substitute the states with the "belief" over the value of $s_t$ (Kaelbling et al., 1998). Another is to use the complete history $\{o_0, a_0, o_1, a_1, \dots, a_{t-1}, o_t\}$, which fortunately works in our case, since only the most recent part is relevant, namely the part representing if and how many resources have been previously allocated to the currently active users. 3.2 THE DEEP REINFORCEMENT LEARNING APPROACH.
Deep reinforcement learning (DRL) has shown impressive results in many problems modeled as MDPs, but mainly in cases where the environment is close to deterministic due to game rules (Atari, Chess, Go (Mnih et al., 2015; Silver et al., 2017; 2016)) or physical laws (robotics and physics tasks (Kober et al., 2013; Lillicrap et al., 2015)). A very relevant question is whether we can develop a DRL algorithm that copes successfully with environments exhibiting high-variance randomness, as in our case, due to the channel dynamics and the heterogeneous traffic. Existing applications of DRL to problems with similar properties in other fields are encouraging (trading, pricing, vehicle routing (Nazari et al., 2018; Charpentier et al., 2020)). 3.2.1 POLICY NETWORK. Our objective is a scheduler that can handle a large number of users $K$, say $K = 100$, in which case the action space becomes infeasibly large for a traditional Deep Q-learning Network approach. We therefore employ a deep deterministic policy gradient (DDPG) method (Lillicrap et al., 2016), with which we train a policy $\pi_\theta: \mathcal{S} \to \mathcal{A}$ modeled as a neural network (NN) with parameters $\theta$. Moreover, our method should work in both the full-CSI and no-CSI cases with minor, if any, modification. With full-CSI, the exact amount of required resources (bandwidth) per user is known, so the (discrete) action is just the selection of the subset of users to satisfy; for no-CSI it is continuous, since on top of selecting the users, the scheduler has to decide on the portion of resources each user receives. For no-CSI those portions are exactly the output of $\pi_\theta$, while for full-CSI we apply a continuous relaxation (a continuous relaxation is also mandatory for a DDPG approach to work, so that gradients can flow through the value network), and the output provides a value (related to importance) per resource. This yields a user ranking, which allows the scheduler to proceed sequentially: it serves/satisfies as many of the most "valuable" (highest-ranked) users as possible, subject to the available resources. This discrepancy in the output processing is the only difference in the model between full-CSI and no-CSI. Setting $Z^\pi(s_t, a_t) = r_t + \gamma Z^\pi_{t+1}$, with $r_t \sim R(\cdot|s_t, a_t)$, to be the return if action $a_t$ is taken at $t$ and policy $\pi$ is followed thereafter, and letting $Q^\pi(s_t, a_t) = \mathbb{E}[Z^\pi(s_t, a_t)]$ be the expected return conditioned on taking action $a_t$ in $s_t$, the objective the agent maximizes is $J(\theta) = \mathbb{E}_{s_{t_0} \sim p_{t_0}}\big[Q^{\pi_\theta}(s_{t_0}, \pi_\theta(s_{t_0}))\big]$, with $p_{t_0}$ the distribution of the initial state $s_{t_0}$ at time $t_0$. The gradient can be written (Silver et al., 2014):
$$\nabla_\theta J(\theta) = \mathbb{E}_{s_{t_0} \sim p_{t_0},\, s \sim \rho^{\pi_\theta}_{s_{t_0}}}\left[\nabla_\theta \pi_\theta(s)\, \nabla_a Q^{\pi_\theta}(s, a)\big|_{a = \pi_\theta(s)}\right] \qquad (1)$$
with $\rho^{\pi_\theta}_{s_{t_0}}$ the discounted (improper) state distribution defined as $\rho^{\pi_\theta}_{s_{t_0}}(s) = \sum_{i=0}^{\infty} \gamma^{i}\, P(s_{t+i} = s \mid s_{t_0}, \pi_\theta)$. In practice, $\rho^{\pi_\theta}_{s_{t_0}}$ is approximated by the (proper) distribution $\varrho^{\pi_\theta}_{s_{t_0}}(s) := \sum_{i=0}^{\infty} P(s_{t+i} = s \mid s_{t_0}, \pi_\theta)$. To compute the gradient, the function $Q^{\pi_\theta}(s, a)$ is needed; it is approximated by another NN, $Q_\psi(s, a)$, named the value network and described in the next subsection. We now explain the architecture of the model $\pi_\theta$.
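For reference, a generic DDPG actor update implementing Eq. 1 by backpropagating through a value network looks as follows (a textbook sketch, not the paper's exact architecture; all dimensions and network shapes are placeholders):

```python
import torch
import torch.nn as nn

state_dim, action_dim = 8, 4
policy = nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU(), nn.Linear(32, action_dim))
value = nn.Sequential(nn.Linear(state_dim + action_dim, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)  # updates the actor only

# Deterministic policy gradient step: ascend Q(s, pi_theta(s)) by
# backpropagating through the value network into the policy parameters.
states = torch.randn(64, state_dim)
loss = -value(torch.cat([states, policy(states)], dim=-1)).mean()
opt.zero_grad(); loss.backward(); opt.step()
# (Gradients also reach the critic's parameters here, but since the optimizer
# only holds the policy's parameters, the critic is left unchanged.)
```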
We now explain the architecture of the model $\pi_\theta$. The policy falls in the category of permutation-equivariant functions, meaning that permuting the users should only result in a corresponding permutation of the resource allocation. In (Zaheer et al., 2017), necessary and sufficient conditions are given for permutation equivariance in neural networks; we adopt their model with minor changes. At first, the characteristics $x_i \in \mathbb{R}^{N_x}$, $i \in \{1, \dots, K\}$, of each (active) user are processed individually by the same function $\phi_{user}: \mathbb{R}^{N_x} \to \mathbb{R}^{H_x}$, modeled as a two-layer fully connected network. Then all those per-user features are aggregated with the permutation-equivariant $f_\sigma: \mathbb{R}^{K \times H} \to \mathbb{R}^{K \times H'}$ of $H/H'$ input/output channels:
$$f_\sigma(x) = \sigma(x\Lambda + \mathbf{1}\mathbf{1}^\top x\, \Gamma), \qquad \mathbf{1} = [1, \dots, 1]^\top \in \mathbb{R}^K, \qquad \Lambda, \Gamma \in \mathbb{R}^{H \times H'},$$
with $\sigma(\cdot)$ an element-wise non-linear function. We stack two of those: one $f_{relu}: \mathbb{R}^{K \times H_x} \to \mathbb{R}^{K \times H'_x}$ with $\sigma(\cdot)$ being $\mathrm{relu}(x) = \max(0, x)$, and a second $f_{linear}: \mathbb{R}^{K \times H'_x} \to \mathbb{R}^{K \times 1}$ without any non-linearity. On top of preserving the desirable permutation-equivariance property, this structure also brings a significant reduction in parameters, since an increase in the number of users does not necessitate additional parameters and a bigger network prone to overfitting. Before the final non-linearity, which is a smooth approximation of ReLU, namely $\mathrm{softplus}(x) = \log(1 + e^x)$, guaranteeing that the output is positive, there is a critical normalization step $x \to \frac{x - \mathbb{E}[x]}{\|x\|_2}$, with $\|\cdot\|_2$ being the $\ell_2$ norm. To better understand the criticality of that step, consider the case of full-CSI, where the output denotes how valuable each user is. Without the normalization step, the value network perceives that the higher the value assigned to a user, the more probable it is for that user to get resources, be satisfied, and yield reward, leading to a pointless attempt to increase every user's value. By subtracting the mean, however, whenever the value of a user increases the values of the rest decrease, conveying the sense that the total resources are limited. In the case of no-CSI, there is an additional benefit. Here there is an extra final operation, $x \to x / \|x\|_1$ (see Figure 1), so as to produce portions (of the total bandwidth) adding up to 1. Having already divided by $\|x\|_2$ in the normalization step helps keep the denominator $\|x\|_1$ stable. A final note regards exploration. The output has to satisfy properties (like positivity and/or summing to 1) that make the common approach of adding noise to the actions cumbersome. An easy way out is through noisy networks (Fortunato et al., 2018), which introduce noise to the weights of a layer, resulting in changed decisions of the policy network. The original approach considers the variance of the noise to be learnable; we keep it constant, though, since this provides better results. The noise is added to the parameters of $\phi_{user}$, resulting in altered output features per user and therefore different allocations.
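The following is a minimal PyTorch sketch (ours, not the paper's code) of the permutation-equivariant stack and the normalization head described above; the mean-subtraction, the $\ell_2$/$\ell_1$ normalizations and the softplus follow the text, while the class names and sizes are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PermEquivariant(nn.Module):
    """f_sigma(x) = sigma(x @ Lambda + 1 1^T x @ Gamma), for x of shape (K, H)."""
    def __init__(self, h_in, h_out, sigma=None):
        super().__init__()
        self.lam = nn.Linear(h_in, h_out, bias=False)   # Lambda
        self.gam = nn.Linear(h_in, h_out, bias=False)   # Gamma
        self.sigma = sigma

    def forward(self, x):                               # x: (K, h_in)
        pooled = x.sum(dim=0, keepdim=True)             # 1 1^T x, shared across users
        out = self.lam(x) + self.gam(pooled).expand(x.shape[0], -1)
        return self.sigma(out) if self.sigma else out

def allocation_head(scores, no_csi=False):
    """Mean-subtract, l2-normalise, softplus; optionally l1-normalise to portions."""
    z = (scores - scores.mean()) / scores.norm(p=2)
    z = F.softplus(z)                                   # positive outputs
    return z / z.norm(p=1) if no_csi else z             # portions sum to 1 for no-CSI

K, Hx = 100, 32
policy = nn.Sequential(PermEquivariant(Hx, 64, torch.relu), PermEquivariant(64, 1))
alloc = allocation_head(policy(torch.randn(K, Hx)).squeeze(-1), no_csi=True)
```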
Basically, the proposed method seems interesting and meaningful. The scheduling problem in this paper is built on the analogy to a server with a water pitcher, and a deep reinforcement learning approach for this scheduling problem has been designed. However, scheduling in wireless networks is a very well-studied issue. Of course, applying DRL to it is quite interesting. Still, the authors need to describe the conventional well-known scheduling algorithms and compare them with the proposed scheme (currently, the paper only focuses on applying DRL to the scheduling problem and evaluating its performance in terms of an optimization problem). Further, in scheduling problems, efficiency (total data rate) and fairness are typically the key factors, and the relationship between these conventional performance metrics and the satisfaction probability needs to be described.
SP:9caede157f5546829e12c95bd290a760c1aa2dce
Multi-modal Self-Supervision from Generalized Data Transformations
1 INTRODUCTION.
Recent works such as PIRL (Misra & van der Maaten, 2020), MoCo (He et al., 2019) and SimCLR (Chen et al., 2020b) have shown that it is possible to pre-train state-of-the-art image representations without the use of any manually-provided labels. Furthermore, many of these approaches use variants of noise contrastive learning (Gutmann & Hyvärinen, 2010). Their idea is to learn a representation that is invariant to transformations that leave the meaning of an image unchanged (e.g., geometric distortion or cropping) and distinctive to changes that are likely to alter its meaning (e.g., replacing an image with another chosen at random). An analysis of such works shows that a dominant factor for performance is the choice of the transformations applied to the data. So far, authors have explored ad-hoc combinations of several transformations (e.g., random scale changes, crops, or contrast changes). Videos further allow one to leverage the time dimension and multiple modalities. For example, Arandjelovic & Zisserman (2017); Owens et al. (2016) learn representations by matching visual and audio streams, as a proxy for objects that have a coherent appearance and sound. Their formulation is similar to noise contrastive ones, but does not quite follow the pattern of expressing the loss in terms of data transformations. Others (Chung & Zisserman, 2016; Korbar et al., 2018; Owens & Efros, 2018) depart further from standard contrastive schemes by learning representations that can tell whether visual and audio streams are in sync or not; the difference here is that the representation is encouraged to be distinctive rather than invariant to a time shift. Overall, it seems that finding an optimal noise contrastive formulation for videos will require combining several transformations while accounting for time and multiple modalities, and understanding how invariance and distinctiveness should relate to the transformations. However, the ad-hoc nature of these choices in previous contributions makes a systematic exploration of this space rather difficult. In this paper, we propose a solution to this problem by introducing the Generalized Data Transformations (GDT; fig. 1) framework. GDTs reduce most previous methods, contrastive or not, to a noise contrastive formulation that is expressed in terms of data transformations only, making it simpler to systematically explore the space of possible combinations. This is true in particular for multi-modal data, where separating different modalities can also be seen as a transformation of an input video. The formalism also shows which combinations of different transformations are valid and how to enumerate them. It further clarifies how invariance and distinctiveness to different effects can be incorporated in the formulation and when doing so leads to a valid learning objective. These two aspects allow the search space of potentially optimal transformations to be significantly constrained, making it amenable to grid search or more sophisticated methods such as Bayesian optimisation. By using GDTs, we make several findings. First, we find that, using our framework, most previous pretext representation learning tasks can be formulated in a noise-contrastive manner, unifying previously distinct domains.
Second, we show that just learning representations that are invariant to more and more transformations is not optimal, at least when it comes to video data; instead, balancing invariance to certain factors with distinctiveness to others performs best. Third, we find that investigating what to be variant to can lead to large gains in downstream performance, for both visual and audio tasks. With this, we are able to set the new state of the art in audio-visual representation learning, with both small and large video pretraining datasets, on a variety of visual and audio downstream tasks. In particular, we achieve 95.2% and 72.8% on the standardized UCF-101 and HMDB-51 action recognition benchmarks.
2 RELATED WORK.
Self-supervised learning from images and videos. A variety of pretext tasks have been proposed to learn representations from unlabelled images. Some tasks leverage the spatial context in images (Doersch et al., 2015; Noroozi & Favaro, 2016) to train CNNs, while others create pseudo classification labels via artificial rotations (Gidaris et al., 2018) or clustering features (Asano et al., 2020b; Caron et al., 2018; 2019; Gidaris et al., 2020; Ji et al., 2018). Colorization (Zhang et al., 2016; 2017), inpainting (Pathak et al., 2016), solving jigsaw puzzles (Noroozi et al., 2017), as well as the contrastive methods detailed below, have been proposed for self-supervised image representation learning. Some of the tasks that use the space dimension of images have been extended to the space-time dimensions of videos by crafting equivalent tasks. These include jigsaw puzzles (Kim et al., 2019), and predicting rotations (Jing & Tian, 2018) or future frames (Han et al., 2019). Other tasks leverage the temporal dimension of videos to learn representations by predicting shuffled frames (Misra et al., 2016), the direction of time (Wei et al., 2018), motion (Wang et al., 2019), clip and sequence order (Lee et al., 2017; Xu et al., 2019), and playback speed (Benaim et al., 2020; Cho et al., 2020; Fernando et al., 2017). These pretext tasks can be framed as GDTs. Multi-modal learning. Videos, unlike images, are a rich source of a variety of modalities such as speech, audio, and optical flow, and their correlation can be used as a supervisory signal. This idea has been present as early as 1993 (de Sa, 1994). Only recently, however, has multi-modal learning been used to successfully learn effective representations by leveraging the natural correspondence (Alwassel et al., 2020; Arandjelovic & Zisserman, 2017; Asano et al., 2020a; Aytar et al., 2016; Morgado et al., 2020; Owens et al., 2016) and synchronization (Chung & Zisserman, 2016; Korbar et al., 2018; Owens & Efros, 2018) between the audio and visual streams. A number of recent papers have leveraged speech as a weak supervisory signal to train video representations (Li & Wang, 2020; Miech et al., 2020; Nagrani et al., 2020; Sun et al., 2019a;b), and recently Alayrac et al. (2020) used speech, audio and video. Other works incorporate optical flow and other modalities (Han et al., 2020; Liu et al., 2019; Piergiovanni et al., 2020; Zhao et al., 2019) to learn representations. In (Tian et al., 2019), representations are learned with different views (such as different color channels or modalities) to induce invariances.
In contrast, our work analyses multi-modal transformations and examines their utility when used as an invariant or variant learning signal. Noise contrastive loss. Noise contrastive losses (Gutmann & Hyvärinen, 2010; Hadsell et al., 2006) measure the similarity between sample pairs in a representational space and are at the core of several recent works on unsupervised feature learning. They have been shown to yield good performance for learning image (Chen et al., 2020b; He et al., 2019; Hénaff et al., 2019; Hjelm et al., 2019; Li et al., 2020; Misra & van der Maaten, 2020; Oord et al., 2018; Tian et al., 2019; 2020; Wu et al., 2018) and video (Han et al., 2019; Li & Wang, 2020; Miech et al., 2020; Morgado et al., 2020; Sohn, 2016; Sun et al., 2019a) representations, and they circumvent the need to explicitly specify what information needs to be discarded via a designed task. We leverage the noise contrastive loss as a learning framework to encourage the network to learn the desired invariance and distinctiveness to data transformations. The GDT framework can be used to combine and extend many of these cues, contrastive or not, in a single noise contrastive formulation.
3 METHOD.
A data representation is a function $f: X \to \mathbb{R}^D$ mapping data points $x$ to vectors $f(x)$. Representations are useful because they help to solve tasks such as image classification. Based on the nature of the data and the task, we often know a priori some of the invariances that the representation should possess (for example, rotating an image usually does not change its class). We can capture those by means of the contrast function¹ $c(x_1, x_2) = \delta_{f(x_1) = f(x_2)}$, where $c(x_1, x_2) = 1$ means that $f$ is invariant to substituting $x_2$ for $x_1$, while $c(x_1, x_2) = 0$ means that $f$ is distinctive to this change. Any partial knowledge of the contrast $c$ can be used as a cue to learn $f$, but $c$ is not arbitrary: in order for $c$ to be valid, the expression $c(x_1, x_2) = 1$ must be an equivalence relation on $X$, i.e., be reflexive $c(x, x) = 1$, symmetric $c(x_1, x_2) = c(x_2, x_1)$, and transitive $c(x_1, x_2) = c(x_2, x_3) = 1 \Rightarrow c(x_1, x_3) = 1$. This is justified in Appendix A.1 and will be important in establishing which particular learning formulations are valid and which are not. We introduce next our Generalized Data Transformations (GDTs) framework by generalizing two typical formulations: the first is analogous to 'standard' methods such as MoCo (He et al., 2019) and SimCLR (Chen et al., 2020b), and the second tackles multi-modal data. Standard contrastive formulation. Recall that the goal is to learn a function $f$ that is compatible with a known contrast $c$, in the sense explained above. In order to learn $f$, we require positive ($c(x_1, x_2) = 1$) and negative ($c(x_1, x_2) = 0$) example pairs $(x_1, x_2)$. We generate positive pairs by sampling $x_1$ from a data source and then setting $x_2 = g(x_1)$ as a random transformation of the first sample, where $g \in G$ is called a data augmentation (e.g., image rotation). We also generate negative pairs by sampling $x_1$ and $x_2$ independently. It is convenient to express these concepts via transformations only. To this end, let $D = (x_1, \dots, x_N) \in X^N$ be a collection of $N$ i.i.d. training data samples. A Generalized Data Transformation (GDT) $T: X^N \to Z$ is a mapping that acts on the set of training samples $D$ to produce a new sample $z = TD$.
Note that the GDT is applied to the entire training set, so that sampling itself can be seen as a transformation. In the simplest case, $Z = X$ and a GDT $T = (i, g)$ extracts the sample corresponding to a certain index $i$ and applies an augmentation $g: X \to X$ to it, i.e., $TD = g(x_i)$. Usually, we want the function $f$ to be distinctive to the choice of sample but invariant to its augmentation. This is captured by setting the contrast $c(T, T')$² to $c((i, g), (i', g')) = \delta_{i = i'}$. Given a batch $\mathcal{T} = \{T_1, \dots, T_K\}$ of $K$ GDTs, we then optimize a pairwise-weighted version of the noise-contrastive loss (Chen et al., 2020b; Gutmann & Hyvärinen, 2010; Oord et al., 2018; Tian et al., 2019; Wu et al., 2018), the GDT-NCE loss:
$$L(f; \mathcal{T}) = -\sum_{T, T' \in \mathcal{T}} c(T, T')\, w(T, T') \log\!\left(\frac{\exp\langle f(TD), f(T'D)\rangle / \rho}{\sum_{T'' \in \mathcal{T}} w(T, T'') \exp\langle f(TD), f(T''D)\rangle / \rho}\right). \quad (1)$$
Here, the scalar $\rho$ is a temperature parameter and the weights $w(T, T')$ are set to $\delta_{T \neq T'}$ in order to discount contrasting identical transformations, which would result in a weak learning signal. Minimizing eq. (1) pulls together the vectors $f(TD)$ and $f(T'D)$ if $c(T, T') = 1$ and pushes them apart if $c(T, T') = 0$, similar to a margin loss, but with a better handling of hard negatives (Chen et al., 2020b; Khosla et al., 2020; Tian et al., 2019).³ When using a single modality, $T = T'$ and positive pairs are computed from two differently augmented versions. Multi-modal contrastive formulation. We now further extend GDTs to handle multi-modal data. In this case, several papers (Arandjelovic & Zisserman, 2017; Aytar et al., 2016; Korbar et al., 2018; Owens et al., 2016; Wei et al., 2018) have suggested to learn from the correlation between modalities, albeit usually not in a noise-contrastive manner. In order to encode this with a GDT, we introduce modality projection transformations $m \in M$. For example, a video $x = (v, a)$ has a visual component $v$ and an audio component $a$, and we have two projections $M = \{m_a, m_v\}$ extracting respectively the visual $m_v(x) = v$ and audio $m_a(x) = a$ signals. We can plug this directly into eq. (1) by considering GDTs $T = (i, m)$ and setting $TD = m(x_i)$, learning a representation $f$ which is distinctive to the choice of input video but invariant to the choice of modality.⁴
¹ We use the symbol $\delta$ to denote the Kronecker delta.
² Note that, differently from the previous section, we have now defined $c$ on transformations $T$ rather than on samples $x$ directly. In Appendix A.1, we show that this is acceptable provided that $c(T, T') = 1$ also defines an equivalence relation.
³ We can think of eq. (1) as a softmax cross-entropy loss for a classification problem where the classes are the equivalence classes $\mathcal{T}/c$ of transformations.
⁴ For this, as $f$ must accept either a visual or audio signal as input, we consider a pair of representations $f = (f_v, f_a)$, one for each modality.
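A minimal PyTorch sketch (ours) of the GDT-NCE loss in Eq. (1) might look as follows; it assumes the K embeddings f(T_k D) are stacked row-wise and, as is common in contrastive learning, L2-normalized, with the contrast and weight given as K-by-K matrices of c(T,T') and w(T,T').

```python
import torch

def gdt_nce_loss(feats, contrast, weight, rho=0.1):
    """Eq. (1): feats is (K, D); contrast/weight are (K, K) 0/1 matrices."""
    sim = feats @ feats.t() / rho                       # <f(TD), f(T'D)> / rho
    masked = sim + torch.log(weight.clamp_min(1e-12))   # zero-weight pairs -> approx. -inf
    log_prob = sim - torch.logsumexp(masked, dim=1, keepdim=True)
    return -(contrast * weight * log_prob).sum()

K, D = 8, 128
feats = torch.nn.functional.normalize(torch.randn(K, D), dim=1)
idx = torch.arange(K) // 2                              # rows 2k, 2k+1 share a source sample
contrast = (idx[:, None] == idx[None, :]).float()       # c(T, T') = delta_{i = i'}
weight = 1.0 - torch.eye(K)                             # w(T, T') = delta_{T != T'}
loss = gdt_nce_loss(feats, contrast, weight)
```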
General case. Existing noise contrastive formulations learn representations that are invariant to an ad-hoc selection of transformations. We show here how to use GDTs to systematically build new valid combinations of transformations while choosing whether to encode invariance or distinctiveness to each factor. Together with the fact that all components, including data sampling and modality projection, are interpreted as transformations, this results in a powerful approach to systematically explore a vast space of possible formulations, especially for the case of video data with its several dimensions. In order to do so, note that to write the contrastive loss in eq. (1) we only require: the contrast $c(T, T')$, the weight $w(T, T')$, and a way of sampling the transformations $T$ in the batch. Assuming that each generalized transformation $T = t_M \circ \cdots \circ t_1$ is a sequence of $M$ transformations $t_m$, we start by defining the contrast $c$ for individual factors as:
$$c(t_m, t'_m) = \begin{cases} 1, & \text{if we hypothesize invariance,} \\ \delta_{t_m = t'_m}, & \text{if we hypothesize distinctiveness.} \end{cases} \quad (2)$$
The overall contrast is then $c(T, T') = \prod_{m=1}^{M} c(t_m, t'_m)$. In this way, each contrast $c(t_m, t'_m)$ is an equivalence relation and so is $c(T, T')$ (see Appendix A.1), making it valid in the sense discussed above. We also assume that $w(T, T') = 1$ unless otherwise stated. Next, we require a way of sampling transformations $T$ in the batch. Note that each batch must contain transformations that can be meaningfully contrasted, forming a mix of invariant and distinctive pairs, so they cannot be sampled independently at random. Furthermore, based on the definition above, a single 'distinctive' factor in eq. (2) with $t_m \neq t'_m$ implies that $c(T, T') = 0$. Thus, the batch must contain several transformations that share equal distinctive factors in order to generate a useful learning signal. A simple way to satisfy these constraints is to use a hierarchical sampling scheme (fig. 1), sketched in code below. First, we sample $K_1$ instances of transformation $t_1$; then, for each sample of $t_1$, we sample $K_2$ instances of transformation $t_2$, and so on, obtaining a batch of $K = \prod_{m=1}^{M} K_m$ transformations $T$. In this manner, the batch contains exactly $K_M \times \cdots \times K_{m+1}$ transformations that share the same first $m$ factors ($t_1 = t'_1, \dots, t_m = t'_m$). While other schemes are possible, in Appendix A.2.1 we show that this is sufficient to express a large variety of self-supervised learning cues that have been proposed in the literature. In the rest of the manuscript, however, we focus on audio-visual data.
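To make the hierarchical scheme concrete, here is a small Python sketch (ours, with made-up factor choices); it composes transformations as tuples (t1, ..., tM) and evaluates the overall contrast c(T,T') = prod_m c(t_m, t'_m) under per-factor invariance hypotheses.

```python
import random

def sample_batch(factor_choices, Ks, seed=0):
    """Hierarchical sampling: K1 choices of t1; per t1, K2 choices of t2; ..."""
    random.seed(seed)
    batch = [()]
    for choices, K in zip(factor_choices, Ks):
        batch = [prefix + (random.choice(choices),) for prefix in batch for _ in range(K)]
    return batch  # K = prod(Km) composed transformations

def contrast(T, Tp, invariant):
    """c(T, T') = prod_m c(t_m, t'_m): 1 for invariant factors, Kronecker delta otherwise."""
    return int(all(inv or t == tp for t, tp, inv in zip(T, Tp, invariant)))

factors = [[0, 1, 2, 3], ["v", "a"], ["aug1", "aug2"]]   # sample index, modality, augmentation
batch = sample_batch(factors, Ks=[2, 2, 2])              # K = 8 transformations
c = contrast(batch[0], batch[1], invariant=[False, True, True])  # distinctive to sample only
```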
The paper introduces a general framework dubbed Generalized Data Transformations (GDT) for self-supervised learning. The framework is used to perform video-audio self-supervised learning and to analyze, thanks to a contrastive loss, what kinds of transformations the representations should be invariant to or, on the contrary, variant to. The authors demonstrate the effectiveness of the proposed approach by showing that the resulting learned video representations achieve very good performance on the HMDB51 and UCF101 downstream tasks.
SP:858bb0278078b780b1fe163c7a7a084fd142f186
Overparameterisation and worst-case generalisation: friend or foe?
1 INTRODUCTION.
Overparameterised neural networks have demonstrated the remarkable ability to perfectly fit training samples while still generalising to unseen test samples (Zhang et al., 2017; Neyshabur et al., 2019; Nakkiran et al., 2020). However, several recent works have revealed that overparameterised models' good average performance does not translate to good worst-case performance (Buolamwini & Gebru, 2018; Hashimoto et al., 2018; Sagawa et al., 2020a;b). In particular, the test performance of such models may be poor on certain subgroups that are under-represented in the training data. Worse still, such degradation can be exacerbated as model complexity increases. This indicates the unsuitability of such models for ensuring fairness across subgroups, a topical concern given the growing societal uses of machine learning (Dwork et al., 2012; Hardt et al., 2016; Buolamwini & Gebru, 2018). Why does overparameterisation induce such unfavourable bias, and how can one correct for it? Sagawa et al. (2020a) demonstrated how such models may fit spurious correlations that explain under-represented samples, which can generalise poorly. Sagawa et al. (2020b) further posited that overparameterised models have an inductive bias towards memorising labels for as few samples as possible, which are invariably those from under-represented subgroups. To mitigate such bias, existing approaches include subsampling majority subgroups (Sagawa et al., 2020b) and modifying the training objective (Sagawa et al., 2020a; Nam et al., 2020; Zhang et al., 2020; Goel et al., 2020). This suggests two important points regarding overparameterised models' performance: (a) with standard training, increasing model complexity exacerbates degradation on rare subgroups; (b) controlling this degradation may require alternate training objectives or procedures. In this paper, we establish that while overparameterised models are biased against under-represented examples, in certain settings such bias may be easily corrected via post-hoc processing of the model outputs. Specifically, such models' bias can be largely restricted to their classification layers, and it manifests as structured shifts in predictions for rare subgroups. We thus show how two simple techniques applied to the model outputs (classifier retraining based on the learned representations, and correction of the classification threshold) can help overparameterised models improve worst-subgroup performance over underparameterised counterparts. Consequently, even with standard training, overparameterised models can learn sufficient information to model rare subgroups. To make the above concrete, Figure 1 plots a histogram of model predictions for a synthetic dataset from Sagawa et al. (2020b) (cf. §2). The data comprises four subgroups generated from combinations $(y, a(x))$ of labels $y \in \{\pm 1\}$ and a feature $a(x) \in \{\pm 1\}$. Most samples $(x, y)$ have $y = a(x)$, and so these comprise two dominant subgroups within the positive and negative samples. We train an overparameterised linear model, yielding logits $f_{\pm 1}(x)$. We then plot the decision scores $f_{+1}(x) - f_{-1}(x)$, which are expected to be $> 0$ iff $y = +1$. Strikingly, there is a distinct separation amongst the subgroup scores: e.g., samples with $y = +1, a(x) = -1$ have systematically lower scores than those with $y = +1, a(x) = +1$. Consequently, the model incurs a significant error rate on rare subgroups. The structured nature of the separation suggests post-hoc shifting the scores to align the distributions; this markedly improves performance on the rare subgroups (Figure 1b).
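A minimal NumPy sketch (ours, not the paper's exact procedure) of such a post-hoc shift: one additive offset per attribute value is fit on held-out data so that the per-attribute score distributions share a common location; aligning on the mean is just one simple choice among many.

```python
import numpy as np

def fit_attribute_shifts(val_scores, val_attr):
    """One additive offset per attribute value a(x), fit on validation data."""
    return {a: -float(val_scores[val_attr == a].mean()) for a in np.unique(val_attr)}

def apply_shifts(scores, attr, shifts):
    """Shift scores so the subgroup distributions align; predict +1 iff result > 0."""
    return scores + np.array([shifts[a] for a in attr])

rng = np.random.default_rng(0)
attr = rng.integers(0, 2, 200)
scores = rng.normal(loc=attr * 2.0 - 1.0, scale=1.0)   # scores systematically shifted by a(x)
aligned = apply_shifts(scores, attr, fit_attribute_shifts(scores, attr))
```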
Scope and contributions. The primary aim of this work is furthering the understanding of the behaviour of overparameterised models, rather than proposing new techniques. Indeed, the post-hoc correction techniques we employ have been well studied in the related problem setting of long-tail learning, or learning under class imbalance (He & Garcia, 2009; Buda et al., 2017; Van Horn & Perona, 2017). Several works have demonstrated that the representations learned by standard networks contain sufficient information to distinguish between dominant and rare labels (Liu et al., 2019; Zhang et al., 2019; Kang et al., 2020; Menon et al., 2020). Similar techniques are also common in the fairness literature (Hardt et al., 2016; Chzhen et al., 2019). However, it is not a priori clear whether such techniques are effective for overparameterised models, whose ability to perfectly fit the training labels can thwart otherwise effective approaches (Sagawa et al., 2020a). Existing techniques for improving the worst-subgroup error of overparameterised models involve altering the inputs to the model (Sagawa et al., 2020b) or the training objective (Sagawa et al., 2020a). By contrast, the techniques we study alter the outputs of a standard network, trained to minimise the softmax cross-entropy on the entire data. Our findings illustrate that such models do not necessarily require bespoke training modifications to perform well on rare subgroups: even with standard training, overparameterised models can (in certain settings) learn useful information about rare subgroups. In summary, our contributions are: (i) we demonstrate that, in certain settings, overparameterised models' poor performance on under-represented subgroups is the result of a structured bias in the classification layer (cf. §3); (ii) we show that two simple post-hoc correction procedures (cf. §4) can mitigate the above bias, and thus significantly reduce their worst-subgroup error (cf. §5).
2 BACKGROUND AND SETTING.
Suppose we have a labelled training sample $S = \{(x_i, y_i)\}_{i=1}^n \in (X \times Y)^n$, for instance space $X \subset \mathbb{R}^d$ and label space $Y$. One typically assumes $S$ is an i.i.d. draw from some unknown distribution $P(x, y)$. Further, suppose each $(x, y)$ has an associated group membership $g(x, y) \in \mathcal{G}$, with $G \doteq |\mathcal{G}|$. This induces $G$ data subgroups, with a prior $P(g)$ and conditional distributions $P(x, y \mid g)$. Following Sagawa et al. (2020a;b), we consider groups $g(x, y) = (y, a(x))$, where $a(x) \in \mathbb{R}$ is some attribute within $x$. We assume $a(x)$ is fully specified at train and test time; while not always realistic, such an assumption has precedent in the fairness literature (Lipton et al., 2018). The standard goal in classification is to learn a classifier $h: X \to Y$ that minimises the average error $L_{avg}(h) \doteq \mathbb{E}_g\, \mathbb{E}_{x, y|g}[\ell_{01}(y, h(x))]$, where $\ell_{01}(y, h(x)) = [\![\, y \neq h(x) \,]\!]$ is the 0-1 loss. Typically, one constructs $h(x) = \arg\max_y f_y(x)$, where $f(x) \in \mathbb{R}^Y$ comprises real-valued logits, as learned by empirical risk minimisation (ERM): $\min_{f \in \mathcal{F}} \frac{1}{n} \sum_{i=1}^n \ell(y_i, f(x_i))$. Here, $\ell$ is a surrogate loss such as the softmax cross-entropy, and $\mathcal{F}$ is a function class, such as neural networks with a fixed architecture.
A network is overparametrised if it can perfectly fit the training labels, and thus drive the training error to zero. Remarkably, and in apparent contrast to orthodox statistical wisdom, this does not come at the expense of generalisation on test samples (Zhang et al., 2017; Belkin et al., 2019; Nakkiran et al., 2020). This apparent power comes at a price, however. Let us define the worst-subgroup error as
$$L_{max}(h) \doteq \max_{g \in \mathcal{G}} \mathbb{E}_{x, y|g}[\ell_{01}(y, h(x))], \quad (1)$$
i.e., the worst-case error over all data subgroups. Prior work (Sagawa et al., 2020a;b) established that for overparameterised models, the worst-subgroup training error can go to zero (since the model can fit all samples), but the worst-subgroup test error can devolve to that of random guessing (since the model can fit spurious correlations for rare subgroups). Further, the degree of degradation can increase with the model complexity. This indicates that the naïve use of overparametrised models may be at odds with ensuring fairness across data subgroups, a core concern in modern applications of machine learning (Calders & Verwer, 2010; Dwork et al., 2012; Hardt et al., 2016; Zafar et al., 2017). There are several potential strategies to cope with this. One is to perform distributionally robust optimisation (Hashimoto et al., 2018; Mohri et al., 2019; Sagawa et al., 2020a), and minimise
$$L_{DRO}(h) \doteq \max_{g \in \mathcal{G}} \left[ \mathbb{E}_{x, y|g}[\ell(y, f(x))] + \Omega_g(f) \right],$$
where $\Omega_g$ is some per-group regulariser. In settings where $P(g)$ is non-uniform, Sagawa et al. (2020a) proposed to set $\Omega_g(f) \equiv \frac{1}{\sqrt{n_g}}$, where $n_g$ is the number of training samples with group $g$. Alternatively, one can reweight samples to upweight the contribution of rarer groups and minimise
$$L_{RW}(h) \doteq \sum_{g \in \mathcal{G}} w_g \cdot \mathbb{E}_{x, y|g}[\ell(y, f(x))], \quad (2)$$
where, e.g., $w_g = P(g)$ leads to the standard average error, while $w_g = 1$ implicitly upweights rare subgroups. While intuitive, Sagawa et al. (2020b) established that such an approach is also subject to poor worst-subgroup performance, owing to a broader issue with using importance weighting in conjunction with neural networks (Wen et al., 2014; Byrd & Lipton, 2019). Sagawa et al. (2020b) established that one can achieve good performance by instead subsampling dominant groups, an operation equivalent in expectation to minimising $L_{RW}(h)$ with $w_g = 1$. Recent developments in the mitigation of worst-subgroup errors include Nam et al. (2020); Zhang et al. (2020); Goel et al. (2020). In the sequel, we shall make extensive use of three datasets from Sagawa et al. (2020a;b), each of which involves binary labels $y \in Y$ and a binary attribute $a(x) \in A$: (i) synth, a synthetic dataset where $X \subset \mathbb{R}^{200}$, $Y = \{\pm 1\}$, and $A = \{\pm 1\}$. (ii) waterbirds, a dataset of bird images with $Y = \{$land bird, water bird$\}$ corresponding to the bird type, and $A = \{$land background, water background$\}$ corresponding to the background. (iii) celebA, a dataset of celebrity images with $Y = \{$blond, dark$\}$ corresponding to individuals' hair colour, and $A = \{$male, female$\}$. For each dataset, we construct four subgroups $g(x, y) = (y, a(x))$, with two such subgroups being under-represented.
On synth and waterbirds, these correspond to subgroups with $y \neq a(x)$, while on celebA, they correspond to the subgroups {(blond, male)} and {(dark, female)}. Owing to the rarity of certain subgroups, it is intuitively easy for an overparameterised network to learn to predict $a(x)$ rather than $y$, and to memorise spurious patterns to predict the rare subgroups. To train overparameterised models, we follow the setup of Sagawa et al. (2020a;b), which we briefly summarise. For celebA and waterbirds, we use a ResNet-50, which can attain perfect training accuracy. For synth, we train a weakly regularised ($\lambda = 10^{-16}$) logistic regression model on a fixed representation $\Phi$ constructed as follows: for fixed $m$, we construct $\Phi(x) = \mathrm{ReLU}(Vx)$, where $V \in \mathbb{R}^{m \times 200}$ is a random Gaussian matrix with normalised rows. Overparameterised models consistently demonstrate a significant gap between the average and worst-subgroup error: e.g. (see Figure 4), on synth, the model achieves 91% average accuracy, but 36% worst-subgroup accuracy.
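The synth setup and the worst-subgroup metric of Eq. (1) can be reproduced in a few lines; the sketch below (ours) uses scikit-learn with placeholder data, labels, and attributes (the paper's actual synthetic distribution differs), mapping the weak regularisation of roughly 1e-16 to the inverse-regularisation parameter C of roughly 1e16.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d, m, n = 200, 2000, 500                        # m >> n: the model can fit the training set
V = rng.standard_normal((m, d))
V /= np.linalg.norm(V, axis=1, keepdims=True)   # random Gaussian matrix, normalised rows
phi = lambda X: np.maximum(X @ V.T, 0.0)        # fixed representation Phi(x) = ReLU(Vx)

X, y = rng.standard_normal((n, d)), rng.integers(0, 2, n)   # placeholder data and labels
a = rng.integers(0, 2, n)                                    # placeholder attribute a(x)
clf = LogisticRegression(C=1e16, max_iter=5000).fit(phi(X), y)

def worst_subgroup_error(y_true, y_pred, groups):
    """Empirical version of Eq. (1): max over subgroups of the 0-1 error."""
    return max(float(np.mean(y_pred[groups == g] != y_true[groups == g]))
               for g in np.unique(groups))

groups = 2 * y + a                                           # encode g = (y, a(x)) as an int
print(worst_subgroup_error(y, clf.predict(phi(X)), groups))
```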
The paper builds upon prior work showing that overparameterized networks learned by ERM can have poor worst-case performance over pre-defined groups. Specifically, the paper demonstrates that this result is not necessarily due to overparameterized models learning poor representations for rare subgroups, but rather to mis-calibration in the classification layer, which can be addressed with two simple correction techniques: thresholding and re-training the classification layer. They show improvements over ERM in worst-case subgroup error.
SP:3e9c01477200929c84f6725472107beab75a573e
High-Capacity Expert Binary Networks
1 INTRODUCTION.
A promising, hardware-aware direction for designing efficient deep learning models is that of network binarization, in which filter and activation values are restricted to two states only: ±1 (Rastegari et al., 2016; Courbariaux et al., 2016). This comes with two important advantages: (a) it compresses the weights by a factor of 32× via bit-packing, and (b) it replaces the computationally expensive multiply-add with bit-wise xnor and popcount operations, offering, in practice, a speed-up of ∼58× on a CPU (Rastegari et al., 2016). Despite this, how to reduce the accuracy gap between a binary model and its real-valued counterpart remains an open problem, and it is currently the major impediment to their wide-scale adoption. In this work, we propose to approach this challenging problem from 3 key perspectives: 1. Model capacity: To increase model capacity, we introduce the first application of Conditional Computing (Bengio et al., 2013; 2015; Yang et al., 2019) to the case of binary networks, which we call Expert Binary Convolution. For each convolutional layer, rather than learning a weight tensor that is expected to generalize well across the entire input space, we learn a set of N experts, each of which is tuned to specialize to portions of it. During inference, a very lightweight gating function dynamically selects a single expert for each input sample and uses it to process the input features. Learning to select a single expert tuned to the input data is a key property of our method which renders it suitable for the case of binary networks, and contrasts our approach with previous works in conditional computing (Yang et al., 2019). 2. Representation capacity: There is an inherent information bottleneck in binary networks, as only 2 states are used to characterize each feature, which hinders the learning of highly accurate models. To this end, for the first time, we highlight the question of depth vs. width in binary networks and propose a surprisingly unexplored, efficient mechanism for increasing the effective width of the network while preserving the original computational budget. We show that our approach leads to noticeable gains in accuracy without increasing computation. 3. Network design: Finally, and inspired by similar work on real-valued networks (Tan & Le, 2019), we propose a principled approach to search for optimal directions for scaling up binary networks. Main results: Without increasing the computational budget of previous works, our method improves upon the state of the art (Martinez et al., 2020) by ∼6%, reaching a groundbreaking ∼71% on ImageNet classification.
2 RELATED WORK.
2.1 NETWORK BINARIZATION.
Since the seminal works of Courbariaux et al. (2015; 2016), which showed that training fully binary models (both weights and activations) is possible, and Rastegari et al. (2016), which reported the very first binary model of high accuracy, there has been a great research effort to develop binary models that are competitive in terms of accuracy with their real-valued counterparts; see for example (Lin et al., 2017; Liu et al., 2018; Alizadeh et al., 2018; Bulat et al., 2019; Bulat & Tzimiropoulos, 2019; Ding et al., 2019; Wang et al., 2019; Zhuang et al., 2019; Zhu et al., 2019; Kim et al., 2020; Bulat et al., 2020; Martinez et al., 2020).
Notably, many of these improvements, including real-valued down-sampling layers (Liu et al., 2018), double skip connections (Liu et al., 2018), learned scale factors (Bulat & Tzimiropoulos, 2019), PReLUs (Bulat et al., 2019) and two-stage optimization (Bulat et al., 2019), have been put together to build a strong baseline in Martinez et al. (2020) which, further boosted by a sophisticated distillation and data-driven channel rescaling mechanism, yielded an accuracy of ∼65% on ImageNet. This method, along with the recent binary NAS of Bulat et al. (2020) reporting an accuracy of ∼66%, is, to our knowledge, the state of the art in binary networks. Our method further improves upon these works, achieving an accuracy of ∼71% on ImageNet, crucially without increasing the computational complexity. To achieve this, we propose, to our knowledge for the first time, to explore ideas from Conditional Computing (Bengio et al., 2013; 2015) and learn data-specific binary expert weights which are dynamically selected during inference, conditioned on the input data. Secondly, we are the first to identify width as an important factor for increasing the representation capacity of binary networks, and we introduce a surprisingly simple yet effective mechanism to enhance it without increasing complexity. Finally, although binary architecture design via NAS (Liu et al., 2018; Real et al., 2019) has recently been explored in (Kim et al., 2020; Bulat et al., 2020), we approach it from a different perspective, more related to Tan & Le (2019), which was developed for real-valued networks.
2.2 CONDITIONAL COMPUTATION.
Conditional computation is a very general data processing framework which refers to using different models, or different parts of a model, conditioned on the input data. Wang et al. (2018) and Wu et al. (2018) propose to completely bypass certain parts of the network during inference using skip connections, by training a policy network via reinforcement learning. Gross et al. (2017) propose to train large models by using a mixture of experts trained independently on different partitions of the data. While speeding up training, this approach is neither end-to-end trainable nor tuned towards improving the model accuracy. Shazeer et al. (2017) train thousands of experts that are combined using a noisy top-k expert selection, while Teja Mullapudi et al. (2018) introduce HydraNet, in which a routing function selects and combines a subset of different operations. The latter is more closely related to online network search. Chen et al. (2019) use a separate network to dynamically select a variable set of filters, while Dai et al. (2017) learn a dynamically computed offset. More closely related to the proposed EBConv is Conditional Convolution, where Yang et al. (2019) propose to learn a Mixture of Experts, i.e., a set of filters that are linearly combined using a routing function. In contrast, our approach learns to select a single expert at a time. This is critical for binary networks for two reasons: (1) The linear combination of a binary set of weights is non-binary and, hence, a second binarization is required, giving rise to training instability and increased memory consumption. In Section 5, we compare with such a model and show that our approach works significantly better.
(2) The additional computation to multiply and sum the weights, while negligible for real-valued networks, can lead to a noticeable computational increase for binary ones. Finally, we note that our single-expert selection mechanism is akin to the Gumbel-max trick (Gumbel, 1948) and the Gumbel-Softmax estimator (Jang et al., 2016; Maddison et al., 2016), previously used in various forms for NAS (Chang et al., 2019), multi-task learning (Guo et al., 2020) and variational auto-encoders (Jang et al., 2016). To our knowledge, the proposed EBConv is the very first adaptation of conditional computing to binary neural networks.
3 BACKGROUND ON BINARY NETWORKS.
Following Rastegari et al. (2016); Bulat & Tzimiropoulos (2019), a binary convolution is defined as:
$$\mathrm{BConv}(x, \theta) = (\mathrm{sign}(x) \circledast \mathrm{sign}(\theta)) \odot \alpha, \quad (1)$$
where $x$ is the input, $\theta$ the weights, $\circledast$ denotes the binary convolutional operation, $\odot$ the Hadamard product, and $\alpha \in \mathbb{R}^C$ is learned via back-propagation, as in Bulat & Tzimiropoulos (2019). The binarization is performed in two stages (Bulat et al., 2019; Martinez et al., 2020). During Stage I, we train a network with binary activations and real-valued weights. Note that the accuracy of a Stage I model is very representative of that of the final fully binary model (see Table 4). During Stage II, we initialize from Stage I to train a network with both weights and activations binary. When reporting results, if no stage is specified, the model (weights and activations) is fully binary. We set as baseline the Strong Baseline model (denoted as SBaseline) from Martinez et al. (2020), on top of which we implemented the proposed method. We denote their full model as Real-to-bin.
4 METHOD.
4.1 EXPERT BINARY CONVOLUTION.
Assume a binary convolutional layer with input $x \in \mathbb{R}^{C_{in} \times W \times H}$ and weight tensor $\theta \in \mathbb{R}^{C_{in} \times C_{out} \times k_H \times k_W}$. In contrast to a normal convolution that applies the same weights to all input features, we propose to learn a set of expert weights (or simply experts) $\{\theta_0, \theta_1, \dots, \theta_{N-1}\}$, $\theta_i \in \mathbb{R}^{C_{in} \times C_{out} \times k_H \times k_W}$, alongside a selector gating function which, given input $x$, selects only a single expert to be applied to it. The proposed EBConv layer is depicted in Fig. 1a. To learn the experts, let us first stack them in a matrix $\Theta \in \mathbb{R}^{N \times C_{in} C_{out} k_H k_W}$. We propose to learn the following function:
$$\mathrm{EBConv}(x, \theta) = \mathrm{BConv}\big(x, (\varphi(\psi(x))^\top \Theta)_r\big), \quad (2)$$
where $\varphi(\cdot)$ is a gating function (returning an $N$-dimensional vector, as explained below) that implements the expert selection mechanism using as input $\psi(x)$, an aggregation function of the input tensor $x$, and $(\cdot)_r$ simply reshapes its argument to a tensor of appropriate dimensions. Gating function $\varphi$: A crucial component of the proposed approach is the gating function that implements the expert selection mechanism. An obvious solution would be to use a Winner-Takes-All (WTA) function; however, this is not differentiable. A candidate that comes to mind to solve this problem is the softargmax with temperature $\tau$: as $\tau \to 0$, the entry corresponding to the max will tend to 1 while the rest tend to 0. However, as $\tau \to 0$, the derivative of the softargmax converges to the Dirac function $\delta$, which provides poor gradients and hence hinders the training process.
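Putting Eqs. (1) and (2) together, the following PyTorch sketch (ours, simplified and not the authors' code) emulates an EBConv layer: the aggregation is a channel-mean followed by a linear projection, the gate is a plain one-hot argmax here (the differentiable WTA/softmax trick of Eqs. (3)-(4) below would replace it for training), and xnor/popcount kernels are emulated with floating-point sign convolutions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EBConv(nn.Module):
    """Expert Binary Convolution (Eq. 2), emulated with float ops."""
    def __init__(self, c_in, c_out, k, n_experts):
        super().__init__()
        self.theta = nn.Parameter(torch.randn(n_experts, c_out, c_in, k, k))  # stacked experts
        self.alpha = nn.Parameter(torch.ones(c_out))                          # per-channel scale
        self.omega = nn.Linear(c_in, n_experts, bias=False)                   # psi's projection

    def forward(self, x):                        # x: (B, C_in, H, W)
        z = self.omega(x.mean(dim=(2, 3)))       # psi(x): spatial averages -> (B, N) scores
        experts = z.argmax(dim=1)                # a single expert per sample (hard WTA)
        outs = []
        for xb, e in zip(x, experts):            # per-sample expert weights
            w = torch.sign(self.theta[e])
            outs.append(F.conv2d(torch.sign(xb[None]), w, padding=1))
        return torch.cat(outs) * self.alpha.view(1, -1, 1, 1)

layer = EBConv(c_in=16, c_out=32, k=3, n_experts=4)
y = layer(torch.randn(2, 16, 8, 8))              # (2, 32, 8, 8)
```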
This could be mitigated if a high $\tau$ were used; however, this would require hard thresholding at test time which, for the case of binary networks, and given that the models are trained using Eq. 2, leads to large errors. To mitigate the above, and distancing ourselves from the reinforcement learning techniques often deployed when discrete decisions need to be made, we propose, for the forward pass, to use a WTA function for defining $\varphi(\cdot)$, as follows:
$$\varphi(z)_i = \begin{cases} 1, & \text{if } i = \arg\max(z), \\ 0, & \text{otherwise.} \end{cases} \quad (3)$$
Note that we define $\varphi$ as $\varphi: \mathbb{R}^N \to \mathbb{R}^N$, i.e., as a function that returns an $N$-dimensional vector, which is used to multiply (element-wise) $\Theta$ in Eq. 2. This is crucial as, during training, we wish to back-propagate gradients for the non-selected experts. To this end, we propose, for the backward pass, to use the Softmax function for approximating the gradients of $\varphi(\cdot)$:
$$\frac{\partial \varphi}{\partial z} := \frac{\partial}{\partial z} \mathrm{Softmax}(z). \quad (4)$$
Overall, our proposal, WTA for the forward pass and Softmax for the backward pass, effectively addresses the mismatch between training and testing at inference while, at the same time, allowing meaningful gradients to flow to all experts during training. In Section A.3.3 of the appendix, we also explore the impact of adding a temperature to the softmax, showing how its value affects the training process. Note that backpropagating gradients for the non-selected experts applies to the gating function only; the binary activations and weights continue to use the STE introduced in (Courbariaux et al., 2016; Rastegari et al., 2016). Aggregation function $\psi$: The purpose of this function is to give a summary of the input feature tensor, which will be used to select the expert. To avoid overfitting and to keep the computational cost low, we opt for a simple and fast linear function:
$$\psi(x) = [\bar{x}[0]\ \bar{x}[1]\ \cdots\ \bar{x}[C-1]]\, \omega, \quad (5)$$
where $\bar{x}[i] = \frac{1}{HW} \sum_{h, w} x[i, h, w]$ is the spatial average of the $i$-th channel and $\omega \in \mathbb{R}^{C \times N}$ a learnable projection matrix. Note that no other non-linearity was used, as the WTA function is already non-linear. Data-specific experts: One expected property of EBConv, implied by the proposed design, is that the experts should specialize on portions of the data. This is because, for each data sample, a single expert is chosen per convolutional layer. Fig. 1b confirms this experimentally via a t-SNE embedding visualisation of the features before the classifier, along with the corresponding expert that was activated for each sample of the ImageNet validation set. Optimization policy: As in Bulat et al. (2019), we adopt a two-stage training policy where firstly the input features are binarized while learning real-valued weights, and then both inputs and weights are binarized. Note that the aggregation function $\psi$ is kept real across all steps, since its computational cost is insignificant. Furthermore, due to the discrete decision-making process early on, training can be unstable. Therefore, to stabilize training, we firstly train one expert, and then use it to initialize the training of all $N$ experts. This ensures that, early on in the process, any decision made by the gating function is a good decision. Overall, our optimization policy can be summarized as follows: 1. Train one expert, parametrized by $\theta_0$, using real weights and binary activations. 2. Replicate $\theta_0$ to all $\theta_i$, $i \in \{1, \dots, N-1\}$, to initialize the matrix $\Theta$. 3. Train the model initialized in step 2 using real weights and binary activations. 4. Train the model obtained from step 3 using binary weights and activations.
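The WTA-forward/Softmax-backward behaviour of Eqs. (3)-(4) can be written as a custom autograd function; the sketch below (ours, not the authors' code) returns a one-hot vector in the forward pass and the softmax Jacobian-vector product in the backward pass.

```python
import torch

class WTAGate(torch.autograd.Function):
    """Forward: one-hot winner-takes-all (Eq. 3). Backward: softmax gradient (Eq. 4)."""
    @staticmethod
    def forward(ctx, z):
        soft = torch.softmax(z, dim=-1)
        ctx.save_for_backward(soft)
        hard = torch.zeros_like(z)
        hard.scatter_(-1, z.argmax(dim=-1, keepdim=True), 1.0)
        return hard

    @staticmethod
    def backward(ctx, grad_out):
        (soft,) = ctx.saved_tensors
        # Softmax JVP: ds_j = s_j * (g_j - <g, s>), so all experts receive gradient
        return soft * (grad_out - (grad_out * soft).sum(dim=-1, keepdim=True))

z = torch.randn(4, requires_grad=True)   # psi(x): one score per expert (N = 4)
one_hot = WTAGate.apply(z)               # selects a single expert in the forward pass
# expert_weights = (one_hot @ Theta).reshape(C_out, C_in, kH, kW), cf. Eq. (2)
```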
This paper proposes techniques to improve the accuracy of binary networks without adding much computational overhead. To improve model capacity, the authors propose a mixture-of-experts convolution with a winner-takes-all gating mechanism. To deal with the limited representation power of binary activations, the paper proposes utilizing group convolutions. The performance is further improved by careful selection of hyperparameters and improved training techniques.
SP:a0c493b218741a8b49a12458bf78c88dc3aa596a
Neural CDEs for Long Time Series via the Log-ODE Method
1 INTRODUCTION.
Neural controlled differential equations (Neural CDEs) (Kidger et al., 2020) are the continuous-time analogue of a recurrent neural network (RNN), and provide a natural method for modelling temporal dynamics with neural networks. Neural CDEs are similar to neural ordinary differential equations (Neural ODEs), as popularised by Chen et al. (2018). A Neural ODE is determined by its initial condition, without a direct way to modify the trajectory given subsequent observations. In contrast, the vector field of a Neural CDE depends upon the time-varying data, so that the trajectory of the system is driven by a sequence of observations.
1.1 CONTROLLED DIFFERENTIAL EQUATIONS.
We begin by stating the definition of a CDE. Let $a, b \in \mathbb{R}$ with $a < b$, and let $v, w \in \mathbb{N}$. Let $\xi \in \mathbb{R}^w$. Let $X: [a, b] \to \mathbb{R}^v$ be a continuous function of bounded variation (which is, for example, implied by it being Lipschitz), and let $f: \mathbb{R}^w \to \mathbb{R}^{w \times v}$ be continuous. Then we may define $Z: [a, b] \to \mathbb{R}^w$ as the unique solution of the controlled differential equation
$$Z_a = \xi, \qquad Z_t = Z_a + \int_a^t f(Z_s)\, dX_s \quad \text{for } t \in (a, b]. \quad (1)$$
The notation "$f(Z_s)\, dX_s$" denotes a matrix-vector product, and if $X$ is differentiable then
$$\int_a^t f(Z_s)\, dX_s = \int_a^t f(Z_s)\, \frac{dX}{ds}(s)\, ds.$$
If, in equation (1), $dX_s$ were replaced with $ds$, then the equation would just be an ODE. Using $dX_s$ causes the solution to depend continuously on the evolution of $X$. We say that the solution is "driven by the control, $X$".
1.2 NEURAL CONTROLLED DIFFERENTIAL EQUATIONS.
We recall the definition of a Neural CDE as introduced in Kidger et al. (2020). Consider a time series $x$ as a collection of points $x_i \in \mathbb{R}^{v-1}$ with corresponding time-stamps $t_i \in \mathbb{R}$, such that $x = ((t_0, x_0), (t_1, x_1), \dots, (t_n, x_n))$ and $t_0 < \dots < t_n$. Let $X: [t_0, t_n] \to \mathbb{R}^v$ be some interpolation of the data such that $X_{t_i} = (t_i, x_i)$. Kidger et al. (2020) use natural cubic splines. Here we will actually end up finding piecewise linear interpolation to be a more convenient choice. (We avoid issues with adaptive solvers, as discussed in Kidger et al. (2020, Appendix A), simply by using fixed solvers.) Let $\xi_\theta: \mathbb{R}^v \to \mathbb{R}^w$ and $f_\theta: \mathbb{R}^w \to \mathbb{R}^{w \times v}$ be neural networks. Let $\ell_\theta: \mathbb{R}^w \to \mathbb{R}^q$ be linear, for some output dimension $q \in \mathbb{N}$. Here $\theta$ is used to denote dependence on learnable parameters. We define $Z$ as the hidden state and $Y$ as the output of a neural controlled differential equation driven by $X$ if $Z_{t_0} = \xi_\theta(t_0, x_0)$, with
$$Z_t = Z_{t_0} + \int_{t_0}^t f_\theta(Z_s)\, dX_s \quad \text{and} \quad Y_t = \ell_\theta(Z_t) \quad \text{for } t \in (t_0, t_n]. \quad (2)$$
That is, just like an RNN, we have an evolving hidden state $Z$, from which we take a linear map to produce an output. This formulation is a universal approximator (Kidger et al., 2020, Appendix B). The output may be either the time-evolving $Y_t$ or just the final $Y_{t_n}$. This is then fed into a loss function (L2, cross entropy, ...) and trained via stochastic gradient descent in the usual way. The question remains how to compute the integral of equation (2). Kidger et al. (2020) let
$$g_{\theta, X}(Z, s) = f_\theta(Z)\, \frac{dX}{ds}(s), \quad (3)$$
where the right hand side denotes a matrix multiplication, and then note that the integral can be written as
$$Z_t = Z_{t_0} + \int_{t_0}^t g_{\theta, X}(Z_s, s)\, ds. \quad (4)$$
This reduces the CDE to an ODE, so that existing tools for Neural ODEs may be used to evaluate it, and to backpropagate.
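As a concrete illustration of equations (1)-(4), the following NumPy sketch (ours) solves a toy CDE with an explicit Euler scheme over a piecewise linear control, for which the integral reduces to sums of increments; the vector field here is just a stand-in for the network f_theta.

```python
import numpy as np

def euler_cde(f, xi, X):
    """Z_{i+1} = Z_i + f(Z_i) (X_{i+1} - X_i): Euler solve of Eq. (1) for a
    piecewise linear control X of shape (n, v); f maps R^w -> R^(w x v)."""
    Z = [np.asarray(xi, dtype=float)]
    for i in range(len(X) - 1):
        Z.append(Z[-1] + f(Z[-1]) @ (X[i + 1] - X[i]))
    return np.stack(Z)

w, v = 2, 3
f = lambda z: np.tanh(np.outer(z, np.ones(v)))        # toy stand-in for f_theta
X = np.cumsum(0.1 * np.random.randn(100, v), axis=0)  # a piecewise linear control path
Z = euler_cde(f, np.zeros(w), X)                      # hidden-state trajectory, shape (100, 2)
```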
By moving from the discrete-time formulation of an RNN to the continuous-time formulation of a Neural CDE, every kind of time series data is put on the same footing, whether it is regularly or irregularly sampled, whether or not it has missing values, and whether or not the input sequences are of consistent length. Besides this, the continuous-time or differential equation formulation may be useful in applications where such models are explicitly desired, as when modelling physics.
1.3 CONTRIBUTIONS.
Neural CDEs, as with RNNs, begin to break down for long time series. Training loss/accuracy worsens, and training time becomes prohibitive due to the sheer number of forward operations within each training epoch. Here, we apply the log-ODE method, a numerical method from stochastic analysis and rough path theory. It is a method for converting a CDE to an ODE, which may in turn be solved via standard ODE solvers; thus it acts as a drop-in replacement for the original procedure that uses the derivative of the control path. In particular, we find that this method is especially beneficial for long time series (and incidentally does not require differentiability of the control path). With this method, both the training time and the model performance of Neural CDEs are improved, and memory requirements are reduced. The resulting scheme has two very neat interpretations. In terms of numerical differential equation solvers, it corresponds to taking integration steps larger than the discretisation of the data, whilst incorporating substep information through additional terms.¹ In terms of machine learning, it corresponds to binning the data prior to running a Neural CDE, with bin statistics carefully chosen to extract precisely the information most relevant to solving a CDE.
¹ For the reader familiar with numerical methods for SDEs, this is akin to the additional correction term in Milstein's method as compared to Euler-Maruyama.
2 THEORY.
We begin with motivating theory, though we note that this section is not essential for using the method. Readers more interested in practical applications should feel free to skip to Section 3.
2.1 SIGNATURES AND LOG-SIGNATURES.
The signature transform is a map from paths to a vector of real values, specifying a collection of statistics about the path. It is a central component of the theory of controlled differential equations, since these statistics describe how the data interacts with dynamical systems. The log-signature is then formed by representing the same information in a compressed format. We begin by providing a formal definition of the signature and a description of the log-signature. We will then give some intuition, first into the geometry of the first few terms of the (log-)signature, and then by providing a short example of how these terms appear when solving CDEs. Signature transform. Let $x = (x_1, \dots, x_n)$, where $x_i \in \mathbb{R}^v$. Let $T > 0$ and $0 = t_1 < t_2 < \dots < t_{n-1} < t_n = T$ be arbitrary. Let $X = (X^1, \dots, X^d): [0, T] \to \mathbb{R}^d$ be the unique continuous function such that $X(t_i) = x_i$ and is affine on the intervals in between (essentially just a linear interpolation of the data). Letting²
$$S^{i_1, \dots, i_k}_{a, b}(X) = \underset{0 < t_1 < \dots < t_k < T}{\int \cdots \int}\ \prod_{j=1}^{k} \frac{dX^{i_j}}{dt}(t_j)\, dt_j, \quad (5)$$
the depth-$N$ signature transform of $X$ is given by
$$\mathrm{Sig}^N_{a, b}(X) = \Big( \{S(X)^{(i)}\}_{i=1}^d,\ \{S(X)^{(i, j)}\}_{i, j=1}^d,\ \dots,\ \{S(X)^{(i_1, \dots, i_N)}\}_{i_1, \dots, i_N = 1}^d \Big). \quad (6)$$
² This is a slightly simplified definition, and the signature is often instead defined using the notation of stochastic calculus; see Definition A.2.
This definition is independent of the choice of $T$ and $t_i$ (Bonnier et al., 2019, Proposition A.7). We see that the signature is a collection of integrals, with each integral defining a real value. It is a graded sequence of statistics that characterise the input time series. In particular, Hambly & Lyons (2010) show that, under mild conditions, $\mathrm{Sig}^\infty(X)$ completely determines $X$ up to translation (provided time is included as a channel in $X$). Log-signature transform. However, the signature transform has some redundancy: a little algebra shows that, for example, $S^{1,2}_{a,b}(X) + S^{2,1}_{a,b}(X) = S^1_{a,b}(X)\, S^2_{a,b}(X)$, so that we already know $S^{2,1}_{a,b}(X)$ provided we know the other three quantities. The log-signature transform is then essentially obtained by computing the signature transform and throwing out redundant terms, to obtain some (non-unique) minimal collection. Starting from the depth-$N$ signature transform and removing some fixed set of redundancies produces the depth-$N$ log-signature transform.³ We denote this $\mathrm{LogSig}^N_{a,b}$, which is a map from Lipschitz continuous paths $[a, b] \to \mathbb{R}^v$ into $\mathbb{R}^{\beta(v, N)}$, where $\beta(v, N)$ denotes the dimension of the log-signature. The precise procedure is a little involved; both it and a formula for $\beta(v, N)$ can be found in Appendix A. Geometric intuition. In Figure 2 we provide a geometric intuition for the first two levels of the log-signature (which have particularly natural interpretations). (Log-)Signatures and CDEs. (Log-)signatures are intrinsically linked to solutions of CDEs. Let $Df$ denote the Jacobian of a function $f$. Now expand equation (1) by linearising the vector field $f$ and neglecting higher order terms:
$$Z_t \approx Z_a + \int_a^t \big( f(Z_a) + Df(Z_a)(Z_s - Z_a) \big)\, \frac{dX}{dt}(s)\, ds$$
$$= Z_a + \int_a^t \Big( f(Z_a) + Df(Z_a) \int_a^s f(Z_u)\, \frac{dX}{dt}(u)\, du \Big)\, \frac{dX}{dt}(s)\, ds$$
$$\approx Z_a + f(Z_a) \int_a^t \frac{dX}{dt}(s)\, ds + Df(Z_a) f(Z_a) \int_a^t \int_a^s \frac{dX}{dt}(u)\, du\, \frac{dX}{dt}(s)\, ds$$
$$= Z_a + f(Z_a)\, \{S(X)^{(i)}\}_{i=1}^d + Df(Z_a) f(Z_a)\, \{S(X)^{(i, j)}\}_{i, j=1}^d. \quad (7)$$
This gives a Taylor expansion of the solution; moreover, the coefficients involve the terms of the signature. Higher order Taylor expansions result in corrections using higher order signature terms. We refer the reader to Section 7.1 of Friz & Victoir (2010) for further details.
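To make levels one and two concrete, here is a small NumPy sketch (ours) computing them exactly for a piecewise linear path; the depth-2 log-signature then consists of level one together with the antisymmetric part of level two (the Levy area terms).

```python
import numpy as np

def depth2_signature(path):
    """Levels 1 and 2 of the signature of a piecewise linear path of shape (n, d).

    Level 1 is the total increment; level 2 collects S^{i,j} = int X^i dX^j.
    """
    dx = np.diff(path, axis=0)                      # per-segment increments
    level1 = dx.sum(axis=0)
    level2 = np.zeros((path.shape[1], path.shape[1]))
    running = np.zeros(path.shape[1])               # increment accumulated so far
    for step in dx:
        level2 += np.outer(running, step) + 0.5 * np.outer(step, step)
        running += step
    return level1, level2

path = np.cumsum(np.random.randn(50, 3), axis=0)
s1, s2 = depth2_signature(path)
assert np.allclose(s2 + s2.T, np.outer(s1, s1))     # the redundancy S^{i,j} + S^{j,i} = S^i S^j
levy_area = 0.5 * (s2 - s2.T)                       # depth-2 log-signature terms beyond level 1
```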
The authors describe how to apply a log-signature to temporal datasets. This operation reduces dimensionality along the time axis at the price of adding some dimensionality to the spatial dimension. They then train a neural controlled differential equation (Neural CDE) on the transformed dataset and show that their model learns more quickly and achieves better test generalization. They report results on two real-world datasets (EigenWorms and the TSR vitals dataset).
SP:b8f49fdda704b0206febd3c09d1f475047919099
Counterfactual Fairness through Data Preprocessing
1 INTRODUCTION . The rapid popularization of machine learning methods and the growing availability of personal data have enabled decision-makers in various fields such as graduate admission (Waters & Miikkulainen, 2014), hiring (Ajunwa et al., 2016), credit scoring (Thomas, 2009), and criminal justice (Brennan et al., 2009) to make data-driven decisions efficiently. However, the community and the authorities have also raised concerns that these automatically learned decisions may inherit historical bias and discrimination from the training data and could cause serious ethical problems when used in practice (Nature Editorial, 2016; Angwin & Larson, 2016; Dwoskin, 2015; Executive Office of the President et al., 2016). Consider a training dataset D consisting of sensitive attributes S such as gender and race, non-sensitive attributes A, and decisions Y. If the historical decisions Y are not fair across the sensitive groups, a powerful machine learning algorithm will capture this pattern of bias and yield learned decisions Ŷ that mimic the preference of the historical decision-maker; it is often the case that the more discriminative an algorithm is, the more discriminatory it might be. While researchers agree that methods should be developed to learn fair decisions, opinions vary on the quantitative definition of fairness. In general, researchers use either observational or counterfactual approaches to formalize the concept of fairness. The observational approaches often describe fairness with metrics of the observable data and predicted decisions (Hardt et al., 2016; Chouldechova, 2017; Yeom & Tschantz, 2018). For example, Demographic Parity (DP) or Group Fairness (Zemel et al., 2013; Khademi et al., 2019) considers the learned decision Ŷ to be fair if it has the same distribution for different sensitive groups, i.e., P(Ŷ | S = s) = P(Ŷ | S = s′). The Individual Fairness (IF) definition (Dwork et al., 2012) views fairness as treating similar individuals similarly, which means the distance between Ŷ(s_i, a_i) and Ŷ(s_j, a_j) should be small if individuals i and j are similar. The other branch of fairness and/or discrimination definitions is built upon the causal framework of Pearl (2009a), such as direct/indirect discrimination (Zhang et al., 2017; Nabi & Shpitser, 2018), path-specific effect (Wu et al., 2019b), counterfactual error rate (Zhang & Bareinboim, 2018a) and counterfactual fairness (Kusner et al., 2017; Wang et al., 2019; Wu et al., 2019a). These definitions often involve the notion of counterfactuals, i.e., what the attributes or decision would be if an individual were in a different sensitive group. With the help of the potential outcome concept, the measurement of fairness is no longer restricted to observable quantities (Kilbertus et al., 2017; Zhang & Bareinboim, 2018b). For example, the Equal Opportunity (EO) definition of Wang et al. (2019) has the same idea as IF, but it can directly compare the actual and counterfactual decisions of the same individual instead of the actual decisions of two similar individuals. The Counterfactual Fairness (CF) definition (Kusner et al., 2017) or, equivalently, the Affirmative Action (AA) definition (Wang et al., 2019) goes one step further than EO and derives the counterfactual decisions from the counterfactual non-sensitive attributes.
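As a concrete illustration of the observational definitions, the following small NumPy sketch (ours, not from the paper; all names are our own) estimates the demographic parity gap |P(Ŷ = 1 | S = 0) − P(Ŷ = 1 | S = 1)| from a set of binary predictions.

import numpy as np

def demographic_parity_gap(y_hat, s):
    """Absolute gap in positive-decision rates between two sensitive groups.

    y_hat: array of binary predicted decisions.
    s:     array of binary sensitive attributes (0/1).
    DP holds when P(Y_hat = 1 | S = 0) == P(Y_hat = 1 | S = 1)."""
    y_hat, s = np.asarray(y_hat), np.asarray(s)
    rate0 = y_hat[s == 0].mean()
    rate1 = y_hat[s == 1].mean()
    return abs(rate0 - rate1)

# Toy example: the advantaged group is approved more often.
y_hat = np.array([1, 1, 1, 0, 0, 1, 0, 0])
s     = np.array([1, 1, 1, 1, 0, 0, 0, 0])
print(demographic_parity_gap(y_hat, s))  # 0.5: approval rates 3/4 vs 1/4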
We adopt CF as our definition of fairness; it is formally described in Section 2. We believe causal reasoning is the key to fair decisions, as DeDeo (2014) pointed out that even the most successful algorithms will fail to make fair judgments if they lack causal reasoning ability. For the observational definitions, fair decisions can be learned by solving optimization problems, either adding the fairness condition as a constraint (Dwork et al., 2012) or directly optimizing the fairness metric as an objective (Zemel et al., 2013). When using the counterfactual definitions, however, an approximation of the causal model or the counterfactuals is often needed, since the counterfactuals are unobservable. In the FairLearning algorithm proposed by Kusner et al. (2017), the unobserved parts of the graphical causal model are sampled using the Markov chain Monte Carlo method. They then use only the non-descendants of S to learn the decision, which ensures CF but yields low prediction accuracy. In Wang et al. (2019), the counterfactual of A had S been s′ is imputed as the sum of the counterfactual group mean E(A | S = s′) and the residual from the original group, A − E(A | S = s). As we discuss later, this approach only works when a strong assumption on the relationship between A and S is satisfied. 1.1 CONTRIBUTIONS . We develop the Fair Learning through dAta Preprocessing (FLAP) algorithm to learn counterfactually fair decisions from biased training data. While the current literature is often vague about the assumptions needed for its algorithms to achieve fairness, we formalize the weak and strong conditions under which different data preprocessing procedures should be used to guarantee CF, and we prove the results under the causal framework of Pearl (2009a). We show that our algorithm can predict fairer decisions with similar accuracy when compared with other counterfactually fair learning algorithms, using three simulated datasets and three real-world applications, including the loan approval data from a fintech company, the adult income data, and the COMPAS recidivism data. On the other hand, the processed data also enable us to detect discrimination in the original decision. We prove that CF is equivalent to the conditional independence of the decisions and the sensitive attributes given the processed non-sensitive attributes under certain conditions. Therefore any well-established conditional independence test can be used to test CF with the processed data. To our knowledge, this is the first formal statistical test proposed for CF. We illustrate the idea using the Conditional Distance Correlation test (Wang et al., 2015) in our simulation and test the fairness of the decisions in the loan approval data using a parametric test. 2 CAUSAL MODEL AND COUNTERFACTUAL FAIRNESS . For the discussion below, we consider the sensitive attributes S ∈ S to be categorical, which is a reasonable restriction for commonly discussed sensitive information such as race and gender. The non-sensitive attributes are A ∈ A ⊆ R^d, and the decision Y is binary, e.g., admit or not in graduate admission, hire or not in the hiring process, approve or not in loan assessment. To bring the discussion of fairness into the framework of causal inference, we begin by constructing the Structural Causal Model (SCM) for the data.
As described in Pearl (2009b), an SCM M consists of a set of exogenous variables U, a set of endogenous variables V, and F, a set of functions that assign a value to each endogenous variable given its parents in V and the exogenous variables U. In our case (Figure 1), we consider V = {S, A, Y, Ŷ}, where {S, A, Y} are the observed data and Ŷ is the prediction of Y we make based on S and A. The only exogenous variable affecting Ŷ is a Uniform(0, 1) random variable U_Ŷ, so that we can conveniently express the value of Ŷ with a structural equation. We assume that U_S, U_A, and U_Y, the exogenous variables that affect S, A, and Y respectively, are independent of each other. The structural equations on the right side of Figure 1 are described by the functions in F, one for each component of V. Here we express f_Ŷ as an indicator function, so that Ŷ is a Bernoulli random variable that takes value one with probability p(S, A). In general, p(s, a) could be any function that maps S × A to [0, 1], but we are more interested in such functions that result in a fair decision; more details are discussed in Section 3. It can be seen that the subset of exogenous variables {U_S, U_A, U_Y} characterizes everything we should know about a unit. Any two units with the same realization will have the same behavior and outcome, irrespective of other differences in their identities. Here we give a simplified loan approval model as a running example to help understand the SCM we consider. Example 1. A bank asks each loan applicant for her/his race S and annual income A to decide whether to approve the application (Y = 1) or not (Y = 0). There are two races in the population of applicants: S = 1 represents the advantageous group, and S = 0 the disadvantageous one. Letting U_S ∼ Uniform(0, 1), we generate S = 1{U_S < 0.7}. The annual income is log-normally distributed for each race group, and its scale and location parameters may depend on race: A = c_1 exp{c_2 + λ_a S + c_3 σ_a^S U_A}, where U_A is a standard normal random variable, c_1, c_3 > 0 and c_2 are constants that affect the median and spread of the population income, λ_a decides the difference in mean log income between the two race groups, and σ_a > 0 determines the standard deviation ratio of the log incomes. The decision by the bank can be simulated from a logistic model: Y = 1{U_Y < expit(β_0 + β_a A + β_s S)}, where U_Y ∼ Uniform(0, 1) and expit(u) = (1 + e^{−u})^{−1}. In this example, β_s characterizes the direct effect of the sensitive attribute on the decision: when β_s > 0, applications from the advantageous group are more likely to be approved by the bank when holding income fixed. On the other hand, λ_a partly describes the indirect effect, because when both λ_a and β_a are positive, the advantageous group will have a higher income than the other group on average and thus be favored by the bank even if β_s = 0. It is worth noting that, apart from the difference in the mean, differences in higher moments can also cause unfairness indirectly, as alluded to in Fuster et al. (2018). In general, if there are any differences in the distribution of A across the categories of S, a decision based on A might be unfair. However, the indirect effect caused by differences in the higher moments of A can be case-dependent and thus harder to interpret.
In our case, σ_a > 1 will lead to a higher average income, and hence a higher approval probability on average for the advantageous group, since the income distribution is right-skewed. With the SCM in hand, we are ready to define the causal quantity we are interested in. Since most sensitive attributes, such as gender and race, cannot be altered in experiments, we look into the counterfactuals, namely, what the result Y would have been had S been different from the observed facts. This quantity is expressed as Y_s(U), the value of Y had S been s, for a random unit with exogenous variables U sampled from the population. Define M_s to be the SCM modified from M (Figure 1) with the equation for S replaced by S = s. Then for any realization U = u, the unit-level counterfactual Y_s(u) can be calculated from M_s. Similarly, we can define Ŷ_s(U) and Ŷ_s(u) as the counterfactual predicted decision and its realization. Counterfactual fairness can then be defined for both the decision and the prediction based on the counterfactual result. Here we use Y as a placeholder for either Y or Ŷ. Definition 1. Counterfactual Fairness. Given a new pair of attributes (s∗, a∗), a (predicted) decision Y is counterfactually fair if for any s′ ∈ S,

Y_{s′}(U) | {S = s∗, A = a∗}  =_d  Y_{s∗}(U) | {S = s∗, A = a∗},

where =_d denotes equality in distribution. In other words, the conditional distribution of the counterfactual result should not depend on the sensitive attributes. It should be noted that there are two stages in evaluating the conditional counterfactuals. The first is updating the conditional distribution of U. Take the decision Y from Example 1: if s∗ = 0, then U_S | {S = s∗, A = a∗} is distributed as Uniform(0.7, 1) and U_A | {S = s∗, A = a∗} is a constant, (log(a∗/c_1) − c_2)/c_3, but U_Y | {S = s∗, A = a∗} is still a Uniform(0, 1) random variable, since U_Y is independent of S and A in the SCM. The next stage is deriving the conditional distribution of the counterfactuals from the structural equations of M_s and the conditional distribution of U. Continuing with our example, Y_1(U) | {S = 0, A = a∗} would be equal in distribution to

f_Y(1, f_A(1, U_A), U_Y) | {S = 0, A = a∗}
  =_d  f_Y(1, f_A(1, (log(a∗/c_1) − c_2)/c_3), U_Y)
  =_d  1{U_Y < expit(β_0 + β_a c_1 (a∗/c_1)^{σ_a} exp{λ_a + (1 − σ_a) c_2} + β_s)},

and Y_0(U) | {S = 0, A = a∗} =_d 1{U_Y < expit(β_0 + β_a a∗)}. Thus the bank's decision Y would be counterfactually fair if σ_a = 1, λ_a = 0 and β_s = 0.
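The two-stage evaluation of conditional counterfactuals can be made concrete with a short simulation. Below is a minimal NumPy sketch of Example 1 with illustrative constants of our own choosing (not from the paper): stage one recovers the exogenous variables consistent with the observed (s∗, a∗), and stage two pushes them through the modified SCM M_s to sample Y_s(U) | {S = s∗, A = a∗}.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative constants (our own, not from the paper).
c1, c2, c3 = 1.0, 10.0, 0.5
lam_a, sigma_a = 0.3, 1.2
beta0, beta_a, beta_s = -12.0, 0.0004, 1.0

def expit(u):
    return 1.0 / (1.0 + np.exp(-u))

def f_A(s, u_a):
    # Structural equation for income: A = c1 exp{c2 + lam_a*S + c3*sigma_a^S*U_A}.
    return c1 * np.exp(c2 + lam_a * s + c3 * sigma_a**s * u_a)

def f_Y(s, a, u_y):
    # Structural equation for the bank's decision.
    return (u_y < expit(beta0 + beta_a * a + beta_s * s)).astype(int)

def counterfactual_Y(s_obs, a_obs, s_cf, n=100000):
    """Sample Y_{s_cf}(U) | {S = s_obs, A = a_obs}.

    Stage 1 (abduction): U_A is pinned down by the observed (s_obs, a_obs);
    U_Y stays Uniform(0,1), since it is independent of (S, A).
    Stage 2 (prediction): push U through the modified SCM with S := s_cf."""
    u_a = (np.log(a_obs / c1) - c2 - lam_a * s_obs) / (c3 * sigma_a**s_obs)
    u_y = rng.uniform(size=n)
    return f_Y(s_cf, f_A(s_cf, u_a), u_y)

a_star = 30000.0
p0 = counterfactual_Y(0, a_star, 0).mean()  # factual approval probability
p1 = counterfactual_Y(0, a_star, 1).mean()  # counterfactual, had S been 1
print(p0, p1)  # unequal unless sigma_a = 1, lam_a = 0 and beta_s = 0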
The paper addresses the problem of preprocessing the data in such a way that the predictions of a learning task will be counterfactually fair. The counterfactual fairness definition is borrowed from that of (Kusner et al., 2017). The authors propose orthogonalization and marginal distribution mapping so as to achieve counterfactual fairness. They test their proposed approach on synthetic and real data.
SP:e3e7028a84d8a272b7714e91bc08e67af40152c1
ZCal: Machine learning methods for calibrating radio interferometric data
1 INTRODUCTION . Modern-day astronomy is at an unprecedented stage, with a deluge of data from different telescopes. In contrast to conventional methods, today astronomical discoveries are data-driven. The upcoming Square Kilometer Array (SKA) is expected to produce terabytes of data every hour (The SKA telescope). With this exponential growth of data, the challenges of data calibration, reduction, and analysis also increase (Aniyan & Thorat, 2017), making it difficult for astronomers to manually process and analyse the data (Yatawatta, 2020). Therefore, intelligent and automated systems are required to overcome these challenges. One of the main issues in radio astronomy is determining the quality of observational data. Astronomical signals are very weak by the time they reach the Earth's surface. They are easily corrupted by atmospheric interference, incorrect observational parameters (e.g., telescope locations or telescope pointing parameters), malfunctioning signal receivers, interference from terrestrial man-made radio sources, and tracking inaccuracies (Taylor et al., 1999). It is therefore necessary to apply proper corrections to the observational data before processing it. Radio astronomers spend a considerable amount of time performing a series of preprocessing steps called calibration, which involves the determination of a set of parameters to correct the received data. These generally include instrumental as well as astronomical parameters. The general strategy for these corrections makes use of a calibrator source. Calibrator sources are well suited for determining astronomical parameters for data corrections because they have known characteristics such as brightness, shape, and frequency spectrum (Taylor et al., 1999). This process of calibration is iterative and time-consuming. During scientific observations, different external parameters such as atmospheric pressure, temperature, wind conditions, and relative humidity are collected through thousands of sensors attached to the telescopes and their adjoining instrumentation. The data coming from these sensors may provide information about the external conditions that may have corrupted the observed data. This information is not always included in the conventional calibration steps. We propose to use machine learning methods to predict the calibration solutions from pointing and environmental sensor data. This is mainly motivated by the fact that calibration corrects data that have been corrupted by environmental parameters. In this study, we make use of data from the Karoo Array Telescope (KAT-7), an array consisting of seven telescopes, which is a precursor to the MeerKAT radio telescope (The SKA telescope). We look at eight types of sensor data recorded during observations with the calibrator source PKS1613−586 to generate the training and testing dataset. The overall generated dataset contains sensor data per telescope and calibration solutions for the signal received by each telescope in horizontal polarization (H-pol) and vertical polarization (V-pol). These calibration solutions are calculated using the Common Astronomy Software Applications (CASA) package. 2 CALIBRATION . In radio astronomy, one might think that after obtaining the observed visibilities, the next step would be to directly retrieve the actual visibilities of the target source and perform imaging.
However, the measured visibilities V^obs are different from the actual visibilities V^true, due to instrumental and environmental effects (Richard Thompson et al., 2017). Examples of these effects on the signal measured by a radio interferometer include antenna gains (slowly and fast time-varying instrumental parts), atmospheric effects, pointing errors (tracking inaccuracies) and incorrect observation parameters (antenna pointing parameters). Signal effects are classified into two types: direction-independent effects (affecting the signal from all directions equally) and direction-dependent effects (which vary based on the sky position of the signal) (Taylor et al., 1999). These effects can be corrected by estimating the errors associated with the measured visibilities, thereby recovering the true visibilities. This process is called calibration. In its simplest form, calibration minimizes the error between observed and predicted (model) visibilities by estimating the correct complex instrumental gain response (Grobler et al., 2016). Suppose for baseline pair (i, j), the observed visibility is V^obs_{i,j}(t) and the true visibility is V^true_{i,j}(t) at observation time t. The basic calibration formula is written as

V^obs_{i,j}(t) = G_{i,j}(t) V^true_{i,j}(t) + ε_{i,j}(t),   (1)

where G_{i,j}(t) denotes the complex antenna gain for baseline (i, j) resulting from unwanted effects, which may vary with time (Thompson et al., 2001). The extra term ε_{i,j}(t) is stochastic complex noise (Taylor et al., 1999). Most of the corruptions in the data occur before the signal is correlated, and the response associated with antenna i does not depend on the response of antenna j. Note that the sources that are the subject of astronomical investigation will be referred to as "target sources" to distinguish them from calibrator sources (Thompson et al., 2001). 3 KAT-7 TELESCOPE . The KAT-7 is a seven-dish interferometer that was built as an engineering prototype for techniques and technologies in preparation for the 64-dish Karoo Array Telescope (MeerKAT) (Foley et al., 2016). These instruments are located in the Northern Cape Karoo desert region and are operated remotely from Cape Town. The construction of KAT-7 began in 2008 with the writing of the telescope requirements specification and was completed in 2010. It was then operated in engineering (commissioning) mode until its shutdown in 2016 (Foley et al., 2016). 3.1 SENSOR DATA . During science observations, different external parameters like atmospheric pressure, temperature, wind conditions, and relative humidity are also collected through thousands of sensors attached to the telescopes and their adjoining instrumentation. As noted in the introduction, this sensor information may reflect the external conditions that corrupted the observed data, yet it is not normally included in the conventional calibration steps; this motivates our use of machine learning methods to predict the calibration solutions from pointing and environmental sensor data. In this study, we make use of the data from the Karoo Array Telescope (KAT-7).
We look at the pointing azimuth, elevation, scan, offset, temperature, wind speed, air pressure, and relative humidity sensor data recorded during observations with the calibrator source PKS1613−586 to generate the training and testing dataset. The overall generated dataset contains sensor data per telescope and calibration solutions for correcting the signal received by each telescope in horizontal polarization (H-pol) and vertical polarization (V-pol). These calibration solutions are calculated using CASA, a traditional astronomy software package used for data calibration and imaging in radio astronomy. 3.2 PREPARATION OF TRAINING DATA . The objective of this study is to find correlations between calibration solutions and sensor information on the telescope. Therefore, the main dataset for the study is the time-based sensor information of each antenna. The process of data collection encompasses all of the steps required to obtain the desired data in digital format. Methods of data collection include acquiring and archiving new observations, querying existing databases according to the science problem at hand, and performing as necessary any cross-matching or data combining (Ball & Brunner, 2010). In every observation, the collected data are stored by the data capturing system in the Hierarchical Data Format (HDF5), a set of file formats designed to store and organize large amounts of data. The HDF5 file consists of two parts: metadata and observed visibilities. In the metadata one finds static information about the data set (including observer, dump rate, and all the available subarrays and spectral windows), selection criteria (antennas, channel frequencies, targets, scan information), and the sensor data of interest as a function of time. The data observed by the radio telescope are in the form of complex numbers referred to as visibilities. Each observed source has its own visibilities as a function of time along with sensor data, which keep a record of the telescope's activity and behaviour during the observation. In preparing the training and testing dataset, we look at the environmental and instrumental sensors recorded during observations with the flux calibrator and phase calibrator source PKS1613−586 (Figure 1). The chosen sensors of interest from each observation are: air temperature, wind speed, wind direction, air pressure, relative humidity, actual refraction elevation, actual refraction azimuth, actual scan elevation, actual scan azimuth, actual pointing elevation, and actual pointing azimuth. 4 PROPOSED METHOD . Different calibration techniques have been developed alongside the growing capabilities of modern radio astronomy instruments, providing precise calibration performance for the challenges these new instruments raise. These techniques are loosely classified into first generation calibration (1GC), second generation calibration (2GC) and third generation calibration (3GC) (Noordam & Smirnov, 2010). In this study, we concentrate on generating 1GC calibration solutions with the help of machine learning techniques. Our aim is to provide a machine learning model that predicts calibration solutions from the telescope's sensor data. This approach would help speed up the calibration process and reduce the time spent monitoring the calibrator, thus increasing the time available for tracking the observed target source, as shown in Figure 2.
Several different approaches are employed in machine learning regression. These approaches learn the relationship between the input and output by fitting a model directly to the data. In this study, we consider tree-based approaches (decision tree, random forest, extremely randomized trees) and a neighborhood search approach (K-nearest neighbors) to tackle our problem. We call our approach the ZCal model; it performs multi-output regression. We formulate our regression estimation problem as follows. Suppose we have a feature matrix of sensor data

X_t = ( x_{11} x_{12} x_{13} ... x_{1n}
        x_{21} x_{22} x_{23} ... x_{2n}
         ...
        x_{d1} x_{d2} x_{d3} ... x_{dn} ) = (x_{i,j}) ∈ R^{d×n}, i ∈ {1, 2, ..., d}, j ∈ {1, 2, ..., n},

and corresponding complex target variables to learn and predict,

Y_t = ( y_{11} y_{12} y_{13} ... y_{1m}
        y_{21} y_{22} y_{23} ... y_{2m}
         ...
        y_{d1} y_{d2} y_{d3} ... y_{dm} ) = (y_{k,l}) ∈ C^{d×m}, k ∈ {1, 2, ..., d}, l ∈ {1, 2, ..., m},

where each column is a vector of length d containing the calibration solutions as a function of time t per observation, represented as a complex variable A e^{iφ} = A(cos φ + i sin φ) for each polarization H & V (Thompson et al., 2017). Because the amplitude and phase corruptions of the received signal have different physical causes, we choose to treat the antenna amplitudes and phases separately, splitting the complex variable into gain amplitude solutions |A e^{iφ}| and gain phase solutions φ. We construct a learning machine M : X_t → Y_t which, when given a validation set of sensor examples X∗_t, minimises some measure of discrepancy between its prediction M(X∗_t) ≈ Ŷ_t and the value of Y_t, where M represents the predictor. We measure the discrepancy using four statistical measures commonly used in regression (Borchani et al., 2015): coefficient of determination, explained variance, root mean squared error (RMSE) and root mean absolute error (RMAE). The aim of this regression exercise is to predict multiple target variables Ŷ_t, hence it is referred to as multi-output regression. The learned model is then used to predict the multi-output values Ŷ_{t+1} of all target variables for new incoming unlabelled instances X_{t+1}. It has been shown that multi-output regression methods model multi-output datasets effectively and produce better predictive results (Borchani et al., 2015). This approach not only considers the underlying relationships between the features and the corresponding targets but also the relationships among the targets themselves, thereby producing simpler models with better computational efficiency (Borchani et al., 2015). Borchani et al. (2015) discuss several applications of multi-output regression, including challenges such as missing data, i.e., when some features or target variables are not observed.
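As a sketch of how such a multi-output regressor can be set up, here is a small scikit-learn example (our own illustration, not the authors' code; the random arrays stand in for the real sensor matrix X_t and complex gain solutions Y_t described above).

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, explained_variance_score, mean_squared_error

# Placeholder data standing in for the real sensor/solution arrays:
# X: d time samples x n sensor features; G: d complex gain solutions x m antennas.
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 11))                                   # 11 sensors as listed in Sec. 3.2
G = rng.normal(size=(1000, 7)) + 1j * rng.normal(size=(1000, 7))  # 7 antennas

# Split the complex gains into amplitude |A e^{i phi}| and phase phi targets.
Y_amp, Y_phase = np.abs(G), np.angle(G)

X_tr, X_te, ya_tr, ya_te = train_test_split(X, Y_amp, random_state=0)

# Random forests natively support multi-output regression in scikit-learn,
# so one model jointly predicts the solutions for all antennas.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_tr, ya_tr)
ya_pred = model.predict(X_te)

print("R^2:", r2_score(ya_te, ya_pred))
print("explained variance:", explained_variance_score(ya_te, ya_pred))
print("RMSE:", np.sqrt(mean_squared_error(ya_te, ya_pred)))
# The same procedure is repeated for the phase targets Y_phase.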
The paper presents a study of using machine learning methods to calibrate a radio telescope using information from sensor data on, e.g., atmospheric conditions. The authors consider tree- and neighbourhood-based methods for predicting amplitudes and phases for seven antennas. The results show that the methods perform quite well in terms of RMSE and explained variance.
SP:4f59251101a0aad11518673e5571dceb4fcff65e
Hierarchical Reinforcement Learning by Discovering Intrinsic Options
1 INTRODUCTION . Imagine a wheeled robot learning to kick a soccer ball into a goal with sparse reward supervision. In order to succeed, it must discover how to first navigate in its environment, then touch the ball, and finally kick it into the goal, only receiving a positive reward at the end for completing the task. This is a naturally difficult problem for traditional reinforcement learning (RL) to solve, unless the task has been manually decomposed into temporally extended stages where each stage constitutes a much easier subtask. In this paper we ask: how do we learn to decompose the task automatically and utilize the decomposition to solve sparse reward problems? Deep RL has made great strides solving a variety of tasks recently, with hierarchical RL (hRL) demonstrating promise in solving such sparse reward tasks (Sharma et al., 2019b; Le et al., 2018; Merel et al., 2019; Ranchod et al., 2015). In hRL, the task is decomposed into a hierarchy of subtasks, where policies at the top of the hierarchy call upon policies below to perform actions to solve their respective subtasks. This abstracts away actions for the policies at the top levels of the hierarchy. hRL makes exploration easier by potentially reducing the number of steps the agent needs to take to explore its state space. Moreover, at higher levels of the hierarchy, temporal abstraction results in more aggressive, multi-step value bootstrapping when temporal-difference (TD) learning is employed. These benefits are critical in sparse reward tasks, as they allow an agent to more easily discover reward signals and assign credit. Many existing hRL methods make assumptions about the task structure (e.g., fetching an object involves three stages: moving towards the object, picking it up, and coming back), and/or the skills needed to solve the task (e.g., pre-programmed motor skills) (Florensa et al., 2016; Riedmiller et al., 2018; Lee et al., 2019; Hausman et al., 2018; Lee et al., 2020; Sohn et al., 2018; Ghavamzadeh & Mahadevan, 2003; Nachum et al., 2018). Thus these methods may require manually designing the correct task decomposition, explicitly formulating the option space, or programming pre-defined options for higher-level policies to compose. Instead, we seek to formulate a general method that can learn these abstractions from scratch, for any task, with little manual design in the task domain. The main contribution of this paper is HIDIO (HIerarchical RL by Discovering Intrinsic Options), a hierarchical method that discovers task-agnostic intrinsic options in a self-supervised manner while learning to schedule them to accomplish environment tasks. The latent option representation is uncovered as the option-conditioned policy is trained, both according to the same self-supervised worker objective. The scheduling of options is simultaneously learned by maximizing the environment reward collected by the option-conditioned policy. HIDIO can be easily applied to new sparse-reward tasks by simply re-discovering options. We propose and empirically evaluate various instantiations of the option discovery process, comparing the resulting options with respect to their final task performance.
We demonstrate that HIDIO is able to efficiently learn and discover diverse options that can be utilized for higher task reward, with superior sample efficiency compared to other hierarchical methods. 2 PRELIMINARIES . We consider the reinforcement learning (RL) problem in a Markov Decision Process (MDP). Let s ∈ R^S be the agent state. We use the terms "state" and "observation" interchangeably to denote the environment input to the agent. A state can be fully or partially observed. Without loss of generality, we assume a continuous action space a ∈ R^A for the agent. Let π_θ(a|s) be the policy distribution with learnable parameters θ, and P(s_{t+1}|s_t, a_t) the transition probability that measures how likely the environment transitions to s_{t+1} given that the agent samples an action a_t ∼ π_θ(·|s_t). After the transition to s_{t+1}, the agent receives a deterministic scalar reward r(s_t, a_t, s_{t+1}). The objective of RL is to maximize the sum of discounted rewards with respect to θ:

E_{π_θ, P} [ Σ_{t=0}^{∞} γ^t r(s_t, a_t, s_{t+1}) ],   (1)

where γ ∈ [0, 1] is a discount factor. We will omit P in the expectation for notational simplicity. In the options framework (Sutton et al., 1999), the agent can switch between different options during an episode, where an option is translated into a sequence of actions by an option-conditioned policy with a termination condition. A set of options defined over an MDP induces a hierarchy that models temporal abstraction. For a typical two-level hierarchy, a higher-level policy produces options, and the policy at the lower level outputs environment actions conditioned on the proposed options. The expectation in Eq. 1 is then taken over the policies at both levels. 3 HIERARCHICAL RL BY DISCOVERING INTRINSIC OPTIONS . We now introduce our hierarchical method for solving sparse reward tasks. We assume little prior knowledge about the task structure, except that it can be learned through a hierarchy of two levels. The higher-level policy (the scheduler π_θ) is trained to maximize environment reward, while the lower-level policy (the worker π_φ) is trained in a self-supervised manner to efficiently discover options that are utilized by π_θ to accomplish tasks. Importantly, through self-supervision the worker gets access to dense intrinsic rewards regardless of the sparsity of the extrinsic rewards. Without loss of generality, we assume that each episode has a length of T and the scheduler outputs an option every K steps. The scheduled option u ∈ [−1, 1]^D (where D is a pre-defined dimensionality) is a latent representation that will be learned from scratch given the environment task. Modulated by u, the worker executes K steps before the scheduler outputs the next option. Let the time horizon of the scheduler be H = ⌈T/K⌉. Formally, we define

Scheduler policy: u_h ∼ π_θ(·|s_{h,0}), 0 ≤ h < H
Worker policy: a_{h,k} ∼ π_φ(·|s_{h,k}, u_h), 0 ≤ k < K
Environment dynamics: s_{h,k+1} ∼ P(·|s_{h,k}, a_{h,k}), 0 ≤ h < H, 0 ≤ k < K   (2)

where we denote by s_{h,k} and a_{h,k} the k-th state and action, respectively, within the h-th option window of length K. Note that given this sampling process, we have s_{h,K} ≡ s_{h+1,0}, namely, the last state of the current option u_h is the initial state of the next option u_{h+1}. The overall framework of our method is illustrated in Figure 1. 3.1 LEARNING THE SCHEDULER .
Every time the scheduler issues an option u_h, it receives a reward R_h computed by accumulating environment rewards over the next K steps. Its objective is:

max_θ E_{π_θ} [ Σ_{h=0}^{H−1} β^h R_h ],  where β = γ^K and R_h = E_{π_φ} [ Σ_{k=0}^{K−1} γ^k r(s_{h,k}, a_{h,k}, s_{h,k+1}) ].   (3)

This scheduler objective itself is not a new concept, as similar ones have been adopted by other hRL methods (Vezhnevets et al., 2017; Nachum et al., 2018; Riedmiller et al., 2018). One significant difference between our option and those of prior work is that our option u is simply a latent variable; there is no explicit constraint on what semantics u could represent. In contrast, existing methods usually require their options to reside in a subspace of the state space, to be grounded in the environment, or to have known structures, so that the scheduler can compute rewards and termination conditions for the worker. Note that our latent options can be easily re-trained given a new task. 3.2 LEARNING THE WORKER . The main focus of this paper is to investigate how to effectively learn the worker policy in a self-supervised manner. Our motivation is that it might be unnecessary to make an option dictate that the worker reach goals in some prescribed goal space (Vezhnevets et al., 2017; Nachum et al., 2018). As long as the option can be translated into a short sequence of primitive actions, it does not need to be grounded with concrete meanings such as goal reaching. Below we treat the option as a latent variable that modulates the worker, and propose to learn its latent representation in a hierarchical setting from the environment task. 3.2.1 WORKER OBJECTIVE . We first define a new meta MDP on top of the original task MDP so that for any h and k: 1) s̄_{h,k} := (s_{h,0}, ..., s_{h,k}), 2) ā_{h,k} := (a_{h,0}, ..., a_{h,k}), 3) r̄(s̄_{h,k}, ā_{h,k}, s̄_{h,k+1}) := r(s_{h,k}, a_{h,k}, s_{h,k+1}), and 4) P(s̄_{h,k+1}|s̄_{h,k}, ā_{h,k}) := P(s_{h,k+1}|s_{h,k}, a_{h,k}). This new MDP equips the worker with the historical state and action information accumulated since the time (h, 0) when the option u_h was scheduled. Specifically, each state s̄_{h,k} or action ā_{h,k} encodes the history from the beginning (h, 0) up to (h, k) within the option. In the following, we will call the pairs {ā_{h,k}, s̄_{h,k+1}} option sub-trajectories. The worker policy now takes option sub-trajectories as inputs: a_{h,k} ∼ π_φ(·|s̄_{h,k}, ā_{h,k−1}, u_h), 0 ≤ k < K, whereas the scheduler policy still operates in the original MDP. Denote Σ_{h,k} ≡ Σ_{h=0}^{H−1} Σ_{k=0}^{K−1} for simplicity. The worker objective, defined on this new MDP, is to minimize the entropy of the option u_h conditioned on the option sub-trajectory {ā_{h,k}, s̄_{h,k+1}}:

max_φ E_{π_θ, π_φ} Σ_{h,k} [ log p(u_h|ā_{h,k}, s̄_{h,k+1})  −  β log π_φ(a_{h,k}|s̄_{h,k}, ā_{h,k−1}, u_h) ],   (4)

where the first term is the negative conditional option entropy, the second term is the worker policy entropy, and the expectation is over the current π_θ and π_φ but the maximization is only with respect to φ. Intuitively, the first term suggests that the worker is optimized to confidently identify an option given a sub-trajectory. However, it alone will not guarantee the diversity of options, because potentially even very similar sub-trajectories can be classified into different options if the classification model has a high capacity, in which case we say that the resulting sub-trajectory space has a very high "resolution".
As a result, the conditional entropy alone might not generate useful options to be exploited by the scheduler for task solving, because the coverage of the sub-trajectory space is poor. To combat this degenerate solution, we add a second term which maximizes the entropy of the worker policy. Intuitively, while the worker generates identifiable sub-trajectories corresponding to a given option, it should act as randomly as possible to separate the sub-trajectories of different options, lowering the "resolution" of the sub-trajectory space to encourage its coverage. Because directly estimating the posterior p(u_h|ā_{h,k}, s̄_{h,k+1}) is intractable, we approximate it with a parameterized posterior q_ψ(u_h|ā_{h,k}, s̄_{h,k+1}) to obtain a lower bound (Barber & Agakov, 2003), where q_ψ is a discriminator to be learned. Then we can maximize this lower bound instead:

max_{φ,ψ} E_{π_θ, π_φ} Σ_{h,k} [ log q_ψ(u_h|ā_{h,k}, s̄_{h,k+1}) − β log π_φ(a_{h,k}|s̄_{h,k}, ā_{h,k−1}, u_h) ].   (5)

The discriminator q_ψ is trained by maximizing the likelihood of options given sampled sub-trajectories. The worker π_φ is trained via max-entropy RL (Soft Actor-Critic (SAC) (Haarnoja et al., 2018)) with the intrinsic reward r^lo_{h,k+1} := log q_ψ(·) − β log π_φ(·). β is fixed to 0.01 in our experiments. Note that there are at least four differences between Eq. 5 and the common option discovery objective in either VIC (Gregor et al., 2016) or DIAYN (Eysenbach et al., 2019): 1. Both VIC and DIAYN assume that a sampled option will last through an entire episode, and the option is always sampled at the beginning of an episode. Thus their option trajectories "radiate" from the initial state set. In contrast, our worker policy learns options that initialize every K steps within an episode, and they can have more diverse semantics depending on the various states s_{h,0} visited by the agent. This is especially helpful for tasks where new options need to be discovered after the agent reaches unseen areas in later stages of training. 2. Actions taken by the worker policy under the current option have consequences for the next option. This is because the final state s_{h,K} of the current option is defined to be the initial state s_{h+1,0} of the next option. So in general, the worker policy is trained not only to discover diverse options across the current K steps, but also to make the discovery easier in future steps. In other words, the worker policy needs to solve the credit assignment problem across options, under the expectation of the scheduler policy. 3. To enable the worker policy to learn from a discriminator that predicts based on option sub-trajectories {ā_{h,k}, s̄_{h,k+1}} instead of solely on individual states s_{h,k}, we have constructed a new meta MDP where each state s̄_{h,k} encodes the history from the beginning (h, 0) up to (h, k) within an option h. This new meta MDP is critical, because otherwise one simply cannot learn a worker policy from a reward function that is defined over multiple time steps (sub-trajectories), since the learning problem would no longer be Markovian. 4. Lastly, thanks to the new MDP, we are able to explore various possible instantiations of the discriminator (see Section 3.3). As observed in the experiments, individual states are actually not the optimal features for identifying options. These differences constitute the major novelty of our worker objective.
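To illustrate how the intrinsic reward r^lo in Eq. (5) can be computed, here is a minimal PyTorch sketch (our own simplification, not the authors' implementation). It instantiates q_ψ as a unit-variance Gaussian whose mean is an MLP of encoded sub-trajectory features; this is only one possible instantiation of the discriminator, and the feature encoding of (ā, s̄) is assumed to be given.

import torch
import torch.nn as nn

D, FEAT = 8, 16  # option dimensionality and sub-trajectory feature dimension

# Discriminator q_psi(u | sub-trajectory): a unit-variance Gaussian whose
# mean is an MLP of the sub-trajectory features (one possible instantiation).
disc = nn.Sequential(nn.Linear(FEAT, 64), nn.ReLU(), nn.Linear(64, D))

def log_q(u, traj_feat):
    mu = disc(traj_feat)
    return -0.5 * ((u - mu) ** 2).sum(-1)  # log-density up to an additive constant

def intrinsic_reward(u, traj_feat, logp_action, beta=0.01):
    # r^lo_{h,k+1} = log q_psi(u_h | sub-trajectory) - beta * log pi_phi(a | ., u_h)
    return (log_q(u, traj_feat) - beta * logp_action).detach()

# Toy usage with random tensors standing in for a sampled batch.
u = torch.rand(32, D) * 2 - 1      # options in [-1, 1]^D
traj_feat = torch.randn(32, FEAT)  # encoded (a_bar, s_bar) sub-trajectories
logp_action = torch.randn(32)      # log pi_phi(a_{h,k} | ...) from the worker
r_lo = intrinsic_reward(u, traj_feat, logp_action)
print(r_lo.shape)                  # torch.Size([32])

# The discriminator itself is trained by maximizing log q on sampled pairs:
loss = -log_q(u, traj_feat).mean()
loss.backward()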
The paper develops a hierarchical reinforcement learning algorithm and analyzes its behaviour in four robotic manipulation and navigation tasks. The approach is based on a two-level hierarchy, *scheduler* at the top and *worker* at the bottom. This is similar to other approaches in the literature and the algorithm uses many ideas and elements from existing algorithms. However, these ideas and elements are combined in a novel and well-justified manner. The result is an algorithm that yields good results in a range of problems. The experiments are well done. The paper is generally organised well and written clearly. Relevant literature is reviewed well.
SP:62750e67412021ffe9ef18e104833255aa6ed606
Towards Practical Second Order Optimization for Deep Learning
1 Introduction . Second order methods are among the most powerful algorithms in mathematical optimization. Algorithms in this family often use a preconditioning matrix to transform the gradient before applying each step. Classically, the preconditioner is the matrix of second-order derivatives (i.e., the Hessian) in the context of exact deterministic optimization (e.g., Fletcher, 2013; Lewis & Overton, 2013; Nocedal, 1980). While second-order methods often have significantly better convergence properties than first-order methods, the size of typical problems prohibits their use in practice, as they require quadratic storage and cubic computation time for each gradient update. Approximate algorithms such as quasi-Newton methods are aimed at significantly reducing these requirements; nonetheless, they still impose non-trivial memory costs equivalent to storing several copies of the model (and often quadratic computation, as in the popular two-loop recursion (Nocedal, 1980)), which severely limits their use at the immense scale of present-day deep learning. Arguably, one of the greatest challenges of modern optimization is to bridge this gap between theoretical and practical optimization and make second-order methods feasible to implement and deploy at immense scale. Besides the compelling scientific and mathematical developments it may stimulate, this challenge has a clear real-world significance: recent practice of training deep learning models suggests that the utility of common first-order methods is quickly reaching a plateau, in large part because their time-per-step is already negligible (compared to other parts of the computation) and cannot be optimized further; thus, the only way to obtain faster training performance is by drastically reducing the number of update steps. To this end, utilizing second-order methods seems a very natural and promising approach. In this paper we attempt to narrow the gap between theory and practice of second-order methods, focusing on second-order adaptive methods for stochastic optimization. These methods can be thought of as full-matrix analogues of common adaptive algorithms such as AdaGrad (Duchi et al., 2011; McMahan & Streeter, 2010) and Adam (Kingma & Ba, 2014): they precondition each gradient with a second moment matrix, akin to a covariance matrix, that accumulates the outer products of the stochastic gradients. Full-matrix versions are potentially more powerful than first-order methods, as they can exploit statistical correlations between (gradients of) different parameters; geometrically, they can scale and rotate gradients, whereas first-order methods only scale gradients. However, they suffer from similarly prohibitive runtime and memory costs as Hessian-based methods. Recent developments in the space of second-order methods, on which we focus in this paper, include the K-FAC (Heskes, 2000; Martens & Grosse, 2015) and Shampoo (Gupta et al., 2018) algorithms, which exploit the structure of deep networks (and more generally, models described by a collection of tensors) to mitigate the space and runtime costs of full-matrix second-order algorithms. However, in very large applications, such algorithms are still impractical due to a number of numerical and infrastructural pitfalls, and they are difficult to parallelize. Contributions .
We provide solutions to practical concerns and challenges that arise in implementing and using second-order methods at large scale. Our focus will be on the Shampoo algorithm, but most of the challenges we address are relevant to the implementation of many other second-order methods. These include: • We design and implement a pipelined version of the optimization algorithm, critically exploiting the heterogeneity and computing power of CPU-accelerator coupled architectures; • We extend Shampoo in a number of ways so as to make it applicable to a larger range of deep architectures; in particular, the extensions allow Shampoo to be used for training very large layers such as the embedding layers ubiquitous in language and translation models; • We replace the expensive spectral decompositions (e.g., SVD) used for manipulating preconditioners with an efficient and numerically-stable iterative method for computing roots of PSD matrices; • We describe practical challenges and limitations we faced in our design, which we argue could be useful for the design considerations of next-generation accelerator hardware architectures. Our distributed implementation demonstrates significant improvements in performance, both in terms of number of steps, and often in actual wall-clock time, on some extremely large deep learning tasks: • Machine translation: we train Transformer models (Vaswani et al., 2017) on the WMT'14 English to French translation task (Bojar et al., 2014) in half as many steps compared to the state-of-the-art (well-tuned Adam), resulting in up to a 45% reduction in wall-time. • Language modeling: we trained BERT (Devlin et al., 2018) in 16% fewer steps and achieved higher masked-LM accuracy compared to the state-of-the-art optimizer (You et al., 2019) at 32K batch size; overall wall-time decreased by 4%, from 3.8 to 3.65 hours. (For this task, our system has not yet been tuned for performance; we discuss several possible optimizations below.) • Click-Through Rate (CTR) prediction: we trained the DLRM model (Naumov et al., 2019) on the terabyte Criteo dataset (Criteo Labs, 2015) at 64K batch size in half as many steps as the current state-of-the-art optimizer, with a wall-time reduction of 37.5%. We achieve a new state-of-the-art performance of 80.56% AUC (≈0.3% improvement) on this task. (An improvement of 0.1% is considered significant; see Rong et al., 2020; Wang et al., 2017.) • Image classification: we achieve the MLPerf target accuracy of 75.9% (Mattson et al., 2019) at 32K batch size on the standard ResNet-50 ImageNet benchmark in 10% fewer steps than the previous state-of-the-art. Here we do not see wall-time gains, mainly because the problem is too small (only a few thousand steps to convergence, which does not allow for amortization of costs). However, we expect that one would be able to better exploit parallelism via improved software and hardware support. We note that one of our main points in this work is to demonstrate wall-time speedups with second-order methods implemented on a real-world distributed setup used to train state-of-the-art deep models. In our view, this is important for influencing future hardware accelerator design and runtime software.
Indeed, first-order methods have received huge investments in tuning, implementation, platform support and tailored accelerator hardware over the last decade; we believe there are numerous opportunities to improve the per-step time performance of preconditioned methods as well. For example, our results provide a concrete justification for incorporating 64-bit accumulation units in hardware for distributed training, adding larger on-chip memory, better model parallelism, and tighter coupling between accelerators and CPUs, which would make second-order methods feasible across more domains and models. Related work . Classic techniques for addressing the high storage and computation costs of second-order methods mostly belong to the quasi-Newton or trust-region families of algorithms (Conn et al., 2000; Nocedal & Wright, 2006). Traditionally, these methods need nearly-accurate gradients in order to construct useful quadratic approximations and implement reliable line searches, rendering them suitable only for training with very large batch sizes, and resulting in expensive iterations that make the overall algorithm slow compared with stochastic first-order methods (see, e.g., Bollapragada et al., 2018 for a recent account). Hence, our focus in this paper is on adaptive second-order methods, which are directly applicable in a stochastic setting. That said, our effort could be relevant to quasi-Newton and trust-region methods as well: e.g., each iteration of typical trust-region methods amounts to solving a certain generalized eigenvalue problem, which presents numerical difficulties of a similar nature to those encountered in the matrix root/inverse computations addressed here. Various approximations to the preconditioning matrix have been proposed in the recent literature (e.g., Gonen & Shalev-Shwartz, 2015; Erdogdu & Montanari, 2015; Agarwal et al., 2016; Xu et al., 2016; Pilanci & Wainwright, 2017). However, so far the only prevalent and pragmatic approximation is the diagonal approximation. Some recent approaches for approximating a full-matrix preconditioner are K-FAC (Martens & Grosse, 2015), Shampoo (Gupta et al., 2018) and GGT (Agarwal et al., 2018). K-FAC uses a factored approximation of the Fisher-information matrix as a preconditioner. While our focus in this paper is on Shampoo, we believe that many of the techniques presented here could also be applied to make K-FAC practical at large scale (see Appendix C). GGT uses a clever trick to compute a low-rank approximation to the AdaGrad preconditioner. However, GGT maintains several hundred copies of the gradient in memory, which is too expensive even for mid-sized models. Ba et al. (2017) took a first important step in experimenting with distributed K-FAC for training deep models, using a single machine with 8 GPUs to simulate a distributed environment for training. In contrast, a main thrust of our work is to demonstrate wall-time speedups with second-order methods on a real-world distributed setup used for training state-of-the-art deep models, which calls for design considerations crucially different from those in (Ba et al., 2017). More recently, Osawa et al. (2019) scaled up K-FAC for training convolutional networks, but fell short of reaching the accuracy of first-order methods, despite making changes to data augmentation and model architecture. 2 Preliminaries . Adaptive preconditioning methods .
First order methods iteratively update the parameters based solely on gradient information: w_{t+1} = w_t − η_t ḡ_t, where w_t and ḡ_t are (column) vectors in R^d. Here ḡ_t denotes a linear combination of the current and past gradients g_1, ..., g_t, where different algorithms use different combinations. Preconditioned methods take the form w_{t+1} = w_t − P_t ḡ_t, where P_t is a d × d matrix. Whereas in Newton-type methods this matrix is related to the Hessian matrix of second-order derivatives, adaptive preconditioning is based on gradient-gradient correlations. The parameters of a deep network are structured as a set of tensors of order two (i.e., a matrix), three, or four. For simplicity of presentation we focus on the matrix case; however, our design, analysis, and implementation hold for tensors of arbitrary order. We denote the space of parameters by the matrix W ∈ R^{m×n} and an estimate of its gradient by G. Full-matrix AdaGrad flattens W and G to vectors of dimension mn; it thus requires m²n² space to store the preconditioner and m³n³ time to perform the update. m and n are in the thousands in state-of-the-art models, rendering full-matrix preconditioning impractical. For this reason, both AdaGrad and Adam constrain the preconditioning matrices to be diagonal. Shampoo bridges the gap between full-matrix preconditioning and the diagonal version by approximating the matrices. The Shampoo algorithm . We describe Shampoo in the context of the Online Convex Optimization (OCO) framework, which generalizes stochastic optimization (see, e.g., Shalev-Shwartz, 2012; Hazan, 2016). In OCO, learning progresses in rounds: on round t the learner receives an input X_t and then uses the parameters W_t to form a prediction, denoted ŷ_t. After making the prediction, the true outcome y_t is revealed. The discrepancy between the true and predicted outcomes is assessed by a loss function ℓ which takes values in R_+. The learner then uses the discrepancy to update the matrix to W_{t+1} and prepare for the next round. For instance, the input on round t can be an example x_t ∈ R^n for which the learner predicts ŷ = f(W_t, x_t), where f maps the parameters and the input to a real-valued prediction, and the loss is a function ℓ : R × R → R_+ such as ℓ(ŷ, y) = (y − ŷ)² or ℓ(ŷ, y) = log(1 + exp(−yŷ)). Stochastic gradient methods use the gradient G_t = ∇_W ℓ(f(W, x_t), y_t); thus G_t ∈ R^{m×n} if the parameters are shaped as a matrix W ∈ R^{m×n}. For matrix-shaped parameters, Shampoo tracks two statistics over the course of its run, L_t and R_t, defined as follows:

L_t = εI_m + Σ_{s=1}^{t} G_s G_sᵀ;   R_t = εI_n + Σ_{s=1}^{t} G_sᵀ G_s.

Note that L_t ∈ R^{m×m} and R_t ∈ R^{n×n}. These are used to precondition the gradient and update W:

W_{t+1} = W_t − η L_t^{−1/4} G_t R_t^{−1/4}.

The primary complexity of Shampoo arises from the computation of L_t^{−1/4} and R_t^{−1/4}, which was naively implemented using spectral decompositions (i.e., SVD).
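For concreteness, here is a minimal NumPy sketch (ours, not the paper's implementation) of the matrix-case Shampoo update, computing the inverse fourth roots via symmetric eigendecomposition, i.e., the naive spectral approach the text refers to; the paper replaces this step with a numerically-stable coupled Newton iteration.

import numpy as np

def inv_pth_root(A, p, ridge=1e-6):
    """A^{-1/p} for a symmetric PSD matrix via eigendecomposition.
    This is the 'naive' spectral approach; the paper substitutes an
    iterative method for better speed and numerical stability."""
    w, V = np.linalg.eigh(A)
    w = np.maximum(w, ridge)  # guard against tiny or negative eigenvalues
    return (V * w ** (-1.0 / p)) @ V.T

def shampoo_step(W, G, L, R, lr=0.1):
    """One Shampoo update for a matrix parameter W with gradient G.
    L and R accumulate G G^T and G^T G (initialized to eps * I)."""
    L += G @ G.T
    R += G.T @ G
    W -= lr * inv_pth_root(L, 4) @ G @ inv_pth_root(R, 4)
    return W, L, R

# Toy usage on random gradients standing in for stochastic gradients.
m, n = 20, 30
rng = np.random.default_rng(0)
W = rng.normal(size=(m, n))
L, R = 1e-4 * np.eye(m), 1e-4 * np.eye(n)
for _ in range(10):
    G = rng.normal(size=(m, n))
    W, L, R = shampoo_step(W, G, L, R)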
This work addresses practical challenges in applying full matrix pre-conditioner methods (such as Shampoo) on problems involving large datasets and architectures trained using a distributed setup. In particular, this work presents a practical extension for the Shampoo algorithm by (1) using only a left or right preconditioner for large layers (2) computing inverse pth roots via coupled Newton iteration algorithms (3) distributing preconditioner computation across CPU cores in a CPU-GPU/TPU cluster and (4) delaying preconditioner computation to occur only once per several steps. The proposed modifications lead to an implementation of Shampoo that consistently decreases the number of training steps and in certain cases provides a direct wall time improvement over Adagrad/Adam.
SP:8bdbbc8a8bc54620675393fd822f56fb9ec53ffc
Towards Understanding Fast Adversarial Training
1 INTRODUCTION . Adversarial examples are carefully crafted versions of the original data that successfully mislead a classifier (Szegedy et al., 2013), while realizing minimal change in appearance when viewed by most humans. Although deep neural networks have achieved impressive success on a variety of challenging machine learning tasks, the existence of such adversarial examples has hindered the application of deep neural networks and drawn great attention in the deep-learning community. Empirically, the most successful defense thus far is based on Projected Gradient Descent (PGD) adversarial training (Goodfellow et al., 2014; Madry et al., 2017), augmenting the data of interest with strong adversarial examples to help improve model robustness. Although effective, this approach is not efficient and may take multiple days to train a moderately large model. On the other hand, one of the early versions of adversarial training, based on the weaker Fast Gradient Sign Method (FGSM) attack, is much more efficient but suffers from "catastrophic overfitting", a phenomenon where the robust accuracy with respect to strong attacks suddenly drops to almost zero during training (Tramèr et al., 2017; Wong et al., 2019), and it fails to provide robustness against strong attacks. Fast adversarial training (Wong et al., 2019) is a simple modification to FGSM adversarial training that mitigates this issue. By initializing FGSM attacks with large randomized perturbations, it can efficiently obtain models that are robust against strong attacks. Although the modification is simple, the underlying reason for its success remains unclear. Moreover, fast adversarial training is only compatible with a cyclic learning rate schedule (Smith & Topin, 2019) and a limited number of training epochs, resulting in sub-optimal robust accuracy compared to PGD adversarial training (Rice et al., 2020). When fast adversarial training runs for a large number of epochs, it still suffers from catastrophic overfitting, similar to vanilla FGSM adversarial training. Therefore, it remains an unfinished task to obtain the effectiveness of PGD adversarial training and the efficiency of FGSM adversarial training simultaneously. In this paper, we conduct experiments to show that the key to the success of fast adversarial training is not avoiding catastrophic overfitting, but being able to retain the robustness of the model when catastrophic overfitting occurs. We then utilize this understanding to propose a simple fix to fast adversarial training, making it possible to train for a large number of epochs without sacrificing efficiency. We demonstrate that, as a result, we obtain improved performance. We also revisit a previously developed technique, FGSM adversarial training as a warmup (Wang et al., 2019), and combine it with our training strategy to further improve performance with small additional computational overhead. The resulting method outperforms the state-of-the-art approach, PGD adversarial training (Rice et al., 2020), while consuming much less training time. Our contributions are summarized as follows: • We conduct experiments to explain both the success and the failure of fast adversarial training in various cases. • We propose an alternative training strategy as a fix to fast adversarial training, which is equally efficient but allows training for a large number of epochs, and hence achieves better performance.
• We propose to utilize the improved fast adversarial training as a warmup for PGD adversarial training, to outperform the state-of-the-art adversarial robustness with reduced computation.

2 BACKGROUND AND RELATED WORK. The existence of adversarial examples in deep learning was initially reported in (Szegedy et al., 2013). Since then, many approaches have been proposed to mitigate this issue and improve the adversarial robustness of models. A straightforward method is data augmentation, where adversarial examples are generated before the back-propagation at each iteration and used for model updates. This approach is referred to as adversarial training. It was first used with a gradient-based single-step adversarial attack, also known as the Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2014). Later, (Kurakin et al., 2016) found that models trained with FGSM tend to overfit and remain vulnerable to stronger attacks, and proposed a multi-step version of FGSM, namely the Basic Iterative Method (BIM), to address these weaknesses. Randomized initialization for FGSM was then introduced in (Tramèr et al., 2017), leading to R+FGSM, which increases the diversity of attacks and mitigates the overfitting issue. Finally, (Madry et al., 2017) combined randomized initialization with multi-step attacks to propose projected gradient descent (PGD) attacks, and showed that the corresponding adversarial training provides strong adversarial robustness (Athalye et al., 2018). As PGD adversarial training is effective, many works have tried to improve upon it (Zhang et al., 2019b; Xie et al., 2019). However, a recent study (Rice et al., 2020) conducted extensive experiments on adversarially trained models and demonstrated that the performance gain from almost all recently proposed algorithmic modifications to PGD adversarial training is no better than that from a simple piecewise learning rate schedule and early stopping to prevent overfitting. In addition to adversarial training, a great number of adversarial defenses have been proposed, yet most remain vulnerable to stronger attacks (Goodfellow et al., 2014; Moosavi-Dezfooli et al., 2016; Papernot et al., 2016; Kurakin et al., 2016; Carlini & Wagner, 2017; Brendel et al., 2017; Athalye et al., 2018). A major drawback of many defensive models is that they are heuristic and vulnerable to adaptive attacks specifically designed to break them (Carlini et al., 2019; Tramer et al., 2020). To address this concern, many works have focused on providing provable/certified robustness of deep neural networks (Hein & Andriushchenko, 2017; Raghunathan et al., 2018; Kolter & Wong, 2017; Weng et al., 2018; Zhang et al., 2018; Dvijotham et al., 2018; Wong et al., 2018; Wang et al., 2018; Lecuyer et al., 2018; Li et al., 2019; Cohen et al., 2019), yet their certifiable robustness cannot match the empirical robustness obtained by adversarial training. Among all adversarial defenses that claim empirical adversarial robustness, PGD adversarial training has stood the test of time. The only major caveat of PGD adversarial training is its computational cost, due to the iterative attacks at each training step. Many recent works try to reduce the computational overhead of PGD adversarial training. (Shafahi et al., 2019) proposes to update the adversarial perturbations and the model parameters simultaneously.
By performing multiple updates on the same batch, it is possible to imitate PGD adversarial training with accelerated training speed. (Zhang et al., 2019a) removes redundant calculations during back-propagation when constructing adversarial examples, to reduce computational overhead. Recently, (Wong et al., 2019) showed the surprising result that FGSM adversarial training can obtain strongly robust models if a large randomized initialization is used for the FGSM attacks. However, it is forced to use a cyclic learning rate schedule (Micikevicius et al., 2017) and a small number of training epochs. This limits its performance, especially when compared to state-of-the-art PGD adversarial training with early stopping (Rice et al., 2020).

3 FAST ADVERSARIAL TRAINING.

3.1 PRELIMINARIES. We consider the task of classification over samples (x, y) ∈ (X, Y). Consider a classifier f_θ: X → Y parameterized by θ, and a loss function L. For a natural example x ∈ X, an adversarial example x′ satisfies D(x, x′) < ε for a small ε > 0 and f_θ(x) ≠ f_θ(x′), where D(·,·) is some distance metric; i.e., x′ is close to x but yields a different classification result. The distance is often described in terms of an ℓ_p metric, and we focus on the ℓ_∞ metric in this paper. Adversarial training is an approach for training a robust model against adversarial attacks. It expresses the objective of adversarial robustness as a robust optimization problem:

\min_\theta \, \mathbb{E}_{(x,y) \sim \mathcal{X}} \Big[ \max_{\|x' - x\|_\infty < \epsilon} L(f_\theta(x'), y) \Big] \quad (1)

It approximates the inner maximization by constructing adversarial examples from the natural examples, and the model parameters θ are then updated via an optimization method with respect to the adversarial examples instead of the natural ones. One of the simplest choices of attack for adversarial training is the Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2014):

x' = x + \epsilon \, \mathrm{sign}\big(\nabla_x L(f_\theta(x), y)\big) \quad (2)

Before the introduction of fast adversarial training (Wong et al., 2019), which we describe later, it was commonly believed that FGSM adversarial training fails to provide strong robustness (Kurakin et al., 2016). During FGSM adversarial training, the robust accuracy of the model suddenly drops to almost 0% after a certain point, when evaluated against PGD attacks. This phenomenon was referred to as "catastrophic overfitting" in (Wong et al., 2019). The cause of catastrophic overfitting was studied extensively in (Tramèr et al., 2017): during training, since FGSM is a simple attack, the model learns to fool FGSM attacks by inducing gradient masking/obfuscated gradients (Athalye et al., 2018); that is, the gradient is no longer a useful direction for constructing adversarial examples. The existence of catastrophic overfitting has prohibited the use of FGSM adversarial training. To mitigate this issue, (Madry et al., 2017) introduced a multi-step variant of FGSM, namely Projected Gradient Descent (PGD), which takes multiple small steps with stepsize α to construct adversarial examples, instead of one large step as in FGSM:

x'_{t+1} = \Pi_{\|x' - x\|_\infty \le \epsilon} \big( x'_t + \alpha \, \mathrm{sign}(\nabla_{x'_t} L(f_\theta(x'_t), y)) \big) \quad (3)

(Code sketches of both attacks are given at the end of this section.) Extensive experimental results (Madry et al., 2017; Athalye et al., 2018) have shown that, unless a model is specifically designed to create obfuscated gradients (Tramer et al., 2020), PGD attacks are generally exempt from overfitting.
Consequently, adversarial training with PGD leads to robust models against strong attacks, although its computational cost is often an order of magnitude higher than that of standard training and FGSM adversarial training. Recently, in contrast to conventional belief, (Wong et al., 2019) proposed fast adversarial training and suggested that it is possible to construct strongly robust models via FGSM adversarial training. They showed that it is important to initialize the FGSM attack with large randomized perturbations to protect FGSM adversarial training from overfitting. Although randomly initialized FGSM (R+FGSM) had been used in previous work (Tramèr et al., 2017), (Wong et al., 2019) points out that the scale of the randomized initialization was restrictive and needs to be enlarged. As a result, this simple modification enables R+FGSM adversarial training to obtain reasonable robustness against strong attacks.
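To make Equations (2) and (3) and the enlarged randomized initialization concrete, here is a minimal PyTorch sketch of the three attacks. This is an illustrative sketch rather than the authors' code: the clamp to [0, 1] assumes image inputs, and the default step size of 1.25ε in the fast variant follows the setting recommended by Wong et al. (2019).

```python
import torch

def fgsm(model, loss_fn, x, y, eps):
    # Eq. (2): one large step along the sign of the input gradient.
    x_adv = x.clone().detach().requires_grad_(True)
    grad = torch.autograd.grad(loss_fn(model(x_adv), y), x_adv)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def pgd(model, loss_fn, x, y, eps, alpha, steps):
    # Eq. (3): multiple small steps, each projected back into the eps-ball.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.autograd.grad(loss_fn(model(x_adv), y), x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)   # projection onto the l_inf ball
        x_adv = x_adv.clamp(0, 1).detach()
    return x_adv

def fast_fgsm(model, loss_fn, x, y, eps, alpha=None):
    # Fast adversarial training: a large uniform random start over the
    # whole eps-ball, followed by a single FGSM step.
    alpha = 1.25 * eps if alpha is None else alpha
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    grad = torch.autograd.grad(loss_fn(model(x + delta), y), delta)[0]
    delta = (delta + alpha * grad.sign()).clamp(-eps, eps)  # stay inside the ball
    return (x + delta).clamp(0, 1).detach()
```

During adversarial training, the model would then be updated on the returned adversarial batch rather than on the natural one.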
The authors observe that PGD-based adversarial training, the most empirically successful defense against adversarial examples, is computationally inefficient. Fast adversarial training mitigates this issue by training a model with FGSM attacks initialized with large randomized perturbations, but the underlying reason for its success remains unclear, and it may still suffer from catastrophic overfitting. The authors conduct a series of experiments to identify the key to the success, and the properties, of fast adversarial training. The experimental results show that fast adversarial training does not avoid catastrophic overfitting, but is able to recover from it quickly. Based on these observations, the authors propose a simple method to improve fast adversarial training: use a PGD attack for training instead of the R+FGSM attack (proposed in fast adversarial training) when overfitting happens, or use fast adversarial training as a warmup. The proposed methods achieve slightly better performance than the current state-of-the-art approach while reducing training time significantly.
SP:f30f2cd322e3995e29563d5f6045e0f427c267af
ALFA: Adversarial Feature Augmentation for Enhanced Image Recognition
1 INTRODUCTION. Neural networks often prove vulnerable when presented with adversarial examples injected with imperceptible perturbations, and suffer significant performance drops when facing such attacks (Szegedy et al., 2013; Goodfellow et al., 2015b). Such susceptibility has motivated abundant studies on adversarial defense mechanisms for training robust neural networks (Schmidt et al., 2018; Sun et al., 2019; Nakkiran, 2019; Stutz et al., 2019; Raghunathan et al., 2019), among which adversarial-training-based methods (Madry et al., 2018b; Zhang et al., 2019a) have achieved consistently superior robustness. The general focus of adversarial training is to enhance robustness against gradient-based adversarial examples. A few recent studies (Zhu et al., 2020; Gan et al., 2020) have turned to investigating the generalization ability of adversarial training on language models, but an in-depth exploration of extending this to the vision domain is still missing. Xie et al. (2020) proposes to utilize adversarial examples with an auxiliary batch normalization to improve standard accuracy for image recognition, but it still suffers from the expensive computational cost of generating pixel-level perturbations. To address this issue, we propose AdversariaL Feature Augmentation (ALFA) as a natural extension of adversarial training, with a focus on leveraging adversarial perturbations in the feature space to improve image recognition on clean data. As illustrated in Figure 1, ALFA introduces adversarial perturbations at multiple intermediate layers. These perturbed feature embeddings act as a special feature augmentation and implicit regularization that enhance the generalization ability of deep neural networks. Consequently, two challenges arise: (i) how to efficiently find the best locations to introduce adversarial perturbations; and (ii) how to decide the strength of the created perturbations. Although a few recent works (Zhu et al., 2020; Gan et al., 2020; Sankaranarayanan et al., 2017) look into this field, they either add perturbations to the input embeddings or to all the intermediate features, and have not reached a coherent conclusion. To efficiently learn an optimal strategy of perturbation injection, we further propose a learnable adversarial feature augmentation (L-ALFA) framework, which is capable of automatically adjusting the position and strength of the introduced feature perturbations. The proposed approach not only circumvents laborious hyper-parameter tuning, but also fully unleashes the power of adversarial feature augmentation. Experiments show that this strategy gains a substantial performance margin over existing feature augmentation methods (Li et al., 2020). In addition, we find that learnable ALFA and exhaustively-tuned ALFA exhibit consistent patterns: applying weak adversarial feature augmentations to the last layers of deep neural networks can boost generalization performance. The main contributions are summarized as follows. (i) We introduce a new approach, adversarial feature augmentation (ALFA), to improve the generalization ability of neural networks; it applies adversarial perturbations to the feature space rather than to raw image pixels. (ii) To tackle the laborious hyper-parameter tuning required for generating adversarial features, we propose learnable adversarial feature augmentation (L-ALFA) to automatically tailor the target perturbations and their locations.
(iii) Comprehensive experiments on the CIFAR-10, CIFAR-100, and ImageNet datasets across multiple backbone networks demonstrate the superiority of the proposed methods.

2 RELATED WORK. Adversarial Training. Deep neural networks are notoriously vulnerable to adversarial samples (Szegedy et al., 2013; Goodfellow et al., 2015b), which are crafted with malicious yet negligible perturbations (Goodfellow et al., 2015a; Kurakin et al., 2016; Madry et al., 2018a). To improve robustness against adversarial samples, various defense mechanisms have been proposed (Zhang et al., 2019a; Schmidt et al., 2018; Sun et al., 2019; Nakkiran, 2019; Stutz et al., 2019; Raghunathan et al., 2019). Among these works, adversarial-training-based methods (Madry et al., 2018b; Zhang et al., 2019a) have achieved consistently superior performance in defending against state-of-the-art adversarial attacks (Goodfellow et al., 2015a; Kurakin et al., 2016; Madry et al., 2018a). Although adversarial training substantially improves model robustness, it usually comes at the price of compromising standard accuracy (Tsipras et al., 2019), which has been demonstrated both empirically and theoretically (Zhang et al., 2019a; Schmidt et al., 2018; Sun et al., 2019; Nakkiran, 2019; Stutz et al., 2019; Raghunathan et al., 2019). Recently, researchers have started to investigate improving clean accuracy with adversarial training (Xie et al., 2020; Zhu et al., 2020; Wang et al., 2019a; Gan et al., 2020; Wei & Ma, 2019; Ishii & Sato, 2019). Xie et al. (2020) shows that performance on the clean dataset can be enhanced by using adversarial samples with pixel-level perturbation generation. Zhu et al. (2020) and Wang et al. (2019a) apply adversarial training to natural language understanding and language modeling, both successfully achieving better standard accuracy. Gan et al. (2020) achieves similar success on many vision-and-language tasks. There also exist parallel studies that employ handcrafted or auto-generated perturbed features to improve generalization (Wei & Ma, 2019; Ishii & Sato, 2019) or robustness (Sankaranarayanan et al., 2017). However, two key issues remain unexplored: (i) at which layers to introduce adversarial feature augmentations; and (ii) how strong the perturbations should be. For the former, Zhu et al. (2020); Wang et al. (2019a); Gan et al. (2020) perturb the input embeddings of transformer models, while Wei & Ma (2019); Sankaranarayanan et al. (2017) insert perturbations into all layers of a convolutional network. For the latter, all of these methods require arduous and heuristic tuning. In this paper, we present a different observation: augmenting the last layers' feature embeddings with weak adversarial feature perturbations can yield higher standard accuracy. The L-ALFA framework inspired by this observation effectively alleviates the laborious tuning that would otherwise be inevitable. Feature Augmentation. Although pixel-level data augmentation techniques (Simard et al., 1993; Schölkopf et al., 1996) have been widely adopted, feature-space augmentations have not received the same level of attention. A few pioneering works propose generative feature augmentation approaches for domain adaptation (Volpi et al., 2018), imbalanced classification (Zhang et al., 2019b), and few-shot learning (Chen et al., 2019).
Another loosely related field is feature normalization (Ioffe & Szegedy, 2015; Li et al., 2020). MoEx (Li et al., 2020) is a newly proposed method that can be regarded as a feature augmentation technique; it leverages the first- and second-order moments extracted and re-injected by feature normalization. It is worth mentioning that all of the aforementioned approaches are orthogonal to our proposed method and can be combined with it for further generalization improvement, which is left as future work.

3 ADVERSARIAL FEATURE AUGMENTATION (ALFA). In the proposed ALFA framework, we generate adversarial perturbations in the intermediate feature embedding space, rather than applying perturbations to raw image pixels as in common practice. Adversarial training can thus be formulated as an effective regularizer that improves the generalization ability of deep neural networks.

3.1 NOTATIONS. Given a dataset D = {x, y}, where x is an input image and y is the corresponding one-hot ground-truth label, let f(x; Θ) denote the predictions of a deep neural network, and let f_i(x; Θ^{(i)}), i = 1, ..., r+1, denote the intermediate feature embedding at the i-th layer. The (r+1)-th layer denotes the classifier, so f_{r+1}(x; Θ^{(r+1)}) = f(x; Θ). Adversarial training can be formulated as the following min-max optimization problem:

\min_\Theta \, \mathbb{E}_{(x,y) \in \mathcal{D}} \Big[ \max_{\|\delta\|_p \le \epsilon} \mathcal{L}_{at}(f(x + \delta; \Theta); \Theta; y) \Big], \quad (1)

where δ is the adversarial perturbation bounded by the ℓ_p norm ball centered at x with radius ε, the maximum perturbation magnitude; L_at is the cross-entropy loss for adversarial training (AT); and E_{(x,y)∈D} takes the expectation of the empirical objective over the training dataset D. The inner optimization generates the adversarial perturbation δ by maximizing the empirical objective. It can be reliably solved by multi-step projected gradient descent (PGD) (Madry et al., 2018b); without loss of generality, we take the ‖·‖_∞ perturbation as an example:

\delta_{t+1} = \Pi_{\|\delta\|_\infty \le \epsilon} \big[ \delta_t + \alpha \cdot \mathrm{sgn}(\nabla_x \mathcal{L}_{at}(f(x + \delta_t; \Theta); \Theta; y)) \big], \quad (2)

where t is the step index, α denotes the step size of the inner maximization, sgn is the sign function, and L_at is the adversarial training objective on adversarial images.

3.2 PERTURBATIONS IN THE EMBEDDING SPACE VIA ALFA. Here we extend conventional adversarial perturbations to the feature embedding space, starting from the training objective of ALFA:

\min_\Theta \, \mathbb{E}_{(x,y) \in \mathcal{D}} \Big[ \mathcal{L}_{std}(x; \Theta; y) + \lambda \cdot \sum_i \max_{\|\delta^{(i)}\|_\infty \le \epsilon} \mathcal{L}_{at}(f_i(x; \Theta^{(i)}) + \delta^{(i)}; \Theta; y) \Big], \quad (3)

where L_std is the cross-entropy (XE) loss on clean images, L_at is here the cross-entropy loss for adversarial training on adversarially augmented feature embeddings, and λ is the hyperparameter controlling the influence of the AT regularization, tuned by grid search. δ^{(i)} is the adversarial perturbation on the features of layer i, generated as follows:

\delta^{(i)}_{t+1} = \Pi_{\|\delta\|_\infty \le \epsilon} \big[ \delta^{(i)}_t + \alpha \cdot \mathrm{sgn}(\nabla_{\delta^{(i)}} \mathcal{L}_{at}(f_i(x; \Theta^{(i)}) + \delta^{(i)}_t; \Theta; y)) \big]. \quad (4)

Note that, when crafting δ^{(i)}, at each step the gradient is only back-propagated down to the i-th layer, which is much more computationally efficient than generating perturbations in the input space. In practice, we leave the maximum magnitude of the crafted feature perturbation unbounded, so projected gradient descent reduces to plain gradient descent.
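The single-layer case of Equations (3)-(4) can be sketched in PyTorch as follows. The split of the network into feat_extractor (layers up to i) and head (the layers above i), and the default hyperparameter values, are illustrative assumptions; as described above, the inner ascent is unbounded and its gradient stops at the i-th layer's features.

```python
import torch
import torch.nn.functional as F

def alfa_loss(feat_extractor, head, x, y, lam=1.0, alpha=0.1, steps=1):
    feats = feat_extractor(x)                    # f_i(x; Theta^{(i)})
    loss_std = F.cross_entropy(head(feats), y)   # L_std on clean images

    # Inner maximization (Eq. 4): ascend on delta only; detaching the
    # features keeps gradients from flowing below layer i.
    delta = torch.zeros_like(feats, requires_grad=True)
    for _ in range(steps):
        inner = F.cross_entropy(head(feats.detach() + delta), y)
        grad = torch.autograd.grad(inner, delta)[0]
        delta = (delta + alpha * grad.sign()).detach().requires_grad_(True)

    # Outer objective (Eq. 3): clean loss plus the weighted AT regularizer.
    loss_at = F.cross_entropy(head(feats + delta.detach()), y)
    return loss_std + lam * loss_at
```

A full implementation would sum such regularizers over the chosen set of layers; L-ALFA additionally learns where, and how strongly, to perturb.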
In ALFA, the two most essential factors are: (i) where to introduce adversarial perturbations; and (ii) how strong the perturbations should be. Table 1 and Figure 2 present preliminary results that speak to this question. The results show that the performance of ALFA depends strongly on the location (i.e., which blocks) and the strength (i.e., step size α) of the introduced feature perturbations. An inadequate configuration (e.g., applying ALFA to all of blocks 1, 2, and 3, as shown in Figure 2) can cause accuracy degradation. More analyses are provided in Section 4.3. To determine the best configuration, we further design a learnable adversarial feature augmentation (L-ALFA) approach that automatically adjusts the location and strength of the perturbations for the best augmentation performance; it is explained in the next sub-section.
Overview of the paper: this work tackles adversarial augmentation for better generalization. Instead of augmenting the pixel space, which is expensive and potentially harder, the authors augment the intermediate feature representations. As the choice of the particular layers at which the perturbations are applied affects performance, the authors optimize it jointly with the rest of the parameters. Experiments show the method improves accuracy over standard training.
SP:b5daf21a7a1df819b39afd967085b64a55d14fb4
Using Deep Reinforcement Learning to Train and Evaluate Instructional Sequencing Policies for an Intelligent Tutoring System
1 INTRODUCTION. An Intelligent Tutoring System (ITS) aims to teach a set of skills to users by individualizing instruction. Giving instruction to users requires many sequential decisions, such as what to teach, what activities to present, what problems to include, and what help to give. Our aim is to make decisions that maximize long-term rewards in the form of learning gains, so Reinforcement Learning (RL) is a natural approach to pursue, and was first proposed by Liu (1960). The goal of an RL agent is to learn a policy π, defined as a mapping from a state space S to an action space A. Given any state, the RL agent follows the series of actions proposed by the learned policy to maximize the long-term expected reward. In the context of an ITS, we specify the RL agent as follows: • State s_t: we define the state as the combination of the student state and the tutor state. The tutor state determines the set of actions available to the RL agent at a given timestep. We represent the student state as a vector of probabilities in which element i is the estimated probability that the student knows skill i. • Action a_t: the action taken by the RL agent corresponds to a tutor decision at a particular grain size. • Reward r_t(s_t, a_t): defined as the average difference between the prior and posterior knowledge states, based on the simulated student's response to the tutor action a_t. • Next state s_{t+1}: the student knowledge vector after a Bayesian update based on the simulated student's response to tutor action a_t in state s_t, together with the updated tutor state given by the tutor simulator. We instantiate STEP in the context of RoboTutor, a Finalist in the Global Learning XPRIZE Competition to develop an open-source Android tablet tutor that teaches basic literacy and numeracy to children without requiring adult intervention. XPRIZE independently field-tested the Swahili version of RoboTutor for 15 months in 28 villages in Tanzania. Figure 1 shows a diagrammatic overview of STEP, and the rest of the paper is organized as follows. Section 2 discusses the simulation of the tutor and the student (the environment block). Section 3 elaborates on the training of decision policies (the RL agent block). Section 4 evaluates the learned policies. Section 5 relates this work to prior research. Section 6 concludes.

2 SIMULATING THE TUTOR AND THE STUDENT. To apply RL, we need to simulate the tutor's actions and the student's responses to them.

2.1 TUTOR SIMULATOR. The data for this paper comes from the version of RoboTutor used during the last 3 months of XPRIZE's 15-month field study. This version rotates through three content areas (literacy, numeracy, and stories), tracking the child's position in each area's curricular sequence of successively more advanced activities. It lets the child select among doing the activity at that position, advancing to the next activity, repeating the same activity (from the previous content area), or exiting RoboTutor. After selecting an activity, the child may complete all or part of it before selecting the next activity. RoboTutor has 1710 learning activities, each of which gives assisted practice of one or more skills on a sequence of items, such as letters or words to write, number problems to solve, or sentences to read. Each item requires one or more steps.
Each step may take one or more attempts. The simulated tutor state identifies the current content area and the child's position in it. RoboTutor (actual or simulated) updates the position in the content area based on the percentage of correct attempts at the steps in an activity. Specifically, it uses fixed heuristic thresholds (called LOW, MID, and HI) on this percentage to demote BACK to the previous position, stay at the SAME position, promote to the NEXT position, or SKIP to the position thereafter. Figure 2 illustrates this.

2.2 STUDENT SIMULATOR. A student simulator should behave like the students who use the tutor. Accordingly, the simulator uses a Bayesian Knowledge Tracing (BKT) student model trained on logged data using HOT-DINA. It has the same Guess, Slip, and Learn parameters as standard BKT, but estimates the Knew parameter from skill difficulty and discrimination and student proficiency, following Item Response Theory. Thus, HOT-DINA extrapolates from the student's knowledge of other skills, and from other students' knowledge of this skill, albeit at a high computational cost to fit the model. Xu & Mostow (2014) found HOT-DINA to have higher predictive accuracy than standard BKT. To limit computation time, we fit the model on logged data from a single village, consisting of 42,010 attempts by 8 children to apply 22 skills. We fit one proficiency parameter for each child and 5 parameters for each skill (Guess, Slip, Learn, Difficulty, and Discrimination), 118 parameters in total. (Fitting 5 separate parameters per activity instead of per skill might achieve higher accuracy but would require fitting 8,558 parameters.) We use MCMC sampling for Bayesian inference with PyStan rather than the OpenBUGS Gibbs sampling package used in the original HOT-DINA work, because PyStan is faster and handles larger datasets. Nevertheless, fitting the 118-parameter HOT-DINA model to 42,010 attempts took approximately 4 days on a supercomputer with 128 GB of memory and 28 cores. Table 1 shows converged values for a subset of the HOT-DINA parameters. For example, the eight θ values are the 8 student proficiency parameters of the student model. For simplicity, we show only the first 6 values of b (the skill difficulty parameter) in the table. Once we obtain the model parameters, the student simulator must do two things: given an activity, simulate whether the student gets the activity right or wrong, and, based on this response, perform knowledge tracing over multiple skills to update the student's knowledge probabilities. To simulate a student's performance on an activity, we first estimate P(getting activity j correct) as in equations 2 and 3 below, and then simulate the student response (right or wrong) with a biased coin flip based on this estimated probability. Given the simulated response, we perform knowledge tracing over multiple skills using update equations 4-6. The next few lines cover the basic notation and update equations for the simulated learning of a student. Note that the variables α, y, and Y are all binary, i.e., they take a value of either 1 or 0.
Notation:
θ_n: proficiency of student n.
a_k: discrimination of skill k.
b_k: difficulty of skill k.
q_{jk}: 1 if activity j exercises skill k, 0 otherwise.
P(α^{(t)}_{nk} = 1): probability that student n knows skill k at time-step t.
P(y^{(t)}_{nk} = 1): probability that student n correctly answers an activity exercising only skill k at time-step t.
P(Y^{(t)}_{nj} = 1): probability that student n gets activity j correct at time-step t.

P(\alpha^{(0)}_{nk} = 1) = \prod_{k=1}^{K} \Big( \frac{1}{1 + \exp(-1.7\, a_k (\theta_n - b_k))} \Big)^{q_{jk}} \quad (1)

P(y^{(t)}_{nk} = 1) = (1 - slip_k)\, P(\alpha^{(t)}_{nk} = 1) + guess_k\, P(\alpha^{(t)}_{nk} = 0) \quad (2)

P(Y^{(t)}_{nj} = 1) = \prod_{k=1}^{K} P(y^{(t)}_{nk} = 1)^{q_{jk}} \quad (3)

P(\alpha^{(t)}_{nk} = 1 \mid Y^{(t)}_{nj} = 1) = P(\alpha^{(t)}_{nk} = 1) \cdot \Big( \frac{1 - slip_k}{P(y^{(t)}_{nk} = 1)} \Big)^{q_{jk}} \quad (4)

P(\alpha^{(t)}_{nk} = 1 \mid Y^{(t)}_{nj} = 0) = P(\alpha^{(t)}_{nk} = 0) \cdot \Big( \frac{guess_k}{P(y^{(t)}_{nk} = 1)} \Big)^{q_{jk}} \quad (5)

P(\alpha^{(t+1)}_{nk} = 1) = P(\alpha^{(t)}_{nk} = 1 \mid Y^{(t)}_{nj}) + learn_k \cdot P(\alpha^{(t)}_{nk} = 0 \mid Y^{(t)}_{nj}) \quad (6)

3 TRAINING POLICIES WITH PPO. The previous section discussed the student simulator and the tutor simulator. In this section, we discuss training a policy with STEP in the context of RoboTutor.

3.1 THE REWARD FUNCTION. The RL agent learns a decision policy (a mapping from states to actions) that maximizes the total expected reward of following the policy π_θ. As the reward function for student n, we use the knowledge gain as estimated by the student model, i.e., the posterior minus prior estimates of P(student n knows skill k), averaged over all skills. The prior and posterior refer to the knowledge states before and after applying the Bayesian updates (equations 4-6) for the activity decided by action a_t. The information about prior knowledge is implicitly present in the knowledge state of s_t. To save computational time, we learn policies for episodes of 100 timesteps using PPO, after which the episode terminates. Although our experiments stick to finite-horizon undiscounted returns with 100 steps, it is trivial to extend this approach to any finite number of steps, or even to infinite-horizon discounted returns with a discount factor γ ∈ (0, 1) so that rewards vanish at large timesteps. The reward function r_t for student n at a given step is the student's learning gain from attempting an activity, as given in equation (7), where K is the total number of skills (22 for RoboTutor). The return is simply the sum of rewards over T = 100 steps.

r_t(s_t, a_t) = \frac{1}{K} \sum_{k=1}^{K} \Big[ P(\alpha^{(t+1)}_{nk} = 1) - P(\alpha^{(t)}_{nk} = 1) \Big] \quad (7)

According to the student model trained by HOT-DINA on the 8 children's log data, their prior averaged 0.55 and their posterior averaged 0.73, a gain of 0.18 over their full usage of 42,010 attempts (up to 3 months). Their posterior after their first 100 attempts, averaged across the 8 students, was 0.64, for an average gain per attempt of 0.09/100 = 0.0009. We can train different types of RL agents depending on their state space and range of actions, which depend on how far they depart from RoboTutor's current decision policy.
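A NumPy sketch of the simulation step implied by equations (2)-(7) follows. It is illustrative only: the parameter vectors are placeholders rather than fitted HOT-DINA values, and the posterior step uses the standard BKT form, normalizing by the probability of the observed outcome, which is our reading of equations (4)-(5) as extracted.

```python
import numpy as np

def simulate_step(p_know, q_j, guess, slip, learn, rng):
    # p_know[k] = P(alpha_nk = 1); q_j[k] = 1 iff activity j exercises skill k.
    p_y = (1 - slip) * p_know + guess * (1 - p_know)        # Eq. (2)
    p_correct = np.prod(np.where(q_j == 1, p_y, 1.0))       # Eq. (3)
    correct = rng.random() < p_correct                      # biased coin flip

    if correct:
        post = p_know * (1 - slip) / p_y                    # Eq. (4)
    else:
        post = p_know * slip / (1 - p_y)                    # Eq. (5), standard BKT form
    post = np.where(q_j == 1, post, p_know)                 # only exercised skills move

    p_next = post + learn * (1 - post)                      # Eq. (6)
    reward = np.mean(p_next - p_know)                       # Eq. (7)
    return correct, p_next, reward
```

Summing this reward over an episode of 100 simulated attempts gives the return that PPO maximizes.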
The paper describes variants of an intelligent tutoring system (ITS) developed using a newer (but previously published) variant of knowledge tracing (HOT-DINA) for assessing student proficiency, and an RL algorithm (PPO) for deciding which items and content areas to try next. An empirical simulation calibrated to 8 students is reassessed on the same student simulations, and improvements over the original tutoring system are empirically demonstrated. Four variants with differing levels of action granularity and knowledge tracing are analyzed.
SP:bba6a0856c8f3bb5a7ef8a768c38b999e6438df9
Nonconvex Continual Learning with Episodic Memory
1 INTRODUCTION. Learning new tasks without forgetting previously learned ones is a key requirement for artificial intelligence to be as versatile as humans. Unlike conventional deep learning, which observes tasks from an i.i.d. distribution, continual learning trains a model sequentially on a non-stationary stream of data (Ring, 1995; Thrun, 1994). Continual learning AI systems struggle with catastrophic forgetting when access to the data of previously learned tasks is restricted (French & Chater, 2002). To overcome catastrophic forgetting, continual learning algorithms introduce a memory to store and replay previously learned examples (Lopez-Paz & Ranzato, 2017; Aljundi et al., 2019b; Chaudhry et al., 2019a), penalize neural networks with regularization methods (Kirkpatrick et al., 2017; Zenke et al., 2017), use Bayesian approaches (Nguyen et al., 2018; Ebrahimi et al., 2020), and employ other novel methods (Yoon et al., 2018; Lee et al., 2019). Although Gradient Episodic Memory (GEM) (Lopez-Paz & Ranzato, 2017) first formulated continual learning as a constrained optimization problem, a theoretical convergence analysis of the performance on previously learned tasks, which gives a measure of catastrophic forgetting, has not yet been carried out. Continual learning with episodic memory utilizes a small subset of the data for previous tasks to keep the model in a feasible region corresponding to a moderately suboptimal region. GEM-based approaches use rephrased constraints: inequalities based on the inner product of the loss gradient vectors for the previous tasks and the current task. This intuitive reformulation of the constrained optimization problem provides no theoretical guarantee against catastrophic forgetting. In addition, memory-based approaches have the critical limitation of overfitting to the memory. Choosing a perfect memory for continual learning is an NP-hard problem (Knoblauch et al., 2020), so the inductive bias introduced by the episodic memory is inevitable. This problem also degrades the performance on previously learned tasks, like catastrophic forgetting, but has not been discussed quantitatively in analyses of backward transfer (BWT). In this paper, we address continual learning with episodic memory as a smooth nonconvex finite-sum optimization problem. This generic form is well studied for demonstrating the convergence and complexity of stochastic gradient methods in the nonconvex setting (Zhou & Gu, 2019; Lei et al., 2017; Reddi et al., 2016; Zaheer et al., 2018). Unlike the convex case, convergence is generally measured by the expectation of the squared norm of the gradient, E‖∇f(x)‖². The theoretical complexity is derived from the ε-accurate solution, also known as a stationary point, with E‖∇f(x)‖² ≤ ε. We formulate the proposed continual learning algorithm as a stochastic gradient descent (SGD) based method that updates on both the previously learned tasks (from episodic memory) and the current task simultaneously. By leveraging this update scheme, we can introduce a theoretical analysis of continual learning problems. We highlight our main contributions as follows. • We develop a convergence analysis for continual learning with episodic memory. • We show the degradation of backward transfer theoretically and experimentally as problems of catastrophic forgetting and overfitting to memory.
• We propose a nonconvex continual learning algorithm that scales learning rates based on the sampled mini-batch.

1.1 RELATED WORK. The literature on continual learning can be divided into episodic learning and task-free learning. Episodic-learning-based methods assume that the training model can access clear task boundaries, and they store observed examples in task-wise episodic memory (Lopez-Paz & Ranzato, 2017; Chaudhry et al., 2019a). On the other hand, an AI system in the real world experiences arbitrarily shifting data streams in which task boundaries are not accessible. Task-free continual learning studies this general scenario without the task-boundary assumption. Aljundi et al. (2019a) introduces Memory-aware Synapses (MAS) and applies a learning protocol that does not wait until a task is finished. The follow-up work (Aljundi et al., 2019b) adopts the memory system of GEM, selecting observed examples to store in order to prevent catastrophic forgetting. The smooth nonconvex finite-sum optimization problem has been widely employed to derive the theoretical computational complexity of stochastic gradient methods (Ghadimi & Lan, 2013; 2016; Lei et al., 2017; Zaheer et al., 2018; Reddi et al., 2016). Unlike in convex optimization, gradient-based algorithms are not expected to converge to the global minimum in the nonconvex case; instead, they are evaluated by their rate of convergence to stationary points. The complexity of reaching a stationary point is a key consideration when building a new stochastic gradient method for nonconvex optimization. In contrast with general optimization, memory-based continual learning methods have a limited data pool for previously learned tasks, which causes overfitting to the memory. Knoblauch et al. (2020) found that optimal continual learning and building a perfect memory are equivalent problems, and proved that both are NP-hard. This theoretical result shows that overfitting to memory is inevitable.

2 PRELIMINARIES. We consider a continual learning problem with episodic memory in which the learner can access the boundary between the previous task and the current task. The continuum of data in (Lopez-Paz & Ranzato, 2017) is adopted as our task description for continual learning. We formulate our goal as a smooth nonconvex finite-sum optimization problem with two objectives,

\min_{x \in \mathbb{R}^d} F(x) = f(x) + g(x) = \frac{1}{n_f} \sum_{i=1}^{n_f} f_i(x) + \frac{1}{n_g} \sum_{j=1}^{n_g} g_j(x), \quad (1)

where x ∈ R^d is the model parameter, each objective component f_i(x), g_j(x) is differentiable and nonconvex, and n_f, n_g are the numbers of components. We define the two components of the finite sum as the objective f_i(x) for a sample i of the previously learned tasks and the objective g_j(x) for a sample j of the current task. Unlike in the general stochastic optimization problem, we assume that the initial point x^0 in continual learning is an ε-accurate solution of f(x), with E‖∇f(x)‖² ≤ ε for some ε ≪ 1. From the properties of nonconvex optimization, we know there may exist multiple local optima that achieve moderate performance on the previously learned tasks (Garipov et al., 2018). This implies that, in a successful continual learning scenario, the model parameter x stays in the neighborhood of x^0, or moves from the initial local optimum x^0 to another local optimum x^t over T iterations.
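As a concrete reading of Equation (1), the sketch below evaluates F(x) = f(x) + g(x) from an episodic-memory batch and a current-task batch; the model, loss function, and batch variables are placeholders.

```python
import torch

def total_loss(model, loss_fn, memory_batch, current_batch):
    # Eq. (1): f averages the loss over samples of the previously learned
    # tasks (episodic memory); g averages it over the current task.
    x_mem, y_mem = memory_batch
    x_cur, y_cur = current_batch
    f = loss_fn(model(x_mem), y_mem)   # (1/n_f) * sum_i f_i(x), mean reduction
    g = loss_fn(model(x_cur), y_cur)   # (1/n_g) * sum_j g_j(x)
    return f + g
```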
A continual learning algorithm with an episodic memory of size m cannot access the whole dataset of the previously learned tasks, with its n_f samples, but uses the limited samples in the memory while the learner trains on the current task. This limited access partially prevents catastrophic forgetting; however, the fixed samples from memory cause a biased gradient and the overfitting problem. In Section 3, we provide the convergence analysis for the previously learned tasks f(x), which are vulnerable to catastrophic forgetting. We denote by f_i(x) the loss of sample i from the previously learned tasks under model parameter x, and by ∇f_i(x) its gradient. We use I_t, J_t for the mini-batches of samples at iteration t and, for brevity, denote by b^f_t, b^g_t the mini-batch sizes |I_t|, |J_t| throughout the paper. We also note that g_j from the current task satisfies the above and following assumptions. To formalize convergence over iterations, we adopt the Incremental First-order Oracle (IFO) framework (Ghadimi & Lan, 2013), in which one unit of cost is the sampling of a pair (∇f_i(x), f_i(x)). For example, a stochastic gradient descent algorithm incurs a cost equal to the batch size b_t at each step, so its total cost is the sum of batch sizes \sum_{t=1}^{T} b_t. Let T(ε) be the minimum number of iterations needed to guarantee an ε-accurate solution. Then the average bound on the IFO complexity is at most \sum_{t=1}^{T(\epsilon)} b_t. To analyze convergence and compute the IFO complexity, we define the loss gap between two local optima as

\Delta_f = f(x^0) - \inf_{0 \le t \le T} f(x^t), \quad (2)

which may be much smaller than the loss gap of SGD. Suppose the losses of all optimal points are equal, i.e., f(x^*) = f(x^0); then Δ_f ≤ 0. This implies that Δ_f is not a cause of moving away from a stationary point of f, as we explain in Section 3. We also define σ_f and σ_g, for f and g respectively, as upper bounds on the variance of the stochastic gradients of a given mini-batch. For brevity, we write only σ_f:

\sigma_f = \sup_x \frac{1}{b_f} \sum_{i=1}^{b_f} \| \nabla f_i(x) - \nabla f(x) \|^2. \quad (3)

Throughout the paper, we assume L-smoothness.

Assumption 1. f_i is L-smooth: there exists a constant L > 0 such that for any x, y ∈ R^d,

\| \nabla f_i(x) - \nabla f_i(y) \| \le L \|x - y\|, \quad (4)

where ‖·‖ denotes the Euclidean norm. The following inequality then holds directly:

-\frac{L}{2}\|x - y\|^2 \le f_i(x) - f_i(y) - \langle \nabla f_i(y), x - y \rangle \le \frac{L}{2}\|x - y\|^2. \quad (5)

In this paper, we consider the framework of continual learning with episodic memory. Following the assumption of GEM, we assign each task the same memory budget m, with samples drawn i.i.d. within its episode. In the learning phase for task k ∈ {1, 2, ..., K}, we sample a batch of size n_f from the memories of all tasks, of total size m·(k−1).

3 NONCONVEX CONTINUAL LEARNING. In this section, we present the convergence analysis of continual learning in the nonconvex setting. The theoretical result shows why catastrophic forgetting occurs from the viewpoint of nonconvex optimization. Building on this, we propose the Non-Convex Continual Learning (NCCL) algorithm, in which the learning rates for the previously learned tasks and the current task are scaled by the inner product of their gradients with respect to the parameters, as described in Section 3.3.

3.1 ONE EPISODE ANALYSIS.
The key element in preventing catastrophic forgetting is to apply a gradient compensation to the training step of the current task. It can be viewed as an additive gradient applied to the gradient of the current task; by contrast, GEM (Lopez-Paz & Ranzato, 2017) uses quadratic programming and EWC (Kirkpatrick et al., 2017) introduces an auxiliary loss function. First, we present the proposed gradient compensation, which uses samples from the episodic memory, for a single new-task episode. We define the gradient update

x^{t+1} = x^t - \alpha_{H_t} \nabla f_{I_t}(x^t) - \beta_{H_t} \nabla g_{J_t}(x^t), \quad (6)

where α_{H_t}, β_{H_t} are learning rates scaled by the sampled mini-batches, with H_t = I_t ∪ J_t, and ∇f_{I_t}(x^t), ∇g_{J_t}(x^t) are estimates of the gradients ∇f(x^t), ∇g(x^t), respectively. Equation (6) says that the parameter is updated on the current task g with a gradient compensation α_{H_t}∇f_{I_t}(x^t) for the previously learned tasks f. (A code sketch of this update appears at the end of this subsection.) Our goal is to explain the effect of the gradient update β_{H_t}∇g_{J_t}(x^t) on the convergence to stationary points of f(x) and to study the expectation of each term over I_t. For iterations t ∈ [1, T] and a constant L, we define the catastrophic forgetting term C_t as the expectation involving ∇g_{J_t}(x^t):

C_t = \mathbb{E}\Big[ \frac{\beta_{H_t}^2 L}{2} \| \nabla g_{J_t}(x^t) \|^2 - \beta_{H_t} \langle \nabla f(x^t), \nabla g_{J_t}(x^t) \rangle \Big], \quad (7)

which we derive in Appendix A. We temporarily adopt the following assumption to establish the convergence analysis of continual learning.

Assumption 2. Suppose that the episodic memory M contains the entire set of data points of the previously learned tasks [k−1] in the k-th episode and that the replayed mini-batch satisfies I_t ⊂ M. Then ∇f_{I_t}(x^t) is an unbiased estimate, i.e., E[e_t] = 0 for e_t = ∇f_{I_t}(x^t) − ∇f(x^t).

In the next section, we drop Assumption 2 and investigate the biasedness of the episodic memory M, which causes overfitting to the memory. Our first main result is the following theorem, which gives the stepwise change in convergence for our algorithm.

Theorem 1. Suppose that L\alpha_{H_t}^2 - \alpha_{H_t}^2 \le \gamma for some γ > 0 and \alpha_{H_t} \le \frac{2}{L}. Under Assumptions 1 and 2, we have

\mathbb{E}\|\nabla f(x^t)\|^2 \le \frac{1}{1 - \frac{L}{2}\alpha_{H_t}} \Big( \frac{1}{\alpha_{H_t}} \big( \mathbb{E}[f(x^t) - f(x^{t+1})] + C_t \big) + \frac{\alpha_{H_t} L}{2 b_f} \sigma_f^2 \Big). \quad (8)

We present the proof in Appendix A. Note that, unlike in plain SGD, the catastrophic forgetting term C_t is present, and it increases the IFO complexity. Fortunately, we can tighten the upper bound in Equation (8) by minimizing C_t. We now telescope over a single episode for the current task and obtain the following theorem.

Theorem 2. Let \alpha_{H_t} = \alpha = \frac{c}{\sqrt{T}} for some c > 0 and all t ∈ [T], and let 1 - \frac{L}{2}\alpha = \frac{1}{A} > 0 for some A. Under Theorem 1, we have

\min_t \mathbb{E}\|\nabla f(x^t)\|^2 \le \frac{A}{\sqrt{T}} \Big( \frac{1}{c} \big( \Delta_f + \sum_{t=0}^{T-1} C_t \big) + \frac{Lc}{2 b_f} \sigma_f^2 \Big). \quad (9)

This theorem explains the theoretical background of catastrophic forgetting. The cumulative sum of the catastrophic forgetting terms, \sum_t C_t, can grow drastically over iterations, which implies that the iterate can diverge from the stationary point x^0. An immediate consequence of Equation (9) is that we can treat the amount of catastrophic forgetting as an optimization-level quantity. Without the additive catastrophic forgetting term, Theorem 2 reduces to the result for SGD with a fixed learning rate (Ghadimi & Lan, 2013). As with SGD, the upper bound in Equation (9) becomes O\big(\frac{A}{\sqrt{T}}(\Delta_f + \sum_t C_t)\big) when we assume \frac{Lc}{2b_f}\sigma_f^2 = O(1). Conversely, we obtain the convergence analysis of g(x) by exchanging the roles of f and g in Theorem 2.
At the very beginning of the iterations, Δ_g is dominant in Equation (9), and its catastrophic forgetting term C_{t,g}, which involves ∇f_{I_t}(x^t), is relatively small because x^t is in the neighborhood of a stationary point. Under Assumption 2, with samples from the previously learned tasks constantly provided, the norm of the gradients ‖∇f_{I_t}(x^t)‖ is bounded; therefore, g(x) reaches a stationary point at the same rate as SGD. However, the continual learning setting prevents us from accessing the full dataset of the previously learned tasks. There is thus an extra term that interrupts the convergence of g(x), which is the overfitting effect. We ignore this extra term for now and conjecture that ‖∇g_{J_t}(x)‖ is at least bounded. Then we have the following corollary.

Corollary 1. Let the expected stationarity of g(x) be O\big(\frac{\delta}{\sqrt{T}}\big) for a constant δ > 0, and let β > 0 be the upper bound on the learning rate for g(x). The cumulative sum of the catastrophic forgetting terms, C, is O(\beta^2 \delta \sqrt{T}). Nonconvex continual learning by Equation (6) does not converge as the algorithm iterates in the worst case, where \min_t \mathbb{E}\|\nabla f(x^t)\|^2 is O(\beta^2 \delta) for 1 \ll \beta^2 \delta \sqrt{T}. When \beta^2 \delta \le \frac{1}{\sqrt{T}}, we have

\min_t \mathbb{E}\|\nabla f(x^t)\|^2 = O\Big(\frac{1}{\sqrt{T}}\Big). \quad (10)

Then the IFO complexity for achieving an ε-accurate solution of f(x) is O(1/ε²).

We would like to emphasize that catastrophic forgetting is inevitable in the worst-case scenario, because the stationarity of f(x) is non-decreasing and the convergence on f(x) cannot be recovered no matter how long we continue training. Building a tight bound on C is therefore the key to preventing catastrophic forgetting. Note that a generic way to minimize C is to scale down the learning rate β so that β²δ ≤ 1/√T, which gives a decreasing C = O(1/√T). However, this slows the convergence of the current task g(x) and is not an appropriate remedy. The other option is to minimize each C_t itself, rather than tightening the loose upper bound O(β²δ√T). We discuss how to minimize this term by scaling the two learning rates in Section 3.3. The constrained optimization problem of GEM provides a useful rephrased constraint, but it cannot explain or guarantee anything about catastrophic forgetting in the nonconvex setting. Our convergence analysis of continual learning is the first quantitative treatment of catastrophic forgetting from the perspective of nonconvex optimization.
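The update of Equation (6), together with a per-step estimate of the forgetting term C_t from Equation (7), can be sketched in PyTorch as follows. This is a simplified sketch with fixed learning rates: the mini-batch-dependent scaling of α and β that NCCL actually uses (Section 3.3) is not shown, and the memory-batch gradient stands in for the full gradient ∇f(x^t) in the inner product.

```python
import torch

def nccl_step(params, loss_mem, loss_cur, alpha, beta, L_smooth=1.0):
    params = list(params)
    g_f = torch.autograd.grad(loss_mem, params, retain_graph=True)  # grad f_{I_t}
    g_g = torch.autograd.grad(loss_cur, params)                     # grad g_{J_t}

    # Estimate C_t (Eq. 7); a negative inner product <grad f, grad g>
    # signals interference with the previously learned tasks.
    dot = sum((a * b).sum() for a, b in zip(g_f, g_g))
    sq = sum((b * b).sum() for b in g_g)
    c_t = 0.5 * beta**2 * L_smooth * sq - beta * dot

    with torch.no_grad():                                           # Eq. (6)
        for p, a, b in zip(params, g_f, g_g):
            p.sub_(alpha * a + beta * b)
    return c_t.item()
```

Scaling alpha and beta as functions of the measured inner product is then the lever NCCL uses to keep C_t, and hence the bound in Equation (9), small.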
This paper analyses the convergence of episodic-memory-based continual learning methods by treating continual learning as a nonconvex optimisation problem. The authors analyse the convergence rates for the case where all data from past tasks is stored, and then consider the case where only a subset of past data is available, leading to overfitting on the episodic memory. They then introduce a method that scales the learning rates of their update rule, with the goal of tightening the bound obtained in the convergence analysis. Finally, experiments are shown on different benchmarks, and the proposed method is compared to some competing baselines.
SP:f4fc140928d2b4901d76664e62569545c70d8a5e
Control-Aware Representations for Model-based Reinforcement Learning
1 INTRODUCTION. Control of non-linear dynamical systems is a key problem in control theory. Many methods have been developed, with different levels of success, for different classes of such problems. The majority of these methods assume that a model of the system is known and that its underlying state is low-dimensional and observable. These requirements limit the use of such techniques for controlling dynamical systems from high-dimensional raw sensory data (e.g., images), where the system dynamics are unknown, a scenario often encountered in modern reinforcement learning (RL). Recent years have witnessed the rapid development of a large arsenal of model-free RL algorithms, such as DQN (Mnih et al., 2013), TRPO (Schulman et al., 2015), PPO (Schulman et al., 2017), and SAC (Haarnoja et al., 2018), with impressive success in solving high-dimensional control problems. However, most of this success has been limited to simulated environments (e.g., computer games), mainly because these algorithms often require a large number of samples from the environment. This restricts their applicability to real-world physical systems, for which data collection is often a difficult process. On the other hand, model-based RL algorithms, such as PILCO (Deisenroth & Rasmussen, 2011), MBPO (Janner et al., 2019), and Visual Foresight (Ebert et al., 2018), despite their success, still face difficulties in learning a model (dynamics) in a high-dimensional (pixel) space. To address the problems faced by model-free and model-based RL algorithms in solving high-dimensional control problems, a class of algorithms has been developed whose main idea is to first learn a low-dimensional latent (embedding) space and a latent model (dynamics), and then use this model to control the system in the latent space. This class is referred to as learning controllable embedding (LCE) and includes algorithms such as E2C (Watter et al., 2015), RCE (Banijamali et al., 2018), SOLAR (Zhang et al., 2019), PCC (Levine et al., 2020), Dreamer (Hafner et al., 2020a;b), PC3 (Shu et al., 2020), and SLAC (Lee et al., 2020). Two properties are extremely important in designing LCE models and algorithms. The first is to learn a representation that is the most suitable for the control problem at hand; this suggests incorporating the control algorithm into the representation learning process. This view of learning control-aware representations is aligned with the value-aware and policy-aware model learning frameworks, VAML (Farahmand, 2018) and PAML (Abachi et al., 2020), that have recently been proposed in model-based RL. The second is to interleave representation learning and control, updating both with a unifying objective function; this yields an end-to-end framework for representation learning and control. LCE methods such as SOLAR, Dreamer, and SLAC have taken steps towards the second objective by performing representation learning and control in an online fashion, in contrast to offline methods like E2C, RCE, PCC, and PC3, which learn a representation once and then use it for the entire control process. On the other hand, methods like PCC and PC3 address the first objective by adding a term to their representation learning loss function that accounts for the curvature of the latent dynamics.
This term regularizes the representation towards smoother latent dynamics, which are suitable for the locally-linear controllers, e.g., iLQR (Li & Todorov, 2004), used by these methods. In this paper, we take a few steps towards the above two objectives. We first formulate an LCE model to learn representations that are suitable for use by a policy iteration (PI) style algorithm in the latent space. We call this model control-aware representation learning (CARL) and derive a loss function for it that exhibits a close connection to the prediction, consistency, and curvature (PCC) principle for representation learning (Levine et al., 2020). We derive three implementations of CARL: offline, online, and value-guided. Similar to offline LCE methods such as E2C, RCE, PCC, and PC3, in offline CARL we first learn a representation and then use it for the entire control process. However, in offline CARL we replace the locally-linear control algorithm (e.g., iLQR) used by these LCE methods with a PI-style (actor-critic) RL algorithm. Our choice of RL algorithm is the model-based implementation of soft actor-critic (SAC) (Haarnoja et al., 2018). Our experiments show a significant performance improvement from replacing iLQR with SAC. Online CARL is an iterative algorithm in which, at each iteration, we first learn a latent representation by minimizing the CARL loss and then perform several policy updates using SAC in this latent space. Our experiments with online CARL show further performance gains over the offline version. Finally, in value-guided CARL (V-CARL), we optimize a weighted version of the CARL loss function in which the weights depend on the TD-error of the current policy. This further incorporates the control algorithm into the representation learning process. We evaluate the proposed algorithms through extensive experiments on benchmark tasks and compare them with several LCE baselines: PCC, SOLAR, and Dreamer.

2 PROBLEM FORMULATION. We are interested in learning control policies for non-linear dynamical systems where the states s ∈ S ⊆ R^{n_s} are not fully observed and we only have access to their high-dimensional observations x ∈ X ⊆ R^{n_x}, n_x ≫ n_s. This scenario captures many practical applications in which we interact with a system only through high-dimensional sensory signals, such as images and audio. We assume that the observations x have been selected such that we can model the system in the observation space using a Markov decision process (MDP)¹ M_X = ⟨X, A, r, P, γ⟩, where X and A are the observation and action spaces; r: X × A → R is the reward function with maximum value R_max, defined by the designer of the system to achieve the control objective;² P: X × A → P(X) is the unknown transition kernel; and γ ∈ (0, 1) is the discount factor. Our goal is to find a mapping from observations to control signals, µ: X → P(A), with maximum expected return, i.e.,

J(\mu) = \mathbb{E}\Big[ \sum_{t=0}^{\infty} \gamma^t r(x_t, a_t) \mid P, \mu \Big].

Since the observations x are high-dimensional and the observation dynamics P is unknown, solving the control problem in the observation space may not be efficient. As discussed in Section 1, the class of learning controllable embedding (LCE) algorithms addresses this by learning a low-dimensional latent (embedding) space Z ⊆ R^{n_z}, n_z ≪ n_x, together with a latent dynamics, and controlling the system there. The main idea behind LCE is to learn an encoder E: X → P(Z), a latent-space dynamics F: Z × A → P(Z), and a decoder D: Z → P(X),³ such that a good or optimal controller (policy) in Z performs well in the observation space X. This means that if we model the control problem in Z as an MDP M_Z = ⟨Z, A, r̄, F, γ⟩ and solve it using a model-based RL algorithm to obtain a policy π: Z → P(A), then the image of π back in the observation space, i.e., (π ∘ E)(a|x) = ∫_z dE(z|x) π(a|z), should have a high expected return. Thus, the loss function used to learn Z and (E, F, D) from observations {(x_t, a_t, r_t, x_{t+1})} should be designed to comply with this goal. This is why, in this paper, we propose an LCE framework that incorporates the control algorithm used in the latent space into the representation learning process. We call this model control-aware representation learning (CARL). In CARL, we set the class of control (RL) algorithms used in the latent space to approximate policy iteration (PI), and more specifically soft actor-critic (SAC) (Haarnoja et al., 2018). Before describing CARL in detail in the following sections, we present a number of useful definitions and notations.

¹ A method to ensure observations are Markovian is to buffer them for several time steps (Mnih et al., 2013).
² For example, in a goal-tracking problem in which the agent (robot) aims at finding the shortest path to reach the observation goal x_g (the observation corresponding to the goal state s_g), we may define the reward of each observation x as the negative of its distance to x_g, i.e., −‖x − x_g‖₂.
³ Some recent LCE models, such as PC3 (Shu et al., 2020), advocate latent models without a decoder. Although we are aware of the merits of such an approach, we use a decoder in the models proposed in this paper.

Algorithm 1 Latent Space Learning with Policy Iteration (LSLPI)
1: Inputs: E^{(0)}, F^{(0)}, D^{(0)};
2: Initialization: µ^{(0)} = random policy; D ← samples generated from µ^{(0)};
3: for i = 0, 1, ... do
4:   Compute π^{(i)} as the projection of µ^{(i)} into the latent space w.r.t. D_KL(π ∘ E ‖ µ);  # µ^{(i)} ≈ π^{(i)} ∘ E^{(i)}
5:   Compute the value function of π^{(i)} and set V^{(i)} = V_{π^{(i)}};  # policy evaluation (critic)
6:   Compute the greedy policy w.r.t. V^{(i)} and set π^{(i)}_+ = G[V^{(i)}];  # policy improvement (actor)
7:   Set µ^{(i+1)} = π^{(i)}_+ ∘ E^{(i)};  # project the improved policy π^{(i)}_+ back into the observation space
8:   Learn (E^{(i+1)}, F^{(i+1)}, D^{(i+1)}, r̄^{(i+1)}) from D, π^{(i)}, and π^{(i)}_+;  # representation learning
9:   Generate samples D^{(i+1)} = {(x_t, a_t, r_t, x_{t+1})}_{t=1}^{n} from µ^{(i+1)}; D ← D ∪ D^{(i+1)};
10: end for

For any policy µ in X, we define its value function U_µ and Bellman operator T_µ as

U_\mu(x) = \mathbb{E}\Big[\sum_{t=0}^{\infty} \gamma^t r_\mu(x_t) \mid P_\mu, x_0 = x\Big], \qquad T_\mu[U](x) = \mathbb{E}_{x' \sim P_\mu(\cdot \mid x)}\big[r_\mu(x) + \gamma U(x')\big], \quad (1)

for all x ∈ X and U: X → R, where r_µ(x) = ∫_a dµ(a|x) r(x, a) and P_µ(x′|x) = ∫_a dµ(a|x) P(x′|x, a) are the reward function and dynamics induced by µ. Similarly, for any policy π in Z, we define its induced reward function and dynamics as r̄_π(z) = ∫_a dπ(a|z) r̄(z, a) and F_π(z′|z) = ∫_a dπ(a|z) F(z′|z, a). We also define its value function V_π and Bellman operator T_π as

V_\pi(z) = \mathbb{E}\Big[\sum_{t=0}^{\infty} \gamma^t \bar{r}_\pi(z_t) \mid F_\pi, z_0 = z\Big], \qquad T_\pi[V](z) = \mathbb{E}_{z' \sim F_\pi(\cdot \mid z)}\big[\bar{r}_\pi(z) + \gamma V(z')\big]. \quad (2)
For any policy π and value function V in the latent space Z, we denote by π ◦ E and V ◦ E their images in the observation space X, given encoder E, and define them as

(π ◦ E)(a|x) = ∫_z dE(z|x) π(a|z),   (V ◦ E)(x) = ∫_z dE(z|x) V(z).   (3)
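Both projections in Eq. (3) are expectations over the encoder distribution, so they can be estimated by simple Monte Carlo sampling. The encoder_sample, latent_policy, and latent_value callables below are hypothetical interfaces used only for illustration.

import numpy as np

def project_policy(encoder_sample, latent_policy, x, n_samples=16):
    # (pi o E)(a|x): sample z ~ E(.|x) and average the latent action distributions.
    return np.mean([latent_policy(encoder_sample(x)) for _ in range(n_samples)], axis=0)

def project_value(encoder_sample, latent_value, x, n_samples=16):
    # (V o E)(x) = E_{z ~ E(.|x)}[ V(z) ], estimated by Monte Carlo.
    return float(np.mean([latent_value(encoder_sample(x)) for _ in range(n_samples)]))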
This paper aims to address an important question in reinforcement learning: policy learning from high-dimensional sensory observations. The authors propose an algorithm for Learning Controllable Embedding (LCE) based on policy iteration in the latent space. The authors provide a theorem to show how the policy performance in latent-space policy improvement depends on the learned representation and develop three algorithmic variations that attempt to maximize the theoretical lower bounds. In the experiments, the proposed algorithm CARL shows improved performance when compared with other LCE baseline algorithms.
Search Data Structure Learning
1 INTRODUCTION. In many applications, machines need to perform many searches in a gigantic database where the number of relevant documents is minuscule, e.g., ten in a billion. It is like searching for needles in a haystack. In those cases, considering every document is extremely inefficient. For efficiency, the search should not consider the whole database. Traditionally, this is accomplished by building a search data structure and seeking within it. Those data structures can take many forms. For example, there are tree-based structures such as the B-Tree (Bayer & McCreight, 1970), the k-d tree (Friedman et al., 1977), the R-Tree (Guttman, 1984) or the M-Tree (Ciaccia et al., 1997), to name a few. In addition to trees, KNNG (Paredes & Chávez, 2005) builds a graph designed for the k-nearest neighbour search. Later approaches improve on KNNG, both in construction and search time and in the search quality itself; along those lines there are Efanna (Fu & Cai, 2016), HNSW (Malkov & Yashunin, 2018) and ONNG (Iwasaki & Miyazaki, 2018). One of the most common types of search data structures is the hash table. It is so useful that it is implemented natively in programming languages such as Python (with the dictionary type). The hash table is often the main tool an application will use for efficiency. For example, from a short and noisy song sample, Shazam (Wang et al., 2003) can retrieve the whole song by using hash tables filled with well-designed fingerprints of each song. Traditionally, a search data structure was designed for a particular type of search. For example, hash tables can retrieve documents very quickly, even in gigantic databases; however, the query must be equal to the key. This requirement makes the hash table not always applicable. For instance, if the database is indexed by date and time and we seek all documents from a specific day, then it might not be optimal to query every second of that day with an equality search. The B-Tree (Bayer & McCreight, 1970) was introduced precisely for applications where a range search is preferable (and where faster insertion than dichotomic search is needed). Equality and range are far from being the only types of search. For instance, the k-nearest neighbours is another well-studied type of search. Also, subset search is a more exotic example that occurs when queries and documents are sets and a document is relevant if and only if it is a subset of the query. As a final example, the auto-complete function often uses a Trie data structure (De La Briandais, 1959) to suggest the end of each word. It is easy not to realize how the problem of efficiently finding needles in a haystack was solved multiple times for specific applications. This observation is what makes Search Data Structure Learning (SDSL) a significant subject. Machine Learning has been a very flexible paradigm: whether by solving multiple NLP (Natural Language Processing) tasks with a unique Transformer (Vaswani et al., 2017) or solving most Atari games with Reinforcement Learning (Mnih et al., 2013), the capacity of a single learning algorithm to perform on multiple tasks is outstanding. Search Data Structure Learning aims at developing generic learning algorithms meant for multiple types of search. Furthermore, what makes a document relevant need not be described formally or even understood by a human.
It might be k-nearest neighbour with a complex metric or something else altogether; the only thing we need for learning is a dataset. While we use the term "Search Data Structure Learning" for the first time, algorithms that fall into its paradigm already exist. The large video-hosting platform YouTube implements an SDSL algorithm (Covington et al., 2016) for its recommendation system (the user being the query and the videos being the documents). Not having a formalized definition of what makes a document relevant and relying on Machine Learning has its challenges, the most important being the evaluation. Traditional search data structures, such as the hash table, the Trie, and the B-Tree, are exact, meaning the retrieved documents contain all and only the relevant documents. For comparing exact search data structures, when possible, the comparison is made with the asymptotic time complexity (the big-O notation). However, when the search is not exact, it is unclear how to compare structures with different efficiency and exactitude. The precision at Hamming distance of 2 is an attempt to unify those two properties into a single measure specific to the context of binary encoding; however, as described below, it fails in many aspects. It might seem like it is up to the programmer to decide what is more important between the speed and the quality of the retrieved documents. For example, the recall-queries per second plot (Aumüller et al., 2017) helps to visually understand the trade-off between speed and quality. In Section 3, we describe a reliable measure to evaluate simultaneously the efficiency and quality of any search data structure. This metric solidifies the Machine Learning subfield of Search Data Structure Learning. This article presents the SDSL framework, which brings two crucial generalizations w.r.t. its predecessors (Li et al., 2011; Cayton & Dasgupta, 2008). First, it allows for dynamic databases, i.e., databases that might change or evolve after training. For example, it is plausible that a company wants to design and train a search engine ready for distribution to multiple clients without further training on each client's database. The current mindset is to retrain each time a new database is given; however, this is not feasible in many cases. Hopefully, this article motivates research towards models that can generalize to never-seen databases. Secondly, the previous frameworks do not support relative relations, i.e., cases where the relevance of a document w.r.t. a query depends on the other documents in the database. The most studied relative relation is probably the KNN task, which is relative since it is impossible to know whether a document is among the k-nearest neighbours of a query without knowing the other documents. In contrast, radius search is an example of what we call an absolute relation, because it is possible to know whether a document is relevant to a query by looking only at the query-document pair. In this work, however, we did not introduce relative relations only for KNN. Many interesting relative relation tasks exist; for example, another rather exciting relative relation is the multiple supporting facts task: "A harder task is to answer questions where two supporting statements have to be chained to answer the question [...] where to answer the question 'Where is the football?'
one has to combine information from two sentences 'John is in the playground' and 'John picked up the football'" (Weston et al., 2015). In this work, we first introduce a general framework to formalize the SDSL task, in which we present a novel metric to simultaneously evaluate the efficiency and quality of the search. Then, we inaugurate the field of SDSL with Efficient Learnable Binary Access (ELBA) (Section 4), which describes a family of models that use a traditional search data structure and parametric functions (e.g., neural networks) to create discrete binary codes for both the queries and the documents. A reader familiar with the field will appreciate the difficulty that has to be overcome when dealing with (semi-)discrete structures. To instantiate ELBA, we concocted the F-beta Loss used for training and Multi-Bernoulli Search (MBS), a novel SDS technique designed for probabilistic binary codes. Finally, for comparison, we will instantiate ELBA with other loss functions and another SDS, namely the MIHash loss (Cakir et al., 2017), the HashNet loss (Cao et al., 2017), and the Hamming Radius Search (Appendix C.4). We will then experimentally show the advantage of the F-beta Loss and MBS by demonstrating their synergy.

2 RELATED WORK. In data structure terminology, the concepts of dynamic and static structures describe whether or not the structure can change via insertion, deletion or merge. In SDSL, if the database(s) used for training are not the same as the one(s) used for evaluation, then the structure has to search for documents seen only once, at insertion. From a Machine Learning perspective, this is known as a One-Shot Learning task. For example, Matching Network (Vinyals et al., 2016) tries to match never-seen elements together; however, applying their technique would require a database scan, hence it is incompatible with a gigantic database. In the same vein, soft addressing (or attention) is a differentiable mechanism for selecting an element from many, thus compatible with gradient descent. Memory Network (Kumar et al., 2016), Neural Turing Machine (Graves et al., 2014) and Transformer (Vaswani et al., 2017) all use some kind of soft addressing. It is interesting for training our models but cannot be used alone in SDSL, for the same reason as above: it would require considering the whole database. Finding the k-nearest neighbours is trivial with unlimited resources. In this field, the research focuses mainly on the efficiency of both the search and the structure's construction. In higher dimensions, the exact algorithms are no more efficient than a random search due to the curse of dimensionality; consequently, the focus has recently been on approximate k-nearest neighbour search. The search data structures developed are mostly tree-based, such as the k-d tree (Friedman et al., 1977) or the K-Means tree (Nister & Stewenius, 2006), and graph-based, such as KNNG (Paredes & Chávez, 2005), Efanna (Fu & Cai, 2016), HNSW (Malkov & Yashunin, 2018) or ONNG (Iwasaki & Miyazaki, 2018), just to name a few. A good resource for comparing those approaches is the ann-benchmark (Aumüller et al., 2017). In this work, we generalize the problem to conceive algorithms able to learn what to search efficiently. Efficient Learnable Binary Access, described below, encodes queries and documents into binary vectors. In this work, we will use neural networks as the encoders.
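As a minimal illustration of this encode-then-hash idea (and only that — this is not the paper's Multi-Bernoulli Search), a learned encoder can feed a plain hash table by thresholding its outputs into a binary key:

import numpy as np
from collections import defaultdict

def binary_code(encoder, x):
    # Thresholded encoder output -> binary code, packed into a hashable key.
    return (encoder(x) > 0).astype(np.uint8).tobytes()

class BinaryAccessIndex:
    # Minimal sketch of the ELBA idea: a learned encoder (any function
    # R^d -> R^b) in front of an ordinary hash table of buckets.
    def __init__(self, encoder):
        self.encoder = encoder
        self.buckets = defaultdict(list)

    def insert(self, doc_id, doc_vec):
        self.buckets[binary_code(self.encoder, doc_vec)].append(doc_id)

    def candidates(self, query_vec):
        return self.buckets.get(binary_code(self.encoder, query_vec), [])

Retrieval then costs one encoding pass plus one bucket lookup, independent of the database size; the quality of the candidate set rests entirely on the learned encoder.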
Such encoders already exist in the literature: for example, CNNH (Xia et al., 2014), DPSH (Li et al., 2015), DHN (Zhu et al., 2016), GreedyHash (Su et al., 2018), PGDH (Yuan et al., 2018), HashGan (Cao et al., 2018), ADSH (Jiang & Li, 2018) or JMLH (Shen et al., 2019), just to name a few. Below we compare different loss functions: the F-beta Loss (Section 4), the MIHash loss (Cakir et al., 2017) and the HashNet loss (Cao et al., 2017). Graph learning, introduced in Zhu et al. (2003) for semi-supervised learning, is a type of data structure learning that has been shown experimentally to be a strong idea. Those models learn to do inference from graphs, sometimes by generating them first. Some approaches work with static graphs (static structures) (Zhu et al., 2003; Perozzi et al., 2014; Scarselli et al., 2008; Bruna et al., 2013) while others work with dynamic graphs (dynamic structures) (Narayan & Roe, 2018; Manessi et al., 2020). While this literature does not focus on retrieval, these models learn to compute using a data structure. We now contrast SDSL with the Learning to Search framework (Li et al., 2011). As mentioned in the introduction, it supports neither dynamic databases nor relative relations. It is possible to update the framework to deal with dynamic databases by taking an expectation over the databases in the retrieval quality Q(T) and computational cost C(T). However, it is not clear how to deal with relative relations, because the selection function T(q, x) is a "matching function" that does not exist for relative tasks. Generalizing the selection function by allowing it to consider the whole database (i.e., with T(q, X)) does not work, because T(q, X) could use the ranking function s(x, q) on every document and nothing would penalize such exhaustive strategies, since the computational cost is the number of candidates. Nevertheless, this is not the main issue. As with the framework proposed in Cayton & Dasgupta (2008), the computational cost does not consider the retrieval cost but only the size of the candidate set (divided by the number of documents in the database for the latter framework). Those frameworks fail to quantify the work needed to retrieve the candidates. For example, while proposing the Learning to Search framework, the authors relied on timing to evaluate their model. The SDSL framework, proposed below, provides a unique quantity that quantifies simultaneously both the cost of retrieval and the quality of the candidates. Finally, while not introduced as such, an SDSL algorithm is used in NLP. In this field, many articles attempt to accelerate the training and inference of neural network based models, in which the main bottleneck is the normalization over a large vocabulary. Morin & Bengio (2005) use a precomputed tree and train their model to travel from the root to a leaf, where each leaf corresponds to a word. Doing so accelerates both training and inference. Later, Mnih & Hinton (2009) proposed a way to learn the structure of the tree.
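The exact form of the F-beta Loss is not reproduced here; the sketch below is only a generic differentiable F-beta surrogate over per-document match probabilities, a common soft-F construction that conveys the idea of optimizing precision and recall jointly, not necessarily the paper's formulation.

import torch

def soft_fbeta_loss(p, y, beta=1.0, eps=1e-8):
    # p: predicted match probabilities in (0, 1); y: binary relevance labels.
    tp = (p * y).sum()              # soft true positives
    fp = (p * (1 - y)).sum()        # soft false positives
    fn = ((1 - p) * y).sum()        # soft false negatives
    b2 = beta ** 2
    fbeta = (1 + b2) * tp / ((1 + b2) * tp + b2 * fn + fp + eps)
    return 1.0 - fbeta              # minimize 1 - F_beta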
In this paper, the authors propose Search Data Structure Learning (SDSL), which they claim to be a generalization of the standard search data structure setting. They also present a new metric called Sequential Search Work Ratio (SSWR) to evaluate the quality and efficiency of the search. They introduce a new loss called the F-beta Loss, showing their algorithm is better than two previous methods, MIHash (Cakir et al. 2017) and HashNet (Cao et al. 2017).
Rapid Neural Architecture Search by Learning to Generate Graphs from Datasets
1 INTRODUCTION. The rapid progress in the design of neural architectures has largely contributed to the success of deep learning on many applications (Krizhevsky et al., 2012; Cho et al., 2014; He et al., 2016; Szegedy et al.; Vaswani et al., 2017; Zhang et al., 2018). However, due to the vast search space, designing a novel neural architecture requires a time-consuming trial-and-error search by human experts. To tackle such inefficiency in the manual architecture design process, researchers have proposed various Neural Architecture Search (NAS) methods that automatically search for optimal architectures, achieving models with impressive performances on various tasks that outperform human-designed counterparts (Baker et al., 2017; Zoph & Le, 2017; Kandasamy et al., 2018; Liu et al., 2018; Luo et al., 2018; Pham et al., 2018; Liu et al., 2019; Xu et al., 2020; Chen et al., 2021). Recently, large benchmarks for NAS (NAS-101, NAS-201) (Ying et al., 2019; Dong & Yang, 2020) have been introduced, which provide databases of architectures and their performances on benchmark datasets. Yet, most conventional NAS methods cannot benefit from the availability of such databases, due to their task-specific nature which requires repeatedly training the model from scratch for each new dataset (see Figure 1, Left). Thus, searching for an architecture for a new task (dataset) may require a large number of computations, which may be problematic when the time and monetary budget are limited. (∗These authors contributed equally to this work.)

Figure 1: Left: Most conventional NAS approaches need to repeatedly train the NAS model on each given target dataset, which results in enormous total search time on multiple datasets. Middle: We propose a novel NAS framework that generalizes to any new target dataset to generate a specialized neural architecture without additional NAS model training, after only meta-training on the source database. Thus, our approach cuts the search cost of training the NAS model on multiple datasets from O(N) to O(1). Right: For an unseen target dataset, we utilize amortized meta-knowledge represented as set-dependent architecture generative representations.

How can we then exploit the vast knowledge of neural architectures that have already been trained on a large number of datasets, to better generalize over an unseen task? In this paper, we introduce amortized meta-learning for NAS, where the goal is to learn a NAS model that generalizes well over the task distribution, rather than a single task, to transfer the accumulated meta-knowledge to new target tasks. Specifically, we propose an efficient NAS framework that is trained once from a database containing datasets and their corresponding neural architectures and then generalizes to multiple datasets for searching neural architectures, by learning to generate a neural architecture from a given dataset.
The proposed MetaD2A (Meta Dataset-to-Architecture) framework consists of a set encoder and a graph decoder, which are used to learn a cross-modal latent space for datasets and neural architectures via amortized inference. For a new dataset, MetaD2A stochastically generates neural architecture candidates from set-dependent latent representations, which are encoded from the new dataset, and selects the final neural architecture based on the accuracies predicted by a performance predictor, which is also trained with amortized meta-learning. The proposed meta-learning framework reduces the search cost from O(N) to O(1) for multiple datasets since no training is performed on target datasets. After a one-time building cost, our model takes only a few GPU seconds to search for a neural architecture on an unseen dataset (see Figure 1). We meta-learn the proposed MetaD2A on subsets of ImageNet-1K and neural architectures from the NAS-Bench-201 search space. Then we validate it by searching for neural architectures on multiple unseen datasets such as MNIST, SVHN, CIFAR-10, CIFAR-100, Aircraft, and Oxford-IIIT Pets. In this experiment, our meta-learned model obtains a neural architecture within 33 GPU seconds on average without direct training on a target dataset and largely outperforms all baseline NAS models. Further, we compare our model with a representative transferable NAS method (Lu et al., 2020) on the MobileNetV3 search space. We meta-learn our model on subsets of ImageNet-1K and neural architectures from the MobileNetV3 search space. The meta-learned model successfully generalizes, achieving extremely fast search with competitive performance on four unseen datasets: CIFAR-10, CIFAR-100, Aircraft, and Oxford-IIIT Pets. To summarize, our contribution in this work is threefold:
• We propose a novel NAS framework, MetaD2A, which rapidly searches for a neural architecture on a new dataset by sampling architectures from latent embeddings of the given dataset and then selecting the best one based on their predicted performances.
• To this end, we propose to learn a cross-modal latent space of datasets and architectures, by performing amortized meta-learning, using a set encoder and a graph decoder on subsets of ImageNet-1K.
• The meta-learned model successfully searches for neural architectures on multiple unseen datasets and achieves state-of-the-art performance on them in the NAS-Bench-201 search space, searching for architectures within 33 GPU seconds on average.

2 RELATED WORK. Neural Architecture Search (NAS). NAS is an automated architecture search process which aims to overcome the suboptimality of manual architecture designs when exploring the extensive search space. NAS methods can be roughly categorized into reinforcement learning-based methods (Zoph & Le, 2017; Zoph et al., 2018; Pham et al., 2018), evolutionary algorithm-based methods (Real et al., 2019; Lu et al., 2020), and gradient-based methods (Liu et al., 2019; Cai et al., 2019; Luo et al., 2018; Dong & Yang, 2019b; Chen et al., 2021; Xu et al., 2020; Fang et al., 2020). Among existing approaches, perhaps the most relevant to ours is NAO (Luo et al., 2018), which maps DAGs onto a continuous latent embedding space. However, while NAO performs graph reconstruction for a single task, ours generates data-dependent Directed Acyclic Graphs (DAGs) across multiple tasks.
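Putting the framework description above into code, the meta-test search procedure can be sketched as follows. The set_encoder, graph_decoder, and predictor stand in for the meta-trained components, and their interfaces are assumptions made for illustration.

import torch

def metad2a_search(set_encoder, graph_decoder, predictor, dataset, n_candidates=10):
    # Encode the unseen dataset, sample candidate DAGs from the latent space,
    # and return the candidate with the highest predicted accuracy.
    z_mean, z_std = set_encoder(dataset)               # set-dependent latent
    scored = []
    for _ in range(n_candidates):
        z = z_mean + z_std * torch.randn_like(z_std)   # stochastic generation
        graph = graph_decoder(z)                       # candidate architecture
        scored.append((predictor(dataset, graph), graph))
    return max(scored, key=lambda pair: pair[0])[1]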
Another important open problem in NAS is reducing the tremendous computational cost resulting from the large search space (Cai et al., 2019; Liu et al., 2018; Pham et al., 2018; Liu et al., 2019; Chen et al., 2021). GDAS (Dong & Yang, 2019b) tackles this by optimizing sampled sub-graphs of the DAG. PC-DARTS (Xu et al., 2020) reduces GPU overhead and search time by partially selecting channel connections. However, due to the task-specific nature of those methods, they must be retrained from scratch for each new unseen task, and each retraining takes a few GPU hours. The accuracy-predictor-based transferable NAS method NSGANetV2 (Lu et al., 2020) alleviates this issue by adapting the ImageNet-1K pre-trained network to multiple target datasets; however, this method is still expensive due to the adaptation procedure on each dataset. Meta-learning. Meta-learning (learning to learn) aims to train a model to generalize over a distribution of tasks, such that it can rapidly adapt to a new task (Vinyals et al., 2016; Snell et al., 2017; Finn et al., 2017; Nichol et al., 2018; Lee et al., 2019b; Hou et al., 2019). Recently, LEO (Rusu et al., 2019) proposed a scalable meta-learning framework which learns latent generative representations of model parameters for given data in a low-dimensional space for few-shot classification. Similarly to LEO (Rusu et al., 2019), our method learns a low-dimensional latent embedding space, but we learn a cross-modal space for both datasets and models for task-dependent model generation. Neural Architecture Search with Meta-Learning. Recent NAS methods with gradient-based meta-learning (Elsken et al., 2020; Lian et al., 2019; Shaw et al., 2019) have shown promising results in adapting to different tasks. However, they are only applicable to small-scale tasks such as few-shot classification (Elsken et al., 2020; Lian et al., 2019) and require high computation time, due to the multiple unrolled gradient steps for one meta-update per task. Some attempt to bypass this bottleneck with a first-order approximation (Lian et al., 2019; Shaw et al., 2019) or parallel computation on GPUs (Shaw et al., 2019), but their scalability is intrinsically limited due to gradient updates over a large number of tasks. To tackle this scalability issue, we perform amortized inference over the multiple tasks by encoding a dataset into a low-dimensional latent vector and exploit fast GNN propagation instead of expensive gradient updates.

3 METHOD. Our goal is to rapidly output a high-performing neural architecture for a given dataset by learning the prior knowledge contained in a rich database of datasets and their corresponding neural architectures. To this end, we propose the Meta Dataset-to-Architecture (MetaD2A) framework, which learns the cross-modal latent space of datasets and their neural architectures. Further, we introduce a meta-performance predictor, which predicts the accuracies of given architectures without training the predictor on an unseen target dataset. An overview of the proposed approach is illustrated in Figure 1.

3.1 META-TRAINING NAS MODEL. To formally define the problem, let us assume that we have a source database of Nτ tasks, where each task τ = {D, G, s} consists of a dataset D, a neural architecture represented as a Directed Acyclic Graph (DAG) G, and an accuracy s obtained from the neural architecture G trained on D.
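For concreteness, each record of the source database can be represented as a simple typed triple; the field names below are illustrative, not the paper's data format.

from typing import NamedTuple, Any

class Task(NamedTuple):
    dataset: Any        # D: a (sub)set of labelled examples
    graph: Any          # G: the architecture as a DAG
    accuracy: float     # s: accuracy of G trained on D

source_db: list = []    # one Task record per meta-training task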
In the meta-training phase, both the dataset-to-architecture generator and the meta-predictor learn to generalize over the task distribution p(τ) using the source database. We describe how to empirically construct the source database in Section 4.1.1.

3.1.1 LEARNING TO GENERATE GRAPHS FROM DATASETS. We propose a dataset-to-architecture generator which takes a dataset and then generates high-quality architecture candidates for it. We want the generator to generate even novel architectures, which are not contained in the source database, at meta-test time. Thus, the generator learns a continuous cross-modal latent space Z of datasets and neural architectures from the source database. For each task τ, the generator encodes the dataset D as a vector z through the set encoder qφ(z|D) parameterized by φ, and then decodes a new graph G̃ from z, sampled from the prior p(z), using the graph decoder pθ(G|z) parameterized by θ. Our goal is for G̃ generated from D to match the true G paired with D. We meta-learn the generator using set-amortized inference, by maximizing the approximated evidence lower bound (ELBO) as follows:

max_{φ,θ} ∑_{τ∼p(τ)} L^τ_{φ,θ}(D, G)   (1)

where

L^τ_{φ,θ}(D, G) = E_{z∼qφ(z|D)}[ log pθ(G|z) ] − λ · L^τ_KL[ qφ(z|D) ‖ p(z) ]   (2)

Each dimension of the prior p(z) factorizes into N(0, 1). L^τ_KL is the KL divergence between two multivariate Gaussian distributions, which has a simple closed form (Kingma & Welling, 2014), and λ is a scalar weighting value. Using the reparameterization trick on z, we optimize the above objective by stochastic gradient variational Bayes (Kingma & Welling, 2014). We use the set encoder described in Section 3.1.3 and adopt a Graph Neural Network (GNN)-based decoder for directed acyclic graphs (DAGs) (Zhang et al., 2019), which allows message passing to happen only along the topological order of the DAGs. For detailed descriptions of the generator, see Section A of the Supplementary.
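A minimal training-step sketch of Eqs. (1)-(2) follows, assuming a set encoder that returns a diagonal Gaussian (mean and log-variance) and a graph decoder exposing a log_prob method; both interfaces are assumptions made for illustration.

import torch

def elbo_step(set_encoder, graph_decoder, optimizer, dataset, graph, lam=1.0):
    # One amortized-inference update: reconstruction term minus weighted KL
    # to the standard normal prior, as in Eq. (2).
    mu, logvar = set_encoder(dataset)                             # q_phi(z|D)
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)       # reparameterize
    recon_logprob = graph_decoder.log_prob(graph, z)              # log p_theta(G|z)
    kl = 0.5 * torch.sum(mu ** 2 + logvar.exp() - 1.0 - logvar)   # KL(q || N(0, I))
    loss = -(recon_logprob - lam * kl)                            # negative ELBO
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()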
The authors address neural architecture search (NAS) scenarios. In particular, a framework, MetaD2A, is proposed, which yields a neural architecture for a new dataset. In a nutshell, the framework learns a "dataset-to-neural-network-architecture" transformation using a database of datasets and architectures. Each dataset is encoded via a "set encoder" and the architectures are obtained via a "graph decoder". The experiments demonstrate the usefulness of the approach and its improvements over conventional NAS approaches.
To Learn Effective Features: Understanding the Task-Specific Adaptation of MAML
1 INTRODUCTION. Few-shot learning, aiming to learn from few labelled examples, is a great challenge for modern machine learning systems. Meta learning, an effective way of tackling this challenge, enables the model to learn general knowledge across a distribution of tasks. Various ideas of meta learning have been proposed to address few-shot problems. Gradient-based meta learning (Finn et al. (2017); Nichol et al. (2018)) learns meta-parameters that can be quickly adapted to new tasks by a few gradient descent steps. Metric-based meta learning (Koch et al. (2015); Vinyals et al. (2016); Snell et al. (2017)) proposes to learn a metric space by comparing different datapoints. Memory-based meta learning (Santoro et al. (2016)) can rapidly assimilate new data and leverage the stored information to make predictions. Model Agnostic Meta-Learning (MAML) (Finn et al. (2017)) is one of the most well-known gradient-based meta learning algorithms; it learns the meta-initialization parameters through an inner optimization loop and an outer optimization loop. For a given task, the inner loop performs fast adaptation in several gradient descent steps with the support datapoints, while the outer loop generalizes the updated model to the query datapoints. With the learned meta-initialization, the model can be quickly adapted to unseen tasks with few labelled samples. Following the MAML algorithm, many significant variants (Finn et al. (2018); Rusu et al. (2018); Oreshkin et al. (2018); Bertinetto et al. (2018); Lee et al. (2019b)) have been studied under the few-shot setting. To understand how MAML works, Raghu et al. (2019) conduct a series of experiments and claim that, rather than rapid learning and adaptation, the learned meta-initialization has already absorbed a high-quality feature prior, so the representations after fine-tuning are almost the same for unseen tasks. Also, the task-specific head of MAML at training facilitates the learning of better features. In this paper, we design more representative experiments and present a formal argument to explain the importance of the task-specific adaptation. In fact, the multi-step task-specific adaptation, which makes the body and head have similar classification capabilities, can provide a better gradient descent direction for the feature learning of the body. We also notice that for both the gradient-based methods (e.g., MAML (Finn et al. (2017)), MetaOptNet (Lee et al. (2019b))) and the metric-based methods (e.g., Prototypical Networks (Snell et al. (2017))) that attempt to learn a task-specific head using the support datapoints, the adaptation is a common mode for feature learning of the body, though it varies across methods. Based on our analysis, we first propose a new training paradigm that finds a decision plane (linear classifier) for guidance with no gradient descent step during the inner loop, and we obtain more supporting conclusions. Moreover, we devise another training paradigm that removes the inner loop and trains the model with only the query datapoints. Specifically, inspired by contrastive representation learning (Oord et al. (2018); Chen et al. (2020); He et al. (2020)), we exploit the inter-sample relationships of the query set to find a guidance for the body across different tasks.
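For reference, the contrastive objectives cited above are typically instances of InfoNCE; a generic InfoNCE loss over query-set embeddings looks as follows. This is the standard contrastive objective, not necessarily the paper's exact MCL loss.

import torch
import torch.nn.functional as F

def info_nce(anchors, positives, temperature=0.1):
    # anchors, positives: (n, d) embeddings; row i of each forms a positive pair.
    a = F.normalize(anchors, dim=1)
    p = F.normalize(positives, dim=1)
    logits = a @ p.T / temperature        # (n, n) similarity matrix
    targets = torch.arange(a.size(0))     # the diagonal entries are the positives
    return F.cross_entropy(logits, targets)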
This meta contrastive learning algorithm even achieves results comparable to some state-of-the-art methods. In total, our contributions can be listed as follows:
1. We present sufficient experiments and a formal argument to explore the impact of the task-specific adaptation on body feature learning and discuss the general formula for other gradient-based and metric-based meta-learning approaches.
2. We devise a training algorithm that obtains a decision plane with no gradient descent step during the inner loop, named Random Decision Planes (RDP), and obtain more supporting conclusions.
3. Unlike prior gradient-based methods, we propose the Meta Contrastive Learning (MCL) algorithm, which exploits inter-sample relations instead of training a task-specific head during the inner loop. Even without the task-specific adaptation for guidance, our algorithm still achieves better results with even less computation cost.
4. We empirically show the effectiveness of the proposed algorithms with different backbones on four benchmark datasets: miniImageNet (Vinyals et al. (2016)), tieredImageNet (Ren et al. (2018)), CIFAR-FS (Bertinetto et al. (2018)) and FC100 (Oreshkin et al. (2018)).

2 RELATED WORKS. MAML (Finn et al. (2017)) is a highly influential gradient-based meta learning algorithm for few-shot learning. Its impressive experimental results on several public few-shot datasets have proved its effectiveness. Following the core idea of MAML, numerous works handle the data insufficiency problem in few-shot learning. Some works (Oreshkin et al. (2018); Vuorio et al. (2019)) introduce task-dependent representations by conditioning the feature extractor on the specific task to improve performance. Sun et al. (2019) also employ meta-learned scaling and shifting parameters for transferring from another large-scale dataset. Others (Grant et al. (2018); Finn et al. (2018); Lee et al. (2019a)) study this problem from a Bayesian perspective. Unlike prior methods, we provide two training paradigms, one with no gradient descent step during the inner loop and another removing the inner loop and exploiting inter-sample relations for training. Recent works also explore the key factors that make meta-learned models perform better than others at few-shot tasks. Chen et al. (2019) discover that a deeper backbone has a large effect on the success of meta learning algorithms, while Goldblum et al. (2020) find that meta learning tends to cluster object classes more tightly in feature space for methods that fix the backbone during the inner loop (Bertinetto et al. (2018); Rusu et al. (2018)). A very recent work (Raghu et al. (2019)) argues that the meta-trained model can be applied to new tasks due to the high-quality feature prior learned by the meta-initialized parameters rather than rapid learning. In this paper, we further study the impact of the task-specific adaptation on feature learning. Based on the analysis, we devise two algorithms, Random Decision Planes (RDP) and Meta Contrastive Learning (MCL), that require less computation while retaining competitive performance.

3 MODEL-AGNOSTIC META LEARNING (MAML). MAML aims to learn the meta-initialized parameters θ for coming unseen tasks through an inner optimization loop and an outer optimization loop.
Under the N-way-K-shot setting, for a task T_b sampled from the task distribution P(T), we have a support set T_b^s of N × K examples and a query set T_b^q, where N is the number of sampled classes and K is the number of instances per class. During the inner loop, with the support set T_b^s, we perform fast adaptation in several gradient descent steps and obtain the task-specific parameters θ_{T_b}^t, where t is the number of gradient descent steps, given by:

θ_{T_b}^t = θ_{T_b}^{t−1} − α ∇_{θ_{T_b}^{t−1}} L_{T_b^s}(θ_{T_b}^{t−1})   (1)

where α is the step size of the inner loop and L_{T_b^s}(θ_{T_b}^{t−1}) denotes the loss on the support set T_b^s after t − 1 steps. With the query set T_b^q, we compute the meta loss on the task-specific parameters θ_{T_b}^t and backpropagate to update the meta-initialized parameters θ, given by

θ = θ − β ∇_θ (1/B) ∑_{b=1}^B L_{T_b^q}(θ_{T_b}^t)   (2)

where β is the learning rate and B is the number of sampled tasks in a batch.

4 IMPACT OF TASK-SPECIFIC ADAPTATION.

4.1 THE MULTI-STEP TASK-SPECIFIC ADAPTATION IS IMPORTANT. To explore the effectiveness of MAML, Raghu et al. (2019) have conducted extensive experiments, indicating that the network body (the representation layers) has already absorbed a high-quality feature prior. During meta-testing, instead of fine-tuning the network head (the classifier), simply building the prototypes with the support set can achieve performance comparable to MAML. Raghu et al. (2019) also show that the task specificity of the head at training can facilitate feature learning and ensure good representation learning in the network body. In our work, we show that besides the task specificity of the head, the multi-step adaptation is also essential, and we further study the role of the network body and head during meta-training. We devise several methods using different training regimes: (1) Multi-Task, where all tasks simply share one common head and the model is trained in a traditional way without inner loop adaptation; (2) Multi-Head, where different tasks are equipped with different heads for task specificity and the model is trained in a traditional way without inner loop adaptation; (3) Almost No Inner Loop (ANIL), where the network body is fixed during the inner loop; (4) Body Outer Loop, Head Inner Loop (BOHI), where the network body is updated only by the outer loop and the head is adapted only during the inner loop, keeping the head's meta-initialized parameters unchanged. More algorithmic details can be found in Appendix B, and implementation details in Appendix C.1. Following Raghu et al. (2019), we employ the cosine similarities between prototypes and the query datapoints to evaluate the quality of the learned features. As Table 1 shows, even equipped with a task-specific head, Multi-Head training still performs worse than the standard MAML algorithm by a large margin, indicating that the multi-step adaptation of MAML is helpful for feature learning. The results of Multi-Head and Multi-Task show the importance of multi-step task-specific adaptation. As the results in Table 1 show, ANIL training remains as effective as the standard MAML algorithm, indicating that the task-specific adaptation of the network body is unnecessary for learning good features.
More interestingly, BOHI training, which keeps the meta-initialization of the head unchanged, even performs better than MAML, further demonstrating that good feature learning depends more on the multi-step task-specific adaptation of the head during the inner loop than on updating the meta-initialization of the head in the outer loop. Also, ANIL and BOHI have similar performance, indicating that compared with the learned prior knowledge in the head, the inner loop adaptation, as a guidance, contributes more to feature learning. More experimental results can be found in Appendix C.2.

4.2 WHY IS MULTI-STEP TASK-SPECIFIC ADAPTATION IMPORTANT? Having observed that the MAML algorithm outperforms Multi-Task training by a large margin and that the multi-step task-specific adaptation is important for feature learning, we extend our analysis to explore why the inner loop adaptation is essential for MAML at different stages of meta-training. Specifically, we freeze the initialized MAML model and the model at 5,000 iterations, sample validation tasks from the task distribution, and record the test accuracy of the model at different inner loop steps. Both the body accuracy based on prototype construction and the head accuracy based on fine-tuning are given in Figure 1 and Figure 2, where "Task ID" stands for different tasks. As the results show, at different stages of meta-training, the head accuracy increases significantly in the first few adaptation steps since the model has learnt the correspondence between sample and label. However, at the beginning of training, there is only a small improvement in the body accuracy after the first adaptation step. In Figure 2, as the model converges, the body accuracy even decreases in the first few adaptation steps. In the following steps, with the task-specific adaptation of the head, the network body then learns better representations, further demonstrating that the multi-step task-specific adaptation, making the body and head have similar classification capabilities, can be regarded as a guidance that provides a better gradient descent direction for the feature learning of the body.

Algorithm 1 The Random Decision Planes (RDP) Algorithm for N-way-K-shot learning
Input: Network Body f_θ, Learning Rate β, Task Distribution P(T)
Perform the Gram-Schmidt method on random matrices to get the classifier set P = {W_i}_{i=1}^{n_p}
while not done do
  Sample a batch of tasks {T_b}_{b=1}^B, where T_b ∼ P(T)
  for b ∈ {1, ..., B} do
    Sample the support set T_b^s = {(x_i^s, y_i^s)}_{i=1}^{N×K} and query set T_b^q = {(x_i^q, y_i^q)}_{i=1}^{N×K} from task T_b
    For each sample x in {T_b^s, T_b^q}: z = ‖f_θ(x)‖
    Define CrossEntropyLoss(H, D) as the cross entropy loss on the feature representation set D with head H
    W* = argmin_{W∈P} CrossEntropyLoss(W, {(z_i^s, y_i^s)}_{i=1}^{N×K})
    L_b = CrossEntropyLoss(W*, {(z_i^q, y_i^q)}_{i=1}^{N×K})
  end for
  θ = θ − β ∇_θ (1/B) ∑_{b=1}^B L_b
end while

To understand this intuitive argument better, we consider a sample (x, y) for few-shot classification where the cross entropy loss is employed, formulated as:

L_c = −log( exp(w_y^⊤ h) / ∑_k exp(w_k^⊤ h) ) = −w_y^⊤ h + log( ∑_k exp(w_k^⊤ h) )   (3)

where {w_1, w_2, ..., w_k} are the weights of the classifier head and h is the body representation of x.
The gradient of the loss L_c with respect to the body representation h is

∂L_c/∂h = −w_y + ( ∑_k w_k exp(w_k^⊤ h) ) / ( ∑_k exp(w_k^⊤ h) ) = −w_y + w̄   (4)

where w̄ is exactly the softmax-weighted average of the weights {w_1, w_2, ..., w_k}. As shown in Equation 4, a reasonable direction for the network body to minimize the target loss L_c is to make the representation h closer to the corresponding class weight w_y, given by h = h + λ(w_y − w̄). As the model converges, in the first few adaptation steps there is a significant margin between the performance of the head and the body, and the classifier weights contain little knowledge about the correspondence between samples and labels or the differences between classes. With a low-performance head, this updating rule for the body may lead to a decline in the quality of the features, which also explains why the simpler BOHI and ANIL even perform better than MAML in Table 1. After several adaptation steps during the inner loop, the body then receives useful guidance for feature learning from the task-specific head, since w_y can better express its corresponding class. The formulation above shows that the multi-step task-specific adaptation, making the body and head have similar classification capabilities, can provide a better gradient descent direction for the feature learning of the body.
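Equation (4) is easy to verify numerically: the cross-entropy gradient with respect to h equals the softmax-weighted average of the classifier weights minus w_y. A small PyTorch check (all names here are illustrative):

import torch

torch.manual_seed(0)
W = torch.randn(5, 8)                       # classifier weights w_1..w_5
h = torch.randn(8, requires_grad=True)      # body representation
y = 2
loss = torch.nn.functional.cross_entropy((W @ h).unsqueeze(0), torch.tensor([y]))
loss.backward()
probs = torch.softmax((W @ h).detach(), dim=0)
w_bar = probs @ W                           # softmax-weighted average of the w_k
print(torch.allclose(h.grad, -W[y] + w_bar, atol=1e-6))   # prints True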
In this paper, the authors investigate the inner-loop optimization mechanism of meta-learning algorithms. The analysis shows the effectiveness of the multi-step adaptation and that the key to meta-learning is designing a well-differentiated classifier. They then propose Random Decision Planes (RDP) and Meta Contrastive Learning (MCL), which achieve performance comparable to existing methods.
Interpretable Meta-Reinforcement Learning with Actor-Critic Method
1 INTRODUCTION. Reinforcement learning problems have been studied for a long time, and many impressive works have achieved human-level control in real-world tasks (Mnih et al., 2013; Silver et al., 2017; Vinyals et al., 2019; Schrittwieser et al., 2019). These agents are trained separately on each task and may require huge amounts of sampled data and millions of trials. However, in many real-world tasks, the cost of sampling data is not negligible, so we cannot give the agent a large number of trials in the environment. In contrast, humans can leverage past experiences and learn new tasks quickly in a few trials, which is very efficient. Many tasks in fact share similar structures that can be extracted as prior knowledge, e.g., shooting games all aim to eliminate enemies with weapons in different environments, which can help an agent generalize quickly across different tasks. Meta-learning (Thrun & Pratt, 2012) reinforcement learning tasks can therefore be a suitable choice. Meta-reinforcement learning (meta-RL) aims to learn a policy that can adapt to an unknown environment within a few interactions with that environment. A meta-policy can be seen as a policy from which a new policy can be derived that maximizes performance in the new environment. Gradient-based algorithms in meta-RL (Finn et al., 2017; Stadie et al., 2018; Rothfuss et al., 2018; Liu et al., 2019) showed that a meta-policy can be obtained by reinforcement learning a policy adapted by a few reinforcement learning steps. The experimental results suggest that gradient-based methods can learn to sample and utilize sampled data to some extent. Nevertheless, the learning style and the learned meta-policy are still far from human. Humans learn a new task by interacting with it sequentially and efficiently. While obtaining environment data, humans gradually understand where to sample data and how to utilize the sampled data to adjust their policy, whereas gradient-based algorithms use parallel sampling, neglecting the relations between data. Sampling independently is not data-efficient and usually needs a number of stochastic trajectories to perform policy adaptation. This causes the agent to rely on stochasticity to sample and to learn only how to utilize data. Inspired by human behavior, we propose a K-shot meta-RL problem that constrains the amount of data accessed by the agent, e.g., adapting the policy within only two trials. A low-resource environment simulates real-world tasks with a high cost of data acquisition, and therefore requires the agent to learn a stable strategy for exploring the environment. To address the K-shot problem, we also propose a contextual gradient-based algorithm using the actor-critic method. The adaptation step uses a trial buffer D to store all the transitions from K-shot sampling and optimizes the expected value for the states in D. The meta-learning step optimizes the expected return of the adapted policy while learning the value functions and context encoder using soft actor-critic (Haarnoja et al., 2018) objectives. We learn the policy with a reparameterized objective that derives an unbiased meta-gradient estimate and reduces the estimation variance of the Q-value. Our contributions can be summarized as follows:
• We reformulate and propose the K-shot meta-RL problem to simulate real-world environments.
• We propose a new gradient-based objective to address the K-shot problem.
• We introduce context-based policy and value functions to perform efficient data sampling.
• We use the actor-critic method to reduce the variance and bias of the estimation of the Q-value and meta-gradient.

2 RELATED WORK. Meta-reinforcement learning algorithms mainly fall into three categories: gradient-based methods (Finn et al., 2017; Stadie et al., 2018; Rothfuss et al., 2018; Liu et al., 2019; Nichol et al., 2018), recurrent meta-learners (Wang et al., 2016; Duan et al., 2016), and multi-task learners (Fakoor et al., 2019; Rakelly et al., 2019). Gradient-based algorithms like MAML (Finn et al., 2017) optimize the policy updated by one step of reinforcement learning, aiming at learning a good initialization of the policy weights. E-MAML (Stadie et al., 2018) considers the impact that the data obtained by the meta-policy has on the adapted policy's performance and assigns credit to the meta-policy, while ProMP (Rothfuss et al., 2018) modifies the adaptation gradient estimator to have low variance in the second-order gradient. Recurrent meta-learners (Wang et al., 2016; Duan et al., 2016) use an RNN as a meta-learner that can learn a new task from environment data while exploring. The RNN learners are optimized end-to-end with sequentially performed episodes, which is more similar to the learning process of humans and more interpretable as a meta-policy. Multi-task learners (Fakoor et al., 2019; Rakelly et al., 2019) aim at learning a multi-task objective to solve meta-learning problems. They argue that meta-learning can be done by explicitly reusing the learned features through a context variable. MQL (Fakoor et al., 2019) can even perform well without adaptation. PEARL (Rakelly et al., 2019) constructs a context encoder to infer the latent task variable and also learns a multi-task objective; the trained policy can perform structured exploration by inferring the task while interacting with the environment. Our approach is closely related to the gradient-based line of research, which also tries to reduce the estimation variance and bias of the second-order gradient; however, we estimate the second-order gradient with value functions, and we still aim to perform structured exploration in data-expensive environments.

3 BACKGROUND. This section focuses on the problem definition and notation of reinforcement learning and meta-reinforcement learning problems.

3.1 REINFORCEMENT LEARNING. Reinforcement learning (RL) problems aim to maximize the expectation of episode returns

E_{τ∼P(τ|θ)}[R(τ)] = E_{τ∼P(τ|θ)}[ ∑_t γ^t r(s_t, a_t) ]   (1)

with a single task and agent, where τ = {s_0, a_0, r_0, ...} is the trajectory performed by the agent, s_0 ∼ ρ_0 is the initial state, a_t ∼ π_θ(a_t|s_t) is the action sampled from the policy π parameterized by θ, s_{t+1} ∼ P(s_{t+1}|a_t, s_t) is the state at timestep t+1, and P(s_{t+1}|a_t, s_t) is the transition probability. The problem can be represented by a Markov Decision Process (MDP) with tuple M = (S, A, P, R, ρ_0, γ, H), where S ⊆ R^n is the set of states, A ⊆ R^m is the set of actions, P(s′|s, a) ∈ R_+ is the system transition probability, R(s, a) ∈ R is the reward function of the task, and H is the horizon. Optimizing (1) usually uses gradient descent, and the gradient is estimated using the vanilla policy gradient (VPG) estimator (Williams, 1992):

∇_θ E_{τ∼P(τ|θ)}[R(τ)] = E_{τ∼P(τ|θ)}[ ∇_θ log π(τ) R(τ) ] ≈ (1/N) ∑_i ∑_t ∇_θ log π_θ(a_{i,t}|s_{i,t}) ( ∑_{t′=t}^H R(s_{i,t′}, a_{i,t′}) )   (2)

3.2 GRADIENT-BASED META-REINFORCEMENT LEARNING.
Meta-reinforcement learning (meta-RL) aims to learn a fast adaptation procedure that can leverage the prior knowledge learned from training tasks and adapt to new tasks within a few steps. A task T in meta-RL can also be defined by an MDP M_T = (S, A, P_T, R_T, ρ_0, γ, H). The task is drawn from a distribution T ∼ P(T); for simplicity, we only consider tasks with different reward functions or system transitions but the same state and action spaces. Gradient-based meta-RL algorithms (Finn et al., 2017; Stadie et al., 2018) are mainly based on the basic meta-objective (Rothfuss et al., 2018)

J(θ) = E_{T∼P(T)}[ E_{τ′∼P_T(τ′|θ′)}[R(τ′)] ],   θ′ = U(θ, T) = θ + α ∇_θ E_{τ∼P_T(τ|θ)}[R(τ)],   (3)

where θ are the weights of the meta-policy and θ′ are the adapted weights after one step of gradient ascent. The meta-objective J(θ) optimizes the expectation of episode returns sampled from the adapted policy π_{θ′}. The meta-gradient can be written as

∇_θ J(θ) = E_{T∼P(T)}[ E_{τ′∼P_T(τ′|θ′)}[ ∇_{θ′} log P_T(τ′|θ′) R(τ′) ∇_θ θ′ ] ],   ∇_θ θ′ = I + α ∇²_θ E_{τ∼P_T(τ|θ)}[R(τ)]   (4)

4 METHOD.

4.1 REFORMULATING THE META-REINFORCEMENT LEARNING PROBLEM. Different tasks have different features in their MDPs; a task can often be inferred from a few important states and transitions in the environment, e.g., different friction coefficients on the floor, different rewards for the same state and action, or states that only exist in certain environments. We call these states and transitions the feature points of the environment. Humans usually learn a task sequentially and efficiently, since they can easily recognize the feature points in an environment. The exploration policy of a human changes significantly after obtaining data from the environment, so they can decide where to explore and learn a task quickly. However, as in formula (3), the fast adaptation U(θ, T) usually refers to a few gradient descent steps on the initial weights θ and, unlike humans, the update is performed in a batched style as in standard reinforcement learning. Batched sampling usually involves a large number of trajectories sampled in parallel, which can be inefficient for inferring the task. E-MAML (Stadie et al., 2018) also tried to improve the sampling efficiency of the meta-policy by accounting for the fact that samples drawn from the meta-policy will impact the adapted policy. Inspired by the learning procedure of humans, we reformulate the meta-RL problem as K-shot meta-reinforcement learning. Definition. Given a task T ∼ P(T), the agent samples data in the trial phase and performs a good policy in the test phase. In the trial phase, the agent can only sequentially sample K trajectories in total to adjust its policy, with each trajectory of length H. In the test phase, the agent is required to perform only one trajectory and make the return as high as possible. The K-shot meta-RL problem defined above constrains the amount of data that can be accessed by the agent, and is more similar to real-world meta-RL problems, e.g., Super Mario Maker. In the K-shot setting, the meta-policy can still be updated using U(θ, T) with batched trajectories, since they can be seen as sampled independently in sequence. However, the variance of the gradient estimation grows as K decreases, which means the performance becomes more unstable. To optimize the problem, we propose a new meta-objective

J_{K-shot}(θ) = E_{T∼P(T)}[ E_{τ′∼P_T(τ′|θ′)}[R(τ′)] ],   θ′ = U(θ, D) = θ + α ∇_θ E_{s∼D}[ V^π(s|c) ]   (5)

for the K-shot setting.
Here D is the state buffer sampled by the meta-policy in the trial phase, and V^π(s|c) is the expected return of policy π at state s under context c (see Section 4.2 for details). The state buffer D contains K × H states, as described in the definition, which means the agent can only use a few states to update its policy. Due to the constraint on available environment information, the agent is encouraged to learn to explore the more important states that help it perform well in the test phase.
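The adaptation step U(θ, D) in Eq. (5) is then a single gradient ascent step on the average state value over the trial buffer. A minimal sketch follows, where value_fn(theta, s, c) is a placeholder for the learned critic V^π(s|c) and theta is a list of tensors with requires_grad=True; these interfaces are assumptions for illustration.

import torch

def kshot_adapt(theta, value_fn, state_buffer, context, alpha=0.01):
    # theta' = theta + alpha * grad_theta E_{s ~ D}[ V^pi(s|c) ]  (Eq. 5)
    v = torch.stack([value_fn(theta, s, context) for s in state_buffer]).mean()
    grads = torch.autograd.grad(v, theta)
    return [p + alpha * g for p, g in zip(theta, grads)]  # gradient *ascent* on value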
The authors introduce a new meta-RL algorithm based on SAC. It uses a context variable $c$ that the Q-function is conditioned on, and an adaptation mechanism based on the values of the value function (i.e., $\mathbb{E}_a Q(\cdot, a)$) instead of the true returns. The authors claim their method reduces the variance and bias of the meta-gradient estimation, is closer to human learning, encourages the agent to learn to explore, is more data-efficient at test time, and has competitive performance among gradient-based algorithms.
UPDeT: Universal Multi-agent RL via Policy Decoupling with Transformers
1 INTRODUCTION. Reinforcement Learning (RL) provides a framework for decision-making problems in an interactive environment, with applications including robotics control (Hester et al. (2010)), video gaming (Mnih et al. (2015)), auto-driving (Bojarski et al. (2016)), person search (Chang et al. (2018)) and vision-language navigation (Zhu et al. (2020)). Cooperative multi-agent reinforcement learning (MARL), a long-standing problem in the RL context, involves organizing multiple agents to achieve a goal, and is thus a key tool for addressing many real-world problems, such as mastering multi-player video games (Peng et al. (2017)) and studying population dynamics (Yang et al. (2017)). A number of methods have been proposed that exploit an action-value function to learn a multi-agent model (Sunehag et al. (2017), Rashid et al. (2018), Du et al. (2019), Mahajan et al. (2019), Hostallero et al. (2019), Zhou et al. (2020), Yang et al. (2020)). However, current methods have poor representation learning ability and fail to exploit the common structure underlying the tasks; this is because they tend to treat observations from different entities in the environment as an integral part of the whole. Accordingly, they give tacit support to the assumption that neural networks are able to automatically decouple the observation to find the best mapping between the whole observation and the policy. Adopting this approach means that they treat all information from other agents or different parts of the environment in the same way. The most commonly used method involves concatenating the observations from each entity into a vector that is used as input (Rashid et al. (2018), Du et al. (2019), Zhou et al. (2020)). In addition, current methods ignore the rich physical meanings behind each action. Multi-agent tasks feature a close relationship between the observation and the output. If the model does not decouple the observations from the different agents, individual functions may be misguided and impede the centralized value function. Worse yet, conventional models require the input and output dimensions to be fixed (Shao et al. (2018), Wang et al. (2020)), which makes zero-shot transfer impossible. Thus, the applicability of current methods is limited in real-world settings. Our solution to these problems is to develop a multi-agent reinforcement learning (MARL) framework with no limitation on input or output dimension. Moreover, this model should be general enough to be applicable to any existing MARL method. More importantly, the model should be explainable and capable of providing further improvement in both the final performance on single-task scenarios and the transfer capability on multi-task scenarios. Inspired by the self-attention mechanism (Vaswani et al. (2017)), we propose a transformer-based MARL framework, named Universal Policy Decoupling Transformer (UPDeT). There are four key advantages to this approach: 1) once trained, it can be universally deployed; 2) it provides a more robust representation with a policy decoupling strategy; 3) it is more explainable; 4) it is general enough to be applied to any MARL model. We further design a transformer-based function to handle various observation sizes by treating individual observations as "observation-entities".
We match each observation-entity with an action-group by separating the action space into several action-groups with reference to the corresponding observation-entity, giving us a set of matched observation-entity/action-group pairs. We further use a self-attention mechanism to learn the relationship between the matched observation-entity and the other observation-entities. Through the self-attention map and the embedding of each observation-entity, UPDeT can optimize the policy at the action-group level. We refer to this strategy as Policy Decoupling. By combining the transformer and the policy decoupling strategy, UPDeT significantly outperforms conventional RNN-based models. In UPDeT, there is no need to introduce any new parameters for new tasks. We also prove that it is only with a decoupled policy and matched observation-entity/action-group pairs that UPDeT can learn a strong representation with high transfer capability. Finally, our proposed UPDeT can be plugged into any existing method with almost no changes to the framework architecture required, while still bringing significant improvements to the final performance, especially in hard and complex multi-agent tasks. The main contributions of this work are as follows: First, our UPDeT-based MARL framework outperforms RNN-based frameworks by a large margin in terms of final performance with state-of-the-art centralized functions. Second, our model has strong transfer capability and can handle a number of different tasks at a time. Third, our model accelerates transfer learning (in total steps cost), making it roughly 10 times faster than RNN-based models in most scenarios.

2 RELATED WORK

Attention mechanisms have become an integral part of models that capture global dependencies. In particular, self-attention (Parikh et al. (2016)) calculates the response at a specific position in a sequence by attending to all positions within this sequence. Vaswani et al. (2017) demonstrated that machine translation models can achieve state-of-the-art results solely by using self-attention. Parmar et al. (2018) proposed an Image Transformer model that applies self-attention to image generation. Wang et al. (2018) formalized self-attention as a non-local operation in order to model the spatial-temporal dependencies in video sequences. In spite of this, self-attention mechanisms have not yet been fully explored in multi-agent reinforcement learning. Another line of research is multi-agent reinforcement learning (MARL) itself. Existing work in MARL focuses primarily on building a centralized function to guide the training of the individual value functions (Lowe et al. (2017), Sunehag et al. (2017), Rashid et al. (2018), Mahajan et al. (2019), Hostallero et al. (2019), Yang et al. (2020), Zhou et al. (2020)). Few works have attempted to build better individual functions with strong representation and transfer capability. In standard reinforcement learning this kind of generalization has been studied thoroughly (Taylor & Stone (2009), Ammar et al. (2012), Parisotto et al. (2015), Gupta et al. (2017), Da Silva & Costa (2019)), while multi-agent transfer learning has proven to be more difficult than the single-agent scenario (Boutsioukis et al. (2011), Shao et al. (2018), Vinyals et al. (2019)).
However, the transfer capability of a multi-agent system is of greater significance due to the varying numbers of agents, observation sizes and policy distributions. To the best of our knowledge, we are the first to develop a multi-agent framework capable of handling multiple tasks at a time. Moreover, we provide a policy decoupling strategy to further improve model performance and facilitate multi-agent transfer learning, which is a significant step towards real-world multi-agent applications.

3 METHOD

We begin by introducing the notations and basic task settings necessary for our approach. We then describe a transformer-based individual value function and the policy decoupling strategy under MARL. Finally, we introduce different temporal units and assimilate our Universal Policy Decoupling Transformer (UPDeT) into Dec-POMDP.

3.1 NOTATIONS AND TASK SETTINGS

Multi-agent reinforcement learning. A cooperative multi-agent task is a decentralized partially observable Markov decision process (Oliehoek et al. (2016)) given by a tuple $G = \langle S, A, U, P, r, Z, O, n, \gamma \rangle$. Let $S$ denote the global state of the environment, while $A$ represents the set of $n$ agents and $U$ is the action space. At each time step $t$, agent $a \in A \equiv \{1, \ldots, n\}$ selects an action $u \in U$, forming a joint action $\mathbf{u} \in \mathbf{U} \equiv U^n$, which in turn causes a transition in the environment represented by the state transition function $P(s' \mid s, \mathbf{u}): S \times \mathbf{U} \times S \to [0, 1]$. All agents share the same reward function $r(s, \mathbf{u}): S \times \mathbf{U} \to \mathbb{R}$, while $\gamma \in [0, 1)$ is a discount factor. We consider a partially observable scenario in which each agent makes individual observations $z \in Z$ according to the observation function $O(s, a): S \times A \to Z$. Each agent has an action-observation history that conditions a stochastic policy $\pi_t$, creating the following joint action value: $Q^\pi(s_t, \mathbf{u}_t) = \mathbb{E}_{s_{t+1:\infty}, \mathbf{u}_{t+1:\infty}}[R_t \mid s_t, \mathbf{u}_t]$, where $R_t = \sum_{i=0}^{\infty} \gamma^i r_{t+i}$ is the discounted return.

Centralized training with decentralized execution. Centralized training with decentralized execution (CTDE) is a commonly used architecture in the MARL context. Each agent is conditioned only on its own action-observation history to make a decision using the learned policy. The centralized value function provides a centralized gradient to update the individual value functions based on its output. Therefore, a stronger individual value function can benefit the centralized training.

3.2 TRANSFORMER-BASED INDIVIDUAL VALUE FUNCTION

In this section, we present a mathematical formulation of our transformer-based model UPDeT. We describe the calculation of the global Q-function with a self-attention mechanism. First, the observation is embedded into a semantic embedding to handle the varying observation space. For example, if agent $a_i$ observes $k$ other entities $\{o_{i,1}, \ldots, o_{i,k}\}$ at time step $t$, all observation-entities are embedded via an embedding layer $E$ as follows:

$e_i^t = \{E(o_{i,1}^t), \ldots, E(o_{i,k}^t)\}$. (1)

Here, $i$ is the index of the agent, $i \in \{1, \ldots, n\}$. Next, the value functions $\{Q_1, \ldots, Q_n\}$ for the $n$ agents at each step are estimated as follows:

$q_i^t = Q_i(h_i^{t-1}, e_i^t, u_i^t)$. (2)

We introduce $h_i^{t-1}$, the temporal hidden state at the previous time step $t-1$, since a POMDP policy is highly dependent on historical information. $e_i^t$ denotes the observation embedding, while $u_i^t$ is the candidate action, $u_i^t \in U$. $\theta_i$ is the parameter that defines $Q_i$.
Finally, the global Q-function $Q^\pi$ is computed from all the individual value functions as follows:

$Q^\pi(s_t, \mathbf{u}_t) = F(q_1^t, \ldots, q_n^t)$. (3)

$F$ is the credit assignment function, defined by parameters $\phi$, as utilized in Rashid et al. (2018) and Sunehag et al. (2017). For example, in VDN, $F$ is a sum function that can be expressed as $F(q_1^t, \ldots, q_n^t) = \sum_{i=1}^{n} q_i^t$.

Implementing the Q-function with self-attention. Vaswani et al. (2017) adopt three matrices $Q$, $K$, $V$ representing a set of queries, keys and values respectively. The attention is computed as follows:

$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{QK^\top}{\sqrt{d_k}}\right)V$, (4)

where $d_k$ is a scaling factor equal to the dimension of the key. In our method, we adopt self-attention to learn the features and relationships from the observation-entity embeddings and the global temporal information. To learn an independent policy in decentralized multi-agent learning, we define $K_i$, $Q_i$ and $V_i$ as the key, query and value matrices for each agent $a_i$. We further take the query, key and value from the same matrix, $R_i^l = K_i = Q_i = V_i$, where $l \in \{1, \ldots, L\}$ indexes the layers of the transformer. Thus, we formulate our transformer as follows:

$R_i^1 = \{h_i^{t-1}, e_i^t\}$,
$Q_i^l, K_i^l, V_i^l = \mathrm{LF}_{Q,K,V}(R_i^l)$,
$R_i^{l+1} = \mathrm{Attention}(Q_i^l, K_i^l, V_i^l)$, (5)

where $\mathrm{LF}$ represents the linear functions used to compute $K$, $Q$, $V$. Finally, we project the entity features of the last transformer layer $R_i^L$ to the output space of the value function $Q_i$. We implement the projection using a linear function $P$:

$Q_i(h_i^{t-1}, e_i^t, u_i) = P(R_i^L, u_i)$. (6)
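To make Eqs. (1)-(6) concrete, the following is a minimal PyTorch sketch of a single agent's transformer-based value head. It is a hedged illustration rather than the authors' implementation: the dimensions, the use of nn.TransformerEncoder, and projecting only the hidden-state token onto the full action space (instead of decoupling entity features into matched action-groups) are simplifying assumptions made here for brevity.

import torch
import torch.nn as nn

class TransformerValueHead(nn.Module):
    def __init__(self, obs_entity_dim, n_actions, d_model=32, n_layers=2, n_heads=4):
        super().__init__()
        self.embed = nn.Linear(obs_entity_dim, d_model)       # E in Eq. (1)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=64, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)  # Eq. (5)
        self.proj = nn.Linear(d_model, n_actions)             # P in Eq. (6)

    def forward(self, obs_entities, h_prev):
        # obs_entities: (batch, k, obs_entity_dim); h_prev: (batch, 1, d_model)
        e = self.embed(obs_entities)               # entity embeddings e_i^t
        tokens = torch.cat([h_prev, e], dim=1)     # R_i^1 = {h_i^{t-1}, e_i^t}
        feats = self.blocks(tokens)                # stacked self-attention layers
        h_next = feats[:, :1]                      # updated temporal hidden state
        q_values = self.proj(feats[:, 0])          # q_i^t for each candidate action
        return q_values, h_next

A policy-decoupled variant would instead project each entity's output feature onto its matched action-group; the head above collapses that structure for simplicity.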
In this paper the authors propose a transferable framework for multi-agent RL that enables the learned policies to generalize easily to more challenging scenarios. This seems to be a good contribution to the multi-agent RL community. It has the potential to handle large-scale tasks with only limited training data, while also producing more explainable policies.
Gradient Descent Ascent for Min-Max Problems on Riemannian Manifolds
1 INTRODUCTION

In this paper, we study a class of useful non-convex minimax (a.k.a. min-max) problems on a Riemannian manifold $\mathcal{M}$, defined as

$\min_{x \in \mathcal{M}} \max_{y \in \mathcal{Y}} f(x, y)$, (1)

where the function $f(x, y)$ is $\mu$-strongly concave in $y$ but possibly nonconvex in $x$. Here $\mathcal{Y} \subseteq \mathbb{R}^d$ is a convex and closed set, $f(\cdot, y): \mathcal{M} \to \mathbb{R}$ for all $y \in \mathcal{Y}$ is a smooth but possibly nonconvex real-valued function on the manifold $\mathcal{M}$, and $f(x, \cdot): \mathcal{Y} \to \mathbb{R}$ for all $x \in \mathcal{M}$ is a smooth and (strongly) concave real-valued function. In this paper, we mainly focus on the stochastic minimax optimization problem $f(x, y) := \mathbb{E}_{\xi \sim D}[f(x, y; \xi)]$, where $\xi$ is a random variable that follows an unknown distribution $D$. In fact, problem (1) is associated with many existing machine learning applications:

1) Robust training of DNNs over a Riemannian manifold. Deep Neural Networks (DNNs) have recently demonstrated exceptional performance in many machine learning applications. However, they are vulnerable to adversarial example attacks, which show that a small perturbation of the data input can significantly change the output of a DNN. Thus, the security properties of DNNs have been widely studied. One topic in secured DNN research is enhancing the robustness of DNNs under adversarial example attacks. To be more specific, given training data $D := \{\xi_i = (a_i, b_i)\}_{i=1}^{n}$, where $a_i \in \mathbb{R}^d$ and $b_i \in \mathbb{R}$ represent the features and label of sample $\xi_i$ respectively, each data sample $a_i$ can be corrupted by a universal small perturbation vector $y$ to generate an adversarial attack sample $a_i + y$, as in (Moosavi-Dezfooli et al., 2017; Chaubey et al., 2020). To make DNNs robust against adversarial attacks, one popular approach is to solve the following robust training problem:

$\min_x \max_{y \in \mathcal{Y}} \frac{1}{n} \sum_{i=1}^{n} \ell(h(a_i + y; x), b_i)$, (2)

where $y \in \mathbb{R}^d$ denotes a universal perturbation, $x$ is the weight of the neural network, $h(\cdot; x)$ is the deep neural network parameterized by $x$, and $\ell(\cdot)$ is the loss function. Here the constraint $\mathcal{Y} = \{y : \|y\| \leq \varepsilon\}$ indicates that the poisoned samples should not be too different from the original ones. Recently, orthonormality on the weights of DNNs has gained much interest and has been found to be useful across different tasks such as person re-identification (Sun et al., 2017) and image classification (Xie et al., 2017). In fact, orthonormality constraints improve the performance of DNNs (Li et al., 2020; Bansal et al., 2018) and reduce overfitting to improve generalization (Cogswell et al., 2015). At the same time, orthonormality can stabilize the distribution of activations over layers within DNNs (Huang et al., 2018). Thus, we consider the following robust training problem over the Stiefel manifold $\mathcal{M}$:

$\min_{x \in \mathcal{M}} \max_{y \in \mathcal{Y}} \frac{1}{n} \sum_{i=1}^{n} \ell(h(a_i + y; x), b_i)$. (3)

When data arrive continuously, we can rewrite problem (3) as follows:

$\min_{x \in \mathcal{M}} \max_{y \in \mathcal{Y}} \mathbb{E}_\xi[f(x, y; \xi)]$, (4)

where $f(x, y; \xi) = \ell(h(a + y; x), b)$ with $\xi = (a, b)$.

2) Distributionally robust optimization over a Riemannian manifold. Distributionally robust optimization (DRO) (Chen et al., 2017; Rahimian & Mehrotra, 2019) is an effective method for dealing with noisy, adversarial, and imbalanced data. At the same time, DRO in the Riemannian manifold setting is widely applied to machine learning problems such as robust principal component analysis (PCA).
To be more specific, given a set of data samples $\{\xi_i\}_{i=1}^{n}$, DRO over a Riemannian manifold $\mathcal{M}$ can be written as the following minimax problem:

$\min_{x \in \mathcal{M}} \max_{p \in S} \left\{ \sum_{i=1}^{n} p_i \ell(x; \xi_i) - \left\| p - \frac{1}{n} \right\|^2 \right\}$, (5)

where $p = (p_1, \cdots, p_n)$ and $S = \{p \in \mathbb{R}^n : \sum_{i=1}^{n} p_i = 1, \ p_i \geq 0\}$. Here $\ell(x; \xi_i)$ denotes a loss function over the Riemannian manifold $\mathcal{M}$, which applies to many machine learning problems such as PCA (Han & Gao, 2020a), dictionary learning (Sun et al., 2016), DNNs (Huang et al., 2018), and structured low-rank matrix learning (Jawanpuria & Mishra, 2018), among others. For example, the task of PCA can be cast on a Grassmann manifold. To the best of our knowledge, existing explicit minimax optimization methods such as gradient descent ascent only address minimax problems in Euclidean space. To fill this gap, in this paper we propose a class of efficient Riemannian gradient descent ascent algorithms for solving problem (1) using general retractions and vector transports. When problem (1) is deterministic, we propose a new deterministic Riemannian gradient descent ascent algorithm. When problem (1) is stochastic, we propose two efficient stochastic Riemannian gradient descent ascent algorithms. Our main contributions can be summarized as follows:

1) We propose a novel Riemannian gradient descent ascent (RGDA) algorithm for the deterministic minimax optimization problem (1). We prove that RGDA has a sample complexity of $O(\kappa^2 \epsilon^{-2})$ for finding an $\epsilon$-stationary point.

2) We also propose a new Riemannian stochastic gradient descent ascent (RSGDA) algorithm for stochastic minimax optimization. In the theoretical analysis, we prove that RSGDA has a sample complexity of $O(\kappa^4 \epsilon^{-4})$.

3) To further reduce the sample complexity, we introduce a novel momentum variance-reduced Riemannian stochastic gradient descent ascent (MVR-RSGDA) algorithm based on the momentum variance-reduction technique of STORM (Cutkosky & Orabona, 2019). We prove that MVR-RSGDA achieves a lower sample complexity of $\tilde{O}(\kappa^4 \epsilon^{-3})$ (please see Table 1), which nearly matches the best known sample complexity of its Euclidean counterparts.

4) Extensive experimental results on robust DNN training over the Stiefel manifold demonstrate the efficiency of our proposed algorithms.

2 RELATED WORKS

In this section, we briefly review research on minimax optimization and Riemannian manifold optimization.

2.1 MINIMAX OPTIMIZATION

Minimax optimization has recently been widely applied to many machine learning problems such as adversarial training (Goodfellow et al., 2014; Liu et al., 2019), reinforcement learning (Zhang et al., 2019; 2020), and distribution learning (Razaviyayn et al., 2020). At the same time, many efficient min-max methods (Rafique et al., 2018; Lin et al., 2019; Nouiehed et al., 2019; Thekumparampil et al., 2019; Lin et al., 2020; Yang et al., 2020; Ostrovskii et al., 2020; Yan et al., 2020; Xu et al., 2020a; Luo et al., 2020; Xu et al., 2020b; Boţ & Böhm, 2020; Huang et al., 2020) have been proposed for solving these minimax optimization problems. For example, Thekumparampil et al. (2019) proposed a class of efficient dual implicit accelerated gradient algorithms for smooth min-max optimization. Lin et al. (2019) proposed a class of efficient gradient descent ascent methods for non-convex minimax optimization.
Subsequently, accelerated first-order algorithms (Lin et al., 2020) were proposed for minimax optimization. Xu et al. (2020b) proposed a unified single-loop alternating gradient projection algorithm for (non)convex-(non)concave minimax problems. Ostrovskii et al. (2020) proposed an efficient algorithm for finding first-order Nash equilibria in nonconvex-concave minimax problems. Xu et al. (2020a) and Luo et al. (2020) proposed a class of fast stochastic variance-reduced GDA algorithms for solving stochastic minimax problems. More recently, Huang et al. (2020) presented a class of new momentum-based first-order and zeroth-order descent ascent methods for nonconvex strongly-concave minimax problems.

2.2 RIEMANNIAN MANIFOLD OPTIMIZATION

Riemannian manifold optimization methods have been widely applied to machine learning problems including dictionary learning (Sun et al., 2016), matrix factorization (Vandereycken, 2013), and DNNs (Huang et al., 2018). Many Riemannian optimization methods have been proposed recently. For example, Zhang & Sra (2016) and Liu et al. (2017) proposed efficient first-order gradient methods for geodesically convex functions. Subsequently, Zhang et al. (2016) presented fast stochastic variance-reduced methods for Riemannian manifold optimization. More recently, Sato et al. (2019) proposed fast first-order gradient algorithms for Riemannian manifold optimization using general retractions and vector transports. Building on these retractions and vector transports, several fast Riemannian gradient-based methods (Zhang et al., 2018; Kasai et al., 2018; Zhou et al., 2019; Han & Gao, 2020a) have been proposed for non-convex optimization, and Riemannian Adam-type algorithms (Kasai et al., 2019) were introduced for matrix manifold optimization. In addition, some algorithms (Ferreira et al., 2005; Li et al., 2009; Wang et al., 2010) have been studied for variational inequalities on Riemannian manifolds, which are implicit min-max problems on Riemannian manifolds.

Notations: $\|\cdot\|$ denotes the $\ell_2$ norm for vectors and the spectral norm for matrices. $\langle x, y \rangle$ denotes the inner product of two vectors $x$ and $y$. For a function $f(x, y)$, $f(x, \cdot)$ denotes the function w.r.t. the second variable with $x$ fixed, and $f(\cdot, y)$ denotes the function w.r.t. the first variable with $y$ fixed. Given a convex closed set $\mathcal{Y}$, we define the projection operator onto $\mathcal{Y}$ as $P_{\mathcal{Y}}(y_0) = \arg\min_{y \in \mathcal{Y}} \frac{1}{2}\|y - y_0\|^2$. We denote $a = O(b)$ if $a \leq Cb$ for some constant $C > 0$, and the notation $\tilde{O}(\cdot)$ hides logarithmic terms. $I_d$ denotes the identity matrix of dimension $d$. The operation $\oplus$ denotes the Whitney sum. Given $\mathcal{B}_t = \{\xi_t^i\}_{i=1}^{B}$ for any $t \geq 1$, let $\nabla f_{\mathcal{B}_t}(x, y) = \frac{1}{B} \sum_{i=1}^{B} \nabla f(x, y; \xi_t^i)$.
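For intuition, here is a hedged sketch of one iteration of a deterministic Riemannian gradient descent ascent update. The unit sphere (with normalization as its retraction) stands in for the general manifold purely for illustration; the paper's algorithms work with general retractions and vector transports, and the step sizes eta_x, eta_y below are unspecified placeholders.

import numpy as np

def riemannian_grad_sphere(x, euclid_grad):
    # Project the Euclidean gradient onto the tangent space at x (unit sphere).
    return euclid_grad - np.dot(euclid_grad, x) * x

def retract_sphere(x, v):
    # A simple retraction on the sphere: step along v, then renormalize.
    return (x + v) / np.linalg.norm(x + v)

def project_ball(y, radius):
    # Euclidean projection P_Y onto Y = {y : ||y|| <= radius}.
    n = np.linalg.norm(y)
    return y if n <= radius else y * (radius / n)

def rgda_step(x, y, grad_x, grad_y, eta_x, eta_y, radius):
    gx = riemannian_grad_sphere(x, grad_x(x, y))
    x_new = retract_sphere(x, -eta_x * gx)                   # descent on the manifold
    y_new = project_ball(y + eta_y * grad_y(x, y), radius)   # projected ascent step
    return x_new, y_new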
In this paper, the authors present and analyze a class of gradient descent ascent algorithms for solving min-max problems when the first (minimization) variable is constrained to live on a Riemannian manifold. In the case where i) a retraction and an isometric vector transport are available on the manifold, and ii) the objective is strongly concave and smooth in the second variable, the authors show convergence rates. Experiments are performed in the setting of minimizing losses of neural nets whose weights are constrained to live on the Stiefel manifold while an attacker of small norm perturbs the input.
Fooling a Complete Neural Network Verifier
1 INTRODUCTION

In their seminal work, Szegedy et al. found that for a given neural network and input example one can always find a very small adversarial input perturbation that results in an incorrect output (Szegedy et al., 2014). This striking discovery motivated a substantial amount of research. In this area, an important research direction is verification, that is, the characterization of the robustness of a given network in a principled manner. A usual way of defining the verification problem involves the specification of an input domain and a property that should hold over the entire domain. For example, we might require that all the points within a certain distance from an input example share the same output label as the example itself. The verification problem is then to prove or disprove the property over the domain for a given network (Bunel et al., 2020). There are a large number of verifiers offering different types of guarantees about their output. Complete verifiers offer the strongest guarantee: they are able to decide whether a given property holds in any given input domain. For example, the verifier of Tjeng et al. is a state-of-the-art complete verifier that we will focus on in this paper (Tjeng et al., 2019). However, it is currently standard practice to ignore the details of the computations that the network under investigation performs, such as the floating point representation or the order in which input signals are summed. In this paper, we claim that such implicit assumptions make verifiers vulnerable to a new kind of attack where the attacker designs a network that fools the verifier, exploiting the differences between how the verifier models the computation and how the computation is actually performed in the network. We will argue that such attacks can achieve an arbitrary divergence between the modeled and the actual behavior. This new attack has practical implications as well. Concerns about the safety of AI systems are expected to lead to the establishment of standard requirements certified by a designated authority (Salis-Madinier, 2019). These certification procedures might involve verification methods as well. Fooling such methods makes it possible to get unsafe systems certified, systems that might even contain a backdoor allowing arbitrary behavior to be triggered. Numerical precision has not been a key practical concern in machine learning. Networks do sometimes produce numerical errors (e.g., Inf or NaN values), most often due to the non-linear operations within the loss function (Odena et al., 2019) or divergence during training. However, the network weights are normally robust to small perturbations due to stochastic learning algorithms (Bottou, 2010), and due to regularizers such as standard variants of weight decay and dropout (Srivastava et al., 2014). Owing to this robustness, low-precision arithmetic can be applied as well (Courbariaux et al., 2015; Gupta et al., 2015). Our results indicate that, when it comes to exact methods for verification, numerical issues become a central problem that can cause arbitrary errors and enable backdoors. Our contributions are the following. In Section 3, we introduce a simple adversarial network that misleads the verifier of Tjeng et al. (2019). In Section 4, we show how to hide the large weights that are present in the simple network.
In Section 5, we describe a way to add a backdoor to an existing network with the help of the adversarial networks we proposed. Finally, in Section 6 we offer a defense against the attack we presented.

2 BACKGROUND

Let us first formulate the verification problem, namely the problem of checking whether a given property holds in a given domain. We adopt the notation used in (Tjeng et al., 2019). For a possible input $x$, let $G(x)$ denote the set of inputs that are considered similar to $x$ in the sense that we expect all the points in $G(x)$ to get the same label as $x$. The set $G(x)$ is normally defined as a ball around $x$ in some metric space defined by a suitable vector norm. The input domain we need to consider is given as $G(x) \cap X_{valid}$, where $X_{valid}$ denotes the valid input points. For example, we have $X_{valid} = [0, 1]^m$ if the input is an image of $m$ pixels with each pixel taking values from the interval $[0, 1]$. We now have to formulate the property that we wish to have in this domain. Informally, we want all the points in the domain $G(x) \cap X_{valid}$ to get the same classification label as $x$. Let $\lambda(x)$ denote the true label of $x$ and let $f(x; \theta): \mathbb{R}^m \to \mathbb{R}^n$ denote the neural network, parameterized by $\theta$. This network has $n$ outputs classifying each input $x$ into $n$ classes. The label of $x$ as predicted by the network is given by $\arg\max_i f(x; \theta)_i$. Using this notation, the property we wish to have for an input $x' \in (G(x) \cap X_{valid})$ is that $\lambda(x) = \arg\max_i f(x'; \theta)_i$. Putting it all together, the verification problem can be expressed as deciding the feasibility of the constraint

$x' \in (G(x) \cap X_{valid}) \wedge (\lambda(x) \neq \arg\max_i f(x'; \theta)_i)$, (1)

with $x'$ as our variable. If this constraint is feasible then there is an $x'$ that violates the property. If it is infeasible then (provided $G(x) \cap X_{valid}$ is not empty) there is no such $x'$.

2.1 APPROACHES TO VERIFICATION

There are many approaches to tackle this problem. We can, for example, search for a suitable $x'$ in the given domain using heuristic optimization methods (Goodfellow et al., 2015; Moosavi-Dezfooli et al., 2016; Kurakin et al., 2017; Carlini & Wagner, 2017; Brendel et al., 2019). If the search succeeds, we can decide that equation 1 is feasible; otherwise we cannot decide. Other methods attempt to find a proof for the infeasibility of equation 1; however, they do not guarantee such a proof. Examples include (Wong & Kolter, 2018; Weng et al., 2018; Gehr et al., 2018; Raghunathan et al., 2018; Singh et al., 2019). If a proof is found, we can decide that equation 1 is infeasible; otherwise we cannot decide. Such methods are sometimes called incomplete (Tjeng et al., 2019; Bunel et al., 2020). The strongest guarantee is given by methods that are able to decide the feasibility of equation 1. These methods are sometimes called complete (Tjeng et al., 2019; Bunel et al., 2020). Examples of such methods include Reluplex (Katz et al., 2017), a method based on an SMT solver. A number of verifiers are based on MILP solvers, for example (Cheng et al., 2017; Dutta et al., 2018). MIPVerify (Tjeng et al., 2019) also uses an MILP formulation along with several additional techniques to improve efficiency (see Section 2.2). Symbolic interval propagation has also been proposed for ReLU networks by Wang et al. in ReluVal (Wang et al., 2018b), and as part of Neurify (Wang et al., 2018a).
In Neurify, interval propagation is used as a technique to tighten the bounds used for linear relaxation. Nnenum is another geometric method, based on propagating linear star sets (Bak et al., 2020).

2.2 MIPVERIFY

Although the idea behind the attack is not specific to a particular verifier (as we discuss in Section C of the Appendix), we develop and evaluate the attack in detail for a state-of-the-art complete verifier: MIPVerify (Tjeng et al., 2019). It is based on a mixed integer linear programming (MILP) formulation. As long as the domain $G(x) \cap X_{valid}$ is the union of a set of polyhedra, and the neural network $f(x; \theta)$ is a piecewise linear function of $x$ with parameters $\theta$, the problem of checking the feasibility of the constraint in equation 1 can be formulated as a MILP instance. $G(x)$ is normally defined as a ball in a suitable norm with $x$ as the center; in the $\ell_\infty$ or $\ell_1$ norm, $G(x)$ is thus a polytope. Also, $X_{valid}$ is normally a box or a set of boxes, so the domain is indeed the union of a set of polyhedra. The neural network is piecewise linear as long as the nonlinearities used are ReLUs (note that the last softmax normalization layer adds no extra information and can thus be ignored). For the details of the MILP formalization, please see (Tjeng et al., 2019). Importantly, MIPVerify applies a presolve step that greatly increases its efficiency. In this step, the authors attempt to tighten the bounds on the variables of the problem, including the inputs to each ReLU computation. If it turns out in this step that the input of a ReLU gate is always non-positive, the output can be fixed as a constant zero, and if the input is always non-negative then the ReLU gate can be removed from the model as it will have no effect. The presolve step applies three approaches in a progressive manner. First, a fast but inaccurate interval arithmetic approach is used. The resulting bounds are further improved by solving a relaxed LP problem for every variable. Finally, the full MILP problem is solved for the variables, but with early stopping.

2.3 FLOATING POINT REPRESENTATION

Floating point real number representations are successful and efficient tools for most real-life applications (Muller et al., 2010). This arithmetic is available on most modern computers via sophisticated hardware implementations. A floating point number is represented as $s \cdot b^e$, where $s$ is the signed significand, $b$ is the base and $e$ is the exponent. There are numerous standards implementing the exact details of this idea, differing mainly in the number of bits that the significand and the exponent use; the formula for computing the represented real number has several possible variations as well. Here, we will use the double precision (binary64) arithmetic defined by the IEEE 754-1985 standard (IEEE, 1985). There, $b = 2$ and we have a sign bit, an 11-bit exponent, and a 53-bit significand (with 52 bits stored explicitly). The so-called machine epsilon (the maximum relative rounding error) is $2^{-53} \approx 1.11 \times 10^{-16}$. This means that, for example, the computation $10^{20} + 2020 - 10^{20}$ will result in zero in this representation, if executed in the specified order. In our attack, we will exploit roundoff errors of this type. Note that in the order $10^{20} - 10^{20} + 2020$ we obtain the correct result of 2020.
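The roundoff effect described above is easy to reproduce; the snippet below uses Python floats, which are IEEE 754 binary64 values on essentially all platforms (math.ulp requires Python 3.9+).

import math

a = 1e20
print(a + 2020 - a)   # 0.0:   2020 is absorbed when added to 1e20 first
print(a - a + 2020)   # 2020.0: the cancellation happens first, so 2020 survives
print(math.ulp(a))    # 16384.0: gap between adjacent doubles near 1e20,
                      # so adding anything below half that gap is rounded away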
The authors show that certain complete neural network verifiers can be misled by carefully crafted neural networks that exploit round-off errors, which occur when large-magnitude values overwhelm low-magnitude values. Such a construction can be obfuscated by taking advantage of the compounding effect across many layers of the network. This can also be used to add backdoors to existing networks, albeit in a way that looks quite artificial.
FAST GRAPH ATTENTION NETWORKS USING EFFECTIVE RESISTANCE BASED GRAPH SPARSIFICATION
1 INTRODUCTION

Graphs are efficient representations of pairwise relations, with many real-world applications including product co-purchasing networks (McAuley et al., 2015), co-author networks (Hamilton et al., 2017b), etc. Graph neural networks (GNNs) have become popular as a tool for inference from graph-based data. By leveraging the geometric structure of the graph, GNNs learn improved representations of the graph nodes and edges that can lead to better performance in various inference tasks (Kipf & Welling, 2016; Hamilton et al., 2017a; Veličković et al., 2018). More recently, the attention mechanism has demonstrated superior performance for inference over nodes in GNNs (Veličković et al., 2018; Xinyi & Chen, 2019; Thekumparampil et al., 2018; Lee et al., 2020; Bianchi et al., 2019; Knyazev et al., 2019). However, attention-based GNNs suffer from huge computational cost. This may hinder the applicability of the attention mechanism to large graphs. GNNs generally rely on graph convolution operations. For a graph $G$ with $N$ nodes, graph convolution with a kernel $g_w: \mathbb{R} \to \mathbb{R}$ is defined as

$g_w \star h = U g_w(\Lambda) U^\top h$, (1)

where $U$ is the matrix of eigenvectors and $\Lambda$ is the diagonal matrix of eigenvalues of the normalized graph Laplacian matrix, defined as

$L_{norm} = I - D^{-1/2} A D^{-1/2}$, (2)

with $D$ and $A$ being the degree matrix and the adjacency matrix of the graph, and $g_w$ applied elementwise. Since computing $U$ and $\Lambda$ can be very expensive ($O(N^3)$), most GNNs use an approximation of the graph convolution operator. For example, in graph convolution networks (GCNs) (Kipf & Welling, 2016), node features are updated by computing averages as a first-order approximation of equation 1 over the neighbors of the nodes. A single neural network layer is defined as:

$H^{(l+1)}_{GCN} = \sigma\big(\tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2} H^{(l)} W^{(l)}\big)$, (3)

where $\sigma$ is an elementwise nonlinearity, $H^{(l)}$ and $W^{(l)}$ are the activations and the weight matrix at the $l$-th layer respectively, $\tilde{A} = A + I$, and $\tilde{D}$ is the degree matrix of $\tilde{A}$. Attention-based GNNs add another layer of complexity: they compute pairwise attention coefficients between all connected nodes. This process can significantly increase the computational burden, especially on large graphs. Approaches to speed up GNNs were proposed in (Chen et al., 2018; Hamilton et al., 2017a). However, these sampling- and aggregation-based methods were designed for simple GCNs and are not applicable to attention-based GNNs. There has also been work on inducing sparsity in attention-based GNNs (Ye & Ji, 2019; Zheng et al., 2020), but it focuses on addressing potential overfitting of attention-based models rather than scalability. In this paper, we propose Fast Graph Attention neTwork (FastGAT), an edge-sampling based method that leverages effective resistances of edges to make attention-based GNNs lightweight. The effective resistance measures the importance of an edge in terms of preserving graph connectivity. FastGAT uses this measure to prune the input graph and generate a randomized subgraph with far fewer edges. Such a procedure preserves the spectral features of the graph, hence retaining the information that attention-based GNNs need. At the same time, the graph becomes amenable to more complex but computationally intensive models such as attention GNNs. With the sampled subgraph as their input, attention-based GNNs enjoy much smaller computational complexity.
Note that FastGAT is applicable to all attention-based GNNs. In this paper, we mostly focus on the Graph Attention neTwork (GAT) model proposed by Veličković et al. (2018). However, we also show that FastGAT generalizes to two other attention-based GNNs, namely the cosine-similarity based approach (Thekumparampil et al., 2018) and Gated Attention Networks (Zhang et al., 2018). We note that Graph Attention Networks can be re-interpreted as convolution-based GNNs; we show this explicitly in the Appendix. Based on this re-interpretation, we theoretically prove that spectral sparsification preserves the feature representations computed by the GAT model. We believe this interpretation also opens up interesting connections between sparsifying state transition matrices of random walks and speeding up computations in GNNs. The contributions of our paper are as outlined below:

• We propose FastGAT, a method that uses effective resistance based spectral graph sparsification to accelerate attention GNNs in both inductive and transductive learning tasks. The rapid subsampling and the spectrum-preserving property of FastGAT help attention GNNs retain their accuracy advantages while becoming computationally light.

• We provide a theoretical justification for using spectral sparsification in the context of attention-based GNNs by proving that spectral sparsification preserves the features computed by GNNs.

• FastGAT outperforms state-of-the-art algorithms across a variety of datasets under both transductive and inductive settings in terms of computation, achieving a speedup of up to 10x in training and inference time. On larger datasets such as Reddit, the standard GAT model runs out of memory, whereas FastGAT achieves an F1 score of 0.93 at 7.73 seconds per epoch in training.

• Further, FastGAT generalizes to other attention-based GNNs such as the cosine-similarity based attention (Thekumparampil et al., 2018) and the Gated Attention Network (Zhang et al., 2018).

2 RELATED WORK

Accelerating graph-based inference has drawn increasing interest. Two methods proposed in (Chen et al., 2018) (FastGCN) and (Huang et al., 2018) speed up GCNs by using importance sampling to sample a subset of nodes per layer during training. Similarly, GraphSAGE (Hamilton et al., 2017a) proposes an edge sampling and aggregation based method for inductive learning tasks. All of the above works use simple aggregation and target simple GCNs, while our work focuses on more recent attention-based GNNs such as (Veličković et al., 2018). We are able to take advantage of the attention mechanism while still being computationally efficient. Graph sparsification aims to approximate a given graph by a graph with fewer edges for efficient computation. Depending on the final goal, there are cut sparsifiers (Benczúr & Karger, 1996), pairwise distance preserving sparsifiers (Althöfer et al., 1993) and spectral sparsifiers (Spielman & Teng, 2004; Spielman & Srivastava, 2011), among others (Zhao, 2015; Calandriello et al., 2018; Hübler et al., 2008; Eden et al., 2018; Sadhanala et al., 2016). In this work, we use spectral sparsification to choose a randomized subgraph while preserving spectral properties. Apart from providing the strongest guarantees in preserving graph structure (Chu et al., 2018), spectral sparsifiers align well with GNNs due to their connection to spectral graph convolutions.
Graph sparsification for neural networks has been studied recently (Ye & Ji, 2019; Zheng et al., 2020; Ioannidis et al., 2020; Louizos et al., 2017). However, the main goal of these works is to alleviate overfitting in GNNs, not to reduce training time. They still require learning attention coefficients and binary gate values for all edges in the graph, and hence do not provide any computational or memory benefit. In contrast, FastGAT uses a fast subsampling procedure, resulting in a drastic improvement in training and inference time. It is also highly stable in terms of training and inference.

3 FASTGAT: ACCELERATING GRAPH ATTENTION NETWORKS VIA EDGE SAMPLING

3.1 THE FASTGAT ALGORITHM

Let $G(V, E)$ be a graph with $N$ nodes and $M$ edges. An attention-based GNN computes attention coefficients $\alpha_{i,j}$ for every pair of connected nodes $i, j \in V$ in every layer $\ell$. The $\alpha_{i,j}$'s are then used as averaging weights to compute the layer-wise feature updates. In the original GAT formulation, the attention coefficients are

$\alpha_{ij} = \dfrac{\exp\left(\mathrm{LeakyReLU}\left(a^\top [W h_i \,\|\, W h_j]\right)\right)}{\sum_{j' \in N_i} \exp\left(\mathrm{LeakyReLU}\left(a^\top [W h_i \,\|\, W h_{j'}]\right)\right)}$, (4)

where the $h_i$'s are the input node features to the layer, $W$ and $a$ are linear mappings that are learnt, $N_i$ denotes the set of neighbors of node $i$, and $\|$ denotes concatenation. With the $\alpha_{ij}$'s as defined above, the node-$i$ output embedding of a GAT layer is

$h'_i = \sigma\Big(\sum_{j \in N_i} \alpha_{ij} W h_j\Big)$. (5)

For multi-head attention, the coefficients are computed independently in each attention head with head-dependent matrices $W$ and attention vector $a$. Note that the computational burden in GATs arises directly from computing the $\alpha_{i,j}$'s in every layer, every attention head and every forward pass during training.

Goal: Our objective is to achieve performance equivalent to that of full graph attention networks (GAT), but at only a fraction of the original computational complexity. This computational saving is achieved by reducing the number of attention computations.

Idea: We propose to use edge-sampling functions that sparsify graphs by removing nonessential edges. This leads to a direct reduction in the number of attention coefficients to be computed, hence reducing the burden. Choosing the sampling function is crucial for retaining graph connectivity. Let $\mathrm{EdgeSample}(E, A, q)$ denote a randomized sampling function that, given an edge set $E$, adjacency matrix $A$ and a number of edges to be sampled $q$, returns a subset of the original edge set $E_s \subset E$ with $|E_s| = q$. Our algorithm then uses this function to sparsify the graph in every layer and attention head. Following this, the attention coefficients are computed only for the remaining edges. A more detailed description is given in Algorithm 1. In every layer and attention head, a randomized subgraph with $q \ll M$ edges is generated and the attention coefficients are computed only for this subset of edges. We use a specialized distribution that depends on the contribution of each edge to the graph connectivity. We provide further details in Section 3.2. Note that in the general description below, the attention coefficients themselves are used as weights for sparsification and the reweighted attention coefficients are used to compute the feature update. Doing so helps in the theoretical analysis of the algorithm.
However, in practice we replace this expensive procedure with a one-time sampling of the graph using the original edge weights, and compute the attention coefficients only for the remaining edges. In particular, we use two simpler variations of FastGAT: i) FastGAT-const, where the sampled subgraph is kept constant across all layers and attention heads, and ii) FastGAT-layer, where the subgraph is different in each layer (drawn stochastically from the original edge weights) but the same across all attention heads within a layer.

Algorithm 1: The FastGAT Algorithm
Input: Graph $G(V, E)$; number of layers $L$; number of attention heads $K^{(\ell)}$, $\ell = 1, \cdots, L$; initial weight matrices $W^{(\ell)}$; non-linearity $\sigma$; feature matrix $H \in \mathbb{R}^{N \times D}$; randomized edge sampling function $\mathrm{EdgeSample}(\cdot)$; attention function $\theta(\cdot)$; number of edges sampled $q$.
for each layer $\ell$ do
    for each attention head $k \in \{1, 2, \cdots, K^{(\ell)}\}$ do
        Compute the attention matrix $\alpha_k^{(\ell)} \in \mathbb{R}^{N \times N}$, with $\alpha_k^{(\ell)}(i, j) = \theta_k(h_i^{(\ell)}, h_j^{(\ell)})$
        Sample a graph $\hat{\alpha}_k^{(\ell)} = \mathrm{EdgeSample}(\alpha_k^{(\ell)}, A, q)$
        Compute $H_k^{(\ell+1)} = \sigma\big(\hat{\alpha}_k^{(\ell)} H_k^{(\ell)} W^{(\ell)}\big)$
    $H^{(\ell+1)} = \|_k H_k^{(\ell+1)}$  // concatenate the output of the attention heads
Compute the loss and update the $W$'s  // gradient-based weight update
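To make the EdgeSample step concrete, here is a hedged sketch of effective-resistance-based sampling in the spirit of Spielman & Srivastava (2011). It computes exact resistances via a dense Laplacian pseudoinverse, which is only viable for small graphs (a scalable approximation would be needed in practice), and the sampling-with-replacement and reweighting choices below are illustrative assumptions rather than the paper's exact procedure.

import numpy as np

def effective_resistances(A):
    # L = D - A; R_uv = (e_u - e_v)^T L^+ (e_u - e_v) for each edge (u, v).
    L = np.diag(A.sum(axis=1)) - A
    Lp = np.linalg.pinv(L)
    us, vs = np.nonzero(np.triu(A, k=1))
    return us, vs, Lp[us, us] + Lp[vs, vs] - 2 * Lp[us, vs]

def edge_sample(A, q, rng=np.random.default_rng(0)):
    us, vs, R = effective_resistances(A)
    w = A[us, vs]                          # original edge weights
    p = w * R / np.sum(w * R)              # sampling prob. proportional to w_e * R_e
    idx = rng.choice(len(us), size=q, replace=True, p=p)
    A_s = np.zeros_like(A, dtype=float)
    for e in idx:                          # reweight kept edges by w_e / (q * p_e)
        A_s[us[e], vs[e]] += w[e] / (q * p[e])
    A_s += A_s.T                           # keep the sampled graph symmetric
    return A_s

The reweighting keeps the sampled Laplacian an unbiased estimator of the original, which is what underlies the spectrum-preservation guarantee.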
This paper proposes a paradigm that speeds up the training/inference time of GATs without compromising too much performance. The method adopts a layerwise sampling procedure: in particular, the authors propose to sample a sub-portion of edges for each layer based on their effective resistance. Such sampling keeps the spectrum similar to that of the original graph, which theoretically gives a guarantee on the performance drop.
Cut-and-Paste Neural Rendering
Cut-and-paste methods take an object from one image and insert it into another. Doing so often results in unrealistic-looking images because the inserted object's shading is inconsistent with the target scene's shading. Existing reshading methods require a geometric and physical model of the inserted object, which is then rendered using environment parameters. Accurately constructing such a model from only a single image is beyond the current understanding of computer vision. We describe an alternative procedure, cut-and-paste neural rendering, to render the inserted fragment's shading field consistent with the target scene. We use a Deep Image Prior (DIP) as a neural renderer trained to render an image with consistent image decomposition inferences. The resulting rendering from DIP should have an albedo consistent with the cut-and-paste albedo; it should have a shading field that, outside the inserted fragment, is the same as the target scene's shading field; and the cut-and-paste surface normals should be consistent with the final rendering's shading field. The result is a simple procedure that produces convincing and realistic shading. Moreover, our procedure does not require rendered images or image decompositions of real images or any form of labeled annotations during training. In fact, our only use of simulated ground truth is our use of a pre-trained normal estimator. Qualitative results are strong, supported by a user study comparing against a state-of-the-art image harmonization baseline.

1 INTRODUCTION

Cut-and-paste rendering involves creating a new image by cutting fragments out of one or more source images and pasting them into a target image; the idea originates with Lalonde et al. (2007). Results are often unrealistic because of the difference in illumination between the source and target images. But the procedure is useful to artists, and there is consistent evidence that such procedures can be used to train detectors (Liao et al., 2012; Dwibedi et al., 2017). When the geometry and material of the inserted object are known, it is enough to infer an illumination model from the target, render, and composite. But current procedures for recovering shape and material from a single fragment simply can't deal with most realistic fragments (think of, say, a furry cat). This paper describes an alternative method, cut-and-paste neural rendering, that can render convincing composite images by adjusting the cut-and-paste images so that some simple image inferences are consistent with cut-and-paste predictions. So the albedo from the adjusted image should look like the cut-and-paste albedo; the shading should look like a shading field; and the image should look like an image. A simple post-processing trick produces very high-resolution composites. Note that all our rendered images are at 1024x1024-pixel resolution and are best viewed on screen. Evaluation is mostly qualitative, but we show that our method fools a recent method for detecting tampering. Our contribution is a method that can realistically correct shading in composite images without requiring labeled data; our method works for matte, glossy and specular fragments without an explicit geometric or physical model; and human subjects prefer the results of our method over cut-and-paste and image harmonization.

2 RELATED WORK

Object insertion starts with Lalonde et al. (2007), who insert fragments into target images.
Lalonde et al. (2007) control illumination problems by checking fragments for compatibility with targets; Bansal et al. (2019) do so by matching contexts. Poisson blending (Pérez et al., 2003; Jia et al., 2006) can resolve nasty boundary artifacts, but significant illumination and color mismatches will cause cross-talk between target and fragment, producing ugly results. Karsch et al. (2011) show that computer graphics (CG) objects can be convincingly inserted into inverse rendering models obtained with geometric inference or with single-image depth reconstruction (Karsch et al., 2014). Inverse rendering trained with rendered images can produce excellent reshading of CG objects (Ramachandran, 1988). However, recovering a renderable model from an image fragment is extremely difficult, particularly if the fragment has an odd surface texture. Liao et al. showed that a weak geometric model of the fragment can be sufficient to correct shading if one has strong geometric information about the target scene (Liao et al., 2015; 2019). In contrast, our work is entirely image-based: one takes a fragment from one image, drops it into another, and expects a system to correct it. We use image harmonization (IH) methods as a strong baseline. These procedures aim to correct corrupted images. IH methods are trained to correct images in which a fragment has been adjusted by some noise process (made brighter, recolored, etc.), restoring the original image (Sunkavalli et al., 2010; Tsai et al., 2017; Cong et al., 2020), and so could clearly be applied here. But we find that those image harmonization methods very often change the albedo of an inserted object rather than its shading. This is because they rely on ensuring consistency of color representations across the image. For example, on the iHarmony dataset from Cong et al. (2020), they change pink candy to brown (an albedo change; see Fig 12 in the Appendix). In contrast, we wish to correct shading alone.

Image relighting. With appropriate training data, for indoor scenes, one can predict multiple spherical harmonic components of illumination (Garon et al., 2019), a parametric lighting model (Gardner et al., 2019) or even full radiance maps at scene points from images (Song & Funkhouser, 2019; Srinivasan et al., 2020). For outdoor scenes, the sun's position in panoramas is predicted using a learning-based approach (Hold-Geoffroy et al., 2019). One can also construct a volumetric radiance field from multi-view data to synthesize novel views (Mildenhall et al., 2020). However, we have access neither to training data with lighting parameters/environment maps nor to multi-view data from which to construct such a radiance field. Our renderings are entirely image-based. Recent single-image relighting methods relight portrait faces under directional lighting (Sun et al., 2019; Zhou et al., 2019; Nestmeyer et al., 2020). Our method can relight matte, glossy and specular objects with complex material properties, like cars (Fig 7), in both indoor and outdoor environments with spatially varying illumination, from only a single image and without requiring a physics-based BRDF (Li et al., 2020).

Image decomposition. Land's influential Retinex model assumes that effective albedo displays sharp, localized changes (which result in large image gradients), and that shading has small gradients (Land, 1959a; b; 1977; Land & McCann, 1971). These models require no ground truth.
An alternative is to use CG-rendered images for image decomposition training (Li & Snavely, 2018), particularly with specialized losses (Bi et al., 2015; Fan et al., 2018). One can also train using rendering constraints to produce a form of self-supervised training (Janner et al., 2017). Current image decomposition evaluation uses the weighted human disagreement rate (WHDR) (Bell et al., 2014); the current champions are (Fan et al., 2018). We use an image decomposition method built around approximate statistical models of albedo and shading (paradigms) to train our image decomposition network without requiring ground truth decompositions of real images. Our method has a reasonable, but not state-of-the-art, WHDR; and we show that improvements in WHDR do not result in improvements in reshading (Fig 5).

3 CUT-AND-PASTE NEURAL RENDERER

We synthesize a reshaded composite image containing a fragment transferred from a source image into a target scene image. We use a deep image prior (DIP) (Ulyanov et al., 2018) as a neural renderer to produce a reshaded image that yields consistent image decomposition inferences. We use an image decomposition network trained on paradigms (statistical samples of albedo, shading and gloss; Fig 4a), not real images, as described in Section 3.3, and normals inferred by the method of Nekrasov et al. (2019) to meet the shading consistency tests (Section 3.2). The final reshaded image's albedo must be like the cut-and-paste albedo; the reshaded image's shading must match the shading of the target scene outside the fragment; and the shading of the reshaded image must have reasonable spherical harmonic properties and meet a consistency test everywhere. Fig 2 summarizes our method.

3.1 DEEP IMAGE PRIOR FOR RENDERING CUT-AND-PASTE IMAGES

Assume we have a noisy image $I_t$ and wish to reconstruct the original. Write $z$ for a random vector, $f_\theta$ for a CNN with parameters $\theta$, and $E(f_\theta(z); I_t)$ for a loss comparing the image $f_\theta(z)$ to $I_t$. The Deep Image Prior seeks

$\hat{\theta} = \arg\min_\theta E(f_\theta(z); I_t)$ (1)

and then reports $f_{\hat{\theta}}(z)$. We modify this formulation by requiring that $E(\cdot; I_t)$ impose inferential consistency. In particular, write $g_\phi$ for some inference network(s) and $t_\psi(I_s, I_t)$ for inferences constructed out of $I_t$ and the source image $I_s$. We seek

$\hat{\theta} = \arg\min_\theta E(g_\phi(f_\theta(z)); t_\psi(I_s, I_t))$. (2)

For us, $g_\phi$ is an image decomposition network (pretrained and fixed), and $t_\psi$ creates target albedo ($A_t$), shading ($S_t$) and gloss ($G_t$) fields. We then train the DIP to produce an image that has reasonable intrinsic image properties. For the DIP, the input $z$ is the cut-and-paste image, and $f_\theta$ is optimized to inpaint the inserted fragment and also to meet satisfactory intrinsic image properties. We use a U-Net with partial convolutions (Liu et al., 2018; Shih et al., 2020; Dundar et al., 2020). However, we find that the standard partial convolution often converges to a trivial solution, producing images close to cut-and-paste and without convincing reshading. To prevent this overfitting to cut-and-paste images, we flip the context for the partial convolution; that is, we consider the inserted fragment as the context and hallucinate/outpaint the entire target scene around it. We can view this as an inverse partial convolution.
We write $CP(I_s; I_t; s)$ for an operator that cuts the fragment out of the source image $I_s$, scales it by $s$, and places it in the relevant location in the target image $I_t$, and $M$ for a mask the size of the target image that is 0 inside the fragment and 1 outside. The reconstruction loss for the background is given by:

$L_{recons} = \left\| \left( I_t - f_\theta(CP(I_s; I_t; s); M) \right) \odot M \right\|^2$ (3)

We then pass the DIP-rendered image through the image decomposition network $g_\phi$, obtaining $A_{render}$, $S_{render}$ and $G_{render}$ for the albedo, shading and gloss respectively. Our consistent image decomposition inference losses for training the DIP are:

$L_{decomp} = \| A_{CP(I_s; I_t; s)} - A_{render} \|^2 + \| (S_t - S_{render}) \odot M \|^2 + \| (G_t - G_{render}) \odot M \|^2$ (4)
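A hedged sketch of how the losses in Eqs. (3) and (4) could be assembled is below. The callables dip and decomp stand in for the partial-convolution U-Net $f_\theta$ and the fixed decomposition network $g_\phi$; the use of the cut-and-paste image's own decomposition for the albedo target follows the text, but the tensor shapes and the equal weighting of the loss terms are assumptions made here.

import torch

def neural_render_losses(dip, decomp, cut_paste, mask, I_t, S_t, G_t):
    # cut_paste, I_t: (B, 3, H, W); mask: (B, 1, H, W), 0 inside the fragment.
    render = dip(cut_paste, mask)          # f_theta(CP(Is; It; s); M)
    A_r, S_r, G_r = decomp(render)         # g_phi inferences on the rendering
    A_cp, _, _ = decomp(cut_paste)         # cut-and-paste albedo target
    l_recons = (((I_t - render) * mask) ** 2).mean()             # Eq. (3)
    l_decomp = ((A_cp - A_r) ** 2).mean() \
             + (((S_t - S_r) * mask) ** 2).mean() \
             + (((G_t - G_r) * mask) ** 2).mean()                # Eq. (4)
    return l_recons + l_decomp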
This paper proposes cut-and-paste neural rendering, which makes it possible to insert objects into a target scene in a plausible manner, i.e., with plausible shading. At the core of the approach is a deep image prior that matches the shading and albedo fields via shading and albedo consistency losses. A normal estimation network trained on synthetic data is used to further inform shading estimation. The approach is interesting and shows plausible results.
Optimizing Transformers with Approximate Computing for Faster, Smaller and more Accurate NLP Models
1 INTRODUCTION

Transformer networks with hundreds of billions of parameters, such as T5 (Raffel et al. (2019)), Megatron (Shoeybi et al. (2019)), BERT (Devlin et al. (2019)), GPT-2 (Radford et al. (2019)) and GPT-3 (Brown et al. (2020)), have achieved state-of-the-art performance in several Natural Language Processing tasks. Model sizes are expected to grow further in the future, as increasing the number of parameters has been shown to improve performance. For instance, increasing the number of parameters from 1.5B to 175B enabled a reduction in perplexity for Language Modelling (Penn Treebank) from 35.8 in GPT-2 to 20.5 in GPT-3. This makes it computationally challenging to train Transformers as well as to perform inference with them. The challenges associated with training these models are alleviated through the (re-)use of pre-trained models that are subsequently fine-tuned for different tasks. Consequently, these models incur a major one-time cost in computational resources, time and energy during pre-training, but the repeated fine-tuning for individual downstream tasks is performed at a considerably lower cost. However, performing inference using fine-tuned Transformer models continues to remain a challenge because of the large amount of storage and compute operations required. Prior research efforts have explored different techniques for improving the efficiency of Transformer inference. However, several of the proposed approaches either require training the network completely from scratch (which is extremely compute- and memory-intensive) or cause significant degradation in accuracy on the downstream task. In this work, we overcome these limitations by exploiting the transfer learning step in Transformers to produce individually optimized models for the different downstream tasks, using techniques that do not require training from scratch and that maintain or improve accuracy levels. From the runtime and memory breakdown of Transformers (Fig. 1), we observe that the most time-consuming and memory-intensive operations in a Transformer are the self-attention (ATTN) blocks, which are used to identify and form relationships between the different tokens in text, and the feed-forward neural network (FFN) blocks in the Transformer layers. These blocks together account for more than 85% of the inference time (and more than 75% of the model's parameters). We accordingly optimize the execution of these two components in our approach. The self-attention component dominates the execution time and memory footprint for longer context lengths, as its operation scales quadratically in time and memory with sequence length. Some previous works (Kitaev et al. (2020), Ye et al. (2019)) have addressed this issue, accelerating the training and inference of Transformers when large context lengths are used. However, they suffer from significant overheads and slowdowns in applications with smaller context lengths, such as question answering, where questions and answers are usually short, on the order of a few hundred tokens. Our approach, on the other hand, performs well across context lengths, hidden layer sizes, numbers of layers and other network characteristics.
The pre-training of Transformer models with some initial objective (most commonly predicting masked words in a large text corpus) and the subsequent fine-tuning on a downstream task leads to highly over-parameterized models for many downstream tasks (Michel et al. (2019)), providing ample opportunities for approximations. As these models grow larger, such opportunities are expected to increase even further. We observe that for a given downstream task, some parts of the pre-trained Transformer are significant for obtaining good accuracy, while other parts are less important or unimportant. In order to exploit this observation in a principled manner, we introduce a framework that applies approximations while fine-tuning a pre-trained Transformer network, optimizing for either the size, latency, or accuracy of the final network. We perform significance analysis and apply our optimizations in a hierarchical manner, first pruning entire blocks, followed by attention heads, and finally weight groups. We achieve further gains by also allowing elements that cannot be pruned to be approximated by other techniques. We specifically apply two forms of approximation, depending on the element type. For weights, we utilize quantization. For the self-attention operation, we replace the scaled dot product attention mechanism with a novel sign matching-based attention mechanism. We summarize our main contributions as follows:
• We introduce a framework for creating fine-tuned models from pre-trained Transformer models that are optimized for various metrics (size, latency, accuracy).
• We incorporate multiple heuristics in the framework, such as hierarchical processing, model-driven insights, and run-time based ordering of elements.
• We propose a significance analysis technique to identify the importance of each element of the pre-trained Transformer for a given downstream task. We use this technique to prune entire blocks, attention heads, and weight groups, and to guide the quantization of low-importance weights.
• We propose a low-complexity attention mechanism, sign matching, to approximate dot product attention in the less significant attention layers.
• Across a suite of different Transformer networks, including previously proposed optimized networks, we demonstrate that our techniques produce models that are up to 4× faster and up to 14× smaller (with less than 0.5% relative accuracy degradation), or up to 5.5% more accurate with simultaneous size and latency improvements.
2 RELATED WORK. Given the effectiveness and popularity of Transformer models, several techniques have been proposed to overcome their computational and memory challenges, and to accelerate inference using these models. Most of these works directly pre-train efficient models from scratch. For example, DistilBERT (Sanh et al. (2019)), MobileBERT (Sun et al. (2020)) and TinyBERT (Jiao et al. (2019)) use knowledge distillation to train smaller and faster networks using the original network as a teacher. LayerDrop (Fan et al. (2020)) randomly drops layers during pre-training, thereby enabling their dropping during inference. SchuBERT (Khetan & Karnin (2020)) learns the optimal sizes of the BERT elements during pre-training. Lite Transformer (Wu et al. (2020)) uses Long-Short Range Attention to speed up the self-attention operation, with different attention heads attending to local and global context. Depth-adaptive Transformer (Elbayad et al.
(2020)) and DeeBERT (Xin et al. (2020)) modulate Transformer depth depending on the complexity of each input sample, using gating functions that are trained along with the model. AlBERT (Lan et al. (2020)) uses factorized embeddings and cross-layer parameter sharing. These works are orthogonal to ours, as the models that they produce are still subsequently fine-tuned for downstream tasks. We demonstrate using DistilBERT, AlBERT and LayerDrop as examples that these optimized networks still offer significant opportunities that our techniques can take advantage of. Other works (including ours) aim to improve the inference efficiency of Transformers using techniques that do not require training new models from scratch. Among these, PoWER-BERT (Goyal et al. (2020)), which eliminates redundant word vectors from the model without removing any parameters, and Q8BERT (Zafrir et al. (2019)), which quantizes all weights and activations in the model to 8-bit integers through the use of Quantization-Aware Training at fine-tuning time, are orthogonal and complementary to our work. Poor Man's BERT (Sajjad et al. (2020)) evaluates several layer-dropping techniques that do not require re-training. Compared to such layer-dropping techniques, our techniques produce models that are up to 20% more accurate at comparable inference speed, and this is especially true when working with highly optimized baselines such as Q8BERT. Our framework can also be adapted to satisfy a wide range of user constraints. 3 PRELIMINARIES. A Transformer (Fig. 1) consists of an embedding layer, followed by multiple transformer layers stacked together, and a task-specific final layer. A transformer layer consists of the multi-headed self-attention operation (ATTN block), followed by a feed-forward neural network (FFN block), with layer norm operations at the input and output of the layer. In this work, we define the elements of a Transformer at different levels of granularity, i.e., ATTN blocks, FFN blocks, attention heads and weight groups. We define weight groups only along dimensions that do not impact the shape of the output of the block when these groups are removed. The self-attention operation takes as input a sequence of n vectors X and computes three matrices: Query = X × W_q, Key = X × W_k and Value = X × W_v. The output of the self-attention operation is then computed as Y = softmax((Query × Key^T) + attention_mask) × Value. For auto-regressive models, tokens are not allowed to attend to future tokens. Hence, an attention mask is applied before the softmax operation, setting attention scores with future tokens to a very large negative number, which becomes zero after the softmax operation. This operation has multiple "attention heads" working in parallel on the input sequence, where each head has its own set of parameters to compute the query, key and value matrices. The independent attention outputs are concatenated and transformed into the expected output dimensions. The self-attention operation scales quadratically in time and memory with sequence length n, since Query × Key^T has n² entries. 4 DESIGN METHODOLOGY. We propose a framework for producing fine-tuned Transformer models that are optimized for a specific metric (speed, model size, or accuracy). Fig. 2 presents an overview of the proposed framework.
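Before describing the framework, the self-attention operation from Section 3 can be sketched in a few lines of NumPy. This is a minimal single-head version; the variable names are ours, and the multi-head reshaping and output projection are omitted:

import numpy as np

def self_attention(X, Wq, Wk, Wv, attention_mask=None):
    # X: (n, d) input sequence; Wq, Wk, Wv: (d, y) projection matrices.
    Query, Key, Value = X @ Wq, X @ Wk, X @ Wv
    scores = Query @ Key.T                      # (n, n); the quadratic term in n
    if attention_mask is not None:
        scores = scores + attention_mask        # large negatives block future tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ Value                      # (n, y)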
As shown in the figure, the inputs to the framework are a pre-trained Transformer model, the fine-tuning dataset, the goal of optimization (speed, size or accuracy) and the acceptable accuracy loss (when optimizing for speed or size). The framework has three major components: (i) a set of heuristics used to build an ordered queue of elements (TransElements) to be considered, (ii) a significance analysis method to identify insignificant elements in a pre-trained Transformer, and (iii) a set of techniques to prune or approximate the insignificant elements. The framework proceeds in an iterative manner. That is, we first start with the original Transformer. We then remove an element from the TransElements queue, analyze its significance, and apply pruning/approximation techniques to the element. This results in a new Transformer, where the element is replaced by the pruned or approximated version. This modified Transformer is then used as the baseline for the next iteration. After processing all of the identified elements, we fine-tune on the downstream task for the same number of epochs as the baseline model to obtain the final, optimized model. A detailed description of our methodology for approximating Transformers is presented in Fig. 2 and in Algorithm 4. In the following subsections, we further describe our techniques for generating the ordered queue TransElements, followed by the significance analysis method, and finally the pruning and approximation techniques for different Transformer elements. TransElement Ordered Queue. In order to optimize a given model, we would ideally want to characterize the significance of each and every parameter in the model, rank them in order of importance, and finally prune/approximate only the least significant parameters, as in Molchanov et al. (2017). However, Transformers have billions of parameters, making this process computationally infeasible. In addition, previously proposed techniques that can efficiently estimate the importance of each parameter, such as those using Taylor expansion, are not applicable. This is because the {approximate, fine-tune, approximate} cycle does not work for Transformers during fine-tuning, since they very quickly overfit the training data for the downstream task (usually within 5 epochs). We take advantage of the hierarchical structure of Transformers and consider their elements in order of increasing granularity. Specifically, we place entire FFN and ATTN blocks earlier in the queue, followed by heads, and finally weight groups. Through this ordering, we are able to quickly eliminate large numbers of parameters from further consideration, speeding up future iterations of the framework. For example, eliminating a single FFN block in the BERT-Base model removes 5.6% of all parameters under consideration. To further reduce the number of elements under consideration, we also dynamically remove elements from the queue if they are encompassed by a high-importance block. For example, if a given ATTN block is determined to be of high importance, we remove all heads and weight groups within that block from the TransElement queue. Since the framework iterates through the entries of the TransElement queue sequentially, its efficacy depends on the ordering of the elements at each level of granularity. In order to minimize the run-time of the framework, we provide two additional heuristics to guide the ordering of elements, described after the sketch below.
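As a rough schematic of the iterative procedure just described (illustrative only: the element names, loss values, and the exact mapping from the acceptable accuracy loss to the two thresholds are our assumptions, not the paper's):

# Toy stand-in for the framework's main loop: each element is tried for removal,
# and the resulting loss decides whether it is pruned, approximated, or kept.
baseline_loss = 0.50
acceptable_degradation = 0.02                      # user-provided accuracy budget
prune_t = baseline_loss + acceptable_degradation   # assumed threshold mapping
approx_t = baseline_loss + 2 * acceptable_degradation

# Hypothetical (element, loss-when-removed) pairs, coarse blocks before heads.
queue = [("FFN_12", 0.505), ("ATTN_12", 0.560), ("HEAD_11_3", 0.530)]
pruned, approximated = [], []
for element, loss_without in queue:
    if loss_without <= prune_t:
        pruned.append(element)                     # insignificant: prune it
    elif loss_without <= approx_t:
        approximated.append(element)               # borderline: quantize/sign-match
    # else: high importance; keep it and skip its children (not modelled here)
print(pruned, approximated)                        # ['FFN_12'] ['HEAD_11_3']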
First, we use the unique linguistic properties captured by the different Transformer layers (Jawahar et al. (2019)). These properties depend on both the Transformer and the downstream task under consideration, since different tasks require different types of linguistic knowledge. For example, the top layers usually have low significance for Language Understanding tasks, since long-range dependency information is not required for most such tasks (for example, sentiment analysis requires only local context). Hence, we place the final layer at the front of the queue and work our way backwards towards the first layer, since blocks in the final layers are more likely to be removed, thereby speeding up future iterations. Second, we use a run-time (or parameter-count) aware ordering of the TransElements, such that the most time-consuming blocks (or blocks with the most parameters) are likely to be removed earlier in the algorithm. For example, at large context lengths we start with the ATTN blocks in all layers before moving on to the FFN blocks, and vice versa at small context lengths. This has the dual benefit of producing highly optimized models for inference and of speeding up the significance analysis process, since eliminating time-consuming blocks early makes further iterations faster. Algorithm 1 and Fig. 2 describe the process of creating the TransElement queue. The utility of this framework and the heuristics used are discussed in Appendix C. Significance Analysis. To determine the significance of each Transformer element, we first fine-tune the original Transformer model for the given downstream task to obtain the baseline loss. We then use this baseline loss, along with the provided acceptable accuracy degradation, to generate a set of loss thresholds that determine whether a given element is of low importance and can therefore be pruned/approximated. This is a one-time step performed globally for all elements in the TransElements queue. Then, for the element under consideration in each iteration of the framework, we compute the loss of the current Transformer model with the element removed. We then compare this loss to the thresholds determined above. The exact thresholds used depend on the optimization metric: speed, size, or accuracy. If we are optimizing the network for speed or size, we prune the element under consideration if the training/validation loss upon removing it from the Transformer is less than the pruning threshold. If we are optimizing for accuracy, we prune the element only if the training/validation loss when it is removed is less than the minimum loss seen thus far during the optimization process, since the goal is to find a model with minimum loss. Similarly, we apply approximations if the loss with the element removed from the Transformer is greater than the pruning threshold but lower than the approximation threshold. Algorithm 2 describes Significance Analysis. Pruning and Approximating. As evident from Section 3, the structure and functionality of ATTN blocks differ significantly from those of FFN blocks in a Transformer. We accordingly adopt different strategies for approximating them, as described below. Pruning an entire ATTN or FFN block, however, is effectively the same in both cases, as it simply involves using the skip connection to bypass that block. The pruning strategies for the FFN and ATTN blocks are illustrated in Fig. 4 and Fig. 5. Pruning Weight Groups within approximable FFN Blocks.
Consider an approximable FFN block that performs the transformation R^{n×d} × R^{d×y} → R^{n×y}, with weight groups defined along the d dimension (d/W weight groups of W weights each, where W is a hyperparameter that defines the granularity of approximations). When optimizing models for accuracy, we remove weight groups only if doing so reduces the model loss. When optimizing for size, we remove weight groups whose removal keeps the loss within the pruning threshold. When optimizing for speed, however, removing low-significance weight groups from arbitrary locations does not help, since it introduces unstructured sparsity in the weight matrix that is difficult to exploit for speedups. Instead, we impose structure on our pruning. Specifically, we use a "greedy shrinking" algorithm that finds the largest number of weight groups that can be removed while maintaining the loss below the threshold, such that the weight groups remaining in the model form a contiguous block (a sketch of this procedure appears after the sign-matching discussion below). We first start from the bottom (weight group 0), work our way up, and remove as many weight groups as possible while staying within the loss threshold. We then start from the top (weight group d/W), work our way down, and remove as many weight groups as possible while staying within the loss threshold. When this process is completed, the weight groups that remain form a contiguous dense block, enabling speedups on all hardware platforms. Since weight groups are removed along the "hidden" dimension d, our method does not change the shape of the output of this block; instead, we are simply "shrinking" the effective hidden dimension through structured pruning. Quantizing Weight Groups within approximable FFN and ATTN Blocks. When optimizing the Transformer for size, we also quantize weight values within weight groups for which the loss lies between the pruning and approximation thresholds. We use the uniform quantization with Quantization-Aware Training proposed in Q8BERT (Zafrir et al. (2019)) within our hierarchical framework to quantize insignificant weight groups to lower precisions. This reduces the memory requirements of those weight groups but does not improve execution time, as the computations are still performed at the baseline precision. Pruning ATTN heads and Weight Groups within approximable ATTN Blocks. We divide the multi-headed self-attention operation into two main steps. In the first step, we compute the Query, Key and Value matrices by multiplying the input to this layer with the corresponding weight matrices (R^{n×d} × R^{d×y} → R^{n×y}), and then reshape them into multiple attention heads (R^{n×y} → R^{n×h×(y/h)}). Our approach to pruning this step is exactly the same as for the FFN blocks: we iteratively prune weight groups along the d dimension using our shrinking algorithm. In the second step, we compute the "attention output" as Y = softmax((Query × Key^T) + attention_mask) × Value. To optimize this step, we apply two techniques. First, we identify insignificant attention heads and prune them from the model. However, removing attention heads changes the shape of the output of this layer. We overcome this by keeping track of the pruned heads and padding the output with zeros in the corresponding locations.
In spite of this overhead, we still achieve significant speedups from this approximation technique, since pruning heads makes multiple downstream operations (computing the attention scores, applying softmax to the attention scores, and computing the final output) considerably faster. We therefore do not use our greedy shrinking method here, but rather rely on unstructured pruning, as it allows for greater pruning, which further benefits the downstream operations. Second, we dynamically reduce the size of the key and value matrices by pruning weight groups from the same locations along the n dimension in both matrices, based on sign matches with the query vectors. This again makes multiple downstream operations considerably faster and does not change the shape of the output of the pruned block. Approximating self-attention within approximable ATTN Blocks. We observe that the "attention scores" matrix is highly sparse, especially after the softmax operation. This sparsity implies that most of the dot products between the queries and the keys are unnecessary. Thus, we would ideally like to efficiently perform the attention operations only for the query vectors that give the highest dot products with each key vector, without explicitly computing all of the dot products. To this end, we propose replacing the O(n²) dot product-based attention mechanism with a linear-time sign-matching-based mechanism in approximable ATTN blocks. Sign-matching attention (SM) is based on the idea that key vectors whose signs match those of the largest number of query vectors will have high dot products with the largest number of query vectors. However, it is expensive to compute a sign match for all pairs of query-key vectors, as this also grows quadratically. Instead, we employ a low-cost approximation. For each column of the query matrix, we determine whether more query vectors have a positive or a negative value in that column. This becomes the representative sign of that column for all the query vectors. Each key vector is then scored by how well the sign of each of its elements matches the sign of the representative query vector, computed as the Hamming distance between the two sign vectors. This score is used to select the top K key vectors. As a result, we reduce the number of computations required to score the key vectors (and the overall complexity) from O(n²) to O(n). Sign matching is illustrated in Fig. 2 and explained in detail in Appendix B. As this approximation neither increases the accuracy of the models nor decreases the number of parameters, it is only applied when optimizing the fine-tuned models for speed.
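Two of the procedures above admit compact sketches. Both are illustrative readings rather than the authors' code: the loss-evaluation callback interface and the tie-breaking details are assumptions on our part.

import numpy as np

def greedy_shrink(num_groups, loss_if_kept, threshold):
    # loss_if_kept(lo, hi): model loss when only weight groups [lo, hi) are kept.
    lo, hi = 0, num_groups
    while lo < hi and loss_if_kept(lo + 1, hi) <= threshold:
        lo += 1                      # shrink from the bottom (group 0 upwards)
    while hi > lo and loss_if_kept(lo, hi - 1) <= threshold:
        hi -= 1                      # shrink from the top downwards
    return lo, hi                    # surviving groups form a contiguous block

def sign_matching_topk(Query, Key, k):
    # Representative sign per column: the majority sign across all query vectors.
    rep_sign = np.sign(np.sign(Query).sum(axis=0))            # (d,)
    # Score each key vector by sign agreement with rep_sign; this is d minus the
    # Hamming distance between the sign vectors, computed in O(n) overall.
    agreement = (np.sign(Key) == rep_sign).sum(axis=1)        # (n,)
    return np.argsort(-agreement)[:k]                         # top-K key indices

greedy_shrink returns the [lo, hi) range of surviving weight groups, whose rows form the shrunken dense block, while the indices returned by sign_matching_topk would select the rows of Key and Value used in the subsequent, much smaller, dot-product attention.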
This paper presents a method for improving a fine-tuned Transformer in terms of a specific metric such as size, speed, or accuracy. Candidate elements for removal are considered hierarchically with some heuristics and are evaluated in terms of training and validation loss to determine whether they should actually be removed from the model. The authors apply their method to several state-of-the-art Transformer models and show that they can produce fast and compact models without losing much accuracy.
SP:0e68a02aff6bc3918d91083d6b48a3d625ebdc5d
Multi-Level Generative Models for Partial Label Learning with Non-random Label Noise
1 INTRODUCTION. Partial label (PL) learning is a weakly supervised learning problem with ambiguous labels (Hüllermeier & Beringer, 2006; Zeng et al., 2013), where each training instance is assigned a set of candidate labels, among which only one is the true label. Since it is typically difficult and costly to annotate instances precisely, the task of partial label learning naturally arises in many real-world learning scenarios, including automatic face naming (Hüllermeier & Beringer, 2006; Zeng et al., 2013) and web mining (Luo & Orabona, 2010). As the true label information is hidden in the candidate label set, the main challenge of PL learning lies in identifying the ground-truth labels among the candidate noise labels, with the aim of learning a good prediction model. Some previous works have made efforts to adjust existing effective learning techniques to directly handle the candidate label sets and perform label disambiguation implicitly (Gong et al., 2018; Nguyen & Caruana, 2008; Wu & Zhang, 2018). These methods are good at exploiting the strengths of standard classification techniques and have produced promising results on PL learning. Another set of works pursues explicit label disambiguation by trying to identify the true labels from the noise labels in the candidate label sets. For example, the work in (Feng & An, 2018) tries to estimate the latent label distribution with iterative label propagations and then induce a prediction model by fitting the learned latent label distribution. Another work in (Lei & An, 2019) exploits a self-training strategy to induce label confidence values and learn classifiers in an alternating manner by minimizing the squared loss between the model predictions and the learned label confidence matrix. However, these methods suffer from the cumulative errors induced in either the separate label distribution estimation steps or the error-prone label confidence estimation process. Moreover, all these methods have a common drawback: they implicitly assume random noise in the label space; that is, they assume the noise labels are randomly distributed in the label space for each instance. However, in real-world problems the appearance of noise labels usually depends on the target true label. For example, when the object contained in an image is a "computer", a noise label "TV" could be added due to a recognition mistake or image ambiguity, but the object is less likely to be annotated as "lamp" or "curtain", while the probability of getting noise labels such as "tree" or "bike" is even smaller. In this paper, we propose a novel multi-level adversarial generative model, MGPLL, for partial label learning. The MGPLL model comprises conditional generators at both the label level and the feature level. The noise label generator directly models non-random appearances of noise labels conditioned on the true label by adversarially matching the observed candidate labels, while the data feature generator models the data samples conditioned on the corresponding true labels by adversarially matching the observed data sample distribution. Moreover, a prediction network is incorporated to predict the denoised true label of each instance from its input features, which, together with the data feature generator, forms inverse mappings between labels and features.
The learning of the overall model corresponds to a minimax adversarial game, which simultaneously identifies the true labels of the training instances from both the observed data features and the observed candidate labels, while inducing an accurate prediction network that maps input feature vectors to (denoised) true label vectors. To the best of our knowledge, this is the first work that exploits multi-level generative models to model non-random noise labels for partial label learning. We conduct extensive experiments on real-world and synthesized PL datasets. The empirical results show that the proposed MGPLL achieves state-of-the-art PL performance. 2 RELATED WORK. Partial label (PL) learning is a popular weakly supervised learning framework (Zhou, 2018) in many real-world domains, where the true label of each training instance is hidden within a given candidate label set. The challenge of PL learning lies in disambiguating the true labels from the candidate label sets in order to induce good prediction models. One strategy for PL learning is to adapt standard learning techniques and implicitly disambiguate the noise candidate labels through the statistical prediction pattern of the data. For example, with maximum likelihood techniques, the likelihood of each PL training sample can be defined over its candidate label set instead of its implicit ground-truth label (Jin & Ghahramani, 2003; Liu & Dietterich, 2012). For the k-nearest neighbor technique, the candidate labels from neighboring instances can be aggregated to induce the final prediction on a test instance (Hüllermeier & Beringer, 2006; Gong et al., 2018; Zhang & Yu, 2015). For the maximum margin technique, the classification margin can be defined over the predictive difference between the candidate labels and the non-candidate labels for each PL training sample (Nguyen & Caruana, 2008; Yu & Zhang, 2016). For the boosting technique, the weight of each PL training instance and the confidence value of each candidate label being the ground-truth label can be refined in each boosting round (Tang & Zhang, 2017). For the error-correcting output codes (ECOC) technique, multiple binary classifiers corresponding to the ECOC coding matrix are built based on the transformed binary training sets (Zhang et al., 2017). For binary decomposition techniques, a one-vs-one decomposition strategy has been adopted to address PL learning by considering the relevance of each label pair (Wu & Zhang, 2018). Recently, there has been increasing attention to designing explicit feature-aware disambiguation strategies (Feng & An, 2018; Xu et al., 2019a; Feng & An, 2019; Wang et al., 2019a). The authors of (Feng & An, 2018) attempt to refine the latent label distribution using iterative label propagations and then induce a predictive model based on the learned latent label distribution. However, the latent label distribution estimation in this approach can be impaired by the cumulative error induced in the propagation process, which can consequently degrade the PL learning performance, especially when the noisy labels dominate. Another work in (Lei & An, 2019) tries to refine the label confidence values with a self-training strategy and induce the prediction model over the refined label confidence scores via alternating optimization.
Its estimation error on the confidence values, however, can negatively impact the coupled partial label classifier due to the nature of the alternating optimization. A recent work in (Yao et al., 2020) proposes to address the PL learning problem by enhancing the representation ability via deep features and improving the discrimination ability through margin maximization between the candidate labels and the non-candidate labels. Another recent work in (Yan & Guo, 2020) proposes to dynamically correct label confidence values with a batch-wise label correction strategy and induce a robust predictive model based on MixUp-enhanced data. Although these works demonstrate good empirical performance, they are subject to the common drawback of assuming random distributions of noise labels by default, which does not hold in many real-world learning scenarios. This paper presents the first work that explicitly models non-random noise labels for partial label learning. PL learning is related to other types of weakly supervised learning problems, including noise label learning (NLL) (Xu et al., 2019b; Thekumparampil et al., 2018; Arazo et al., 2019) and partial multi-label learning (PML) (Wang et al., 2019b; Fang & Zhang, 2019; Xie & Huang, 2018), but addresses a different problem from both. The main difference between PL learning and these two well-established learning problems lies in the assumptions about the label information provided by the training samples. Both PL learning and NLL aim to induce a multi-class prediction model from training instances with noise-corrupted labels. However, NLL assumes the true labels of some training instances are replaced by noise labels, while PL assumes the true label coexists with the noise labels in the candidate label set of each training instance. Hence, off-the-shelf NLL methods cannot be directly applied to solve the PL learning problem. Both PL learning and PML learn from training samples with ambiguous candidate label sets, which contain the true labels and additional noise labels. But PL learning addresses a multi-class learning problem where each candidate label set contains only one true label, while PML addresses a multi-label learning problem where each candidate label set contains all of the true labels, whose number is unknown. Wasserstein Generative Adversarial Networks (WGANs) (Arjovsky et al., 2017), which perform minimax adversarial training with a generator and a discriminator, are a popular alternative to standard GANs (Goodfellow et al., 2014b) due to their effective and stable training. During the past few years, WGANs have been applied successfully to various problems, including adversarial sample generation (Zhao et al., 2017), domain adaptation (Dou et al., 2018), and learning with noisy labels (Chen et al., 2018). This paper presents the first work that exploits WGANs to model non-random noise labels for partial label learning. 3 PROPOSED APPROACH. Given a partial label training set S = {(x_i, y_i)}_{i=1}^n, where x_i ∈ R^d is a d-dimensional feature vector for the i-th instance, and y_i ∈ {0, 1}^L denotes the candidate label indicator vector associated with x_i, which has multiple 1 values corresponding to the ground-truth label and the additional noise labels, the task of PL learning is to learn a good multi-class prediction model from S.
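As a small illustration of this setup (the label index and noise labels below are made up), a PL training label is a multi-hot indicator over L classes:

import numpy as np

L = 5                                  # number of classes
true_label = 2
noise_labels = [1]                     # non-random: typically correlated with class 2
y = np.zeros(L)
y[[true_label] + noise_labels] = 1.0   # candidate indicator vector, e.g. [0 1 1 0 0]
# The learner sees only (x, y); which of the 1-entries is the true label is hidden.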
In real-world scenarios, the irrelevant noise labels are typically not presented in a random manner, but rather are correlated with the ground-truth label. In this section, we present a novel multi-level generative model for partial label learning, MGPLL, which models non-random noise labels using an adversarial conditional noise label generator, and builds connections between the denoised label vectors and instance features using a label-conditioned feature generator and a label prediction network. The overall model learning problem corresponds to a minimax adversarial game, which conducts multi-level generator learning by matching the observed data in both the feature and label spaces, while strengthening the correspondence between features and labels to induce an accurate multi-class prediction model. Figure 1 illustrates the proposed multi-level generative model, MGPLL, which attempts to address the partial label learning problem at both the label level and the feature level under a bi-directional mapping framework. The MGPLL model comprises five component networks: the conditional noise label generator, Gn, which models the noise labels conditioned on the ground-truth label at the label level; the conditional data generator, Gx, which generates data samples at the feature level conditioned on the denoised label vectors; the discriminator, Dn, which separates the generated candidate label vectors from the observed candidate label vectors in the real training data; the discriminator, Dx, which separates the generated samples from the real data in the feature space; and the prediction network, F, which predicts the denoised label for each sample from its input features. z_p denotes a one-hot label indicator vector sampled from a multinomial distribution P_z. The conditional noise label generator Gn induces the denoised prediction target for the prediction network F, while the conditional data generator Gx learns an inverse mapping at the feature level that maps the denoised label vectors in the label space to data samples in the feature space. Below we present the details of the two-level generation and the overall learning algorithm.
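The excerpt stops before the objective details, but the two-level WGAN game just described can be sketched roughly as follows. This is PyTorch-style pseudocode under our own assumptions: the module interfaces, the clamped composition of z_p with the generated noise labels, and the omission of the paper's additional correspondence and classification terms are all ours, not the authors' specification.

import torch

def mgpll_step_losses(G_n, G_x, D_n, D_x, F, x, y_cand, z_p):
    # Label level: a sampled true label z_p plus generated noise labels should
    # look like a real observed candidate label vector.
    fake_cand = torch.clamp(z_p + G_n(z_p), 0.0, 1.0)
    loss_Dn = D_n(fake_cand).mean() - D_n(y_cand).mean()     # WGAN critic loss
    # Feature level: samples generated from the predicted (denoised) labels
    # should look like real feature vectors.
    fake_x = G_x(F(x))
    loss_Dx = D_x(fake_x).mean() - D_x(x).mean()
    # Generators and the predictor F are trained to fool both critics.
    loss_gen = -D_n(fake_cand).mean() - D_x(fake_x).mean()
    return loss_Dn, loss_Dx, loss_gen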
This submission proposes a new method for learning from data with partially observed labels. In this problem, every instance has a candidate label set which contains the true label. This submission introduces adversarial learning to improve the disambiguation of inexact labels. In particular, there are two adversarial learning components. In the first component, a generator tries to match the distribution of candidate label sets given the "true" label of an instance. In the second component, a generator tries to learn the distribution of instances given their "true" labels. Since the "true" label is not accessible, the "true" label is actually produced by a predictive model.
SP:c0f80cb8844c1d9e6490f25a0b8feaa27557086c
A Discriminative Gaussian Mixture Model with Sparsity
1 INTRODUCTION. In probabilistic classification, a discriminative model is an approach that assigns a class label c to an input sample x by estimating the posterior probability P(c|x). The posterior probability P(c|x) should be modeled correctly because it relates not only to classification accuracy, but also to the confidence of decision making in real-world applications such as medical diagnosis support. In general, such a model calculates the class posterior probability using the softmax function after nonlinear feature extraction. Classically, a combination of the kernel method and the softmax function has been used. The recent mainstream approach is to use a deep neural network for representation learning and softmax for the calculation of the posterior probability. Such a general procedure for developing a discriminative model potentially contains a limitation due to unimodality. A softmax-based model, such as the fully connected (FC) layer with a softmax function often used in deep neural networks (NNs), assumes a unimodal Gaussian distribution for each class (details are shown in Appendix A). Therefore, even if the feature space is transformed into a discriminative space via the feature extraction part, P(c|x) cannot be modeled correctly if multimodality remains, which leads to a decrease in accuracy. Mixture models can address this issue. Mixture models are widely used as generative models, with the Gaussian mixture model (GMM) as a typical example. Mixture models are also effective as discriminative models; for example, discriminative GMMs have been applied successfully in various fields, e.g., speech recognition (Tüske et al. 2015; Wang 2007). However, the number of parameters increases as the number of mixture components increases, which may lead to over-fitting and increased memory usage; it is therefore useful to reduce the number of redundant parameters while maintaining multimodality. In this paper, we propose a discriminative model with two important properties: multimodality and sparsity. The proposed model is referred to as the sparse discriminative Gaussian mixture (SDGM). In the SDGM, a GMM-based discriminative model is formulated and trained via sparse Bayesian learning. This learning algorithm reduces memory usage without losing generalization capability by obtaining sparse weights while maintaining the multimodality of the mixture model. The technical highlight of this study is twofold: first, the SDGM finds the multimodal structure in the feature space; second, redundant Gaussian components are removed owing to sparse learning. Figure 1 shows a comparison of the decision boundaries with other discriminative models. The two-class data are from Ripley's synthetic data (Ripley 2006), where two Gaussian components are used to generate the data for each class. The FC layer with the softmax function, which is often used as the last layer of deep NNs, assumes a unimodal Gaussian for each class, resulting in an inappropriate decision boundary. Kernel Bayesian methods, such as the Gaussian process (GP) classifier (Wenzel et al. 2019) and the relevance vector machine (RVM) (Tipping 2001), estimate nonlinear decision boundaries using nonlinear kernels, but these methods cannot find multimodal structures. Although the discriminative GMM finds the multimodal structure, this model retains redundant Gaussian components.
The proposed SDGM, however, finds the multimodal structure of the data while removing redundant components, which leads to an accurate decision boundary. Furthermore, the SDGM can be embedded into NNs, such as convolutional NNs (CNNs), and trained in an end-to-end manner with the NN. The proposed SDGM can also be viewed as a mixture-based, nonlinear, and sparse expansion of logistic regression, and thus the SDGM can be used as the last layer of an NN for classification, replacing the fully connected (FC) layer with a softmax activation function. The contributions of this study are as follows:
• We propose a novel sparse classifier based on a discriminative GMM. The proposed SDGM has both multimodality and sparsity, thereby flexibly estimating the posterior distribution of classes while removing redundant parameters. Moreover, the SDGM automatically determines the number of components by removing redundant components during learning.
• From the perspective of Bayesian kernel methods, the SDGM can be considered an expansion of the GP and the RVM. The SDGM can estimate the posterior probabilities more flexibly than the GP and RVM owing to its multimodality. An experimental comparison using benchmark data demonstrated performance superior to existing Bayesian kernel methods.
• This study connects the fields of probabilistic models and NNs. From the equivalence of a discriminative model based on a Gaussian distribution to an FC layer, we demonstrate that the SDGM can be used as a module of a deep NN. We also demonstrate that the SDGM achieves performance superior to the FC layer with a softmax function via end-to-end learning with an NN on an image recognition task.
2 RELATED WORK AND POSITION OF THIS STUDY. The position of the proposed SDGM among related methods is summarized in Figure 2. Interestingly, by summarizing the relationships, we can confirm that the three separately developed fields of generative models, discriminative models, and kernel Bayesian methods are related to each other. Starting from the Gaussian distribution, all the models shown in Figure 2 are connected via four types of arrows. There is an undeveloped area in the upper right part, and the development of that area is the contribution of this study. A (unimodal) Gaussian distribution is used as the most naive generative model in machine learning and is the foundation of this relationship diagram. A GMM is the mixture expansion of the Gaussian distribution. Since the GMM can express (almost) arbitrary continuous distributions using multiple Gaussian components, it has been utilized for a long time. Since Gaussian fitting requires numerous parameters, sparsified versions of the Gaussian (Hsieh et al. 2011) and the GMM (Gaiffas & Michel 2014) have been proposed. Discriminative models and generative models are mutually related (Lasserre et al. 2006; Minka 2005). According to Lasserre et al. (2006), the only difference between these models is their statistical parameter constraints. Therefore, given a generative model, we can derive a corresponding discriminative model. For example, discriminative models corresponding to the Gaussian mixture model have been proposed (Axelrod et al. 2006; Bahl et al. 1996; Klautau et al. 2003; Tsai & Chang 2002; Tsuji et al. 1999; Tüske et al. 2015; Wang 2007).
They exhibit more flexible fitting capability for classification problems than the generative GMM because discriminative models have a lower statistical bias than generative models. Furthermore, as shown by Tüske et al. (2015); Variani et al. (2015), these models can be used as the last layer of an NN because they output the class posterior probability. From the perspective of kernel Bayesian methods, the GP classifier (Wenzel et al. 2019) and the mixture of GPs (MGP) (Luo & Sun 2017) are Bayesian kernelized versions of logistic regression and the discriminative GMM, respectively. The SDGM with kernelization is also regarded as a kernel Bayesian method because the posterior distribution of the weights is estimated during learning, instead of the weights being estimated directly as points as in the GP and MGP. The RVM (Tipping 2001) is the sparse version of the GP classifier and is the most important related study. The learning algorithm of the SDGM is based on that of the RVM; however, it is extended to the mixture model. With kernelization, the SDGM becomes a kernel Bayesian method and can be considered a mixture expansion of the RVM or a sparse expansion of the MGP. Therefore, its classification capability and sparsity are compared with those of kernel Bayesian methods in Section 4.1. Otherwise, the SDGM is considered one of the discriminative models and can be embedded in an NN. The comparison with other discriminative models is conducted in Section 4.2 via image classification in combination with a CNN. 3 SPARSE DISCRIMINATIVE GAUSSIAN MIXTURE (SDGM). The SDGM takes a continuous variable as its input and outputs the posterior probability of each class, acquiring a sparse structure by removing redundant components via sparse Bayesian learning. Figure 3 shows how the SDGM is trained by removing unnecessary components while maintaining discriminability. In this training, we set the initial number of components to three for each class. As the training progresses, one of the components for each class gradually becomes small and is removed. 3.1 NOTATION. Let x ∈ R^D be a continuous input variable and t_c (c ∈ {1, ..., C}, where C is the number of classes) be a discrete target variable that is coded in a one-of-C form, where t_c = 1 if x belongs to class c and t_c = 0 otherwise. Also, let z_{cm} be a discrete latent variable, with z_{cm} = 1 when x from class c belongs to the m-th component (m ∈ {1, ..., M_c}, where M_c is the number of components for class c) and z_{cm} = 0 otherwise. For simplicity, in this paper the probabilities for classes and components are written using only c and m; e.g., we use P(c, m | x) instead of P(t_c = 1, z_{cm} = 1 | x). 3.2 MODEL FORMULATION. The posterior probability of each class c given x is calculated as follows:

P(c \mid \mathbf{x}) = \sum_{m=1}^{M_c} P(c, m \mid \mathbf{x}), \quad P(c, m \mid \mathbf{x}) = \frac{\pi_{cm} \exp[\mathbf{w}_{cm}^{\mathrm{T}} \boldsymbol{\phi}]}{\sum_{c'=1}^{C} \sum_{m'=1}^{M_{c'}} \pi_{c'm'} \exp[\mathbf{w}_{c'm'}^{\mathrm{T}} \boldsymbol{\phi}]}, \qquad (1)

\boldsymbol{\phi} = [1, \mathbf{x}^{\mathrm{T}}, x_1^2, x_1 x_2, \ldots, x_1 x_D, x_2^2, x_2 x_3, \ldots, x_D^2]^{\mathrm{T}}, \qquad (2)

where π_{cm} is the mixture weight, which is equivalent to the prior of each component, P(c, m). It should be noted that we use w_{cm} ∈ R^H, which is the weight vector representing the m-th Gaussian component of class c. The dimension of w_{cm}, i.e., H, is the same as that of φ; namely, H = 1 + D(D + 3)/2. Derivation.
Utilizing a Gaussian distribution as the conditional distribution of x given c and m, P(x | c, m), the posterior probability of c given x, P(c | x), is calculated as follows:

P(c \mid \mathbf{x}) = \frac{\sum_{m=1}^{M_c} P(c, m) P(\mathbf{x} \mid c, m)}{\sum_{c=1}^{C} \sum_{m=1}^{M_c} P(c, m) P(\mathbf{x} \mid c, m)}, \qquad (3)

P(\mathbf{x} \mid c, m) = \frac{1}{(2\pi)^{\frac{D}{2}} |\boldsymbol{\Sigma}_{cm}|^{\frac{1}{2}}} \exp\left[-\frac{1}{2} (\mathbf{x} - \boldsymbol{\mu}_{cm})^{\mathrm{T}} \boldsymbol{\Sigma}_{cm}^{-1} (\mathbf{x} - \boldsymbol{\mu}_{cm})\right], \qquad (4)

where μ_{cm} ∈ R^D and Σ_{cm} ∈ R^{D×D} are the mean vector and the covariance matrix for component m in class c. Since the calculation inside the exponential function in (4) is a quadratic form, the conditional distribution can be transformed as follows:

P(\mathbf{x} \mid c, m) = \exp[\mathbf{w}_{cm}^{\mathrm{T}} \boldsymbol{\phi}], \qquad (5)

where

\mathbf{w}_{cm} = \Big[ -\frac{D}{2}\ln 2\pi - \frac{1}{2}\ln|\boldsymbol{\Sigma}_{cm}| - \frac{1}{2}\sum_{i=1}^{D}\sum_{j=1}^{D} s_{cmij}\mu_{cmi}\mu_{cmj}, \; \sum_{i=1}^{D} s_{cmi1}\mu_{cmi}, \; \ldots, \; \sum_{i=1}^{D} s_{cmiD}\mu_{cmi}, \; -\frac{1}{2}s_{cm11}, \; -s_{cm12}, \; \ldots, \; -s_{cm1D}, \; -\frac{1}{2}s_{cm22}, \; \ldots, \; -\frac{1}{2}s_{cmDD} \Big]^{\mathrm{T}}. \qquad (6)

Here, s_{cmij} is the (i, j)-th element of Σ_{cm}^{-1}.
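A small numerical sketch of Eqs. (1)-(2) follows. This is our code, not the authors'; it assumes the same number of components M for every class so that the logits fit in one array:

import numpy as np

def quadratic_features(x):
    # phi = [1, x, upper-triangular terms of x x^T] as in Eq. (2); H = 1 + D(D+3)/2.
    D = len(x)
    quad = [x[i] * x[j] for i in range(D) for j in range(i, D)]
    return np.concatenate(([1.0], x, quad))

def sdgm_posterior(x, W, pi):
    # W: (C, M, H) array of weight vectors w_cm; pi: (C, M) mixture weights pi_cm.
    phi = quadratic_features(x)
    logits = W @ phi                               # (C, M) values w_cm^T phi
    joint = pi * np.exp(logits - logits.max())     # proportional to P(c, m | x)
    joint /= joint.sum()                           # softmax over all (c, m) pairs
    return joint.sum(axis=1)                       # P(c | x): marginalize over m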
The paper proposes a sparse classifier based on a discriminative GMM. The model is trained via sparse Bayesian learning. The sparsity constraint removes redundant Gaussian components, which reduces the number of parameters and improves generalization. This framework can potentially be embedded into deep models and trained in an end-to-end fashion. The main motivation is that the proposed model (i.e., the SDGM) can handle multimodal data, while conventional softmax classifiers assume unimodality for each class. Experimental results show the superiority of the SDGM over existing softmax-based discriminative models.
SP:c9bda3b4e9859b304a8a3d1bc30ae0c8618a509d
Meta-Learning of Structured Task Distributions in Humans and Machines
1 INTRODUCTION. While machine learning has supported tremendous progress in artificial intelligence, a major weakness, especially in comparison to humans, has been its relative inability to learn structured representations, such as compositional grammar rules, causal graphs, and discrete symbolic objects (Lake et al., 2017). One way that humans acquire these structured forms of reasoning is via "learning-to-learn", in which we improve our learning strategies over time to give rise to better reasoning strategies (Thrun & Pratt, 1998; Griffiths et al., 2019; Botvinick et al., 2019). Inspired by this, researchers have renewed investigations into meta-learning. Under this approach, a model is trained on a family of learning tasks based on structured representations such that it achieves better performance across the task distribution. This approach has demonstrated the acquisition of sophisticated abilities, including model-based learning (Wang et al., 2016), causal reasoning (Dasgupta et al., 2019), compositional generalization (Lake, 2019), linguistic structure (McCoy et al., 2020), and theory of mind (Rabinowitz et al., 2018), all in relatively simple neural network models. The meta-learning approach, along with interaction with designed environments, has also been suggested as a general way to automatically generate artificial intelligence (Clune, 2019). These approaches have made great strides, and hold great promise, toward closing the gap between human and machine learning. However, in this paper we argue that significant challenges remain in how we evaluate whether structured forms of reasoning have indeed been acquired. There are often multiple strategies that can result in good meta-test performance, and there is no guarantee a priori that meta-learners will learn the strategies we intend when generating the training distribution. Previous work on meta-learning structured representations partially acknowledges this. In this paper, we highlight these challenges more generally. At the end of the day, meta-learning is simply another learning problem, and similar to any vanilla learning algorithm, meta-learners themselves have inductive biases (which we term meta-inductive bias). Note that meta-learning is a way to learn inductive biases for vanilla learning algorithms (Grant et al., 2018). Here, we consider the fact that meta-learners themselves have inductive biases that impact the kinds of strategies (and inductive biases) they prefer to learn. In this work, the kind of structure we study is that imposed by compositionality, where simple rules can be recursively combined to generate complexity (Fodor et al., 1988). Previous work demonstrates that some aspects of compositionality can be meta-learned (Lake, 2019). Here, we introduce a broader class of compositionally generated task environments using an explicit generative grammar, in an interactive reinforcement learning setting. A key contribution of our work is to also develop control task environments that are not generated using the same simple recursively applied rules, but that are comparable in statistical complexity. We provide a rigorous comparison between human and meta-learning agent behavior on tasks performed in distributions of environments of each type.
We show through three different analyses that human behavior is consistent with having learned the structure that results from our compositional rules in the structured environments. In contrast, despite training on distributions that contain this structure, standard meta-learning agents instead prefer (i.e., have a meta-inductive bias toward) more global statistical patterns that are a downstream consequence of these low-dimensional rules. Our results show that simply doing well at meta-test time on tasks from a distribution of structured environments does not necessarily indicate meta-learning of that structure. We therefore argue that architectural inductive biases still play a crucial role in the kinds of structure acquired by meta-learners, and that simply embedding the requisite structure in a training task distribution may not be adequate. 2 EMBEDDING STRUCTURE IN A TASK DISTRIBUTION. In this work, we define a broad family of task distributions in which tasks take place in environments generated from abstract compositional structures, by recursively composing those environments using simple, low-dimensional rules. Previous work on such datasets (Lake & Baroni, 2018; Johnson et al., 2017) focuses primarily on language. Here we instead directly consider the domain of structure learning. Structure learning is a fundamental aspect of human cognition and has been linked to how humans learn quickly in novel environments (Tenenbaum et al., 2011; Mark et al., 2020). It is required in a vast range of domains: from planning (understanding an interrelated sequence of steps in cooking) and category learning (the hierarchical organization of biological species) to social inference (understanding a chain of command at the workplace, or social cliques in a high school). A task distribution based on structure learning can therefore be embedded into several domains relevant to machine learning. Kemp & Tenenbaum (2008) provide a model for how people infer such structure. They present a probabilistic context-free graph grammar that produces a space of possible structures, over which humans do inference. A grammar consists of a start symbol S, terminal and non-terminal symbols Σ and V, as well as a set of production rules R. Different structural forms arise from recursively applying these production rules. This framework allows us to specify abstract structures (via the grammar) and to produce various instantiations of each abstract structure (via the noisy generation process), naturally producing different families of task environments, henceforth referred to as task distributions. We consider three structures: chains, trees, and loops. These exist in the real world across multiple domains. Chains describe objects on a one-dimensional spectrum, like people on the left-right political spectrum. Trees describe objects organized in hierarchies, like evolutionary trees. Loops describe cycles, like the four seasons. Here we embed these structures into a grid-based task. Exploration on a grid is an extensively studied problem in machine learning, particularly in reinforcement learning. Further, it is a task that humans can perform easily on online crowdsourcing platforms, but not trivially so. This allows us to directly compare human and machine performance on the same task. Fig. 1 displays the symbols of the grammar we use and the production rules that give rise to grids of different structural forms.
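For intuition, here is a minimal illustration of what "recursively applying a low-dimensional rule" can look like on a grid. This is our toy stand-in, not the paper's actual production rules (those are specified in their Fig. 1):

import numpy as np

def sample_chain(size=7, max_len=10, rng=None):
    # Grow a chain of red tiles (1s) by repeatedly applying one rule:
    # "extend the current tile to a free neighbouring tile".
    rng = rng or np.random.default_rng()
    board = np.zeros((size, size))
    r, c = rng.integers(size), rng.integers(size)
    board[r, c] = 1
    for _ in range(max_len - 1):
        moves = [(r + dr, c + dc) for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                 if 0 <= r + dr < size and 0 <= c + dc < size
                 and board[r + dr, c + dc] == 0]
        if not moves:
            break
        r, c = moves[rng.integers(len(moves))]
        board[r, c] = 1
    return board  # 1 = red tile, 0 = blue tile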
2.1 A TASK TO TEST STRUCTURE LEARNING. Here we describe the specific task built atop this embedding of structural forms. We use a tile-revealing task on the grid. Humans as well as agents are shown a 7 × 7 grid of tiles, which are initially white except for one red tile. The first red tile revealed at the beginning of the episode is the initial start tile of the grid's generative process (see Fig. 1). Clicking white tiles reveals them to be either red or blue. The episode finishes when the agent reveals all the red tiles. There is a reward for each red tile revealed and a penalty for every blue tile revealed. The goal therefore is to reveal all the red tiles while revealing as few blue tiles as possible. The particular configuration of the red tiles defines a single task. The distribution of tasks for meta-learning is defined by the grammar from which these structures are sampled. Here, we randomly sampled from a uniform mixture of chains, trees, and loops as defined in Fig. 1. 2.2 A STATISTICALLY EQUIVALENT NULL TASK DISTRIBUTION. Previous approaches to evaluating whether machine-learning systems can extract compositional structure (Lake & Baroni, 2018; Dasgupta et al., 2018) have relied on examining average performance on held-out examples from compositionally structured task distributions. However, we argue that this often confounds whether a system has truly internalized the underlying structure or whether it is relying on statistical patterns that come about as a consequence of the compositional rules. To directly examine whether structured reasoning is a factor in how humans and meta-learning agents perform this task, we need a control task distribution that is similar in statistical complexity but generated from those statistics rather than through direct use of the compositional grammar. To this end, we trained a fully connected neural network (3 layers, 49 units each) to learn the conditional distribution of each tile given all the other tiles on the compositional boards. Note that these conditional distributions contain all the relevant statistical information about the boards. We do this by training on an objective inspired by masked language models like BERT (Devlin et al., 2018). The network was given a compositional board with a random tile masked out and trained to reproduce the entire board, including the randomly masked tile. The loss was the binary cross entropy between the predicted and actual masked tiles. The network was trained on all possible compositional boards for 10^4 epochs and achieved a training accuracy of ∼99%. We then sampled boards from these conditionals with Gibbs sampling. We started with a grid in which each tile was randomly set to red or blue with probability 0.5. We then masked out a tile and ran the grid through the network to get the conditional probability of the tile being red given the other tiles, turning the tile red with that probability. We repeated this by masking each tile in the 7 × 7 grid (in a random order) to complete a single Gibbs sweep, and repeated this whole Gibbs sweep 20 times to generate a single sample. We refer to the distribution of boards generated this way as the null task distribution. Fig. 2 shows example compositional and null distribution grids. While the statistical structure looks similar, the non-compositional null boards shown could not have been generated by the grammar in Fig. 1.
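The Gibbs procedure just described is short enough to sketch directly (conditional_net below is a placeholder for the trained masked-tile network, which we do not reimplement here):

import numpy as np

def sample_null_board(conditional_net, size=7, sweeps=20, rng=None):
    # conditional_net(board, i, j) -> P(tile (i, j) is red | all other tiles).
    rng = rng or np.random.default_rng()
    board = (rng.random((size, size)) < 0.5).astype(float)  # random red/blue init
    for _ in range(sweeps):
        for idx in rng.permutation(size * size):            # one full Gibbs sweep
            i, j = divmod(int(idx), size)
            board[i, j] = float(rng.random() < conditional_net(board, i, j))
    return board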
The conditional distributions of the two task distributions are similar by design; we further quantify their statistical similarity using Ising statistics (Zhang, 2007). We compared the 0th-order, 1st-order, and 2nd-order effects, defined as follows. The 0th-order statistic is the number of red tiles minus the number of blue tiles. The 1st-order statistic counts the number of agreeing neighbours (vertically or horizontally adjacent) minus the disagreeing ones, where agreeing means being of the same color. The 2nd-order statistic is the number of triples (a tile, its neighbor, and its neighbor's neighbor) that agree, minus those that do not. Fig. 2b shows that the two distributions are not significantly different in terms of the Ising statistics measured (p > 0.05 for all three orders). The principal difference between these two task distributions is the way in which they were generated. The compositional task distribution was generated through the recursive application of simple, low-dimensional rules that generate a mixture of three discrete structures, whereas the null task distribution was generated through a more complex Gibbs sampling procedure that is not explicitly compositional and does not utilize explicit simple, low-dimensional rules. Although it is true that some boards within the null task distribution may be consistent with a simple compositional grammar, the distribution as a whole was not generated through a compositional grammar.
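These statistics are easy to compute; the sketch below is our reading of the definitions (in particular, we take a triple to "agree" when all three tiles share a colour, and count only straight horizontal/vertical triples):

import numpy as np

def ising_stats(board):
    # board: 2D array with +1 for red and -1 for blue.
    s0 = int(board.sum())                              # 0th order: #red - #blue
    h = board[:, :-1] == board[:, 1:]                  # horizontal neighbour pairs
    v = board[:-1, :] == board[1:, :]                  # vertical neighbour pairs
    s1 = int((2 * h - 1).sum() + (2 * v - 1).sum())    # agreeing minus disagreeing
    ht = (board[:, :-2] == board[:, 1:-1]) & (board[:, 1:-1] == board[:, 2:])
    vt = (board[:-2, :] == board[1:-1, :]) & (board[1:-1, :] == board[2:, :])
    s2 = int((2 * ht - 1).sum() + (2 * vt - 1).sum())  # triples: agree minus not
    return s0, s1, s2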
This work is an exploration of model behaviour on meta-learning tasks with compositional structure. The authors find that, unlike humans, machine-learning models do not readily pick up on the underlying compositional generative structure of a set of tasks, and hence cannot match human performance. Conversely, when the task is structured to leverage other statistical patterns, models do well.
SP:f0574c6588c9dc844b3e651e490092f058b7eb3c
Transformers with Competitive Ensembles of Independent Mechanisms
1 INTRODUCTION. A major theme throughout the history of deep learning has been the introduction of inductive biases in neural architectures, more recently with a focus on the ability to dynamically keep distinct types of information separated. While an MLP architecture has one large hidden representation at each layer, a convnet keeps different spatial positions' representations separated by default. This separation enables more appropriate reuse of parameters, improving generalization (e.g., compared with a fully connected MLP) by ensuring that the parts of the hidden representation capturing some aspects of the data can remain unchanged when other aspects change. Additionally, it is important to reuse parameters in all situations where they are relevant, and not use them in positions where they are irrelevant, and this is where attention mechanisms can be very useful. While dividing information between different positions (for example, time steps or spatial positions) is already very useful, it has been recognized since the earliest deep learning work on the notion of disentangling (Bengio, 2009; Glorot et al., 2011; Rifai et al., 2012; Mathieu et al., 2016; Achille & Soatto, 2018) that other features of the data could advantageously be kept well-separated, even over overlapping sets of positions. This has suggested the idea that a model can be decomposed into multiple components, often called modules, each operating on a different set of features. Modularity has been identified as an essential ingredient for generalization in machine learning (Ronco et al., 1997; Alet et al., 2018; Goyal et al., 2019). The motivating intuition is that if the relationship between the modules changes between training and evaluation, then a model which keeps these modules sufficiently separate, but can adapt how they are combined, could be more robust. It can even be robust to changes where the overall data distribution differs between training and evaluation. This has been studied in the causality literature through the notion of "Independent Mechanisms" (Peters et al., 2018; Parascandolo et al., 2018), or causal modules, which can be flexibly re-combined, re-used, and re-purposed. While the ideas of modularity and independent mechanisms are closely related, the latter places special focus on the notion that mechanisms should remain unchanged when unrelated aspects of the world change. In that sense it is a more specific idea that builds on the more general concept of modularity. While the study of independent mechanisms in the context of deep architectures is relatively recent (Goyal et al., 2019; Mittal et al., 2020), a few ideas are considered central. One is that mechanisms are separately parameterized (or dynamically parameterized, with the possibility of separation), which means that the function computed by a module remains the same even as other mechanisms are changed. Another central idea is specialization between mechanisms: mechanisms should seek to model only some parts of the world. One way to help accomplish this is by forcing the mechanisms to compete to explain different positions (in time or space), such that some mechanisms are not used by the model at positions where they are less relevant. In this work we explore how the idea of independent mechanisms can be beneficial in the Transformer architecture.
Transformers (Vaswani et al., 2017) are based on information sharing across positions controlled dynamically by a soft-attention mechanism (Bahdanau et al., 2014), while still using a fully-connected MLP to process the extracted feature vectors (concatenated over a set of attention heads) at each position. An important way in which this improves over convnets is that if the attention becomes sufficiently sparse, the model gains the ability to keep information well-separated between different positions. At the same time, at each position the Transformer stores a single monolithic hidden representation, over which it applies its entire set of parameters. For example, if we consider a generative model of images of animals in a field, then some of the parameters, like those describing how animals have symmetric eyes or a certain number of feet, are only relevant for the positions in the image where the animal is present. A normal Transformer, however, would apply the same parameters to the entire hidden representation at all spatial positions. Additionally, if sources of information need to be accessed over multiple positions, it has no way to keep that information well-separated between parts of the hidden representation, unless a large fraction of the parameters are set to exactly zero. In practice, models tend not to learn such highly sparse parameter matrices, as this is not necessary in order to fit the training set. Thus different underlying factors tend to be freely blended together rather than disentangled: we hypothesize and show empirically that this leads to deteriorated generalization when something about some of these factors changes. Our newly proposed technique, which we call Transformers with Competitive Independent Mechanisms (TIM), seeks to address this limitation of the Transformer by dividing the hidden representation and parameters into multiple distinct mechanisms. These mechanisms perform self-attention (over input elements) separately, and information is exchanged sparingly between the mechanisms using attention. Thus the model is naturally compelled to keep multiple information signals well-separated, even within a single position. Moreover, only the parameters corresponding to an activated mechanism are called upon, focusing on one aspect of the hidden representation. The process of selectively activating some mechanisms and not others relies on competition between mechanisms, just as in recurrent independent mechanisms (RIMs) (Goyal et al., 2019). We hypothesize and show empirically that this provides an inductive bias encouraging the mechanisms to be more independent and specialized, and more robust to changes that only affect other mechanisms.

2 TRANSFORMERS WITH COMPETITIVE INDEPENDENT MECHANISMS. 2.1 PRELIMINARIES. Multihead self-attention sub-layer: The attention mechanism can be formulated as querying a dictionary with key-value pairs (Bahdanau et al., 2014; Vaswani et al., 2017), e.g., $\mathrm{Attention}(Q, K, V) = \mathrm{softmax}(QK^T / \sqrt{d_{model}}) \cdot V$, where $d_{model}$ is the dimensionality of the hidden representations and Q (Query), K (Key), V (Value) are the hidden representations of the previous layer in the so-called self-attention sub-layers of the Transformer architecture.
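As a reference point, the scaled dot-product attention above can be written in a few lines. This is a generic sketch, not the paper's code; note that the paper's formula scales by √d_model, whereas per-head implementations often scale by √d_K instead.

```python
import torch
import torch.nn.functional as F

def attention(Q, K, V, d_model):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_model)) V.

    Q, K, V: (batch, seq_len, d_model) tensors, as in the formula above.
    """
    scores = Q @ K.transpose(-2, -1) / d_model ** 0.5
    return F.softmax(scores, dim=-1) @ V
```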
The multi-head variant of attention allows the model to jointly attend to information from different representation subspaces, and is defined as $\mathrm{Multihead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \cdots, \mathrm{head}_H) W^O$, with the heads defined as $\mathrm{head}_k = \mathrm{Attention}(QW^Q_k, KW^K_k, VW^V_k)$, where $W^Q_k \in \mathbb{R}^{d_{model} \times d_K}$, $W^K_k \in \mathbb{R}^{d_{model} \times d_K}$, $W^V_k \in \mathbb{R}^{d_{model} \times d_V}$, and $W^O \in \mathbb{R}^{H d_V \times d_{model}}$ are projection parameter matrices, H is the number of heads, and $d_K$ and $d_V$ are the dimensionalities of Key and Value.

Group Linear Layer: This layer takes multiple hidden representations and applies a separately parameterized linear transformation to each. The operation can be implemented efficiently using batched matrix multiplications. We set the number of groups $n_s$ and define a weight tensor $W \in \mathbb{R}^{n_s \times d_{in} \times d_{out}}$. If the input h is shaped as $h \in \mathbb{R}^{n_s \times d_{in}}$, then the layer is defined as $\mathrm{GroupLinear}(h, W, n_s) = [h_j W_j]_{j=1}^{n_s}$.

2.2 TIM ALGORITHM. We first lay out the parts of a TIM layer and then give more detailed steps in Algorithm 1. We then give a high-level description of how to turn a Transformer layer into a TIM layer in a typical implementation (Section 2.3). An illustration of how independent mechanisms differ from heads is given in Figure 1.

2.2.1 COMPETITION BETWEEN DIFFERENT MECHANISMS. Aside from having separate parameters and only exchanging information via inter-mechanism attention, we wanted to create a stronger inductive bias to encourage the mechanisms to specialize. To do this, we created a competition system in which each mechanism has a layer which outputs a single scalar value (as a function of the current layer's representation), and these scalars are passed through a softmax over the different mechanisms (this softmax is applied position-wise and separately for each layer). The value of this softmax is then used to weight how much each mechanism is allowed to update its representation after the self-attention. The competition score is computed as $c = \mathrm{softmax}(\mathrm{GroupLinear}(h, W^c, n_s))$, where each mechanism has its own parameters for this layer (hence the use of a group linear layer instead of a normal linear layer). Thus the $n_s$ mechanisms have a per-step weighting for how much they are able to read during the later self-attention stage, and if one mechanism wants to perform attention on a given position, it suppresses the other mechanisms on that position. We found that this often improved results and that these softmax scores are fairly interpretable as a measure of specialization. Exact equations for this step are given in Step 1 and used in Step 2 of Algorithm 1 in the appendix.

2.2.2 EACH MECHANISM SHARES INFORMATION ACROSS TIME AND PROCESSES INFORMATION. This step allows each mechanism to have its own independent dynamics, which are themselves similar to a normal Transformer layer. These independent dynamics allow each mechanism to read information from other time steps using attention and to process that information using FFN layers. We modify the self-attention sub-layer and feed-forward sub-layers (FFN) to be mechanism-wise as well as position-wise, with separate parameters for each mechanism. Additionally, layer normalization is performed separately for each mechanism. The projections and FFN sub-layers can be modified by replacing the linear layers with group linear layers; a sketch follows.
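A minimal sketch of the group linear layer, together with the competition scores of Section 2.2.1, might look as follows. The tensor layout, the initialization scale, and the helper names are our assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GroupLinear(nn.Module):
    """Separately parameterized linear map per mechanism, via a batched matmul."""
    def __init__(self, n_s, d_in, d_out):
        super().__init__()
        self.W = nn.Parameter(torch.randn(n_s, d_in, d_out) * d_in ** -0.5)

    def forward(self, h):
        # h: (..., n_s, d_in) -> (..., n_s, d_out); one weight matrix per group.
        return torch.einsum('...ji,jio->...jo', h, self.W)

def competition_scores(h, score_layer):
    """Softmax over mechanisms of per-mechanism scalar scores.

    h: (batch, seq, n_s, d_mech); score_layer: GroupLinear(n_s, d_mech, 1).
    Returns c of shape (batch, seq, n_s, 1), used to gate each mechanism's update.
    """
    return F.softmax(score_layer(h), dim=-2)
```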
When performing the self-attention itself, the mechanisms behave the same as heads, and thus we can use the same type of multi-head attention process, so long as the total number of heads is divisible by the number of mechanisms. One notable property is that if TIMs consisted only of this part of the model (the independent dynamics), then each TIM would be a completely independent Transformer model with its own forward pass and its own parameters. Steps 2 and 4 of Algorithm 1 in the appendix give more detail on this step.

2.2.3 ATTENTION IS USED TO COMMUNICATE INFORMATION BETWEEN DIFFERENT MECHANISMS. Although we allow each TIM to remain independent and process information independently, it is also important to allow the different mechanisms to share information with each other (in case the TIMs are not truly fully independent). To do this we use a standard multi-head attention sub-layer to share information between the mechanisms, applied in a position-wise fashion. We made this attention mechanism relatively small, with just 2 heads of 32 units each, because we want the different mechanisms to be as independent as possible and thus to share only small amounts of high-level information. This can be thought of as another attention layer in which we treat the different mechanisms as positions, performed in parallel over the different steps in the sequence. More details are given in Step 3 of Algorithm 1 in the appendix.
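The inter-mechanism attention step can be sketched by folding the batch and time axes together and attending over the mechanism axis only. This is an illustrative reading, not the authors' code; the embedding width passed to the attention module is an assumption (the paper specifies only 2 heads of 32 units each).

```python
import torch.nn as nn

class InterMechanismAttention(nn.Module):
    """Small multi-head attention across mechanisms at each position (step 3).

    Treats the n_s mechanisms at a single position as a length-n_s sequence,
    applied in parallel over batch and time.
    """
    def __init__(self, d_mech, n_heads=2):
        super().__init__()
        # d_mech must be divisible by n_heads.
        self.attn = nn.MultiheadAttention(d_mech, n_heads, batch_first=True)

    def forward(self, h):
        # h: (batch, seq, n_s, d_mech). Fold batch and seq together so that
        # attention runs over the mechanism axis only.
        b, t, n_s, d = h.shape
        x = h.reshape(b * t, n_s, d)
        out, _ = self.attn(x, x, x)
        return out.reshape(b, t, n_s, d)
```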
This paper proposes dividing a Transformer's hidden representations and parameters into multiple independent mechanisms. The authors claim that the mechanism benefits computation on sparse tensors and that it learns better inductive biases than a sizeable monolithic model. The idea is closely related to Recurrent Independent Mechanisms (RIM) [1], mentioned in the paper; the main contribution of this work is introducing competition between the independent mechanisms. The authors evaluate their models on the image transformer model, speech enhancement, and NLP tasks.
SP:21f106f8f8fa276557c2d46d25ab456370502f75
Environment Predictive Coding for Embodied Agents
1 INTRODUCTION. In visual navigation tasks, an intelligent embodied agent must move around a 3D environment using its stream of egocentric observations to sense objects and obstacles, typically without the benefit of a pre-computed map. Significant recent progress on this problem can be attributed to the availability of large-scale, visually rich 3D datasets (Chang et al., 2017; Xia et al., 2018; Straub et al., 2019), developments in high-quality 3D simulators (Anderson et al., 2018b; Kolve et al., 2017; Savva et al., 2019a; Xia et al., 2020), and research on deep memory-based architectures that combine geometry and semantics for learning representations of the 3D world (Gupta et al., 2017; Henriques & Vedaldi, 2018; Chen et al., 2019; Fang et al., 2019; Chaplot et al., 2020b;c). Deep reinforcement learning approaches to visual navigation often suffer from sample inefficiency, overfitting, and instability in training. Recent contributions work towards overcoming these limitations for various navigation and planning tasks. The key ingredients are learning good image-level representations (Das et al., 2018; Gordon et al., 2019; Lin et al., 2019; Sax et al., 2020) and using modular architectures that combine high-level reasoning, planning, and low-level navigation (Gupta et al., 2017; Chaplot et al., 2020b; Gan et al., 2019; Ramakrishnan et al., 2020a). Prior work uses supervised image annotations (Mirowski et al., 2016; Das et al., 2018; Sax et al., 2020) and self-supervision (Gordon et al., 2019; Lin et al., 2019) to learn good image representations that are transferable and improve sample efficiency for embodied tasks. While promising, such learned image representations only encode the scene in the nearby locality. However, embodied agents also need higher-level semantic and geometric representations of their history of observations, grounded in 3D space, in order to reason about the larger environment around them. Therefore, a key question remains: how should an agent moving through a visually rich 3D environment encode its series of egocentric observations? Prior navigation methods build environment-level representations of observation sequences via memory models such as recurrent neural networks (Wijmans et al., 2020), maps (Henriques & Vedaldi, 2018; Chen et al., 2019; Chaplot et al., 2020b), episodic memory (Fang et al., 2019), and topological graphs (Savinov et al., 2018; Chaplot et al., 2020c). However, these approaches typically use hand-coded representations such as occupancy maps (Chen et al., 2019; Chaplot et al., 2020b; Ramakrishnan et al., 2020a; Karkus et al., 2019; Gan et al., 2019) and semantic labels (Narasimhan et al., 2020; Chaplot et al., 2020a), or specialize them by learning end-to-end for a specific task (Wijmans et al., 2020; Henriques & Vedaldi, 2018; Parisotto & Salakhutdinov, 2018; Cheng et al., 2018; Fang et al., 2019). In this work, we introduce environment predictive coding (EPC), a self-supervised approach to learn flexible representations of 3D environments that are transferable to a variety of navigation-oriented tasks. The key idea is to learn to encode a series of egocentric observations in a 3D environment so as to be predictive of visual content that the agent has not yet observed.
For example, consider an agent that has just entered the living room of an unfamiliar house and is searching for a refrigerator. It must be able to predict where the kitchen is and reason that it is likely to contain a refrigerator. The proposed EPC model aims to learn representations that capture these natural statistics of real-world environments in a self-supervised fashion, by watching videos recorded by other agents. See Fig. 1. To this end, we devise a self-supervised zone prediction task in which the model learns environment embeddings by watching egocentric view sequences from other agents navigating 3D environments in pre-collected videos. Specifically, we segment each video into zones of visually and geometrically connected views, while ensuring limited overlap across zones in the same video. Then, we randomly mask out zones and predict the masked views conditioned on both the unmasked zones' views and the masked zones' camera poses. Intuitively, to perform this task successfully, the model needs to reason about the geometry and semantics of the environment to figure out what is missing. We devise a transformer-based model to infer the masked visual features. Our general strategy can be viewed as a context prediction task in sequential data (Devlin et al., 2018; Sun et al., 2019b; Han et al., 2019), but, very differently, it is aimed at representing high-level semantic and geometric priors in 3D environments to aid the embodied agents who act in them. Through extensive experiments on Gibson and Matterport3D, we show that our method achieves good improvements on multiple navigation-oriented tasks compared to state-of-the-art models and baselines that learn image-level embeddings.

2 RELATED WORK. Self-supervised visual representation learning: Prior work leverages self-supervision to learn image and video representations from large collections of unlabelled data. Image representation methods attempt proxy tasks such as inpainting (Pathak et al., 2016) and instance discrimination (Oord et al., 2018; Chen et al., 2020; He et al., 2020), while video representation learning leverages signals such as temporal consistency (Wei et al., 2018; Fernando et al., 2017; Kim et al., 2019) and contrastive predictions (Han et al., 2019; Sun et al., 2019a). The VideoBERT project (Sun et al., 2019a;b) jointly learns video and text representations from unannotated videos by filling in masked-out information. Dense Predictive Coding (Han et al., 2019; 2020) learns video representations that capture the slow-moving semantics in videos. Whereas these methods focus on capturing human activity for video recognition, we aim to learn geometric and semantic cues in 3D spaces for embodied agents. Accordingly, unlike the existing video models (Sun et al., 2019a;b; Han et al., 2019), our approach is grounded in the 3D relationships between views. Representation learning via auxiliary tasks for RL: Reinforcement learning approaches often suffer from high sample complexity, sparse rewards, and unstable training. Prior work tackles these challenges by using auxiliary tasks for learning image representations (Mirowski et al., 2016; Gordon et al., 2019; Lin et al., 2019; Shen et al., 2019; Ye et al., 2020). In contrast, we encode image sequences from embodied agents to obtain environment-level representations.
Recent work also learns state representations via future prediction and implicit models (Ha & Schmidhuber, 2018; Eslami et al., 2018; Gregor et al., 2019; Hafner et al., 2019; Guo et al., 2020). In particular, neural rendering approaches achieve impressive reconstructions for arbitrary viewpoints (Eslami et al., 2018; Kumar et al., 2018). However, unlike our idea, they focus on pixelwise reconstruction, and their success has been limited to synthetically generated environments like DeepMind Lab (Beattie et al., 2016). In contrast to any of the above, we use egocentric videos to learn predictive feature encodings of photorealistic 3D environments that capture their naturally occurring regularities. Scene completion: Past work in scene completion performs pixelwise reconstruction of 360° panoramas (Jayaraman & Grauman, 2018; Ramakrishnan et al., 2019), image inpainting (Pathak et al., 2016), voxelwise reconstruction of 3D structures and semantics (Song et al., 2017), and image-level extrapolation of depth and semantics (Song et al., 2018; Yang et al., 2019b). Recent work on visual navigation extrapolates maps of room types (Wu et al., 2019; Narasimhan et al., 2020) and occupancy (Ramakrishnan et al., 2020a). While our approach is also motivated by anticipating unseen elements, we learn to extrapolate in a high-dimensional feature space (rather than pixels, voxels, or semantic categories) and in a self-supervised manner, without relying on human annotations. Further, the proposed model learns from egocentric video sequences captured by other agents, without assuming access to detailed scans of the full 3D environment as in past work. Learning image representations for navigation: Prior work exploits ImageNet pretraining (Gupta et al., 2017; Anderson et al., 2018a; Chen et al., 2019), mined object relations (Yang et al., 2019a), video (Chang et al., 2020), and annotated datasets from various image tasks (Sax et al., 2020; Chaplot et al., 2020c) to aid navigation. While these methods also consider representation learning in the context of navigation tasks, they are limited to learning image-level functions for classification and proximity prediction. In contrast, we learn predictive representations for sequences of observations conditioned on the camera poses.

3 APPROACH. We propose environment predictive coding (EPC) to learn self-supervised environment-level representations (Sec. 3.1). To demonstrate the utility of these representations, we integrate them into a transformer-based navigation architecture and refine them for individual tasks (Sec. 3.2). As we will show in Sec. 4, our approach leads to both better performance and better sample efficiency compared to existing approaches.

3.1 ENVIRONMENT PREDICTIVE CODING. Our hypothesis is that it is valuable for an embodied agent to learn a predictive coding of the environment. The agent must not just encode the individual views it observes, but also learn to leverage the encoded information to anticipate the unseen parts of the environment. Our key idea is that the environment embedding must be predictive of unobserved content, conditioned on the agent's camera pose. This equips an agent with the natural priors of 3D environments to quickly perform new tasks, like finding the refrigerator or covering more area. We propose the proxy task of zone prediction to achieve this goal (see Fig. 2).
For this task, we use a dataset of egocentric video walkthroughs collected in parallel from other agents deployed in various unseen environments (Fig. 2, top). For each video, we assume access to RGB-D frames, egomotion data, and camera intrinsics. Specifically, our current implementation uses egocentric camera trajectories from photorealistic scanned indoor environments (Gibson (Xia et al., 2018)) to sample the training videos; we leave leveraging in-the-wild consumer video as a challenge for future work. We do not assume that the agents who generated those training videos were acting to address a particular navigation task. In particular, their behavior need not be tied to the downstream navigation-oriented tasks for which we test our learned representation. For example, a training video may show agents moving about to maximize their area coverage, whereas the encoder we learn is applicable to an array of navigation tasks (as we will demonstrate in Sec. 4). Furthermore, we assume that the environments seen in the videos are not accessible for interactive training. In practice, this means that we can collect data in parallel from different robots deployed in a large number of environments, without having to actually train our navigation policy in those environments. These assumptions are much weaker than those made by prior work on imitation learning and behavioral cloning, which rely on task-specific data generated by experts (Bojarski et al., 2016; Giusti et al., 2016). Our method works as follows. First, we automatically segment videos into "zones" which contain frames with significant view overlaps. We then perform the self-supervised zone prediction task on the segmented videos. Finally, we incorporate the learned environment encoder into an array of downstream navigation-oriented tasks. We explain each step in detail next.

Zone generation: At a glance, one might first consider masking arbitrary individual frames in the training videos. However, doing so is inadequate for representation learning, since unmasked frames with high viewpoint overlap with the masked frame can make its prediction trivial. Instead, our approach masks zones of frames at once. We define a zone to be a set of frames in the video which share a significant overlap in their viewpoints. We also require that frames across different zones share little to no overlap. To generate these zones, we first cluster frames in the videos based on the amount of pairwise geometric overlap between views. We estimate the viewpoint overlap $\psi(o_i, o_j)$ between two frames $o_i, o_j$ by measuring the intersection of the 3D point clouds obtained by backprojecting their depth inputs into 3D space (see Appendix for details). For a video of length L, we generate a distance matrix $D \in \mathbb{R}^{L \times L}$ where $D_{i,j} = 1 - \psi(o_i, o_j)$. We then perform hierarchical agglomerative clustering (Lukasová, 1979) to cluster the video frames into zones based on D (see Fig. 2, bottom left); a sketch is given below. While these zones naturally tend to overlap near their edges, they typically capture disjoint sets of content in the video. Note that the zones segment video trajectories, not floorplan maps, since we do not assume access to the full 3D environment.
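The clustering step might be sketched as follows with standard SciPy routines. The linkage method and the distance threshold at which the dendrogram is cut are illustrative assumptions; the paper specifies only hierarchical agglomerative clustering on D.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def segment_zones(overlaps, dist_threshold=0.9):
    """Cluster video frames into zones from pairwise viewpoint overlaps.

    overlaps: (L, L) symmetric array with overlaps psi(o_i, o_j) in [0, 1].
    Returns a dict mapping zone id -> array of frame indices.
    """
    D = 1.0 - np.asarray(overlaps, dtype=float)
    np.fill_diagonal(D, 0.0)                 # a frame fully overlaps itself
    condensed = squareform(D, checks=False)  # condensed distance vector
    Z = linkage(condensed, method='average') # assumed linkage criterion
    labels = fcluster(Z, t=dist_threshold, criterion='distance')
    return {z: np.where(labels == z)[0] for z in np.unique(labels)}
```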
Zone prediction task: Having segmented the video into zones, we next present our EPC zone prediction task for learning environment embeddings (see Fig. 2). We randomly divide the video v into seen zones $\{Z^v_{s,i}\}_{i=1}^{n}$ (cyan) and unseen zones $\{Z^v_{u,i}\}_{i=1}^{m}$ (yellow), where a zone $Z_i$ is a tuple of images and the corresponding camera poses, $Z_i = \{(o_j, p_j)\}_{j=1}^{|Z_i|}$. Given the seen zones and the camera pose $p^v_{u,i}$ from an unseen zone, we need to infer a feature encoding of the unseen zone $Z^v_{u,i}$. To perform this task, we first extract visual features x from each RGB-D frame o in the video using pretrained CNNs (see Sec. 3.2). These features are concatenated with the corresponding pose p and projected using an MLP M to obtain the image-level embedding. The target features for the unseen zone $Z^v_{u,i}$ are obtained as follows:

$$f^v_{u,i} = \frac{1}{|Z^v_{u,i}|} \sum_{[x,p] \in Z^v_{u,i}} M([x,p]). \quad (1)$$

The rationale behind the feature averaging is that we want to predict the high-level visual content of the zone while ignoring viewpoint-specific variations within it. We use a transformer-based encoder-decoder model to perform this task (Vaswani et al., 2017). Our model consists of an environment encoder and a zone decoder which infers the zone features (see Fig. 2, bottom). The environment encoder takes in the image-level embeddings M([x,p]) from the input zones and performs multi-headed self-attention to generate the environment embeddings E. The zone decoder attends to E using the average camera pose $p^v_{u,i}$ from the unseen zone and predicts the zone features as follows:

$$\hat{f}_{u,i} = \mathrm{ZoneDecoder}(E, p^v_{u,i}). \quad (2)$$

We transform all poses in the input zones relative to $p^v_{u,i}$ before encoding, which provides the model an egocentric view of the world. The environment encoder, zone decoder, and the projection function M are jointly trained using noise-contrastive estimation (Gutmann & Hyvärinen, 2010). We use $\hat{f}_{u,i}$ as the anchor and $f^v_{u,i}$ from Eqn. 1 as the positive. We sample negatives from other unseen zones in the same video and from all zones in other videos. The loss for the i-th unseen zone in video v is:

$$\mathcal{L}^v_i = -\log \frac{\exp(\mathrm{sim}(\hat{f}_{u,i}, f^v_{u,i}))}{\sum_{j=1}^{m} \exp(\mathrm{sim}(\hat{f}_{u,i}, f^v_{u,j})) + \sum_{w \neq v,\, k} \exp(\mathrm{sim}(\hat{f}_{u,i}, f^w_k))}, \quad (3)$$

where $\mathrm{sim}(q, k) = \frac{q \cdot k}{|q||k|} \frac{1}{\tau}$ and $\tau$ is a temperature hyperparameter. The idea is to predict zone representations that are close to the ground truth while being sufficiently different from the negative zones. Since the unseen zones have only limited overlap with the seen zones, the model needs to effectively reason about the geometric and semantic context in the seen zones to differentiate the positive from the negatives. We discourage the model from simply capturing video-specific textures and patterns by sampling negatives from within the same video.
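A compact sketch of the loss in Eqn. 3 for a single unseen zone is given below. It assumes precomputed feature vectors; the temperature value is a placeholder, and `zone_nce_loss` is our name, not the paper's.

```python
import torch
import torch.nn.functional as F

def zone_nce_loss(pred, pos, negs, tau=0.1):
    """Noise-contrastive loss of Eqn. 3 for one unseen zone.

    pred: (d,) predicted feature f_hat; pos: (d,) ground-truth zone feature;
    negs: (n_neg, d) negatives (other unseen zones of the same video plus
    zones from other videos).
    """
    pred = F.normalize(pred, dim=-1)   # cosine similarity = dot of unit vectors
    pos = F.normalize(pos, dim=-1)
    negs = F.normalize(negs, dim=-1)
    # Similarities scaled by 1/tau; the positive sits at index 0.
    sims = torch.cat([(pred * pos).sum(dim=-1, keepdim=True), negs @ pred]) / tau
    # Cross-entropy against index 0 equals -log(exp(pos) / sum of all terms).
    return F.cross_entropy(sims.unsqueeze(0), torch.zeros(1, dtype=torch.long))
```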
The paper proposes a self-supervised approach for learning environment-level representations for embodied agents. The idea is that agents collect images and their corresponding poses during a walk-through phase. The images are clustered into multiple "zones". The zones are divided into seen and unseen zones. Using contrastive learning, the model is trained to distinguish the features of an unseen zone from the rest of the zones. The paper shows this approach improves performance over a number of baselines for Area Coverage, Flee, and Object Coverage tasks.
SP:eeab784f22aaf84838d021cc4c93a8707389d002
KETG: A Knowledge Enhanced Text Generation Framework
1 INTRODUCTION. Recent pre-trained language models such as GPT-2 can capture clear semantic and syntactic features (Radford, 2018), performing well in machine translation and abstract generation tasks (Li et al., 2016; Wang et al., 2016). However, the application of language models to text generation still needs to be explored. The logic in text generation, especially literary creation, is often obscure: logical patterns are usually low-frequency, which makes them difficult for current language models to capture. On the other hand, imposing too many constraints from prior information leads to homogenization of the generated texts. To address these issues, Guan et al. (2020) propose a knowledge-enhanced pretraining model for commonsense story generation, transforming commonsense triples into sentences using a template-based method. However, the template-based sentences transformed from commonsense triples for post-training are rather homogeneous. In this paper, we introduce a knowledge enhanced text generation (KETG) framework, which incorporates knowledge tuples and their associated sentences in training, such that the logical relations within the knowledge tuples can be effectively learned. Regarding the sentences associated with the knowledge tuples, we could generate them from the tuples by a template-based method as in Guan et al. (2020). However, incorporating real corpus sentences is more beneficial when they are available, as they generally exhibit more diversity than sentences generated from templates. In this way, the generation model can learn both the logicality and the diversity in the knowledge tuples and sentences. We validate our KETG framework on rhetorical text generation, rhetoric being an important and essential part of modern literature (Tu et al., 2013). Rhetoric is quite obscure, requiring strong logical correlation, and a rhetoric knowledge graph with explicit logical information (rather than a commonsense knowledge graph) would be helpful for rhetorical text generation. Unfortunately, to the best of our knowledge, no rhetoric knowledge graph exists. Hence, using relation extraction methods, we build a rhetoric (specifically, metaphor and personification) knowledge graph from a collection of Chinese poems and compositions. With the newly built rhetoric knowledge graph and the corpus from which it is extracted, we train a rhetorical text generation model. Both automatic and manual evaluations show that our KETG model outperforms baseline models on rhetorical type control, semantic comprehensibility, and diversity. Experiments also illustrate that incorporating template-generated sentences in training yields generated text rather similar to the template, while incorporating real corpus sentences brings more diversity to the generation. To sum up, the main contributions of this paper are:

1. We propose a KETG framework which includes both knowledge information and associated sentences in training, to address logicality and diversity.
2. We validate our KETG framework on rhetorical (metaphor and personification) text generation. Results show that our KETG framework can generate more reasonable and diverse rhetorical texts, and that the rhetoric types can be controlled implicitly.
3. To the best of our knowledge, we build the first Chinese rhetoric (metaphor and personification) knowledge graph, with 35228 tuples.
2 RELATED WORK. Language Models (LM): Several lines of research have sought to exploit as much semantic information as possible. Early work focused on feature-based methods to express syntactic and semantic information in texts, but such methods cannot handle polysemy. ELMo (Peters et al., 2018) was proposed to capture complex word characteristics in texts. Meanwhile, since massive text collections in NLP tasks are often unlabeled, fine-tuning models were introduced that can learn "common sense" from unlabeled texts; BERT and GPT-2 are representative examples (Wang et al., 2019; Ferreira et al., 2019). They have achieved good evaluation results in multiple NLP tasks, such as named entity recognition, Q&A, text classification, and text generation.

Knowledge Enhanced LM: To mimic a human's writing manner, the most basic requirement is that the generated text be fluent and semantically understandable. Secondly, the common sense of humankind is also indispensable. Furthermore, aesthetics and logicality make language expression more vivid, novel, and apt. However, it is hard to meet these requirements with language models alone. Bowman et al. (2015) used a common-sense knowledge base in natural language inference (NLI) and NLG. As mentioned in Zhou et al. (2018), common-sense knowledge can improve performance in dialogue generation. Mihaylov & Frank (2018) introduced a neural reading comprehension model that encodes external common-sense knowledge as key-value memory. Zhang et al. (2019) introduced a knowledge-enhanced pre-trained language framework, ERNIE, which enriches knowledge representation by masking semantic units such as words and entities. Guan et al. (2020) propose a knowledge-enhanced pretraining model for commonsense story generation, post-training the model on knowledge-augmented data by transforming commonsense triples into sentences.

Rhetorical Text Generation: Rhetoric is an important and essential part of modern literature (Tu et al., 2013). It expresses an author's passion and grace, improving the aesthetic merit of their creations. Liu et al. (2019) proposed a rhetorically controlled generation model for Chinese poetry to govern the rhetorical modes: through a classifier inserted in the encoder, they can control the rhetorical modes of generated poems. However, their model does not include a knowledge graph and hence may generate illogical sentences, like "Flakes of snow are flying like snow", which appears to be a metaphor but includes the illogical "snow like snow".

3 OUR KETG FRAMEWORK. We propose a KETG framework that combines knowledge information with text generation models, much like an external device attached to a computer. The architecture can be used to combine different types of knowledge graphs with text generation models. As depicted in Figure 1, we first query the keyword in the knowledge graph, obtaining a context vector containing knowledge information. Then we concatenate the context knowledge vector and the keyword vector, and input them together with the associated sentence to the language model. In this way, we can highlight the topic of the sentence and the potential logical relationships between the entities, forcing the model to pay more attention to them.
When generating text, given a topic word, we obtain the context knowledge vector in the same way, which then serves as input to the trained model to generate the whole sentence in an auto-regressive manner. Compared with a single topic word, the expanded context knowledge vector also exploits the diversity of the knowledge graph, ensuring the generated sentences are full of variety. It is worth mentioning that real corpus sentences are retained in our framework, rather than sentences generated from templates, which means the generation model can learn the diversity of sentence structure. In detail, we add [cls] at the beginning of the keyword vector and put [mask] between it and the original sentence as a separator. After that, we concatenate them together as the input to the text generation model. Using the above approach, we integrate knowledge information into the text generation model naturally. With external knowledge, the generation model can generate more reasonable text while capturing significant semantic and syntactic features.

4 RHETORICAL TEXT GENERATION. Rhetoric is an essential element of literature. Among 8744 Chinese poems (Liu et al., 2019), 31.4% are metaphor and 18.5% are personification. We also collected 54949 excellent sentences from well-known compositions; among them, 11989 are metaphor and 28718 are personification. It is clear that metaphor and personification are the main forms of rhetoric, so we build our rhetorical knowledge graph on metaphor and personification.

4.1 RHETORICAL RELATION EXTRACTION. We use a relation extraction algorithm (Rai et al., 2016; Alt et al., 2019) to build our rhetorical graph. Based on a BERT+CRF layer (Huang et al., 2015; Lample et al., 2016; Pramanick et al., 2018), the model handles NER (named entity recognition) and relation classification jointly. In addition, we introduce a prior relation graph to filter NER results, which effectively improves the accuracy of the extraction results. Besides, in order to handle multiple entities in a sentence, a mechanism called "semi-pointer semi-label" (Su, 2019) is adopted in our model.

4.2 CONSTRUCTING THE RHETORIC GRAPH. We build our rhetorical knowledge graph in three steps. First, we collect sentences of metaphor and personification from well-known compositions. Based on coreference resolution rules, we use the Stanford CoreNLP tools (Manning et al., 2014) to extract each metaphor into a tuple of (noumenon, metaphor object, metaphor base), and each personification into a pair of (unhuman-subject, human-action/human-emotion). Using this method, we build a seed rhetorical knowledge graph from 8035 rhetorical sentences, which were then manually checked to ensure accuracy. Second, we trained a rhetorical classifier using this seed graph, adding 3432 negative examples to prevent over-fitting. The accuracy of the classifier is 0.97 for metaphor and 0.75 for personification. Finally, based on rules and the above classifier, we iteratively expanded the data set and retrained the classifier to build a large rhetorical knowledge graph comprising 35228 tuples and 30970 nodes. During construction, we found that rhetorical relationships have strong logicality, especially metaphor, and that a naive storage mechanism leads to serious logical errors at query time.
For example, metaphorical relationships like (snowflake, float, catkin) and (leaf, float, snowflake) would be stored as [snowflake] −feature− [float] −like− [catkin] and [leaf] −feature− [float] −like− [snowflake]. When searching for the noumenon "snowflake" in the graph, the result could then be [snowflake] −feature− [float] −like− [snowflake]. We design a graph storage mechanism to avoid such illogical results; the structure is shown in Figure 2. We use a triangle structure to store noumenon, metaphor object, and metaphor base. It is worth mentioning that we save the metaphor base as a node instead of an edge, because the types of metaphor base are complicated and varied, and saving it as an edge would lower search efficiency enormously. Personification is similar to metaphor but contains only two entities: [unhuman-subject] −personification− [human-action/human-emotion].

4.3 GENERATING WITH THE RHETORIC GRAPH. First, we query the keyword in the rhetorical graph to get a context vector containing the corresponding rhetorical information; for metaphor, for example, the vector contains information about the metaphor object and metaphor base. We then concatenate the context vector and the keyword vector, feed them together with the associated original sentence to the text generation model, and train the model using the method in Figure 1. During generation, given a topic word and a rhetoric type, we obtain the context knowledge vector in the same way and generate the corresponding rhetorical sentence with the trained model. In particular, we use top-k decoding: when predicting the next word, we randomly select one of the 5 most probable candidates. This effectively mitigates word repetition in generation.
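The top-k decoding step can be sketched as follows. The paper says one of the top-5 words is selected at random; here we sample in proportion to their renormalized probabilities, which is the standard top-k scheme and our assumption about the intended behaviour.

```python
import torch
import torch.nn.functional as F

def top_k_sample(logits, k=5):
    """Sample the next token from the k most probable candidates.

    logits: (vocab_size,) unnormalized scores from the language model.
    Returns the sampled token id.
    """
    top_vals, top_idx = torch.topk(logits, k)
    probs = F.softmax(top_vals, dim=-1)          # renormalize over the top k
    choice = torch.multinomial(probs, num_samples=1)
    return top_idx[choice].item()
```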
This paper proposes to use a rhetoric knowledge graph for rhetorical text generation. One of its key contributions is to construct a rhetoric knowledge graph by leveraging SOTA NER and relation classification models. To generate a rhetorical text, the new method starts with sending a keyword to the knowledge graph to retrieve the neighborhood of the keywords as its context words. Both the context words and the original query word are fed into a language model to generate the final word sequence.
SP:a0417f78d102a7c5ae83d98abe990dc03e3405ec
Learning to Make Decisions via Submodular Regularization
1 INTRODUCTION. In real-world automated decision making tasks, we seek the optimal set of actions that jointly achieve maximal utility. Many such tasks, whether deterministic/non-adaptive or stochastic/adaptive, can be viewed as combinatorial optimization problems over a large number of actions. As an example, consider the active learning problem, where a learner seeks the maximally informative set of training examples for learning a classifier. The utility of a training set could be measured by the mutual information (Lindley, 1956) between the training set and the remaining (unlabeled) data points, or by the expected reduction in generalization error if the model is trained on the candidate training set. Similar problems arise in a number of other domains, such as experimental design (Chaloner and Verdinelli, 1995), document summarization (Lin and Bilmes, 2012), recommender systems (Javdani et al., 2014), and policy making (Runge et al., 2011). Identifying the optimal set of actions (e.g., optimal training sets, most informative experiments) amounts to evaluating the expected utility over a combinatorial number of candidate sets. When the underlying model class is complex and evaluation of the utility function is expensive, these tasks are notoriously difficult to optimize (Krause and Guestrin, 2009). For a broad class of decision making problems whose optimization criterion is to maximize the decision-theoretic value of information (e.g., active learning and experimental design), it has been shown that it is possible to design surrogate objective functions that are (approximately) submodular while being aligned with the original objective at the optimal solutions (Javdani et al., 2014; Chen et al., 2015b; Choudhury et al., 2017). Here, the information gathering policies no longer aim to directly optimize the target objective value, but rather follow a greedy trajectory governed by a surrogate function that is much cheaper to evaluate. These insights have led to principled algorithms that enable significant gains in the efficiency of the decision making process, while enjoying strong performance guarantees that are competitive with the optimal policy. Despite the promising performance, a caveat of these "submodular surrogate"-based approaches is that it is often challenging to engineer such a surrogate objective without ad-hoc design and trial-and-error analysis (Chen et al., 2015b; Satsangi et al., 2018). Furthermore, for certain classes of surrogate functions, it is NP-hard to compute/evaluate the function value (Javdani et al., 2014). In such cases, even a greedy policy, which iteratively picks the best action given the (observed) history, can be prohibitively costly to design or run. Addressing this limitation requires more automated and systematic ways of designing (efficient) surrogate objective functions for decision making.

Overview of main results: Inspired by contemporary work in data-driven decision making, we aim to learn a greedy heuristic for sequentially selecting actions. This heuristic acts as a surrogate for invoking the expensive oracle when evaluating an action. Our key insight is that many practical algorithms can be interpreted as greedy approaches that follow an (approximate) submodular surrogate objective.
In particular, we focus on the class of combinatorial problems that can be solved via submodular maximization (either directly on the objective function or via a submodular surrogate). We highlight some of the key results below:
• Focusing on utility-based greedy policies, we introduce a data-driven optimization framework based on the "submodular-norm" loss, a novel loss function that encourages learning functions that exhibit "diminishing returns". Our framework, called LEASURE (Learning with Submodular Regularization), outputs a surrogate objective that is efficient to train, approximately submodular, and can be made permutation-invariant. The latter two properties allow us to prove approximation guarantees for the resulting greedy heuristic.
• We show that our approach can be easily integrated with modern imitation learning pipelines for sequential prediction tasks. We provide a rigorous analysis of the proposed algorithm and prove strong performance guarantees for the learned objective.
• We demonstrate the performance of our approach on a variety of decision making tasks, including set cover, active learning for classification, and data-driven protein design. Our results suggest that, compared to standard learning-based baselines: (a) at training time, LEASURE requires significantly fewer oracle calls to learn the target objective (i.e., to minimize the approximation error against the oracle objective); and (b) at test time, LEASURE achieves superior performance on the corresponding optimization task (i.e., to minimize the regret for the original combinatorial optimization task). In particular, LEASURE has shown promising performance on the protein design task and will be incorporated into a real-world protein design workflow.

2 RELATED WORK. Near-optimal decision making via submodular optimization: Submodularity is a property of set functions with a strong relationship to diminishing returns, and its use has wide applications from information gathering to document summarization (Leskovec et al., 2007; Krause et al., 2008; Lin and Bilmes, 2011; Krause and Golovin, 2014). The maximization of a submodular function has been an active area of study in various settings, such as centralized (Nemhauser et al., 1978; Buchbinder et al., 2014; Mitrovic et al., 2017), streaming (Badanidiyuru et al., 2014; Kazemi et al., 2019; Feldman et al., 2020), continuous (Bian et al., 2017b; Bach, 2019), and approximate (Horel and Singer, 2016; Bian et al., 2017a). Variants of the greedy algorithm, which iteratively selects an element that maximizes the marginal gain, feature prominently in the algorithm design process. For example, for maximizing a monotone submodular function subject to a cardinality constraint, the greedy algorithm achieves an approximation ratio of (1 − 1/e) of the optimal solution (Nemhauser et al., 1978). In applications where we need to make a sequence of decisions, such as information gathering, we usually need to adapt our future decisions based on past outcomes. Adaptive submodularity is the corresponding property under which an adaptive greedy algorithm enjoys a similar guarantee for maximizing an adaptive submodular function (Golovin and Krause, 2011). Recent works have explored optimizing the value of information (Chen et al., 2015b) and Bayesian active learning (Javdani et al., 2014; Chen et al., 2017a) with this property.
Another line of related work is the online setting (typically bandits), which is grounded in minimizing cumulative regret (Radlinski et al., 2008; Streeter et al., 2009; Yue and Guestrin, 2011; Ross et al., 2013; Yu et al., 2016; Hiranandani et al., 2020).

Learning submodular functions: Early work focused on learning non-negative linear combinations of submodular basis functions (Yue and Joachims, 2008; El-Arini et al., 2009; Yue and Guestrin, 2011; Sipos et al., 2012), which was later generalized to mixtures of "submodular shells" (Lin and Bilmes, 2012). Deep submodular functions (Dolhansky and Bilmes, 2016) extend these ideas to more expressive compositional function classes by using sums of concave functions composed with modular functions. The theoretical question of the learnability of general submodular functions is analyzed in Balcan and Harvey (2018). Our goal is to encourage submodularity via regularization, rather than via hard constraints on the design of the function class.

Learning to optimize via imitation learning: Rather than first learning a submodular function and then optimizing it, one can instead learn to directly make decisions (e.g., imitate the oracle greedy algorithm). This area builds upon imitation learning, which learns a policy (i.e., a mapping from states to actions) directly from examples provided by an expert (e.g., an expensive computational oracle, or a human instructor) (Chernova and Thomaz, 2014). Classic work on imitation learning (e.g., the Dataset Aggregation (DAgger) algorithm (Ross et al., 2011)) reduces the policy learning problem to a supervised learning problem, and this has been extended to submodular optimization by imitating the greedy oracle method (Ross et al., 2013). More generally, learning to optimize has been applied generically to improve combinatorial optimization solvers for focused distributions of optimization problems (He et al., 2014; Song et al., 2018; Khalil et al., 2016; Balunovic et al., 2018; Gasse et al., 2019; Song et al., 2020). Our approach bridges learning to optimize and learning submodular functions, with a focus on learning surrogate utilities using submodular regularization.

Learning active learning: Our approach is applicable to active learning, and so is related to work on learning active learning. The closest line of work learns a utility function as a surrogate for the improvement in classifier accuracy (Konyushkova et al., 2017; Liu et al., 2018), which is then used as the decision criterion. However, prior work either used restricted function classes (Konyushkova et al., 2017) or very expressive function classes that can be hard to fit well (Liu et al., 2018). Our work can be viewed as a direct extension of this design philosophy, where we aim to reliably learn over expressive function classes using submodular regularization. Other related work does not directly learn an active learning criterion, instead encouraging sample diversity using submodularity (Wei et al., 2015) or the gradient signal from the classifier (Ash et al., 2020).

3 BACKGROUND AND PROBLEM STATEMENT. 3.1 DECISION MAKING VIA SUBMODULAR SURROGATES. Given a ground set of items V to pick from, let $u : 2^V \rightarrow \mathbb{R}$ be a set function that measures the value of any given subset $A \subseteq V$.
For example, in experimental design u(A) captures the utility of the output of the best experiment; in active learning u(A) captures the generalization error after training with set A. We denote by $\pi : 2^V \rightarrow V$ a policy: a partial mapping from the set/sequence of items already selected to the next item to be picked. We use $\Pi$ to denote our policy class. Each time a policy picks an item $e \in V$, it incurs a unit cost. Given the ground set V, the utility function u, and a budget k for selecting items, we seek the optimal policy $\pi$ that achieves maximal utility:

$$\pi^* \in \arg\max_{\pi \in \Pi} u(S_{\pi,k}), \quad (1)$$

where $S_{\pi,k}$ is the sequence of items picked by $\pi$: $S_{\pi,i} = S_{\pi,i-1} \cup \{\pi(S_{\pi,i-1})\}$ for $i > 0$ and $S_{\pi,0} = \emptyset$. As discussed in the previous sections, many sequential decision making problems can be characterized as constrained monotone submodular maximization problems. In those scenarios, u is:
• Monotone: For any $A \subseteq V$ and $e \in V \setminus A$, $u(A) \leq u(A \cup \{e\})$.
• Submodular: For any $A \subseteq B \subseteq V$ and $e \in V \setminus B$, $u(A \cup \{e\}) - u(A) \geq u(B \cup \{e\}) - u(B)$.
(For simplicity, we focus on deterministic set functions in this section; many of our results extend easily to the stochastic case by leveraging the theory of adaptive submodularity (Golovin and Krause, 2011).) In such cases, a myopic algorithm following the greedy trajectory of u admits a near-optimal policy; a sketch of this greedy scheme is given below. However, in many real-world applications u is not monotone submodular. One strategy is then to design a surrogate function $f : 2^V \rightarrow \mathbb{R}$ which is:
• Globally aligned with u: For instance, f lies within a factor of u, i.e., $f(A) \in [c_1 \cdot u(A), c_2 \cdot u(A)]$ for some constants $c_1, c_2$ and any set $A \subseteq V$; or within a small margin of u, i.e., $f(A) \in [u(A) - \epsilon, u(A) + \epsilon]$ for a fixed $\epsilon > 0$ and any set $A \subseteq V$.
• Monotone submodular: Intuitively, a submodular surrogate function encourages selecting items that are beneficial in the long run, while ensuring that the decision maker does not miss out on any actions that are "surprisingly good" when following a myopic policy (i.e., future gains for any item are diminishing). Examples that fall into this category include machine teaching (Singla et al., 2014) and active learning (Chen et al., 2015a).
We argue that in real-world decision making scenarios (as validated later in Section 6) the decision maker follows a surrogate objective that aligns with the above characterization. In the following, we assume that such a surrogate function exists. Our goal is thus to learn from an expert policy that behaves greedily according to such surrogate functions.
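For reference, the myopic greedy scheme mentioned above can be sketched as follows, with `utility` standing in for either the expensive oracle u or a learned surrogate f. This is a generic sketch under a cardinality constraint (assuming k ≤ |V|), not the paper's implementation.

```python
def greedy_maximize(utility, ground_set, k):
    """Greedy policy for monotone submodular maximization under a cardinality
    constraint; achieves a (1 - 1/e) approximation (Nemhauser et al., 1978).

    utility: maps a frozenset of items to a real value.
    """
    selected = []
    for _ in range(k):
        current = frozenset(selected)
        base = utility(current)
        # Pick the item with the largest marginal gain given the history.
        best = max((e for e in ground_set if e not in current),
                   key=lambda e: utility(current | {e}) - base)
        selected.append(best)
    return selected
```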
This paper combines submodular surrogates for sequential decision making with imitation learning. Specifically, it proposes to learn an acquisition function g by imitating an expert which is assumed to follow a greedy policy w.r.t. a general submodular surrogate f. This is accomplished by regularizing g to encourage diminishing returns and monotonicity. The learning algorithm is a modified version of DAgger which is consistent with the expert and achieves provably near-optimal utility. Results outperform baselines on various sequential decision making tasks.
SP:364842bf9376198df47a7323185d72cc73380d4d
CLOPS: Continual Learning of Physiological Signals
1 INTRODUCTION. Many deep learning algorithms operate under the assumption that instances are independent and identically distributed (i.i.d.). The violation of this assumption can be detrimental to the training behaviour and performance of an algorithm. The assumption of independence can be violated, for example, when data are streamed temporally from a sensor. Introducing multiple sensors in a changing environment can introduce covariate shift, arguably the 'Achilles heel' of machine learning model deployment (Quionero-Candela et al., 2009). A plethora of realistic scenarios violate the i.i.d. assumption. This is particularly true in healthcare, where the multitude of physiological sensors generate time-series recordings that may vary temporally (due to seasonal diseases, e.g., flu), across patients (due to different hospitals or hospital settings), and in their modality. Tackling the challenges posed by such scenarios is the focus of continual learning (CL), whereby a learner, when exposed to tasks in a sequential manner, is expected to perform well on current tasks without compromising performance on previously seen tasks. The outcome is a single algorithm that can reliably solve a multitude of tasks. However, most, if not all, research in this field has been limited to a small handful of imaging datasets (Lopez-Paz & Ranzato, 2017; Aljundi et al., 2019b;a). Although understandable from a benchmarking perspective, such research fails to explore the utility of continual learning methodologies in more realistic healthcare scenarios (Farquhar & Gal, 2018). To the best of our knowledge, we are the first to explore and propose a CL approach in the context of physiological signals. The dynamic and chaotic environment that characterizes healthcare necessitates algorithms that are dynamically reliable: able to adapt to potential covariate shift without catastrophically forgetting how to perform past tasks. Such dynamic reliability implies that an algorithm no longer needs to be retrained on data or tasks to which it has been exposed in the past, thus improving its data-efficiency. Secondly, algorithms that perform consistently well across a multitude of tasks are more trustworthy, a desirable trait sought by medical professionals (Spiegelhalter, 2020).

Our Contributions: In this paper, we propose a replay-based continual learning methodology that is based on the following:
1. Importance-guided storage: task-instance parameters, a scalar corresponding to each instance in each task, used as informative signals for loss-weighting and buffer storage.
2. Uncertainty-based acquisition: an active-learning-inspired methodology that determines the degree of informativeness of an instance and thus acts as a buffer-acquisition mechanism.

2 RELATED WORK. Continual learning (CL) approaches have resurfaced in recent years (Parisi et al., 2019). Those most similar to ours comprise memory-based methods such as iCaRL (Rebuffi et al., 2017), CLEAR (Rolnick et al., 2019), GEM (Lopez-Paz & Ranzato, 2017), and aGEM (Chaudhry et al., 2018). In contrast to our work, the latter two methods naively populate their replay buffer with the last m examples observed for a particular task. Isele & Cosgun (2018) and Aljundi et al. (2019b) employ a more sophisticated buffer-storage strategy in which a quadratic programming problem is solved in the absence of task boundaries. Aljundi et al.
(2019a) introduce MIR, whereby instances are stored using reservoir sampling and sampled according to whether they incur the greatest change in loss if parameters were to be updated on the subsequent task. This approach is computationally expensive, requiring multiple forward and backward passes per batch. The application of CL in the medical domain is limited to that of Lenga et al. (2020), wherein existing methodologies are implemented on chest X-ray datasets. In contrast to previous research that independently investigates buffer-storage and acquisition strategies, we focus on a dual storage and acquisition strategy. Active learning (AL) in healthcare has observed increased interest in recent years, with a review of methodologies provided by Settles (2009). For example, Gong et al. (2019) propose a Bayesian deep latent Gaussian model to acquire important features from electronic health record (EHR) data in MIMIC (Johnson et al., 2016) to improve mortality prediction. In dealing with EHR data, Chen et al. (2013) use the distance of unlabelled samples from the hyperplane in an SVM to acquire datapoints. Wang et al. (2019) implement an RNN to acquire ECG samples during training. Zhou et al. (2017) perform transfer learning in conjunction with a convolutional neural network to acquire biomedical images in an online manner. Smailagic et al. (2018; 2019) actively acquire unannotated medical images by measuring their distance in a latent space to images in the training set. Such similarity metrics, however, are sensitive to the amount of available labelled training data. Gal et al. (2017) adopt BALD (Houlsby et al., 2011) with Monte Carlo Dropout to acquire instances that maximize the Jensen-Shannon divergence (JSD) across MC samples. To the best of our knowledge, we are the first to employ AL-inspired acquisition functions in the context of CL.
3 BACKGROUND. 3.1 CONTINUAL LEARNING. In this work, we consider a learner, $f_\omega : x^{\mathcal{T}} \in \mathbb{R}^m \rightarrow y^{\mathcal{T}} \in \mathbb{R}^c$, parameterized by $\omega$, that maps an m-dimensional input, $x^{\mathcal{T}}$, to a c-dimensional output, $y^{\mathcal{T}}$, where c is the number of classes, for each task $\mathcal{T} \in [1 \ldots N]$. This learner is exposed to new tasks in a sequential manner once previously tackled tasks are mastered. In this paper, we formulate our tasks based on a modification of the three-tier categorization proposed by van de Ven & Tolias (2019). In our learning scenarios (see Fig. 1), a network is sequentially tasked with solving a binary classification problem on data from mutually exclusive pairs of classes (Class Incremental Learning, Class-IL), a multi-class classification problem on data collected at different times of the year, e.g., winter and summer (Time Incremental Learning, Time-IL), and a multi-class classification problem on inputs with a different modality (Domain Incremental Learning, Domain-IL). In the aforementioned cases, task identities are absent during both training and testing, and neural architectures are single-headed.
4 METHODS. The two ideas underlying our proposal are the storage of instances into, and the acquisition of instances from, a buffer such that destructive interference is mitigated. We describe these in more detail below. 4.1 IMPORTANCE-GUIDED BUFFER STORAGE. We aim to populate a buffer, $\mathcal{D}_B$, of finite size, M, with instances from the current task that are considered important.
To quantify importance, we learn parameters, entitled task-instance parameters, $\beta_{i\mathcal{T}}$, associated with each instance, $x_{i\mathcal{T}}$, in each task, $\mathcal{T}$. These parameters play a dual role. 4.1.1 LOSS-WEIGHTING MECHANISM. For the current task, k, and its associated data, $\mathcal{D}_k$, we incorporate $\beta$ as a coefficient of the loss, $\mathcal{L}_{ik}$, incurred for each instance, $x_{ik} \in \mathcal{D}_k$. For a mini-batch of size B that consists of $B_k$ instances from the current task, the objective function is shown in Eq. 1. We can learn the values of $\beta_{ik}$ via gradient descent, with some learning rate, $\eta$, as shown in Eq. 2.
$$\mathcal{L} = \frac{1}{B_k}\sum_{i=1}^{B_k}\beta_{ik}\,\mathcal{L}_{ik} \qquad (1) \qquad\qquad \beta_{ik} \leftarrow \beta_{ik} - \eta\,\frac{\partial \mathcal{L}}{\partial \beta_{ik}} \qquad (2)$$
Note that $\partial\mathcal{L}/\partial\beta_{ik} = \mathcal{L}_{ik} > 0$. This suggests that instances that are hard to classify ($\uparrow \mathcal{L}_{ik}$) will exhibit $\downarrow \beta_{ik}$. From this perspective, $\beta_{ik}$ can be viewed as a proxy for instance difficulty. However, as presented, $\beta_{ik} \to 0$ as training progresses, an observation we confirmed empirically. Since $\beta_{ik}$ is the coefficient of the loss, $\mathcal{L}_{ik}$, this implies that the network will quickly be unable to learn from the data. To avoid this behaviour, we initialize $\beta_{ik} = 1$ in order to emulate a standard loss function and introduce a regularization term to penalize its undesirable and rapid decay toward zero. As a result, our modified objective function is:
$$\mathcal{L}_{\text{current}} = \frac{1}{B_k}\sum_{i=1}^{B_k}\Big(\beta_{ik}\,\mathcal{L}_{ik} + \lambda(\beta_{ik}-1)^2\Big) \qquad (3)$$
When k > 1, we replay instances from previous tasks by using a replay buffer (see Sec. 4.2 for the replay mechanism). These replayed instances incur a loss $\mathcal{L}_{ij}\ \forall j \in [1 \ldots k-1]$. We decide not to weight these instances, in contrast to instances from the current task (see Appendix K).
$$\mathcal{L}_{\text{replay}} = \frac{1}{B-B_k}\sum_{j=1}^{k-1}\sum_{i=1}^{B_j}\mathcal{L}_{ij} \qquad (4) \qquad\qquad \mathcal{L} = \mathcal{L}_{\text{current}} + \mathcal{L}_{\text{replay}} \qquad (5)$$
4.1.2 BUFFER-STORAGE MECHANISM. We leverage $\beta$, as a proxy for instance difficulty, to store instances into the buffer. To describe the intuition behind this process, we illustrate, in Fig. 2, the trajectory of $\beta_{1k}$ and $\beta_{2k}$ associated with two instances, $x_{1k}$ and $x_{2k}$, while training on the current task, k, for $\tau = 20$ epochs. In selecting instances for storage into the buffer, we can 1) retrieve their corresponding $\beta$ values at the conclusion of the task, i.e., at $\beta(t=20)$, 2) rank all instances based on these $\beta$ values, and 3) acquire the top b fraction of instances. This approach, however, can lead to erroneous estimates of the relative difficulty of instances, as explained next. In Fig. 2, we see that $\beta_{2k} > \beta_{1k}$ for the majority of the training process, indicating that $x_{2k}$ had been easier to classify than $x_{1k}$. The swap in the ranking of these $\beta$ values towards the end of training, combined with myopically looking at $\beta(t=20)$, would erroneously make us believe that the opposite was true. Such convergence or swapping of $\beta$ values has also been observed by Saxena et al. (2019). As a result, the reliability of $\beta$ as a proxy of instance difficulty is eroded. To maintain the reliability of this proxy, we propose to track the $\beta$ values after each training epoch, t, until the final epoch, $\tau$, for the task at hand and calculate the area under these tracked values. We do so by using the trapezoidal rule, as shown in Eq. 6. We explored several variants of the storage function and found the proposed form to work best (see Appendix H).
At $t = \tau$, we rank the instances in descending order of $s_{ik}$ (easy to hard), as we found this preferable to the opposite order (see Appendix I), select the top b fraction, and store them into the buffer, of which each task is allotted a fixed portion. The higher the value of the storage fraction, b, the more likely it is that the buffer will contain representative instances and thus mitigate forgetting; however, this comes at an increased computational cost.
$$s_{ik} = \int_0^{\tau}\beta_{ik}(t)\,dt \approx \sum_{t=0}^{\tau}\left(\frac{\beta_{ik}(t+\Delta t)+\beta_{ik}(t)}{2}\right)\Delta t \qquad (6)$$
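To make the storage mechanism concrete, the following is a minimal PyTorch sketch of the $\beta$-weighted objective (Eq. 3) and the area-under-$\beta$ importance score with top-b selection (Eq. 6). The function names, the per-epoch $\beta$ history tensor, and the default hyperparameter values are our own illustrative assumptions, not the authors' implementation.

```python
import torch

# Hypothetical tensors: `losses` holds the per-instance losses L_ik of a
# mini-batch from the current task k, `beta` the matching task-instance
# parameters (initialised to 1 and updated by gradient descent, Eq. 2).
def clops_current_loss(losses, beta, lam=0.1):
    # Eq. 3: beta-weighted loss plus a regulariser penalising the rapid
    # decay of beta toward zero.
    return ((beta * losses) + lam * (beta - 1.0) ** 2).mean()

def importance_scores(beta_history):
    # beta_history: (num_instances, tau + 1) values of beta tracked after
    # each epoch. Eq. 6: area under the beta trajectory via the
    # trapezoidal rule with a step of one epoch.
    return torch.trapz(beta_history, dim=1)

def select_for_buffer(beta_history, b=0.25):
    # Rank instances from easy to hard (descending s_ik) and keep the
    # top b fraction for storage into the buffer.
    scores = importance_scores(beta_history)
    k = max(1, int(b * scores.numel()))
    return torch.argsort(scores, descending=True)[:k]
```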
The authors propose a learning methodology designed to offset detriments to algorithm performance that arise when instances are not i.i.d (independent and identically distributed), focusing on cases in continual learning (CL) given physiological signals. They designed a replay-based learning method that handles an instance buffer using Importance-guided Storage and Uncertainty-based Acquisition strategies. They apply their method on Class, Time and Domain types of CL, and they introduce t-Step Backward Weight Transfer and Lambda Backward Weight Transfer methods by which to evaluate their method. They conclude with two ablation studies to explore an explanation for their method’s performance and attempt to validate their hypotheses based on these studies.
SP:61d83ed48f892bcb7d0488c9b918132b2623eea1
SALD: Sign Agnostic Learning with Derivatives
1 INTRODUCTION. Recently, neural networks (NN) have been used for representing and reconstructing 3D surfaces. Current NN-based 3D learning approaches differ in two aspects: the choice of surface representation, and the supervision method. Common representations of surfaces include using NNs as parametric charts of surfaces (Groueix et al., 2018b; Williams et al., 2019); volumetric implicit function representations defined over regular grids (Wu et al., 2016; Tatarchenko et al., 2017; Jiang et al., 2020); and NNs used directly as volumetric implicit functions (Park et al., 2019; Mescheder et al., 2019; Atzmon et al., 2019; Chen & Zhang, 2019), referred to henceforth as implicit neural representations. Supervision methods include regression of known or approximated volumetric implicit representations (Park et al., 2019; Mescheder et al., 2019; Chen & Zhang, 2019), regression directly with raw 3D data (Atzmon & Lipman, 2020; Gropp et al., 2020), and differentiable rendering using 2D data (i.e., images) as supervision (Niemeyer et al., 2020; Liu et al., 2019; Saito et al., 2019; Yariv et al., 2020). The goal of this paper is to introduce SALD, a method for learning implicit neural representations of surfaces directly from raw 3D data. The benefit in learning directly from raw data, e.g., non-oriented point clouds or triangle soups (e.g., Chang et al. (2015)) and raw scans (e.g., Bogo et al. (2017)), is avoiding the need for a ground truth signed distance representation of all train surfaces for supervision. This allows working with complex models with inconsistent normals and/or missing parts. In Figure 1 we show reconstructions of zero level sets of SALD-learned implicit neural representations of car models from the ShapeNet dataset (Chang et al., 2015) with a variational autoencoder; notice the high detail level and the interior, which would not have been possible with, e.g., previous data pre-processing techniques using renderings of visible parts (Park et al., 2019). Our approach improves upon the recent Sign Agnostic Learning (SAL) method (Atzmon & Lipman, 2020) and shows that incorporating derivatives in a sign agnostic manner provides a significant improvement in surface approximation and detail. SAL is based on the observation that, given an unsigned distance function $h$ to some raw 3D data $\mathcal{X} \subset \mathbb{R}^3$, a sign agnostic regression to $h$ will introduce new local minima that are signed versions of $h$; in turn, these signed distance functions can be used as implicit representations of the underlying surface. In this paper we show how the sign agnostic regression loss can be extended to compare both function values $h$ and derivatives $\nabla h$, up to a sign. The main motivation for performing NN regression with derivatives is that it reduces the sample complexity of the problem (Czarnecki et al., 2017), leading to better accuracy and generalization. For example, consider a one-hidden-layer NN of the form $f(x) = \max\{ax, bx\} + c$. Prescribing two function samples at $\{-1, 1\}$ is not sufficient for uniquely determining $f$, while adding derivative information at these points determines $f$ uniquely. We provide empirical evidence as well as theoretical motivation suggesting that both SAL and SALD possess the favorable minimal surface property (Zhao et al., 2001), that is, in areas of missing parts and holes they will prefer zero level sets with minimal area.
We justify this property by proving that, in 2D, when restricted to the zero level-set (a curve in this case), the SAL and SALD losses would encourage a straight-line solution connecting neighboring data points. We have tested SALD on a dataset of man-made models, ShapeNet (Chang et al., 2015), and a human raw scan dataset, D-Faust (Bogo et al., 2017), and compared to state-of-the-art methods. In all cases we have used the raw input data $\mathcal{X}$ as is and considered the unsigned distance function to $\mathcal{X}$, i.e., $h_{\mathcal{X}}$, in the SALD loss to produce an approximate signed distance function in the form of a neural network. Comparing to state-of-the-art methods we find that SALD achieves superior results on this dataset. On the D-Faust dataset, when comparing to ground truth reconstructions, we report state-of-the-art results, striking a balance between approximating details of the scans and avoiding overfitting noise and ghost geometry. Summarizing the contributions of this paper:
• Introducing sign agnostic learning with derivatives.
• Identifying and providing a theoretical justification for the minimal surface property of sign agnostic learning in 2D.
• Training directly on raw data (end-to-end), including unoriented or not consistently oriented triangle soups and raw 3D scans.
2 PREVIOUS WORK. Learning 3D shapes with neural networks and 3D supervision has shown great progress recently. We review related works, where we categorize the existing methods based on their choice of 3D surface representation.
Parametric representations. The most fundamental surface representation is an atlas, that is, a collection of parametric charts $f : \mathbb{R}^2 \to \mathbb{R}^3$ with certain coverage and transition properties (Do Carmo, 2016). Groueix et al. (2018b) adapted this idea using neural networks to represent a surface as a union of such charts; Williams et al. (2019) improved this construction by introducing better transitions between charts; Sinha et al. (2016) use geometry images (Gu et al., 2002) to represent an entire shape using a single chart; Maron et al. (2017) use global conformal parameterization for learning surface data; Ben-Hamu et al. (2018) use a collection of overlapping global conformal charts for a human-shape generative model. Hanocka et al. (2020) shrink-wrap a template mesh to fit a point cloud. The benefit of parametric representations is the ease of sampling the learned surface (i.e., forward pass) and working directly with raw data (e.g., Chamfer loss); their main struggle is in producing charts that are collectively consistent, of low distortion, and covering the shape.
Implicit representations. Another approach for representing surfaces is as zero level sets of a function, called an implicit function. There are two popular methods to model implicit volumetric functions with neural networks: i) a convolutional neural network predicting scalar values over a predefined fixed volumetric structure (e.g., grid or octree) in space (Tatarchenko et al., 2017; Wu et al., 2016); and ii) a multilayer perceptron of the form $f : \mathbb{R}^3 \to \mathbb{R}$ defining a continuous volumetric function (Park et al., 2019; Mescheder et al., 2019; Chen & Zhang, 2019). Currently, neural networks are trained to be implicit function representations with two types of supervision: (i) regression of samples taken from a known or pre-computed implicit function representation such as an occupancy function (Mescheder et al.,
2019; Chen & Zhang, 2019) or a signed distance function (Park et al., 2019); and (ii) working with raw 3D supervision, by particle methods relating points on the level sets to the model parameters (Atzmon et al., 2019), using sign agnostic losses (Atzmon & Lipman, 2020), or supervision with PDEs defining signed distance functions (Gropp et al., 2020).
Primitives. Another type of representation is to learn shapes as compositions or unions of a family of primitives. Gradient information has been used to improve and facilitate fitting of invariant polynomial representations (Tasdizen et al., 1999; Birdal et al., 2019). Li et al. (2019) represent a shape using a parametric collection of primitives. Genova et al. (2019; 2020) use a collection of Gaussians and learn consistent shape decompositions. Chen et al. (2020) suggest a differentiable Binary Space Partitioning tree (BSP-tree) for representing shapes. Deprelle et al. (2019) combine point and chart representations to learn basic shape structures. Deng et al. (2020) represent a shape as a union of convex sets. Williams et al. (2020) learn sites of Voronoi cells for implicit shape representation.
Template fitting. Lastly, several methods learn 3D shapes of a certain class (e.g., humans) by learning the deformation from a template model. Classical methods use matching techniques and geometric loss minimization for non-rigid template matching (Allen et al., 2002; 2003; Anguelov et al., 2005). Groueix et al. (2018a) use an auto-encoder architecture and Chamfer distance to match target shapes. Litany et al. (2018) use a graph convolutional autoencoder to learn a deformable template for shape completion.
3 METHOD. Given raw geometric input data $\mathcal{X} \subset \mathbb{R}^3$, e.g., a triangle soup, our goal is to find a multilayer perceptron (MLP) $f : \mathbb{R}^3 \times \mathbb{R}^m \to \mathbb{R}$ whose zero level-set,
$$\mathcal{S} = \{x \in \mathbb{R}^3 \mid f(x; \theta) = 0\} \qquad (1)$$
is a manifold surface that approximates $\mathcal{X}$.
Sign agnostic learning. Similarly to SAL, our approach is to consider the (readily available) unsigned distance function to the raw input geometry,
$$h(y) = \min_{x \in \mathcal{X}} \|y - x\| \qquad (2)$$
and perform sign agnostic regression to get a signed version $f$ of $h$. SAL uses a loss of the form
$$\mathrm{loss}(\theta) = \mathbb{E}_{x \sim \mathcal{D}}\, \tau(f(x; \theta), h(x)), \qquad (3)$$
where $\mathcal{D}$ is some probability distribution, e.g., a sum of Gaussians with centers uniformly sampled over the input geometry $\mathcal{X}$, and $\tau$ is an unsigned similarity. That is, $\tau(a, b)$ measures the difference between scalars $a, b \in \mathbb{R}$ up to a sign. For example,
$$\tau(a, b) = \big||a| - b\big| \qquad (4)$$
is used in Atzmon & Lipman (2020). The key property of the sign agnostic loss in equation 3 is that, with proper weight initialization $\theta_0$, it finds a new signed local minimum $f$ which in absolute value is similar to $h$. In turn, the zero level set $\mathcal{S}$ of $f$ is a valid manifold describing the data $\mathcal{X}$.
Sign agnostic learning with derivatives. Our goal is to generalize the SAL loss (equation 3) to include derivative data of $h$ and show that optimizing this loss provides implicit neural representations, $\mathcal{S}$, that enjoy better approximation properties with respect to the underlying geometry $\mathcal{X}$. Generalizing equation 3 requires designing an unsigned similarity measure $\tau$ for vector-valued functions.
The key observation is that equation 4 can be written as $\tau(a, b) = \min\{|a - b|, |a + b|\}$, $a, b \in \mathbb{R}$, and can be generalized to vectors $a, b \in \mathbb{R}^d$ by
$$\tau(a, b) = \min\{\|a - b\|, \|a + b\|\}. \qquad (5)$$
We define the SALD loss:
$$\mathrm{loss}(\theta) = \mathbb{E}_{x \sim \mathcal{D}}\, \tau(f(x; \theta), h(x)) + \lambda\, \mathbb{E}_{x \sim \mathcal{D}'}\, \tau(\nabla_x f(x; \theta), \nabla_x h(x)) \qquad (6)$$
where $\lambda > 0$ is a parameter, $\mathcal{D}'$ is a probability distribution, e.g., it could be identical to $\mathcal{D}$, or uniform over the input geometry $\mathcal{X}$, and $\nabla_x f(x; \theta)$, $\nabla_x h(x)$ are the gradients of $f$, $h$ (resp.) with respect to their input $x$. In Figure 2 we show the unsigned distance $h$ to an L-shaped curve (left), and the level sets of the MLPs optimized with the SALD loss (middle) and the SAL loss (right); note that the SALD loss reconstructed the sharp features (i.e., corners) of the shape and the level sets of $h$, while the SAL loss smoothed them out; the implementation details of this experiment can be found in Appendix A.4.
Minimal surface property. We show that the SAL and SALD losses possess a minimal surface property (Zhao et al., 2001), that is, they strive to minimize the surface area of missing parts. For example, Figure 4 shows the unsigned distance to a curve with a missing segment (left), and the zero level sets of MLPs optimized with the SALD loss (middle) and the SAL loss (right). Note that in both cases the zero level set in the missing part area is the minimal-length curve (i.e., a line) connecting the end points of that missing part. SALD also preserves the sharp features of the rest of the shape. Figure A1 in the supplementary shows additional 2D experiments comparing to the Implicit Geometric Regularization (IGR) method (Gropp et al., 2020), which learns implicit representations by regularizing the gradient norm and does not possess the minimal surface property. We will provide a theoretical justification of this property in the 2D case. We consider a geometry defined by two points in the plane, $\mathcal{X} = \{x_1, x_2\} \subset \mathbb{R}^2$, and possible solutions where the zero level set curve $\mathcal{S}$ connects $x_1$ and $x_2$. We prove that among a class of curves $\mathcal{U}$ connecting $x_1$ and $x_2$, the straight line minimizes the losses in equation 3 and equation 6 restricted to $\mathcal{U}$, when assuming uniform distributions $\mathcal{D}$, $\mathcal{D}'$. We assume (without loss of generality) that $x_1 = (0, 0)^T$, $x_2 = (\ell, 0)^T$ and consider curves $u \in \mathcal{U}$ defined by $u(s) = (s, t(s))^T$, where $s \in [0, \ell]$, and $t : \mathbb{R} \to \mathbb{R}$ is some differentiable function such that $t(0) = 0 = t(\ell)$; see Figure 3. For the SALD loss we prove the claim for a slightly simplified agnostic loss motivated by the following lemma, proved in Appendix A.1:
Lemma 1. For any pair of unit vectors $a, b$: $\min\{\|a - b\|, \|a + b\|\} \ge |\sin\angle(a, b)|$.
We consider $\tau(a, b) = |\sin\angle(a, b)|$ for the derivative part of the loss in equation 6, which is also sign agnostic.
Theorem 1. Let $\mathcal{X} = \{x_1, x_2\} \subset \mathbb{R}^2$, and let $\mathcal{U}$ be the family of curves connecting $x_1$ and $x_2$. Furthermore, let $\mathrm{loss}_{\mathrm{SAL}}(u)$ and $\mathrm{loss}_{\mathrm{SALD}}(u)$ denote the losses in equation 3 and equation 6 (resp.) when restricted to $u$ with uniform distributions $\mathcal{D}$, $\mathcal{D}'$. Then in both cases the straight line, i.e., the curve $u(s) = (s, 0)$, is the strict global minimizer of these losses.
Proof. The unsigned distance function is
$$h(u) = \begin{cases}\sqrt{s^2 + t^2} & s \in [0, \ell/2]\\ \sqrt{(s-\ell)^2 + t^2} & s \in (\ell/2, \ell]\end{cases}$$
By symmetry it is enough to consider only the first half of the curve, i.e., $s \in [0, \ell/2)$.
Then, the SAL loss, equation 3, restricted to the curve $u$ (i.e., where $f$ vanishes) takes the form
$$\mathrm{loss}_{\mathrm{SAL}}(u) = \int_0^{\ell/2} \tau(f(u; \theta), h(u))\,\|\dot u\|\,ds = \int_0^{\ell/2} \sqrt{s^2 + t^2}\,\sqrt{1 + \dot t^2}\,ds,$$
where $\sqrt{1 + \dot t^2}\,ds$ is the length element on the curve $u$, and $\tau(f(s, t; \theta), h(s, t)) = |h(s, t)| = \sqrt{s^2 + t^2}$, since $f(s, t; \theta) = 0$ over the curve $u$. Plugging $t(s) \equiv 0$ into $\mathrm{loss}_{\mathrm{SAL}}(u)$ we see that the curve $u = (s, 0)^T$, namely the straight-line curve from $x_1$ to $\tfrac{1}{2}(x_1 + x_2)$, is a strict global minimizer of $\mathrm{loss}_{\mathrm{SAL}}(u)$. A similar argument on $s \in [\ell/2, \ell]$ proves the claim for the SAL case. For the SALD case, we want to calculate $\tau(\nabla_x f(u; \theta), \nabla_x h(u))$ restricted to the curve $u$; let $a = \nabla_x f(u; \theta)$ and $b = \nabla_x h(u)$. First, $b = (s^2 + t^2)^{-1/2}(s, t)^T$. Second, $a$ is normal to the curve $u$, therefore it is proportional to $\dot u^\perp = (-\dot t, 1)^T$. Next, note that
$$|\sin\angle(a, b)| = \frac{\left|\det\begin{pmatrix}-\dot t & s\\ 1 & t\end{pmatrix}\right|}{\sqrt{1 + \dot t^2}\,\sqrt{s^2 + t^2}} = \frac{1}{\sqrt{1 + \dot t^2}}\left|\frac{d}{ds}\|(s, t)\|\right|,$$
where the last equality can be checked by differentiating $\|(s, t)\|$ w.r.t. $s$. Therefore,
$$\frac{\mathrm{loss}_{\mathrm{SALD}}(u) - \mathrm{loss}_{\mathrm{SAL}}(u)}{\lambda} = \int_0^{\ell/2} \tau(a, b)\,\|\dot u\|\,ds = \int_0^{\ell/2}\left|\frac{d}{ds}\|(s, t)\|\right| ds \ \ge\ \left\|\left(\tfrac{\ell}{2},\, t\big(\tfrac{\ell}{2}\big)\right)\right\| \ \ge\ \frac{\ell}{2}.$$
This bound is achieved for the curve $u = (s, 0)$, which is also a minimizer of the SAL loss. The straight line also minimizes this version of the SALD loss, since $\mathrm{loss}_{\mathrm{SALD}}(u) = (\mathrm{loss}_{\mathrm{SALD}}(u) - \mathrm{loss}_{\mathrm{SAL}}(u)) + \mathrm{loss}_{\mathrm{SAL}}(u)$.
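To make the loss concrete, here is a minimal PyTorch sketch of the SALD objective (equations 3-6). The helper names, the way the unsigned distance $h$ and its gradient are supplied, and the choice $\mathcal{D}' = \mathcal{D}$ are illustrative assumptions, not the authors' implementation.

```python
import torch

def tau_vec(a, b):
    # Eq. 5: unsigned similarity for vectors, min{||a - b||, ||a + b||}.
    return torch.minimum((a - b).norm(dim=-1), (a + b).norm(dim=-1))

def sald_loss(f, x, h_vals, h_grads, lam=0.1):
    # f: MLP R^3 -> R; x: (N, 3) points sampled from D (here D' = D for
    # simplicity); h_vals, h_grads: the unsigned distance to the raw data
    # X and its spatial gradient at x, precomputed e.g. with a KD-tree.
    x = x.clone().requires_grad_(True)
    f_x = f(x).squeeze(-1)
    # Gradient of f w.r.t. its input, kept in the graph (create_graph=True)
    # so the loss can be backpropagated to the network weights.
    grad_f = torch.autograd.grad(f_x.sum(), x, create_graph=True)[0]
    value_term = (f_x.abs() - h_vals).abs().mean()        # Eqs. 3-4
    deriv_term = tau_vec(grad_f, h_grads).mean()          # Eq. 5
    return value_term + lam * deriv_term                  # Eq. 6
```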
This paper presents SALD, a new type of implicit shape representation that, in addition to predicting the signed distance function, aligns the gradients of the distance function with that of the neural distance field. The resulting algorithm, for example, has improved approximation power and better preserves the sharp features than the ancestor SAL (sign agnostic learning). The formulation is such that the architecture can consume raw point clouds.
SP:ceacad438130adfb746240e36dd32d14794b4291
Sequential Density Ratio Estimation for Simultaneous Optimization of Speed and Accuracy
1 INTRODUCTION. The sequential probability ratio test, or SPRT, was originally invented by Abraham Wald, and an equivalent approach was also independently developed and used by Alan Turing in the 1940s (Good, 1979; Simpson, 2010; Wald, 1945). The SPRT calculates the log-likelihood ratio (LLR) of two competing hypotheses and updates the LLR every time a new sample is acquired, until the LLR reaches one of the two thresholds for the alternative hypotheses (Figure 1). Wald and his colleagues proved that when sequential data are sampled independently and identically distributed (i.i.d.), the SPRT can minimize the required number of samples to achieve the desired upper bounds of false positive and false negative rates, comparably to the Neyman-Pearson test, known as the most powerful likelihood test (Wald & Wolfowitz, 1948) (see also Theorem (A.5) in Appendix A). Note that Wald used the i.i.d. assumption only for ensuring a finite decision time (i.e., the LLR reaches a threshold within finitely many steps) and for facilitating LLR calculation: the non-i.i.d. property does not affect other aspects of the SPRT, including the error upper bounds (Wald, 1947). More recently, Tartakovsky et al. verified that the non-i.i.d. SPRT is optimal or at least asymptotically optimal as the sample size increases (Tartakovsky et al., 2014), opening the possibility of potential applications of the SPRT to non-i.i.d. data series. About 70 years after Wald's invention, neuroscientists found that neurons in the part of the primate brain called the lateral intraparietal cortex (LIP) showed neural activities reminiscent of the SPRT (Kira et al., 2015); when a monkey sequentially collects random pieces of evidence to make a binary choice, LIP neurons show activities proportional to the LLR. Importantly, the time of the decision can be predicted from when the neural activity reaches a fixed threshold, the same as the SPRT's decision rule. Thus, the SPRT, the optimal sequential decision strategy, was re-discovered to be an algorithm explaining primate brains' computing strategy. It remains an open question, however, what algorithm will be used in the brain when the sequential evidence is a correlated, non-i.i.d. series. The SPRT is now used for several engineering applications (Cabri et al., 2018; Chen et al., 2017; Kulldorff et al., 2011). However, its i.i.d. assumption is too crude for it to be applied to other real-world scenarios, including time-series classification, where data are highly correlated and key dynamic features for classification often extend across more than one data point, violating the i.i.d. assumption. Moreover, the LLR of the alternative hypotheses needs to be calculated as precisely as possible, which is infeasible in many practical applications. In this paper, we overcome the above difficulties by using an SPRT-based algorithm that Treats data series As an N-th orDEr Markov process (SPRT-TANDEM), aided by a sequential probability density ratio estimation based on deep neural networks. A novel Loss function for Log-Likelihood Ratio estimation (LLLR) efficiently estimates the density ratio and lets the SPRT-TANDEM approach asymptotic Bayes-optimality (see Appendix A.4). In other words, the LLLR optimizes classification speed and accuracy at the same time. The SPRT-TANDEM can classify non-i.i.d.
data series with user-defined model complexity by changing $N (\in \mathbb{N})$, the order of approximation, which defines the number of past samples on which the given sample depends. By dynamically changing the number of samples used for classification, the SPRT-TANDEM can maintain high classification accuracy while minimizing the sample size as much as possible. Moreover, the SPRT-TANDEM enables a user to flexibly control the speed-accuracy tradeoff without additional training, making it applicable to various practical applications. We test the SPRT-TANDEM on our new database, Nosaic MNIST (NMNIST), in addition to the publicly available UCF101 action recognition database (Soomro et al., 2012) and Spoofing in the Wild (SiW) database (Liu et al., 2018). Two-way analysis of variance (ANOVA, Fisher, 1925) followed by a Tukey-Kramer multi-comparison test (Tukey, 1949; Kramer, 1956) shows that our proposed SPRT-TANDEM provides statistically significantly higher accuracy than other fixed-length and variable-length classifiers at a smaller number of data samples, making Wald's SPRT applicable even to non-i.i.d. data series. Our contribution is fivefold: 1. We invented a deep neural network-based algorithm, SPRT-TANDEM, which enables Wald's SPRT on arbitrary sequential data without knowing the true LLR. 2. The SPRT-TANDEM extends the SPRT to non-i.i.d. data series without knowing the true LLR. 3. With a novel loss, the LLLR, the SPRT-TANDEM sequentially estimates the LLR to optimize speed and accuracy simultaneously. 4. The SPRT-TANDEM can control the speed-accuracy tradeoff without additional training. 5. We introduce Nosaic MNIST, a novel early-classification database.
2 RELATED WORK. The SPRT-TANDEM has multiple interdisciplinary intersections with other fields of research: Wald's classical SPRT, probability density estimation, neurophysiological decision making, and time-series classification. A comprehensive review is left to Appendix B, while in the following we introduce the SPRT, probability density estimation algorithms, and early classification of time series.
Sequential Probability Ratio Test (SPRT). The SPRT, denoted by $\delta^*$, is defined as the tuple of a decision rule and a stopping rule (Tartakovsky et al., 2014; Wald, 1947):
Definition 2.1 (Sequential Probability Ratio Test (SPRT)). Let $\lambda_t$ be the LLR at time $t$, and $X^{(1,T)}$ a sequential data series $X^{(1,T)} := \{x^{(t)}\}_{t=1}^T$. Given the absolute values of the lower and upper decision thresholds, $a_0 \ge 0$ and $a_1 \ge 0$, the SPRT, $\delta^*$, is defined as
$$\delta^* = (d^*, \tau^*), \qquad (1)$$
where the decision rule $d^*$ and stopping time $\tau^*$ are
$$d^*(X^{(1,T)}) = \begin{cases}1 & \text{if } \lambda_{\tau^*} \ge a_1\\ 0 & \text{if } \lambda_{\tau^*} \le -a_0\end{cases} \qquad (2)$$
$$\tau^* = \inf\{T \ge 0 \mid \lambda_T \notin (-a_0, a_1)\}. \qquad (3)$$
We review the proof of optimality in Appendix A.4, while Figure 1 gives an intuitive explanation.
Probability density ratio estimation. Instead of estimating the numerator and denominator of a density ratio separately, probability density ratio estimation algorithms estimate the ratio as a whole, reducing the degrees of freedom for more precise estimation (Sugiyama et al., 2010; 2012). Two density ratio estimation algorithms closely related to our work are the probabilistic classification (Bickel et al., 2007; Cheng & Chu, 2004; Qin, 1998) and density fitting (Sugiyama et al., 2008; Tsuboi et al., 2009) algorithms.
As we show in Section 4 and Appendix E, the SPRT-TANDEM sequentially estimates the LLR by combining the two algorithms.
Early classification of time series. To make decision time as short as possible, algorithms for early classification of time series can handle variable-length data (Mori et al., 2018; Mori et al., 2016; Xing et al., 2009; 2012) to minimize high sampling costs (e.g., medical diagnostics (Evans et al., 2015; Griffin & Moorman, 2001), or stock crisis identification (Ghalwash et al., 2014)). Leveraging deep neural networks is no exception in the early classification of time series (Dennis et al., 2018; Suzuki et al., 2018). Long short-term memory (LSTM) variants LSTM-s/LSTM-m impose monotonicity on the classification score and inter-class margin, respectively, to speed up action detection (Ma et al., 2016). Early and Adaptive Recurrent Label ESTimator (EARLIEST) combines reinforcement learning and a recurrent neural network to decide when to classify and assign a class label (Hartvigsen et al., 2019).
3 PROPOSED ALGORITHM: SPRT-TANDEM. In this section, we propose the TANDEM formula, which provides the N-th order approximation of the LLR with respect to posterior probabilities. The i.i.d. assumption of Wald's SPRT greatly simplifies the LLR calculation at the expense of the precise temporal relationship between data samples. On the other hand, incorporating a long correlation among multiple data points may improve the LLR estimation; however, calculating too long a correlation may potentially be detrimental in the following cases. First, if a class signature is significantly shorter than the correlation length in consideration, uninformative data samples are included in calculating the LLR, resulting in a late or wrong decision (Campos et al., 2018). Second, long correlations require backpropagation over a long range, which is prone to the vanishing gradient problem (Hochreiter et al., 2001). Thus, we relax the i.i.d. assumption by keeping only up to the N-th order correlation to calculate the LLR.
The TANDEM formula. Here, we introduce the TANDEM formula, which computes the approximated LLR, the decision value of the SPRT-TANDEM algorithm. The data series is approximated as an N-th order Markov process. For the complete derivation of the 0th (i.i.d.), 1st, and N-th order TANDEM formula, see Appendix C. Given a maximum timestamp $T \in \mathbb{N}$, let $X^{(1,T)}$ and $y$ be a sequential data series $X^{(1,T)} := \{x^{(t)}\}_{t=1}^T$ and a class label $y \in \{1, 0\}$, respectively, where $x^{(t)} \in \mathbb{R}^{d_x}$ and $d_x \in \mathbb{N}$. By using Bayes' rule with the N-th order Markov assumption, the joint LLR of data at a timestamp $t$ is written as follows:
$$\log\frac{p(x^{(1)}, x^{(2)}, \dots, x^{(t)} \mid y=1)}{p(x^{(1)}, x^{(2)}, \dots, x^{(t)} \mid y=0)} = \sum_{s=N+1}^{t}\log\frac{p(y=1 \mid x^{(s-N)}, \dots, x^{(s)})}{p(y=0 \mid x^{(s-N)}, \dots, x^{(s)})} - \sum_{s=N+2}^{t}\log\frac{p(y=1 \mid x^{(s-N)}, \dots, x^{(s-1)})}{p(y=0 \mid x^{(s-N)}, \dots, x^{(s-1)})} - \log\frac{p(y=1)}{p(y=0)} \qquad (4)$$
(see Equations (84) and (85) in Appendix C for the full formula). Hereafter we use the terms k-let or multiplet to indicate the posterior probabilities, $p(y \mid x^{(1)}, \dots, x^{(k)}) = p(y \mid X^{(1,k)})$, that consider correlation across k data points. The first two terms of the TANDEM formula (Equation (4)), the (N+1)-let and N-let, have opposite signs, working in "tandem" and adjusting each other to compute the LLR.
The third term is a prior (bias) term. In the experiments, we assume a flat prior, or zero bias term, but a user may impose a non-flat prior to handle the biased distribution of a dataset. The TANDEM formula can be interpreted as a realization of the probability matching approach to probability density estimation, under an N-th order Markov assumption on the data series.
Neural network that calculates the SPRT-TANDEM formula. The SPRT-TANDEM is designed to explicitly calculate the N-th order TANDEM formula to realize sequential density ratio estimation, which is the critical difference between our SPRT-TANDEM network and other architectures based on convolutional neural networks (CNNs) and recurrent neural networks (RNNs). Figure 2 illustrates a conceptual diagram explaining a generalized neural network structure, in accordance with the 1st-order TANDEM formula for simplicity. The network consists of a feature extractor and a temporal integrator (highlighted by red and blue boxes, respectively). They are arbitrary networks that a user can choose depending on the classification problem or available computational resources. The feature extractor and temporal integrator are trained separately because we find that this achieves better performance than the end-to-end approach (also see Appendix D). The feature extractor outputs single-frame features (e.g., outputs from a global average pooling layer), which are the input vectors of the temporal integrator. The output vectors from the temporal integrator are transformed with a fully-connected layer into two-dimensional logits, which are then input to the softmax layer to obtain posterior probabilities. These are used to compute the LLR to run the SPRT (Equation (2)). Note that during the training phase of the feature extractor, the global average pooling layer is followed by a fully-connected layer for binary classification.
How to choose the hyperparameter N? By tuning the hyperparameter N, a user can efficiently boost the model performance depending on the database; in Section 5, we change N to visualize the model performance as a function of N. Here, we provide two ways to choose N. One is to choose N based on the specific time scale, a concept introduced in Appendix D, where we describe in detail how to estimate the best N depending on the database. The other is to use a hyperparameter tuning algorithm, such as Optuna (Akiba et al., 2019), to choose N objectively. Optuna has multiple hyperparameter searching algorithms, the default of which is the Tree-structured Parzen Estimator (Bergstra et al., 2011). Note that tuning N is not computationally expensive, because N is only related to the temporal integrator, not the feature extractor. In fact, the temporal integrator's training speed is much faster than that of the feature extractor: 9 mins/epoch vs. 10 hrs/epoch (N = 49, NVIDIA RTX2080Ti, SiW database).
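As an illustration of how the pieces fit together, here is a minimal NumPy sketch of the TANDEM LLR (Equation (4)) followed by the SPRT stopping rule (Equations (2)-(3)). The array layout of the posteriors, the boundary indexing of the two sums, and the fallback decision when no threshold is crossed are our own assumptions, not the authors' code.

```python
import numpy as np

def tandem_llr(post_joint, post_sub, prior=(0.5, 0.5)):
    # Eq. 4, sketched for arrays of posteriors produced by the temporal
    # integrator: post_joint[s] ~ p(y | x^(s-N), ..., x^(s)) for
    # s = N+1, ..., t and post_sub[s] ~ p(y | x^(s-N), ..., x^(s-1)) for
    # s = N+2, ..., t, each row (p(y=0|.), p(y=1|.)). Returns the running LLR.
    eps = 1e-12
    joint = np.log(post_joint[:, 1] + eps) - np.log(post_joint[:, 0] + eps)
    sub = np.log(post_sub[:, 1] + eps) - np.log(post_sub[:, 0] + eps)
    llr = np.cumsum(joint)
    llr[1:] -= np.cumsum(sub)  # the N-let term enters one step later
    return llr - (np.log(prior[1]) - np.log(prior[0]))

def sprt(llr, a0=3.0, a1=3.0):
    # Eqs. 2-3: stop the first time the LLR leaves (-a0, a1); if no
    # threshold is hit, fall back to the sign of the final LLR.
    for t, v in enumerate(llr):
        if v >= a1:
            return 1, t
        if v <= -a0:
            return 0, t
    return int(llr[-1] >= 0), len(llr) - 1
```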
This work introduces SPRT-TANDEM an algorithm to train a sequential probability ratio test (SPRT) as a neural network. This network is then used to discriminate between two hypotheses as fast as possible (seeing the smallest number of observations in a sequence) while maintaining a certain level of accuracy. The main contribution of this work is to enable Wald's SPRT without actual knowledge of the ratio, learning a neural network to model it.
SP:3120ae529b5b2964470ad055d1f13989f192c961
NOVAS: Non-convex Optimization via Adaptive Stochastic Search for End-to-end Learning and Control
1 INTRODUCTION. Deep learning has experienced a drastic increase in the diversity of neural network architectures, both in terms of proposed structure, as well as in the repertoire of operations that define the interdependencies of its elements. With respect to the latter, a significant amount of attention has been devoted to incorporating optimization blocks or modules operating at some part of the network. This has been motivated by a large number of applications, including meta-learning (Finn et al., 2017; Rusu et al., 2018; Bartunov et al., 2019), differentiable physics simulators (de Avila Belbute-Peres et al., 2018), classification (Amos et al., 2019), GANs (Metz et al., 2016), reinforcement learning with constraints, latent spaces, or safety (Amos & Kolter, 2017; Srinivas et al., 2018; Amos & Yarats, 2019; Cheng et al., 2019; Pereira et al., 2020), model predictive control (Amos et al., 2018; Pereira et al., 2018), as well as tasks relying on the use of energy networks (Belanger et al., 2017; Bartunov et al., 2019), among many others. Local optimization modules lead to nested optimization operations, as they interact with the global, end-to-end training of the network that contains them. (To distinguish between the optimization of the entire network and that of the optimization module, we frequently refer to the former as global or outer-loop optimization and to the latter as local or inner-loop optimization.) Consider some component within the neural network architecture, e.g., a single layer, whose input and output are $x_i \in \mathbb{R}^n$ and $x_{i+1} \in \mathbb{R}^m$, respectively. Within that layer, the input and output are linked via the solution of the following optimization problem:
$$x_{i+1} = \arg\min_{x} F(x; x_i, \theta), \qquad (1)$$
that is, the output $x_{i+1}$ is defined as the solution to an optimization problem for which the input $x_i$ remains temporarily fixed, i.e., acts as a parameter. Here, $F(x; x_i, \theta) : \mathbb{R}^m \times \mathbb{R}^n \times \Theta \to \mathbb{R}$ is a function possibly further parameterized by some subset of the neural network parameters $\theta \in \Theta$. Note that $x$ here is an independent variable which is free to vary. The result of this optimization could potentially also be subject to a set of (input-dependent) constraints, though in this paper we will consider only unconstrained optimization. It is also important to note that, depending on the problem, $F$ can be a given function, or it can itself be represented by a multi-layer neural network (trained by the outer loop), in which case the aforementioned optimization layer consists of multiple sub-layers and is more accurately described as a module rather than a single layer. Examples of this type of optimization are structured prediction energy networks (e.g., Belanger et al. (2017)); another such example is Amos & Kolter (2017), which treats the case of convex $F(\cdot; x_i, \theta)$. In order to facilitate end-to-end learning over the entire network, computing the gradient of its loss function $L$ with respect to $\theta$ will require, during backpropagation, passing the gradient of the module's output $x_{i+1}$ with respect to the parameters $\theta$ and $x_i$. Depending on the nature of the optimization problem under consideration, several procedures have been suggested; among them, particularly appealing is the case of convex optimization (Gould et al., 2016; Johnson et al., 2016; Amos et al.,
2017; Amos & Kolter, 2017), in which the aforementioned gradients can be computed efficiently through an application of the implicit function theorem to a set of optimality conditions, such as the KKT conditions. In the case of non-convex functions, however, obtaining such gradients is not as straightforward; solutions involve either forming and solving a locally convex approximation of the problem, or unrolling gradient descent (Domke, 2012; Metz et al., 2016; Belanger et al., 2017; Finn et al., 2017; Srinivas et al., 2018; Rusu et al., 2018; Foerster et al., 2018; Amos et al., 2018). Unrolling gradient descent approximates the arg min operator with a fixed number of gradient descent iterations during the forward pass and interprets these as an unrolled compute graph that can be differentiated through during the backward pass. One drawback in using this unrolled gradient descent operation, however, is that doing so can lead to over-fitting to the selected gradient descent hyper-parameters, such as learning rate and number of iterations. Recently, Amos & Yarats (2019) demonstrated promising results in alleviating this phenomenon by replacing these iterations of gradient descent with iterations of sampling-based optimization, in particular a differentiable approximation of the cross-entropy method. While still unrolling the graph created by the fixed number of iterations, they showed empirically that no over-fitting to the hyper-parameters occurred, by performing inference on the trained network with altered inner-loop optimization hyper-parameters. Another significant bottleneck in all methods involving graph unrolling is the number of iterations, which has to be kept low to prevent a prohibitively large graph during backprop and to avoid issues in training. Note that in Eq. (1) the variable of optimization is free to vary independently of the network. This is in contrast to many applications involving nested optimization, mainly in the field of meta-learning, in which the inner loop, rather than optimizing a free variable, performs adaptation to an initial value which is supplied to the inner loop by the outer part of the network. For example, MAML (Finn et al., 2017) performs the inner-loop adaptation $\theta \to \theta'$, in which the starting point $\theta$ is not arbitrary (as $x$ is in Eq. (1)) but is supplied by the network. Thus, in the context of adaptation, unrolling the inner-loop graph during backprop is generally necessary to trace the adaptation back to the particular network-supplied initial value. Two notable exceptions are first-order MAML (Finn et al., 2017; Nichol et al., 2018), which ignores second-derivative terms, and implicit MAML (Rajeswaran et al., 2019), which relies on local curvature estimation. In this paper we propose Non-convex Optimization Via Adaptive Stochastic Search (NOVAS), a module for differentiable, non-convex optimization. The backbone of this module is adaptive stochastic search (Zhou & Hu, 2014), a sampling-based method within the field of stochastic optimization. The contributions of our work are as follows: (A). We demonstrate that the NOVAS module does not over-fit to optimization hyper-parameters and offers improved speed and convergence rate over its alternative (Amos & Yarats, 2019). (B). If the inner-loop variable of optimization is free to vary (i.e., the problem fits the definition given by Eq.
(1)), we show that there is no need to unroll the graph during the back-propagation of gradients. The latter advantage is critical, as it drastically reduces the size of the overall end-to-end computation graph, thus facilitating an improved ability to learn with higher convergence rates, improved speed, and reduced memory requirements. Furthermore, it allows us to use a higher number of inner-loop iterations. (C). If the inner loop represents an adaptation to a network-supplied value, as is the case in meta-learning applications, NOVAS may still be used in lieu of the gradient descent rule (though unrolling the graph may be necessary here). Testing NOVAS in such a setting is left for future work. (D). We combine the NOVAS module with the framework of deep FBSDEs, a neural network-based approach to solving nonlinear partial differential equations (PDEs). This combination allows us to solve Hamilton-Jacobi-Bellman (HJB) PDEs of the most general form, i.e., those in which the min operator does not have a closed-form solution, a class of problems that was previously impossible to address due to the non-convexity of the corresponding Hamiltonian. We validate the algorithm on a cart-pole task and demonstrate its scalability on a 101-dimensional continuous-time portfolio selection problem. The code is available at https://github.com/iexarchos/NOVAS.git
2 FURTHER BACKGROUND AND RELATED WORK. Relation to Differentiable Cross-Entropy: Particular importance should be given to Amos & Yarats (2019), since, to the best of our knowledge, it is the first to suggest sampling-based optimization instead of gradient descent, and it features some similarities with our approach. The authors therein propose a differentiable approximation of the cross-entropy method (CEM) (Rubinstein, 2001; De Boer et al., 2005), called differentiable cross-entropy (DCEM). To obtain this approximation, they need to approximate CEM's eliteness threshold operation, which is non-differentiable. This is done by solving an additional, convex optimization problem separately for each inner-loop step (and separately for each sample of $x_i$ in the batch, resulting in a total of $N \times M \times K$ additional convex optimization problems, with N: batch size, M: number of inner-loop iterations, K: number of outer-loop iterations, i.e., training epochs). After CEM has been locally approximated by DCEM, they replace the usual inner-loop gradient descent steps with DCEM steps, and the entire inner-loop optimization graph is unrolled during the backward pass. Our method differs from this approach in the following ways: 1. We employ the already differentiable adaptive stochastic search algorithm, thus not having to solve any additional optimization problem to obtain a differentiable approximation (speed improvement), while also showing some convergence rate improvements; and, most importantly, 2. In the case of inner-loop optimization over an independent variable (e.g., the problem defined by Eq. (1)), we do not unroll the optimization graph, but instead pass the gradients only through the last inner-loop iteration. This drastically reduces its size during backpropagation, increasing speed, reducing memory requirements, and facilitating easier learning.
Sampling-based Optimization: Adaptive stochastic search (Zhou & Hu, 2014) is a sampling-based method within stochastic optimization that transforms the original optimization problem via a probabilistic approximation. The core concept behind this algorithm is approximating the gradient of the objective function by evaluating random perturbations around some nominal value of the independent variable, a concept that also appears under the name Stochastic Variational Optimization and shares many similarities with natural evolution strategies (Bird et al., 2018). Another comparable approach is CEM (Rubinstein, 2001; De Boer et al., 2005). In contrast to adaptive stochastic search, CEM is non-differentiable (due to the eliteness threshold) and the parameters are typically updated de novo in each iteration, rather than as a gradient descent update to the parameter values of the previous iteration. In the case of Gaussian distributions, the difference between CEM and adaptive stochastic search boils down to the following: in adaptive stochastic search, the mean gets updated by calculating the average of all sampled variable values weighted by a typically exponential mapping of their corresponding objective function values, whereas in CEM only the top-k performing values are used, and they are weighted equally. Furthermore, this difference can be made even smaller if one replaces the exponential mapping in the former method with a differentiable (sigmoid) function that approximates the eliteness operation. More details are available in the Appendix.
Deep Learning Approaches for PDEs and FBSDEs: There has been a recent surge in research and literature on applying deep learning to approximate solutions of high-dimensional PDEs. The transition from a PDE formulation to a trainable neural network is done via the concept of a system of Forward-Backward Stochastic Differential Equations (FBSDEs). Specifically, certain PDE solutions are linked to solutions of FBSDEs. Systems of FBSDEs can be interpreted as a stochastic equivalent of a two-point boundary value problem, and can be solved using a suitably defined deep neural network architecture. This is known in the literature as the deep FBSDE approach (Han et al., 2018; Raissi, 2018). While applied to high-dimensional PDEs, the aforementioned results have seen very limited applicability in the field of optimal control. Indeed, the HJB PDE in control theory has a much more complicated structure, and in its general form involves a min operator applied to its Hamiltonian term over the control input. Exploiting certain structures of system dynamics and cost functions that allowed for a closed-form expression for this operator, Exarchos & Theodorou (2018); Exarchos et al. (2018; 2019) developed a framework for control using FBSDEs, which was then translated to a deep neural network setting in Pereira et al. (2019b); Wang et al. (2019). In this work, we incorporate the NOVAS module inside deep FBSDE neural network architectures to account for PDEs lacking a closed-form expression for their min and/or max operators. Thus, we are able to address the most general description of an HJB PDE, in which the corresponding Hamiltonian is nonconvex. More information concerning the deep FBSDE framework can be found in the Appendix.
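To illustrate the inner-loop optimizer, here is a minimal PyTorch sketch of a NOVAS-style adaptive stochastic search step for Eq. (1), with gradients propagated only through the last iteration. The fixed sampling variance, the softmax shaping of the weights, and all hyperparameter defaults are illustrative assumptions rather than the reference implementation.

```python
import torch

def novas_argmin(F, x_init, iters=20, n_samples=100, sigma=1.0, lr=1.0, temp=1.0):
    # Inner-loop minimisation of F for Eq. (1): sample Gaussian
    # perturbations around the current mean, weight the samples by an
    # exponential mapping of their (negative) objective values, and move
    # the mean toward the weighted average. Gradients are enabled only in
    # the final iteration, so the outer loss backpropagates through a
    # single inner-loop step instead of the whole unrolled graph.
    mu = x_init.detach()
    for i in range(iters):
        with torch.set_grad_enabled(i == iters - 1):
            xs = mu + sigma * torch.randn(n_samples, *mu.shape)  # candidates
            vals = torch.stack([F(x) for x in xs])               # objectives
            w = torch.softmax(-vals / temp, dim=0)               # exponential weighting
            w = w.view(-1, *([1] * mu.dim()))                    # broadcastable shape
            mu = mu + lr * (w * (xs - mu)).sum(dim=0)            # mean update
    return mu
```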
This paper aims to present a method that allows efficient learning in neural networks architecture that present optimization blocks. These blocks have the form of x_{i+1} = \arg \min_x F(x, x_i, \theta), and can be thought of as a neural network layer. The addition of this block results in a complex optimization problem, since it presents a multi-level problem. The approach presented in this paper relies on adaptive stochastic search as a differentiable optimization procedure. The authors evaluate the proposed algorithm in a variety of applications, including structured prediction networks and control.
SP:b64d32119a136b5957e85e52c3ab32c27d3c2f3f
Neural Point Process for Forecasting Spatiotemporal Events
1. Introduction. Accurate modeling of spatiotemporal event dynamics is fundamentally important for disaster response (Veen and Schoenberg, 2008), logistics optimization (Safikhani et al., 2018) and social media analysis (Liang et al., 2019). Compared to other sequence data such as texts or time series, spatiotemporal events occur irregularly, with uneven time and space intervals. Discrete-time deep dynamics models such as recurrent neural networks (RNNs) (Hochreiter and Schmidhuber, 1997; Chung et al., 2014) assume events to be evenly sampled. Interpolating an irregularly sampled sequence into a regular sequence can introduce significant biases (Rehfeld et al., 2011). Furthermore, event sequences contain strong spatiotemporal dependencies. The rate of an event depends on the preceding events, as well as the events geographically correlated to it. Spatiotemporal point processes (STPP) (Daley and Vere-Jones, 2007; Reinhart et al., 2018) provide the statistical framework for modeling continuous-time event dynamics. As shown in Figure 1, given the history of the event sequence, an STPP estimates the intensity function that is evolving in space and time. However, traditional statistical methods for estimating STPPs often require strong modeling assumptions and feature engineering, and can be computationally expensive. The machine learning community is observing a growing interest in continuous-time deep dynamics models that can handle irregular time intervals. For example, Neural ODE (Chen et al., 2018) parametrizes the hidden states in an RNN with an ODE. Shukla and Marlin (2018) use a separate network to interpolate between reference time points. Neural temporal point process (TPP) (Mei and Eisner, 2017; Zhang et al., 2020; Zuo et al., 2020) is an exciting area that combines fundamental concepts from temporal point processes with deep learning to model continuous-time event sequences; see a recent review on neural TPP (Shchur et al., 2021). However, most of the existing models only focus on temporal dynamics without considering spatial modeling. In the real world, while time is a unidirectional process (the arrow of time), space extends in multiple directions. This fundamental difference from TPP makes it nontrivial to design a unified STPP model. The naive approach of approximating the intensity function by a deep neural network would lead to an intractable integral computation for the likelihood. Prior research such as Du et al. (2016) discretizes the space as "markers" and uses a marked TPP to classify the events. This approach cannot produce the space-time intensity function. Okawa et al. (2019) model the spatiotemporal density using a mixture of symmetric kernels, which ignores the unidirectional property of time. Chen et al. (2021) propose to model the temporal intensity and spatial density separately with neural ODEs, which is computationally expensive. We propose a simple yet efficient approach to learn STPPs. Our model, Deep Spatiotemporal Point Process (DeepSTPP), marries the principles of spatiotemporal point processes with deep learning. We take a non-parametric approach and model the space-time intensity function as a mixture of kernels. The parameters of the intensity function are governed by a latent stochastic process which captures the uncertainty of the event sequence. The latent process is then inferred via amortized variational inference.
That is, we draw a sample from the variational distribution for every event. We use a Transformer network to parametrize the variational distribution conditioned on the previous events. Compared with existing approaches, our model is non-parametric, hence it does not make assumptions on the parametric form of the distribution. Our approach learns the space-time intensity function jointly, without requiring separate models for the time-intensity function and spatial density as in Chen et al. (2021). Our model is probabilistic by nature and can describe various uncertainties in the data. More importantly, our model enjoys closed-form integration, making it feasible for processing large-scale event datasets. To summarize, our work makes the following key contributions:
• Deep Spatiotemporal Point Process. We propose a novel deep point process model for forecasting unevenly sampled spatiotemporal events. It integrates deep learning with spatiotemporal point processes to learn continuous space-time dynamics.
• Neural Latent Process. We model the space-time intensity function using a nonparametric approach, governed by a latent stochastic process. We use amortized variational inference to perform inference on the latent process conditioned on the previous events.
• Effectiveness. We demonstrate our model on many synthetic and real-world spatiotemporal event forecasting tasks, where it achieves superior performance in accuracy and efficiency. We also derive and implement efficient algorithms for simulating STPPs.
2. Methodology. We first introduce the background of spatiotemporal point processes, and then describe our approach to learning the underlying spatiotemporal event dynamics.
2.1. Background on Spatiotemporal Point Process. Spatiotemporal Point Process. A spatiotemporal point process (STPP) models the number of events $N(\mathcal{S} \times (a, b])$ that occur in the Cartesian product of the spatial domain $\mathcal{S} \subseteq \mathbb{R}^2$ and the time interval $(a, b]$. It is characterized by a non-negative space-time intensity function given the history $\mathcal{H}_t := \{(s_1, t_1), \dots, (s_n, t_n)\}_{t_n \le t}$:
$$\lambda^*(s, t) := \lim_{\Delta s \to 0,\, \Delta t \to 0} \frac{\mathbb{E}\,[\,N(B(s, \Delta s) \times (t, t+\Delta t)) \mid \mathcal{H}_t\,]}{|B(s, \Delta s)|\,\Delta t} \qquad (1)$$
which is the probability of finding an event in an infinitesimal time interval $(t, t+\Delta t]$ and an infinitesimal spatial ball $S = B(s, \Delta s)$ centered at location $s$.
Example 1: Spatiotemporal Hawkes process (STH). The spatiotemporal Hawkes (or self-exciting) process assumes every past event has an additive, positive, decaying, and spatially local influence over future events. Such a pattern resembles neuronal firing and earthquakes. It is characterized by the following intensity function (Reinhart et al., 2018):
$$\lambda^*(s, t) := \mu\, g_0(s) + \sum_{i:\, t_i < t} g_1(t, t_i)\, g_2(s, s_i), \quad \mu > 0 \qquad (2)$$
where $g_0(s)$ is the probability density of a distribution over $\mathcal{S}$, $g_1$ is the triggering kernel, often implemented as the exponential decay function $g_1(\Delta t) := \alpha\exp(-\beta\Delta t)$, $\alpha, \beta > 0$, and $g_2(s, s_i)$ is the density of an unimodal distribution over $\mathcal{S}$ centered at $s_i$.
Example 2: Spatiotemporal Self-Correcting process (STSC). The self-correcting spatiotemporal point process (Isham and Westcott, 1979) assumes that the background intensity increases with a varying speed at different locations, and that the arrival of each event reduces the intensity nearby. The STSC can model certain regular event sequences, such as an alternating home-to-work travel sequence.
It has the following intensity function :
$$\lambda^*(s,t) = \mu \exp\Big( g_0(s)\,\beta t - \sum_{i: t_i < t} \alpha\, g_2(s, s_i) \Big), \quad \alpha, \beta, \mu > 0 \quad (3)$$
Here $g_0(s)$ is the density of a distribution over $S$ , and $g_2(s, s_i)$ is the density of a unimodal distribution over $S$ centered at location $s_i$ . Maximum likelihood estimation . Given a history of $n$ events $\mathcal{H}_t$ , the joint log-likelihood of the observed events under an STPP is :
$$\log p(\mathcal{H}_t) = \sum_{i=1}^{n} \log \lambda^*(s_i, t_i) - \int_{S} \int_{0}^{t} \lambda^*(u, \tau)\, du\, d\tau \quad (4)$$
Here , the space-time intensity function $\lambda^*(s,t)$ plays a central role . Maximum likelihood estimation seeks the $\lambda^*(s,t)$ that optimizes Eqn . ( 4 ) on data . Predictive distribution . Denote the probability density function ( PDF ) of the STPP as $f(s,t\,|\,\mathcal{H}_t)$ , which represents the conditional probability that the next event will occur at location $s$ and time $t$ , given the history . The PDF is closely related to the intensity function :
$$f(s,t\,|\,\mathcal{H}_t) = \lambda^*(s,t)\,\big(1 - F^*(t\,|\,\mathcal{H}_t)\big) = \lambda^*(s,t)\, \exp\Big( -\int_{S} \int_{t_n}^{t} \lambda^*(u, \tau)\, d\tau\, du \Big) \quad (5)$$
where $F^*$ is the cumulative distribution function ( CDF ) ; see derivations in Appendix A.1 . This means the intensity function specifies the expected number of events in a region conditional on the past . The predicted time of the next event is the expected value of the predictive distribution for time $f^\star(t)$ over the entire spatial domain :
$$\mathbb{E}[t_{n+1}\,|\,\mathcal{H}_t] = \int_{t_n}^{\infty} t \int_{S} f^*(s,t)\, ds\, dt = \int_{t_n}^{\infty} t\, \exp\Big(-\int_{t_n}^{t} \lambda^*(\tau)\, d\tau\Big)\, \lambda^*(t)\, dt$$
Similarly , the predicted location of the next event evaluates to :
$$\mathbb{E}[s_{n+1}\,|\,\mathcal{H}_t] = \int_{S} s \int_{t_n}^{\infty} f^*(s,t)\, dt\, ds = \int_{t_n}^{\infty} \exp\Big(-\int_{t_n}^{t} \lambda^*(\tau)\, d\tau\Big) \int_{S} s\, \lambda^*(s,t)\, ds\, dt$$
Unfortunately , Eqn . ( 4 ) is generally intractable : it requires either strong modeling assumptions or expensive Monte Carlo sampling . We propose the DeepSTPP model to simplify the learning . 2.2 . Deep Spatiotemporal Point Process ( DeepSTPP ) . We propose DeepSTPP , a simple and efficient approach for learning the space-time event dynamics . Our model ( 1 ) introduces a latent process to capture the uncertainty , ( 2 ) parametrizes the latent process with deep neural networks to increase model expressivity , and ( 3 ) approximates the intensity function with a set of spatial and temporal kernel functions . Neural latent process . Given a sequence of $n$ events , we wish to model the conditional density of observing the next event given the history , $f(s,t\,|\,\mathcal{H}_t)$ . We introduce a latent process to capture the uncertainty of the event history and infer the latent process with amortized variational inference . The latent process dictates the parameters of the space-time intensity function . We sample from the latent process using the re-parameterization trick ( Kingma and Welling , 2013 ) . As shown in Figure 2 , given the event sequence $\mathcal{H}_t = \{(s_1, t_1), \dots, (s_n, t_n)\}$ , $t_n \leq t$ , we encode the entire sequence into a high-dimensional embedding . We use positional encoding to encode the sequence order . To capture the stochasticity in the temporal dynamics , we introduce a latent process $z = (z_1, \cdots, z_n)$ for the entire sequence . We assume the latent process follows a multivariate Gaussian at each time step :
$$z_i \sim q_\phi(z_i\,|\,\mathcal{H}_t) = \mathcal{N}\big(\mu, \mathrm{Diag}(\sigma)\big) \quad (6)$$
where the mean $\mu$ and covariance $\mathrm{Diag}(\sigma)$ are the outputs of the embedding neural network . In our implementation , we found using a Transformer ( Vaswani et al. , 2017 ) with sinusoidal positional encoding to be beneficial .
The positions to be encoded are the normalized event times instead of the index numbers , to account for the unequal time intervals . Recently , Zuo et al . ( 2020 ) also demonstrated that the Transformer enjoys better performance for learning the intensity in temporal point processes . Non-parametric model . We take a non-parametric approach and model the space-time intensity function $\lambda^*(s,t)$ as :
$$\lambda^*(s,t\,|\,z) = \sum_{i=1}^{n+J} w_i\, k_s(s, s_i; \gamma_i)\, k_t(t, t_i; \beta_i) \quad (7)$$
Here $w_i(z), \gamma_i(z), \beta_i(z)$ are the parameters for each event , conditioned on the latent process . Specifically , $w_i$ represents the non-negative intensity magnitude , implemented with a softplus activation function . $k_s(\cdot,\cdot)$ and $k_t(\cdot,\cdot)$ are the spatial and temporal kernel functions , respectively . We parametrize both kernel functions as normalized RBF kernels :
$$k_s(s, s_i) = \alpha^{-1} \exp\big(-\gamma_i \lVert s - s_i \rVert\big), \qquad k_t(t, t_i) = \exp\big(-\beta_i \lVert t - t_i \rVert\big) \quad (8)$$
where the bandwidth parameter $\gamma_i$ controls an event 's influence over the spatial domain , the decay rate $\beta_i$ represents the event 's influence over time , and $\alpha = \int_S \exp(-\gamma_i \lVert s - s_i \rVert)\, ds$ is the normalization constant . We use decoder networks to generate the parameters $\{w_i, \gamma_i, \beta_i\}$ from $z$ separately , as shown in Figure 2 . Each decoder is a 4-layer feed-forward network . We use a softplus activation function to ensure $w_i$ and $\gamma_i$ are positive . The decay rate $\beta_i$ can be any real number , such that an event can have constant or increasing triggering intensity over time . In addition to the $n$ historical events , we also randomly sample $J$ representative points from the spatial domain to approximate the background intensity . This accounts for the influence of unobserved events in the background , which varies across absolute locations . The model design in ( 7 ) enjoys closed-form integration , which gives the conditional PDF as :
$$f(s,t\,|\,\mathcal{H}_t, z) = \lambda^*(s,t\,|\,z)\, \exp\Big( -\sum_{i=1}^{n+J} \frac{w_i}{\beta_i} \big[ k_t(t_n, t_i) - k_t(t, t_i) \big] \Big) \quad (9)$$
See the derivation details in Appendix A.2 . DeepSTPP circumvents the integration of the intensity function and enjoys fast inference when forecasting future events . In contrast , NSTPP ( Chen et al. , 2021 ) is relatively inefficient , as its ODE solver also requires additional numerical integration . Parameter learning . Due to the latent process , the posterior becomes intractable . Instead , we use amortized inference by optimizing the evidence lower bound ( ELBO ) of the likelihood . In particular , given the event history $\mathcal{H}_t$ , the conditional log-likelihood of the next event satisfies :
$$\log p(s,t\,|\,\mathcal{H}_t) \geq \mathbb{E}_{q_\phi}\big[\log p_\theta(s,t\,|\,\mathcal{H}_t, z)\big] - \mathrm{KL}\big( q_\phi(z\,|\,\mathcal{H}_t)\,\|\,p(z) \big) \quad (10)$$
$$= \mathbb{E}_{q_\phi}\Big[ \log \lambda^*(s,t\,|\,z) - \int_{t_n}^{t} \lambda^*(\tau)\, d\tau \Big] - \mathrm{KL}(q\,\|\,p) \quad (11)$$
where $\phi$ represents the parameters of the encoder network and $\theta$ the parameters of the decoder network . $p(z)$ is the prior distribution , which we assume to be Gaussian . $\mathrm{KL}(\cdot\|\cdot)$ is the Kullback–Leibler divergence between two distributions . We can optimize the objective function in Eqn . ( 11 ) w.r.t . the parameters $\phi$ and $\theta$ using back-propagation .
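To make the closed-form property concrete, the following is a minimal NumPy sketch of Eqns. (7)–(9); the array shapes and the handling of the spatial normalizer α are our assumptions (on the unbounded plane, α = 2π/γ² for the kernel in Eqn. (8)), not the paper's released code.

```python
import numpy as np

def intensity(s, t, locs, times, w, gamma, beta):
    """Kernel-mixture space-time intensity lambda*(s, t | z) as in Eqn. (7).
    locs: (n+J, 2) event/representative locations; times: (n+J,) event times;
    w, gamma, beta: (n+J,) decoded parameters."""
    alpha = 2.0 * np.pi / gamma**2                       # assumed normalizer on R^2
    ks = np.exp(-gamma * np.linalg.norm(s - locs, axis=1)) / alpha  # spatial kernel, Eqn. (8)
    kt = np.exp(-beta * np.abs(t - times))                          # temporal kernel, Eqn. (8)
    return float(np.sum(w * ks * kt))

def next_event_pdf(s, t, t_n, locs, times, w, gamma, beta):
    """Closed-form conditional PDF f(s, t | H_t, z) from Eqn. (9);
    the survival term needs no numerical integration."""
    kt_n = np.exp(-beta * np.abs(t_n - times))
    kt_t = np.exp(-beta * np.abs(t - times))
    log_survival = -np.sum((w / beta) * (kt_n - kt_t))   # closed-form integral of the intensity
    return intensity(s, t, locs, times, w, gamma, beta) * np.exp(log_survival)
```

This illustrates why inference is fast: evaluating the PDF at any (s, t) costs only O(n+J) kernel evaluations, with no ODE solver or Monte Carlo integral.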
This work studies a DNN-based spatiotemporal point process model. It points out the drawback of most existing DNN-based point process models: their inability to incorporate spatial information. Although in statistics the spatiotemporal point process is capable of capturing events in continuous space and time, such methods are computationally expensive. Theoretical analysis is provided, and experimental comparisons are conducted on synthetic and real data.
SP:401998f890d05e3c22e89754ed6b64403e1a6ead
Discovering Parametric Activation Functions
1 INTRODUCTION . The rectified linear unit ( ReLU ( x ) = max { x , 0 } ) is the most commonly used activation function in modern deep learning architectures ( Nair & Hinton , 2010 ) . When introduced , it offered substantial improvements over the previously popular tanh and sigmoid activation functions . Because ReLU is unbounded as x→∞ , it is less susceptible to vanishing gradients than tanh and sigmoid are . It is also simple to calculate , which leads to faster training times . Activation function design continues to be an active area of research , and a number of novel activation functions have been introduced since ReLU , each with different properties ( Nwankpa et al. , 2018 ) . In certain settings , these novel activation functions lead to substantial improvements in accuracy over ReLU , but the gains are often inconsistent across tasks . Because of this inconsistency , ReLU is still the most commonly used : it is reliable , even though it may be suboptimal . The improvements and inconsistencies are due to a gradually evolving understanding of what makes an activation function effective . For example , Leaky ReLU ( Maas et al. , 2013 ) allows a small amount of gradient information to flow when the input is negative . It was introduced to prevent ReLU from creating dead neurons , i.e . those that are stuck at always outputting zero . On the other hand , the ELU activation function ( Clevert et al. , 2015 ) contains a negative saturation regime to control the forward propagated variance . These two very different activation functions have seemingly contradicting properties , yet each has proven more effective than ReLU in various tasks . There are also often complex interactions between an activation function and other neural network design choices , adding to the difficulty of selecting an appropriate activation function for a given task . For example , Ramachandran et al . ( 2018 ) warned that the scale parameter in batch normalization ( Ioffe & Szegedy , 2015 ) should be set when training with the Swish activation function ; Hendrycks & Gimpel ( 2016 ) suggested using an optimizer with momentum when using GELU ; Klambauer et al . ( 2017 ) introduced a modification of dropout ( Hinton et al. , 2012 ) called alpha dropout to be used with SELU . These results suggest that significant gains are possible by designing the activation function properly for a network and task , but that it is difficult to do so manually . This paper presents an approach to automatic activation function design . The approach is inspired by genetic programming ( Koza , 1992 ) , which describes techniques for evolving computer programs to solve a particular task . In contrast with previous studies ( Bingham et al. , 2020 ; Ramachandran et al. , 2018 ; Liu et al. , 2020 ; Basirat & Roth , 2018 ) , this paper focuses on automatically discovering activation functions that are parametric . Evolution discovers the general form of the function , while gradient descent optimizes the parameters of the function during training . The approach , called PANGAEA ( Parametric ActivatioN functions Generated Automatically by an Evolutionary Algorithm ) , discovers general activation functions that improve performance overall over previously proposed functions . It also produces specialized functions for different architectures , such as Wide ResNet , ResNet , and Preactivation ResNet , that perform even better than the general functions , demonstrating its ability to customize activation functions to architectures . 
2 RELATED WORK . Prior work in automatic activation function discovery includes that of Ramachandran et al . ( 2018 ) , who used reinforcement learning to design novel activation functions . They discovered multiple functions , but analyzed just one in depth : Swish ( x ) = x · σ ( x ) . Of the top eight functions discovered , only Swish and max { x , σ ( x ) } consistently outperformed ReLU across multiple tasks , suggesting that improvements are possible but often task specific . Bingham et al . ( 2020 ) used evolution to discover novel activation functions . Whereas their functions had a fixed graph structure , PANGAEA utilizes a flexible search space that implements activation functions as arbitrary computation graphs . PANGAEA also includes more powerful mutation operations , and a function parameterization approach that makes it possible to further refine functions through gradient descent . Liu et al . ( 2020 ) evolved normalization-activation layers . They searched for a computation graph that replaced both batch normalization and ReLU in multiple neural networks . They argued that the inherent nonlinearity of the discovered layers precluded the need for any explicit activation function . However , experiments in this paper show that carefully designed parametric activation functions can in fact be a powerful augmentation to existing deep learning models . Finally , Basirat & Roth ( 2018 ) used a genetic algorithm to discover task-specific piecewise activation functions . They showed that different functions are optimal for different tasks . However , the discovered activation functions did not outperform ELiSH and HardELiSH , two hand-designed activation functions proposed in the same paper ( Basirat & Roth , 2018 ) . The larger search space in PANGAEA affords evolution extra flexibility in designing activation functions , while the trainable parameters give customizability to the network itself , leading to consistent , significant improvement . 3 THE PANGAEA METHOD . 3.1 REPRESENTING AND MODIFYING ACTIVATION FUNCTIONS . Activation functions are represented as computation graphs in which each node is a unary or a binary operator ( Table 1 ) . The activation functions are implemented in TensorFlow ( Abadi et al. , 2016 ) , and safe operator implementations are chosen when possible ( e.g . the binary operator x1/x2 is implemented as tf.math.divide_no_nan , which returns 0 if x2 = 0 ) . The operators in Table 1 were chosen to create a large and expressive search space that contains activation functions unlikely to be discovered by hand . Operators that are periodic ( e.g . sin ( x ) ) and operators that contain repeated asymptotes were not included ; in preliminary experiments they often caused training instability . All of the operators have domain R , making it possible to compose them arbitrarily . PANGAEA begins with an initial population of P random activation functions . Each function is either of the form f ( x ) = unary1 ( unary2 ( x ) ) or f ( x ) = binary ( unary1 ( x ) , unary2 ( x ) ) , as shown in Figure 1 . Both forms are equally likely , and the unary and binary operators are also selected uniformly at random . Previous work has suggested that it is difficult to discover high-performing activation functions that have complicated computation graphs ( Bingham et al. , 2020 ) . The computation graphs in Figure 1 thus represent the simplest non-trivial computation graphs with and without a binary operator .
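To make the initialization concrete, below is a minimal Python sketch of sampling one of the two initial forms; the operator lists are an illustrative subset of Table 1, and the nested-tuple encoding of the computation graph is our own assumption, not the paper's implementation.

```python
import random

# Illustrative subsets of the unary and binary operators in Table 1 (assumed names).
UNARY = ['relu', 'tanh', 'erf', 'abs', 'selu', 'swish']
BINARY = ['add', 'sub', 'mul', 'div', 'max', 'min']

def random_initial_function():
    """Sample one of the two initial forms of Figure 1, each with probability 1/2."""
    if random.random() < 0.5:
        # f(x) = unary1(unary2(x))
        return ('unary', random.choice(UNARY),
                ('unary', random.choice(UNARY), 'x'))
    # f(x) = binary(unary1(x), unary2(x))
    return ('binary', random.choice(BINARY),
            ('unary', random.choice(UNARY), 'x'),
            ('unary', random.choice(UNARY), 'x'))
```

A population is then just `[random_initial_function() for _ in range(P)]`; the tree encoding makes the insert/remove/change/regenerate mutations described next straightforward to implement as tree edits.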
During the search , all ReLU activation functions in a given neural network are replaced with a candidate activation function . No other changes to the network or training setup are made . The network is trained on the dataset , and the activation function is assigned a fitness score equal to the network 's accuracy on the validation set . Given a parent activation function , a child activation function is created by applying one of four possible mutations ( Figure 2 ) . Other possible evolutionary operators like crossover are not used in this paper . All mutations are equally likely , with two special cases . If a remove mutation is selected for an activation function with just one node , a change mutation is applied instead . Additionally , if an activation function with greater than seven nodes is selected for mutation , the mutation is a remove mutation , in order to reduce bloat . Insert In an insert mutation , one operator in the search space is selected uniformly at random . This operator is placed on a random edge of a parent activation function graph . In Figure 2b , the unary operator Swish ( x ) is inserted at the edge connecting the output of tanh ( x ) to the input of x1 + x2 . After mutating , the parent activation function $(\tanh(x) + |\mathrm{erf}(x)|)^2$ produces the child activation function $(\mathrm{Swish}(\tanh(x)) + |\mathrm{erf}(x)|)^2$ . If a binary operator is randomly chosen for the insertion , the incoming input value is assigned to the variable x1 . If the operator is addition or subtraction , the input to x2 is set to 0 . If the operator is multiplication , division , or exponentiation , the input to x2 is set to 1 . Finally , if the operator is the maximum or minimum operator , the input to x2 is a copy of the input to x1 . When a binary operator is inserted into a computation graph , the activation function computed remains unchanged . However , the structure of the computation graph is modified and can be further altered by future mutations . Remove In a remove mutation , one node is selected uniformly at random and deleted . The node 's input is rewired to its output . If the removed node is binary , one of the two inputs is chosen at random and is deleted . The other input is kept . In Figure 2c , the addition operator is removed from the parent activation function . The two inputs to addition , tanh ( x ) and |erf ( x ) | , cannot both be kept . By chance , tanh ( x ) is discarded , resulting in the child activation function $|\mathrm{erf}(x)|^2$ . Change To perform a change mutation , one node in the computation graph is selected at random and replaced with another operator from the search space , also uniformly at random . Unary operators are always replaced with unary operators , and binary operators with binary operators . Figure 2d shows how changing addition to multiplication produces the activation function $(\tanh(x) \cdot |\mathrm{erf}(x)|)^2$ . Regenerate In a regenerate mutation , every operator in the computation graph is replaced with another operator from the search space . As with change mutations , unary operators are replaced with unary operators , and binary operators with binary operators . Although every node in the graph is changed , the overall structure of the computation graph remains the same . Regenerate mutations are useful for increasing exploration , and are similar in principle to burst mutation and delta coding ( Gomez & Miikkulainen , 2003 ; Whitley et al. , 1991 ) .
Figure 2e shows the child activation function −max { 0 , tanh ( SELU ( x ) ) } , which is quite different from the parent function in Figure 2a . Parameterization of Activation Functions After mutation ( or random initialization ) , activation functions are parameterized ( Figure 3 ) . A value k ∈ { 0 , 1 , 2 , 3 } is chosen uniformly at random , and k edges of the activation function graph are randomly selected . Multiplicative per-channel parameters are inserted at these edges and initialized to one . Whereas evolution is well suited for discovering the general form of the activation function in a discrete , structured search space , parameterization makes it possible to fine-tune the function using gradient descent . The function parameters are updated at every epoch during backpropagation , resulting in different activation functions in different stages of training . As the parameters are per-channel , the process creates different activation functions at different locations in the neural network . Thus , parameterization gives neural networks additional flexibility to customize activation functions .
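As an illustration of the parameterization step, here is a hedged PyTorch sketch (the paper's own implementation is in TensorFlow) of a per-channel multiplicative parameter initialized to one and inserted at the input edge of a hypothetical function f(x) = Swish(tanh(x)); all names are illustrative, not PANGAEA's code.

```python
import torch
import torch.nn as nn

class ParametricActivation(nn.Module):
    """Per-channel multiplicative parameter k (initialized to 1) inserted at one edge
    of the computation graph; k is updated by backpropagation at every epoch, so the
    effective activation function changes across training and across channels."""
    def __init__(self, num_channels):
        super().__init__()
        self.k = nn.Parameter(torch.ones(num_channels))  # one parameter per channel

    def forward(self, x):                      # x: (batch, channels, H, W)
        z = torch.tanh(self.k.view(1, -1, 1, 1) * x)     # parameterized input edge
        return z * torch.sigmoid(z)                      # Swish(z) = z * sigmoid(z)
```

Because k is per-channel, two channels in the same layer can end training with visibly different activation shapes, which is the flexibility the paragraph above describes.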
The authors propose to search for activation functions with regularized evolution, an evolutionary algorithm proposed by Real et al. Various mutations are proposed that allow exploring a larger search space than prior work. In particular, a mutation is added which introduces trainable parameters into the activation function. The discovered activation functions are compared on three different architectures against several state-of-the-art activation functions.
SP:510133bddf8cd65c97348e4a8161009fc1d791e0
Efficient Competitive Self-Play Policy Optimization
1 INTRODUCTION . Reinforcement learning ( RL ) from self-play has drawn tremendous attention over the past few years . Empirical successes have been observed in several challenging tasks , including Go ( Silver et al. , 2016 ; 2017 ; 2018 ) , simulated hide-and-seek ( Baker et al. , 2020 ) , simulated sumo wrestling ( Bansal et al. , 2017 ) , Capture the Flag ( Jaderberg et al. , 2019 ) , Dota 2 ( Berner et al. , 2019 ) , StarCraft II ( Vinyals et al. , 2019 ) , and poker ( Brown & Sandholm , 2019 ) , to name a few . During RL from self-play , the learner collects training data by competing with an opponent selected from its past self or an agent population . Self-play presumably creates an auto-curriculum for the agents to learn at their own pace . At each iteration , the learner always faces an opponent that is comparable in strength to itself , allowing continuous improvement . The way the opponents are selected often follows human-designed heuristic rules in prior work . For example , AlphaGo ( Silver et al. , 2016 ) always competes with the latest agent , while the later generation AlphaGo Zero ( Silver et al. , 2017 ) and AlphaZero ( Silver et al. , 2018 ) generate self-play data with the maintained best historical agent . In specific tasks , such as OpenAI 's sumo wrestling , competing against a randomly chosen historical agent leads to the emergence of more diverse behaviors ( Bansal et al. , 2017 ) and more stable training than against the latest agent ( Al-Shedivat et al. , 2018 ) . In population-based training ( Jaderberg et al. , 2019 ; Liu et al. , 2019 ) and AlphaStar ( Vinyals et al. , 2019 ) , an elite or random agent is picked from the agent population as the opponent . Unfortunately , these rules may be inefficient and sometimes ineffective in practice , since they do not necessarily enjoy last-iterate convergence to the “ average-case optimal ” solution even in tabular matrix games . In fact , in the simple Matching Pennies game , self-play with the latest agent fails to converge and falls into an oscillating behavior , as shown in Sec . 5 . In this paper , we develop an algorithm that adopts a principle-derived opponent-selection rule to alleviate some of the issues mentioned above . This requires clarifying first what the solution of self-play RL should be . From the game-theoretical perspective , the Nash equilibrium is a fundamental solution concept that characterizes the desired “ average-case optimal ” strategies ( policies ) . When each player assumes other players also play their equilibrium strategies , no one in the game can gain more by unilaterally deviating to another strategy . Nash , in his seminal work ( Nash , 1951 ) , established the existence of a mixed-strategy Nash equilibrium for any finite game . Thus solving for a mixed-strategy Nash equilibrium is a reasonable goal of self-play RL . We consider the particular case of two-player zero-sum games as the model for competitive self-play RL environments . In this case , the Nash equilibrium is the same as the ( global ) saddle point and as the solution of the minimax program $\min_{x \in X} \max_{y \in Y} f(x, y)$ . We denote $x , y$ as the strategy profiles ( in RL terminology , policies ) and $f$ as the loss for $x$ or utility/reward for $y$ . A saddle point $(x^*, y^*) \in X \times Y$ , where $X , Y$ are the sets of all possible mixed strategies ( stochastic policies ) of the two players , satisfies the following key property :
$$f(x^*, y) \leq f(x^*, y^*) \leq f(x, y^*), \quad \forall x \in X,\ \forall y \in Y \quad (1)$$
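As a quick sanity check of the saddle property in Eq. (1), the following NumPy sketch verifies it numerically for the Matching Pennies game mentioned above, where the uniform mixed strategy is the equilibrium; the payoff convention (f is Player 2's utility, Player 1 receives −f) follows the text, and the tolerance constants are illustrative.

```python
import numpy as np

# Matching Pennies payoff for Player 2; Player 1 receives -f.
A = np.array([[ 1., -1.],
              [-1.,  1.]])
f = lambda x, y: float(x @ A @ y)

x_star = y_star = np.array([0.5, 0.5])    # uniform mixed strategies (the equilibrium)
rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.dirichlet([1., 1.])           # arbitrary mixed strategies to test against
    y = rng.dirichlet([1., 1.])
    # f(x*, y) <= f(x*, y*) <= f(x, y*) holds (with equality, all terms are 0)
    assert f(x_star, y) <= f(x_star, y_star) + 1e-12 <= f(x, y_star) + 2e-12
```

For this game every term equals zero, which is exactly why no unilateral deviation helps either player at the equilibrium.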
Connections to the saddle problem and game theory inspire us to borrow ideas from the abundant literature on finding saddle points in the optimization field ( Arrow et al. , 1958 ; Korpelevich , 1976 ; Kallio & Ruszczynski , 1994 ; Nedić & Ozdaglar , 2009 ) and on finding equilibria in the game theory field ( Zinkevich et al. , 2008 ; Brown , 1951 ; Singh et al. , 2000 ) . One particular class of methods , i.e. , the perturbation-based subgradient methods for finding saddles ( Korpelevich , 1976 ; Kallio & Ruszczynski , 1994 ) , is especially appealing . This class of methods builds directly upon the inequality properties in Eq . 1 , and has several advantages : ( 1 ) Unlike some algorithms that require knowledge of the game dynamics ( Silver et al. , 2016 ; 2017 ; Nowé et al. , 2012 ) , it requires only subgradients ; thus , it is easy to adapt to policy optimization with estimated policy gradients . ( 2 ) For convex-concave functions , it is guaranteed to converge in its last iterate instead of an average iterate , hence alleviating the need to compute any historical averages as in Brown ( 1951 ) ; Singh et al . ( 2000 ) ; Zinkevich et al . ( 2008 ) , which can get complicated when neural nets are involved ( Heinrich & Silver , 2016 ) . ( 3 ) Most importantly , it prescribes a simple principled way to adversarially choose self-play opponents , which can be naturally instantiated with a concurrently-trained agent population . To summarize , we apply ideas from the perturbation-based methods of classical saddle point optimization to the model-free self-play RL regime . This results in a novel population-based policy gradient method with a principled adversarial opponent-selection rule . Analogous to the standard model-free RL setting , we assume only “ naive ” players ( Jafari et al. , 2001 ) , for whom the game dynamics are hidden and only the rewards for their own actions are revealed . This enables broader applicability to problems with mismatched or unknown game dynamics than many existing algorithms ( Silver et al. , 2016 ; 2017 ; Nowé et al. , 2012 ) . In Sec . 4 , we provide an approximate convergence theorem for convex-concave games as a sanity check . Sec . 5 shows extensive experimental results favoring our algorithm 's effectiveness in several games , including matrix games , grid-world soccer , a board game , and a challenging simulated robot sumo game . Our method demonstrates higher per-agent sample efficiency than baseline methods with alternative opponent-selection rules . Our trained agents also outperform the baseline agents on average in competitions . 2 RELATED WORK . Reinforcement learning trains a single agent to maximize the expected return in an environment ( Sutton & Barto , 2018 ) . Multiagent reinforcement learning ( MARL ) , of which the two-agent case is a special case , concerns multiple agents taking actions in the same environment ( Littman , 1994 ) . Self-play is a training paradigm to generate data for MARL and has led to great successes , achieving superhuman performance in several domains ( Tesauro , 1995 ; Silver et al. , 2016 ; Brown & Sandholm , 2019 ) . Applying RL algorithms naively as independent learners in MARL sometimes produces strong agents ( Tesauro , 1995 ) but not always . People have studied ways to extend RL algorithms specifically to MARL , e.g. , minimax-Q ( Littman , 1994 ) , Nash-Q ( Hu & Wellman , 2003 ) , WoLF-PG ( Bowling & Veloso , 2002 ) , etc .
However , most of these methods are designed for tabular RL only , and are therefore not readily applicable to continuous state-action spaces or complex policy functions where gradient-based policy optimization methods are preferred . Recently , Bai & Jin ( 2020 ) , Lee et al . ( 2020 ) and Zhang et al . ( 2020 ) provide theoretical regret or convergence analyses under tabular or other restricted self-play settings , which complement our empirical effort . There are algorithms developed from the game theory and online learning perspective ( Lanctot et al. , 2017 ; Nowé et al. , 2012 ; Cardoso et al. , 2019 ) , notably Tree search , Fictitious self-play ( Brown , 1951 ) , Regret minimization ( Jafari et al. , 2001 ; Zinkevich et al. , 2008 ) , and Mirror descent ( Mertikopoulos et al. , 2019 ; Rakhlin & Sridharan , 2013 ) . Tree search such as minimax and alpha-beta pruning is particularly effective in small-state games . Monte Carlo Tree Search ( MCTS ) is also effective in Go ( Silver et al. , 2016 ) . However , Tree search requires learners to know ( or at least learn ) the game dynamics . The others typically require maintaining historical quantities . In Fictitious play , the learner best-responds to a historical average opponent , and the average strategy converges . Similarly , the total historical regrets in all ( information ) states are maintained in ( counterfactual ) regret minimization ( Zinkevich et al. , 2008 ) . Furthermore , most of those algorithms are designed only for discrete state-action games . Special care has to be taken with neural net function approximators ( Heinrich & Silver , 2016 ) . On the contrary , our method does not require the complicated computation of averaging neural nets , and is readily applicable to continuous environments . In two-player zero-sum games , the Nash equilibrium coincides with the saddle point . This enables the techniques developed for finding saddle points . While some saddle-point methods also rely on time averages ( Nedić & Ozdaglar , 2009 ) , a class of perturbation-based gradient methods is known to converge under mild convex-concave assumptions for deterministic functions ( Kallio & Ruszczynski , 1994 ; Korpelevich , 1976 ; Hamm & Noh , 2018 ) . We develop a sampling version of them for stochastic RL objectives , which leads to a more principled and effective way of choosing opponents in self-play . Our adversarial opponent-selection rule bears a resemblance to Gleave et al . ( 2019 ) . However , our goal is to develop an effective self-play RL algorithm , while Gleave et al . ( 2019 ) aim at attacking deep self-play learned policies . A recent work by Prajapat et al . ( 2020 ) tackles the self-play policy optimization problem differently from ours by employing a bilinear approximation to the game . Finally , although the algorithm presented here builds upon policy gradient , the same framework may be extended to other RL algorithms such as MCTS thanks to a recent interpretation of MCTS as policy optimization ( Grill et al. , 2020 ) . Our way of leveraging Eq . 1 in a population may potentially work beyond gradient-based RL , e.g. , in training generative adversarial networks similarly to Hamm & Noh ( 2018 ) due to the same minimax formulation . 3 METHOD . Classical game theory defines a two-player zero-sum game as a tuple $(X, Y, f)$ where $X , Y$ are the sets of possible strategies of Players 1 and 2 respectively , and $f : X \times Y \to \mathbb{R}$ is a mapping from a pair of strategies to a real-valued utility/reward for Player 2 .
The game is zero-sum ( fully competitive ) , so Player 1 's reward is $-f$ . This is a special case of the Stochastic Game formulation for multiagent RL ( Shapley , 1953 ) , which is itself an extension of Markov Decision Processes ( MDPs ) . We consider mixed strategies induced by stochastic policies $\pi_x$ and $\pi_y$ . The policies can be parameterized functions , in which case $X , Y$ are the sets of all possible policy parameters . Denote $a_t$ as the action of Player 1 and $b_t$ as the action of Player 2 at time $t$ , and let $T$ be the time limit of the game ; then the stochastic payoff $f$ writes as
$$f(x, y) = \mathbb{E}_{\,a_t \sim \pi_x,\ b_t \sim \pi_y,\ s_{t+1} \sim P(\cdot\,|\,s_t, a_t, b_t)} \Big[ \sum_{t=0}^{T} \gamma^t\, r(s_t, a_t, b_t) \Big] . \quad (2)$$
The state sequence $\{s_t\}_{t=0}^{T}$ follows the transition dynamic $P(s_{t+1}\,|\,s_t, a_t, b_t)$ . Actions are sampled according to the action distributions $\pi_x(\cdot\,|\,s_t)$ and $\pi_y(\cdot\,|\,s_t)$ , and $r(s_t, a_t, b_t)$ is the reward ( payoff ) for Player 2 at time $t$ , determined jointly by the state and actions . We use the terms ‘ agent ’ and ‘ player ’ interchangeably . While we consider an agent pair $(x, y)$ in this paper , in some cases ( Silver et al. , 2016 ) , $x = y$ can be enforced by sharing parameters if the game is impartial . The discounting factor $\gamma$ weights between short- and long-term rewards and is optional . Note that when one agent is fixed , taking $y$ as an example , the problem $x$ is facing reduces to an MDP if we define a new state transition dynamic $P_{\mathrm{new}}(s_{t+1}\,|\,s_t, a_t) = \sum_{b_t} P(s_{t+1}\,|\,s_t, a_t, b_t)\, \pi_y(b_t\,|\,s_t)$ and a new reward $r_{\mathrm{new}}(s_t, a_t) = \sum_{b_t} r(s_t, a_t, b_t)\, \pi_y(b_t\,|\,s_t)$ . This leads to the naive gradient descent-ascent algorithm , which provably works in strictly convex-concave games ( where $f$ is strictly convex in $x$ and strictly concave in $y$ ) under some assumptions ( Arrow et al. , 1958 ) . However , in general , it does not enjoy last-iterate convergence to the Nash equilibrium . Even for simple games such as Matching Pennies and Rock Paper Scissors , as we shall see in our experiments , the naive algorithm generates cyclic sequences of $x^k , y^k$ that orbit around the equilibrium . This motivates us to study the perturbation-based method , which converges under weaker assumptions .
Algorithm 1 : Perturbation-based self-play policy optimization of an $n$-agent population .
Input : $N$ : number of iterations ; $\eta_k$ : learning rates ; $m_k$ : sample size ; $n$ : population size ; $l$ : number of inner updates .
Result : $n$ pairs of policies .
1 : Initialize $(x_i^0, y_i^0)$ , $i = 1, 2, \dots, n$ ;
2 : for $k = 0, 1, 2, \dots, N-1$ do
3 :   Evaluate $\hat{f}(x_i^k, y_j^k)$ , $\forall i, j \in 1 \dots n$ , with Eq . 4 and sample size $m_k$ ;
4 :   for $i = 1, \dots, n$ do
5 :     Construct candidate opponent sets $C_{y_i}^k = \{y_j^k : j = 1 \dots n\}$ and $C_{x_i}^k = \{x_j^k : j = 1 \dots n\}$ ;
6 :     Find the perturbed $v_i^k = \arg\max_{y \in C_{y_i}^k} \hat{f}(x_i^k, y)$ and the perturbed $u_i^k = \arg\min_{x \in C_{x_i}^k} \hat{f}(x, y_i^k)$ ;
7 :     Invoke a single-agent RL algorithm ( e.g. , A2C , PPO ) on $x_i^k$ for $l$ times that :
8 :       estimates the policy gradient $\hat{g}_{x_i}^k = \hat{\nabla}_x f(x_i^k, v_i^k)$ with sample size $m_k$ ( e.g. , Eq . 5 ) ;
9 :       updates the policy by $x_i^{k+1} \leftarrow x_i^k - \eta_k \hat{g}_{x_i}^k$ ( or RmsProp ) ;
10 :    Invoke a single-agent RL algorithm ( e.g. , A2C , PPO ) on $y_i^k$ for $l$ times that :
11 :      estimates the policy gradient $\hat{g}_{y_i}^k = \hat{\nabla}_y f(u_i^k, y_i^k)$ with sample size $m_k$ ;
12 :      updates the policy by $y_i^{k+1} \leftarrow y_i^k + \eta_k \hat{g}_{y_i}^k$ ( or RmsProp ) ;
13 : return $\{(x_i^N, y_i^N)\}_{i=1}^{n}$ ;
Recall that the Nash equilibrium has to satisfy the saddle constraints of Eq . 1 : $f(x^*, y) \leq f(x^*, y^*) \leq f(x, y^*)$ .
The perturbation-based methods build upon this property ( Nedić & Ozdaglar , 2009 ; Kallio & Ruszczynski , 1994 ; Korpelevich , 1976 ) and directly optimize for a solution that meets the constraints . They find perturbed points $u$ for Player 1 and $v$ for Player 2 , and use gradients at $(x, v)$ and $(u, y)$ to optimize $x$ and $y$ , respectively . Under some regularity assumptions , the gradient direction from a single perturbed point is adequate for proving convergence for ( not strictly ) convex-concave functions ( Nedić & Ozdaglar , 2009 ) . These methods can easily be extended to accommodate gradient-based policy optimization and the stochastic RL objective in Eq . 4 . We propose to find the perturbations from an agent population , resulting in the algorithm outlined in Alg . 1 . The algorithm trains $n$ pairs of agents simultaneously . At each round of training , we first run $n^2$ pairwise competitions as the evaluation step ( Alg . 1 , line 3 ) , costing $n^2 m_k$ trajectories . To save sample complexity , we can use these rollouts to do one policy update as well . Then a simple adversarial rule ( Eq . 3 ) is adopted in Alg . 1 , line 6 to choose the opponents adaptively . The intuition is that $v_i^k$ and $u_i^k$ are the most challenging opponents in the population for the current $x_i$ and $y_i$ :
$$v_i^k = \arg\max_{y \in C_{y_i}^k} \hat{f}(x_i^k, y), \qquad u_i^k = \arg\min_{x \in C_{x_i}^k} \hat{f}(x, y_i^k) . \quad (3)$$
The perturbations $v_i^k$ and $u_i^k$ always satisfy $\hat{f}(x_i^k, v_i^k) \geq \hat{f}(u_i^k, y_i^k)$ , since $\max_{y \in C_{y_i}^k} \hat{f}(x_i^k, y) \geq \hat{f}(x_i^k, y_i^k) \geq \min_{x \in C_{x_i}^k} \hat{f}(x, y_i^k)$ . Then we run gradient descent on $x_i^k$ with the perturbed $v_i^k$ as the opponent to minimize $f(x_i^k, v_i^k)$ , and run gradient ascent on $y_i^k$ to maximize $f(u_i^k, y_i^k)$ . Intuitively , the duality gap between $\min_x \max_y f(x, y)$ and $\max_y \min_x f(x, y)$ , approximated by $f(x_i^k, v_i^k) - f(u_i^k, y_i^k)$ , is reduced , leading $(x_i^k, y_i^k)$ to converge to the saddle point ( equilibrium ) . We build the candidate opponent sets in line 5 of Alg . 1 simply as the concurrently-trained $n$-agent population : specifically , $C_{y_i}^k = \{y_1^k, \dots, y_n^k\}$ and $C_{x_i}^k = \{x_1^k, \dots, x_n^k\}$ . This is due to the following considerations . An alternative source of candidates is fixed known agents , such as a rule-based agent , which may not be available in practice . Another source is the extragradient methods ( Korpelevich , 1976 ; Mertikopoulos et al. , 2019 ) , where extra gradient steps are taken on $y$ before optimizing $x$ . The extragradient method can be thought of as a local approximation to Eq . 3 with a neighborhood opponent set , and is thus related to our method . However , this method could be less efficient , because the trajectory samples used in the extragradient steps are wasted : they do not contribute to actually optimizing $y$ . Yet another source is past agents . This choice is motivated by Fictitious play and ensures that the current learner always defeats a past self . However , as we shall see in the experiments , self-play with a random past agent may learn more slowly than our method . We expect all agents in the population in our algorithm to be strong and thus to provide stronger learning signals . Finally , we use Monte Carlo estimation to compute the values and gradients of $f$ . In the classical game theory setting , where the game dynamics and payoff are known , it is possible to compute the exact values and gradients of $f$ .
But in the model-free MARL setting , we have to collect roll-out trajectories to estimate both the function values , through policy evaluation , and the gradients , through the policy gradient theorem ( Sutton & Barto , 2018 ) . After collecting $m$ independent trajectories $\{\{(s_t^i, a_t^i, r_t^i)\}_{t=0}^{T}\}_{i=1}^{m}$ , we can estimate $f(x, y)$ by
$$\hat{f}(x, y) = \frac{1}{m} \sum_{i=1}^{m} \sum_{t=0}^{T} \gamma^t r_t^i . \quad (4)$$
And given estimates $\hat{Q}_x(s, a; y)$ of the state-action value $Q_x(s, a; y)$ ( assuming an MDP with $y$ as a fixed opponent of $x$ ) , we construct an estimator for $\nabla_x f(x, y)$ ( and similarly for $\nabla_y f$ given $\hat{Q}_y$ ) by
$$\hat{\nabla}_x f(x, y) \propto \frac{1}{m} \sum_{i=1}^{m} \sum_{t=0}^{T} \nabla_x \log \pi_x(a_t^i\,|\,s_t^i)\, \hat{Q}_x(s_t^i, a_t^i; y) . \quad (5)$$
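A minimal Python sketch of how Eqs. (3) and (4) fit together is given below; the matrix layout of the pairwise evaluations and the function names are our assumptions for illustration, not the released implementation.

```python
import numpy as np

def estimate_f(trajectories, gamma=0.99):
    """Monte Carlo estimate of f(x, y) as in Eq. (4);
    each trajectory is a list of per-step rewards for Player 2."""
    returns = [sum(gamma**t * r for t, r in enumerate(rewards))
               for rewards in trajectories]
    return float(np.mean(returns))

def select_opponents(f_hat, i):
    """Adversarial opponent selection of Eq. (3).
    f_hat[i, j] holds the estimate of f(x_i, y_j) from the n^2 pairwise
    evaluations (Alg. 1, line 3)."""
    v_idx = int(np.argmax(f_hat[i, :]))   # hardest y-opponent for x_i
    u_idx = int(np.argmin(f_hat[:, i]))   # hardest x-opponent for y_i
    return u_idx, v_idx
```

After selecting `(u_idx, v_idx)`, each agent pair runs its ordinary single-agent policy gradient updates (lines 7–12 of Alg. 1) against the chosen perturbed opponents, so the only change relative to standard self-play is which opponent generates the training rollouts.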
The paper “Efficient Competitive Self-Play Policy Optimization” introduces a new self-play scheme for solving zero-sum two-player games. The authors propose training a population of N agents in parallel, where each agent is matched against the strongest available opponent in the population for the next round of training. As baselines, the paper considers self-play against the best, the latest, and random snapshots from the training history of only a single agent.
SP:d23a1168bdf9f77e67f24b5062525cefd213a43e
Defending against black-box adversarial attacks with gradient-free trained sign activation neural networks
While machine learning models today can achieve high accuracies on classification tasks , they can be deceived by minor imperceptible distortions to the data . These are known as adversarial attacks and can be lethal in the black-box setting , which does not require knowledge of the target model type or its parameters . Binary neural networks that have sign activation and are trained with gradient descent have been shown to be harder to attack than conventional sigmoid activation networks , but their improvements are marginal . We instead train sign activation networks with a novel gradient-free stochastic coordinate descent algorithm and propose an ensemble of such networks as a defense model . We evaluate the robustness of our model ( a hard problem in itself ) on image , text , and medical ECG data and find it to be more robust than ensembles of binary , full precision , and convolutional neural networks , and than random forests , while attaining comparable clean test accuracy . In order to explain our model 's robustness we show that an adversary targeting a single network in our ensemble fails to attack ( and is thus non-transferable to ) other networks in the ensemble . Thus a datapoint requires a large distortion to fool the majority of networks in our ensemble and is likely to be detected in advance . This property of non-transferability arises naturally from the non-convexity of sign activation networks and the randomization in our gradient-free training algorithm , without any adversarial defense effort . 1 INTRODUCTION . State of the art machine learning algorithms can achieve high accuracies in classification tasks but misclassify minor perturbations in the data , known as adversarial attacks Goodfellow et al . ( 2015 ) ; Papernot et al . ( 2016b ) ; Kurakin et al . ( 2016 ) ; Carlini & Wagner ( 2017 ) ; Brendel et al . ( 2018 ) . Adversarial examples have been shown to transfer across models , which makes it possible to perform transfer-based ( substitute model ) black box attacks Papernot et al . ( 2016a ) . To counter adversarial attacks many defense methods have been proposed , with adversarial training being the most popular Szegedy et al . ( 2014 ) ; Tramèr et al . ( 2018 ) . However this tends to lower accuracy on clean test data that has no perturbations Raghunathan et al . ( 2019 ) ; Zhang et al . ( 2019 ) and can still be attacked with better transfer based methods Wu et al . ( 2020 ) ; Xie et al . ( 2019a ) ; Dong et al . ( 2019 ) . Many previously proposed defenses have also been shown to be vulnerable Carlini & Wagner ( 2017 ) ; Athalye et al . ( 2018 ) ; Ghiasi et al . ( 2020 ) , thus leaving adversarial robustness an open problem in machine learning . A more lethal and practical attack than substitute model training is a boundary based one that requires only the prediction of the model Brendel et al . ( 2018 ) . These attacks are aimed at finding the minimum distortion to an image such that it will fool a classifier . This is in fact an NP-hard problem for ReLU activated neural networks Katz et al . ( 2017 ) ; Sinha et al . ( 2018 ) and tree ensemble classifiers Kantchelian et al . ( 2016 ) . Even approximating the minimum distortion for ReLU activated neural networks is NP-hard Weng et al . ( 2018 ) . Boundary based black box attacks such as HopSkipJump Chen et al. , Boundary Attack Brendel et al . ( 2018 ) and RayS Chen & Gu ( 2020 ) give an upper bound on the minimum adversarial distortion .
Binary neural networks that have sign activation and binary weights were originally proposed as lightweight models . These are trained with gradient descent by approximating the sign activation . Recent work has shown that they tend to be more adversarially robust than full precision networks , but the improvements are marginal ( see Tables 4 and 5 in Galloway et al . ( 2018 ) and Table 8 in Panda et al . ( 2019 ) ) . In this paper we propose a gradient-free stochastic coordinate descent algorithm for training sign activation networks with and without binary weights , similar to recent work Xue et al . ( 2020a ; b ) ; Xie et al . ( 2019b ) . While our original intention was to study the accuracy of a sign activation network trained directly without any approximation , we make an interesting finding on the adversarial robustness of our model . We find that ensembling our model gives a high minimum distortion ( as measured by HopSkipJump ) compared to full precision , binary , and convolutional neural networks . We explain this phenomenon by measuring the transferability between networks in an ensemble . In summary we make the following observations in our paper : • Our single hidden layer sign activation network has higher minimum distortion than ensembles of full precision and binary neural networks , than random forests that have the advantage of bootstrapping and random feature selection , and than ensembles of convolutional networks that have the advantage of convolutions and several layers . • Our model 's robustness stems from the non-transferability of adversarial examples between networks in our ensemble , and its robustness increases as we add more networks to the ensemble . • Substitute model black box attacks require a much greater distortion to bring our model to zero adversarial accuracy compared to ensembles of full precision and binary networks . • Text classification black box attacks are less effective on our model than on convolutional networks , random forests , and ensembles of full precision and binary networks . • In a medical diagnosis setting , attacks on ECG data on our model have higher distortions and are visually distinguishable compared to attacks on ensembles of full precision and convolutional networks , and on random forests . 2 METHODS . 2.1 GRADIENT-FREE STOCHASTIC COORDINATE DESCENT . Suppose we are given binary class data $x_i \in \mathbb{R}^d$ and $y_i \in \{-1, +1\}$ for $i = 0 \dots n-1$ . Consider the objective function of a single hidden layer neural network with sign activation and 01 loss given below . We employ the stochastic coordinate descent algorithm shown in Algorithm 1 ( similar to recent work Xue et al . ( 2020a ; b ) ; Xie et al . ( 2019b ) ) to minimize this objective :
$$\operatorname*{arg\,min}_{W, W_0, w, w_0}\ \frac{1}{2n} \sum_i \Big( 1 - \operatorname{sign}\big( y_i \big( w^T \operatorname{sign}(W^T x_i + W_0) + w_0 \big) \big) \Big) \quad (1)$$
We can train sign activation networks with and without binary weights using our SCD training procedure above . In the case of binary weights we do not need a learning rate . We apply GPU parallelism to update features simultaneously , along with other heuristics to speed up runtimes ( with additional details given in the Supplementary Material ) . 2.2 IMPLEMENTATION , TEST ACCURACY , AND RUNTIME . We implement our training procedure in Python , numpy , and Pytorch Paszke et al . ( 2019 ) and make our code freely available from https : //github.com/zero-one-loss/scd_github .
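Below is a hedged NumPy sketch of the 01-loss objective in Eq. (1) and one coordinate update on the final node, as described in Algorithm 1 further down; the learning rate, the subset size, and the simplification of keeping w0 fixed during the inner update are our assumptions, not the released implementation.

```python
import numpy as np

def loss01(W, W0, w, w0, X, y):
    """Empirical 01 loss of Eq. (1) for a single hidden layer sign network.
    X: (m, d), y in {-1, +1}; W: (d, h), W0: (h,), w: (h,), w0: scalar."""
    h = np.sign(X @ W + W0)               # hidden activations in {-1, 0, +1}
    pred = np.sign(h @ w + w0)
    return float(np.mean(pred != y))

def scd_step_final_node(W, W0, w, w0, X, y, eta=0.01, n_feats=32,
                        rng=np.random.default_rng()):
    """One illustrative coordinate-descent pass on the final node w: perturb a
    random subset of coordinates by +/- eta and keep a change only if it lowers
    the loss. (The paper additionally re-optimizes w0 over all midpoints of
    sorted projections, in parallel on a GPU; omitted here for brevity.)"""
    best = loss01(W, W0, w, w0, X, y)
    for i in rng.choice(len(w), size=min(n_feats, len(w)), replace=False):
        for delta in (eta, -eta):
            w_try = w.copy()
            w_try[i] += delta
            cand = loss01(W, W0, w_try, w0, X, y)
            if cand < best:               # greedy accept, as in Algorithm 1 step 4
                w, best = w_try, cand
    return w, best
```

Because the objective is piecewise constant, no gradients exist; the accept-if-improved rule is what replaces gradient descent here, and the 100 independent runs voted over later in the text correspond to repeating this procedure from different random initializations.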
We train three types of sign activation networks with our algorithm : ( 1 ) SCD01 : 01-loss in the final node , ( 2 ) SCDCE : cross-entropy loss in the final node , and ( 3 ) SCDCEBNN : cross-entropy in the final node with binary weights throughout the model . Since the sign activation is non-convex and our training starts from a different random initialization each time , we run it 100 times and output the majority vote .
Algorithm 1 Stochastic coordinate descent for a single hidden layer network
Procedure :
1 . Initialize all network weights $W , w$ to random values from the normal distribution $\mathcal{N}(0, 1)$ .
2 . Set the network thresholds $W_0$ to the median projection value on their corresponding weight vectors , and $w_0$ to the projection value that minimizes our network objective .
while $i <$ epochs do
1 . Randomly sample a batch of data equally from each class . ( We set this to 75 % of the training data in the image and text data experiments and 25 % in the ECG data . )
2 . Perform coordinate descent separately , first on the final node $w$ and then on a randomly selected hidden node $u$ ( a random column of the hidden layer weight matrix $W$ ) .
3 . Suppose we are performing coordinate descent on node $w$ . We select a random set of features ( coordinates ) of $w$ called $F$ . For each feature $w_i \in F$ we add/subtract a learning rate $\eta$ and then determine the $w_0$ that optimizes the loss ( done in parallel on a GPU ) . We consider all possible values $w_0 = \frac{w^T x_i + w^T x_{i+1}}{2}$ for $i = 0 \dots n-2$ and select the one that minimizes the loss ( also performed in parallel on a GPU ) .
4 . After making the update above we evaluate the loss on the full dataset ( performed on a GPU for parallel speedups ) and accept the change if it improves the loss .
end while
To illustrate our real runtimes and clean test accuracies , we compare our models with a single hidden layer of 20 nodes to the equivalent network with sigmoid activation and logistic loss ( denoted as MLP ) and the binary neural network ( denoted as BNN ) Hubara et al . ( 2016 ) . We used the MLPClassifier in scikit-learn Pedregosa et al . ( 2011 ) to implement MLP , and the Larq library Geiger & Team ( 2020 ) with the approx approximation to the sign activation ; this has been shown to achieve a higher test accuracy than the original straight-through estimator ( STE ) of the sign activation Liu et al . ( 2018b ) . We perform 1000 iterations of SCD01 and SCDCE and 10000 of SCDCEBNN . In Table 1 we show the runtimes of a single run of all models on CIFAR10 Krizhevsky ( 2009 ) ( 32 × 32 × 3 , 10K train , 2K test ) , CelebA facial attributes black hair vs brown hair Liu et al . ( 2015 ) ( 96 × 96 × 3 , 1K train , 1K test ) , GTSRB street sign recognition 60 vs 120 speed limit signs Stallkamp et al . ( 2011 ) ( 48 × 48 × 3 , 2816 train , 900 test ) , and ImageNet class 0 vs. 1 Russakovsky et al . ( 2015 ) ( 256 × 256 × 3 , 2580 train , 100 test ) . Our training runtimes are comparable to gradient descent in MLP and BNN and thus practically usable . We can trivially parallelize training an ensemble by doing multiple runs on CPU and GPU cores at the same time . We also show test accuracies of 100-vote ensembles of all models and find our model accuracies to be comparable to MLP and BNN . 3 RESULTS . Going forward we compare the adversarial robustness of ensembles of our three models SCD01 , SCDCE , and SCDCEBNN , their full precision and binary gradient descent trained equivalent counterparts MLP and BNN , two convolutional neural networks : LeNet LeCun et al . ( 1998 ) and ResNet50 He et al .
( 2016 ) , and random forests Breiman ( 2001 ) ( denoted as RF ) . For each model we use the majority vote output of 100 votes , each with different initial parameters , except for ResNet50 where we use 10 votes . In random forest we use an ensemble of 100 trees . We use a single hidden layer of 20 nodes in our three models and in MLP and BNN throughout the paper . The convolutional networks and random forest are not a fair comparison to our model , since ours has fewer parameters and performs neither bootstrapping nor the random feature selection of random forest . We include them nevertheless , since convolutional neural networks serve as state-of-the-art references and random forest serves as an alternative ensemble method .
The paper proposes an architecture (an ensemble of networks) aimed at robustness against black-box attacks, based on the idea that crafting an adversarial example that fools enough individual networks to change the majority vote is a more difficult task. The paper presents ways of training such ensembles and provides several sets of experiments showing the advantage of the approach. It also contains an observation on "non-transferability", counting how many co-networks are fooled when only one is targeted by the black-box attack. It turns out that this count is lower for the proposed scheme.
SP:be01b10daaf670341722afb0c2d8570156ba7b53
Flow Neural Network for Traffic Flow Modelling in IP Networks
1 INTRODUCTION . Deep Learning ( DL ) has gained substantial popularity in light of its applicability to real-world tasks across computer vision , natural language processing ( Goodfellow et al. , 2016 ) , protein structure prediction ( Senior et al. , 2020 ) and challenging games such as Go ( Silver et al. , 2017 ) . Typically , the data for these learning tasks takes the form of grids , sequences , graphs or their combinations . The tremendous efforts on customizing neural network structures ( Krizhevsky et al. , 2012 ; Kiros et al. , 2015 ; Hochreiter & Schmidhuber , 1997 ) and learning strategies ( Sermanet et al. , 2018 ; Oord et al. , 2019 ) to exploit the data-specific properties underpin the success of modern DL in these domains . Following the same design philosophy , we wish to capitalize on these advancements to develop a customized neural network and self-supervised learning strategy to tackle the crucial and timely challenge of traffic flow modelling in IP networks . 1.1 TRAFFIC FLOW MODELLING IN IP NETWORKS . An IP network is a communication network that uses the Internet Protocol ( IP ) to send and receive messages between one or more devices such as computers and mobile phones . The messages could be general application data such as video and emails , or control signals of any connected devices . When sending the messages from a source to a destination , the source device encapsulates the bit chunks of encoded messages into a set of IP packets . The packets then travel through communication links and routers or switches in a given routing path sequentially , thus forming the traffic flows in an IP network ( Hunt , 1992 ) . As one of the most commonly used global networks , the IP network provides the majority of such data transmission services to support today 's Internet applications such as video streaming , voice-over-IP , and the Internet of Things . Therefore , a good understanding of the behavioral patterns of the underlying traffic flows plays a crucial role in network planning and traffic management , as well as in optimizing Quality of Service ( QoS , e.g. , transmission rate , delay ) . This challenge is termed traffic flow modelling and is fundamental to IP networking research and practice . However , the high nonlinearity , randomness and complicated self-similarity ( Leland et al. , 1994 ) of this traffic defeat many traditional analytical and learning models , particularly at fine-grained time scales , such as traffic flow modelling at a sub-second level . Consider the illustrative example in Fig . 1 , which depicts multiple packet flows with shared forwarding nodes and links in their routing paths . The sender of each flow streams data packets to the receiver at a dynamic sending rate , which is determined by many factors such as its rate demand , the existing traffic loads , the available link bandwidth , etc . The packets usually experience various delays on the journey due to actions such as forwarding processing , link transmission , and packet queueing . For example , when the sum rate of Senders 2 and 3 exceeds 10 Gbps , routers R2–R4 will hold off and cache the arriving packets in their buffers until the links from R2 to Receiver 1 become free , causing what is known as the queueing delay . The extent of these delays depends on multiple factors , including the amount of traffic going on , the capacity of the router 's output queue , the link bandwidth , etc .
The random establishment , interaction and termination of massive flow connections give rise to the network dynamics . This illustrates the complexity of traffic flow modelling in IP networks even for this simple example . The challenge is exacerbated in practice , when the traffic loads run at over 100 Gbps in a network of significantly larger size . 1.2 MOTIVATING FLOWNN BASED TRAFFIC FLOW MODELLING . A flow pattern can be defined as anything that follows a trend and exhibits some kind of regularity , e.g. , distribution , periodicity , etc . The modelling of traffic flow patterns can be done mathematically or by the use of data-driven learning algorithms . We argue that developing a customized FlowNN in the context of IP traffic flow modelling is important in two aspects : 1 ) improving the performance of supported network applications through accurate modelling of the behavioral patterns of traffic flows in IP networks , particularly at sub-second time scales ; 2 ) providing an exciting new “ playground ” and neural network model for the DL community to solve real-world-motivated research challenges by deeply combining the network 's structure and working mechanisms . Next , we make the following two clarifications . Why not use traditional mathematical models . The past decades have seen numerous traffic models proposed to mathematically model the traffic characteristics of networks ( Gebali , 2015 ) . For example , extensive studies use the Poisson model to characterize the traffic by assuming the arrival pattern between two successive packets follows a Poisson process . Considering the heavy-tailed distribution and burstiness of data-center traffic , recent work in Benson et al . ( 2010 ) models the traffic arrival pattern as a log-normal process . To capture the temporal patterns and make predictions accordingly , the Seasonal Autoregressive Integrated Moving Average ( SARIMA ) is exploited in ( Ergenc & Ertan , 2019 ) to model the traffic time series . These analytical models may generate outputs that are easier to interpret , but are bound to specific working circumstances and assumptions . More importantly , these statistical models function at coarse time scales of hours and assume relatively smooth traffic patterns . However , as reported in many practical traffic measurements , e.g . Benson et al . ( 2010 ; 2011 ) ; Greenberg et al . ( 2009 ) , most flows last less than 1 minute . This implies that tasks requiring traffic models at finer-grained time scales are beyond the capability of these traditional models . Fig . 2 plots the traffic traces we collected from a practical backbone network , WIDE ( http://mawi.wide.ad.jp/~agurim/index.html ) , and shows the realistic traffic patterns when the packet flows are sampled at two different time scales . The long time-scale plot in Fig . 2b shows a clear “ tide-effect ” associated with daily human activities . By contrast , the traffic traces in Fig . 2a become noisier , and obvious patterns are difficult to recognize , when the traffic is counted per millisecond . Why not use existing neural network models . When put in the context of data-driven learning , the traffic flow modelling problem can be reduced to a representation learning task . If the traffic flows are treated as general spatio-temporal data , extensive existing neural networks fit such a task , including the Convolutional Neural Net ( CNN ) ( Mozo et al. , 2018 ) and the Graph Neural Net ( GNN ) ( Rusek et al.
Why not use existing neural network models? Cast as data-driven learning, the traffic flow modelling problem reduces to a representation learning task. If traffic flows are treated as generic spatio-temporal data, extensive existing neural networks fit such a task, including Convolutional Neural Nets (CNN; Mozo et al., 2018), Graph Neural Nets (GNN; Rusek et al., 2019), Recurrent Neural Nets (RNN), as well as their variants and combinations (e.g., STHGCN (Kalander et al., 2020), STGCN (Yu et al., 2018), and Xiao et al. (2018); Polson & Sokolov (2017); Cui et al. (2020); Guo et al. (2019); Lin et al. (2019)). The success of these existing models stems from designs customized to data-specific properties, such as the convolutional operation that captures spatially local correlations in CNNs and the aggregation operation that extracts adjacent-link correlations in GNNs. As a human-engineered industrial system with a clear structure and working mechanism, the IP network creates domain-specific spatio-temporal data correlations that are difficult for the incumbent spatio-temporal models to capture without modification. One of the most important differences is that spatial data in IP networks is not only correlated with other spatial data at the same point in time, but also directly influences the future realizations of correlated locations in a strict order (i.e., the Spatio-Temporal Induction effect we disclose later). Moreover, these existing studies target coarse-grained timescales above minutes or even hours. Models at a sub-second granularity, as FlowNN provides, require deeply combining the spatio-temporal data trends with knowledge of the system's structure and working mechanism.

1.3 OUR CONTRIBUTIONS. We claim two critical contributions: 1) we formulate the crucial traffic flow modelling problem in IP networks as a representation learning task in deep learning, and develop a customized neural network, FlowNN, with an associated Induction Operation to extract the domain-specific spatio-temporal data correlations in IP traffic flows. To the best of our knowledge, this is the first work to design a customized neural network and learning strategy by deeply integrating the IP network's structure and working mechanism. The Induction Operation also makes this the first data-driven learning model able to infer data features at a millisecond granularity, a regime usually treated as 'noise' by existing coarse-grained models; 2) we report state-of-the-art performance over the baselines on different types of practical network applications, which provides a good testament to our model.

2 SPATIO-TEMPORAL INDUCTION. By stacking the networking feature time series sampled at all the nodes that a flow passes through, the IP traffic flow data can be organized as a high-dimensional tensor time series, as shown in Fig. 3a. The feature (denoted x^t_{f,n}) could be the average flow rate (i.e., the amount of packet bits received in each unit measurement time) at each node, the average per-hop packet delay, etc. The routing path constitutes the most significant attribute of the generative process of each IP traffic flow, and it creates many peculiar properties in the flow data. For example, for a flow with routing path [1→4→12] in Fig. 3a, the current data at node 4 originated from the history data at its predecessor node 1, delayed by at least the link delay ∆t (packet processing and queueing impose extra delay). These data will also flow onward to node 4's successor after further delays. This shows that the flow state at a node is physically driven by the past flow state at its predecessor node.
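The following sketch illustrates how such a flow tensor can be organized and why the correlation is "shifted" in time along the path. The path, per-hop delay and noise model are illustrative assumptions, not the paper's data pipeline: each node's rate series is a delayed, slightly perturbed copy of its predecessor's, so the strongest correlation between neighboring nodes appears at a nonzero lag.

```python
import numpy as np

rng = np.random.default_rng(2)
T, path = 200, [1, 4, 12]                  # time steps and routing path nodes
hop_delay = 3                              # per-hop delay, in time steps

x = np.zeros((len(path), T))               # x[n, t]: flow rate at node n, time t
x[0] = np.abs(rng.normal(5.0, 1.0, T))     # source sending rate
for n in range(1, len(path)):
    # Flow state at a node is driven by its predecessor's past state.
    x[n, hop_delay:] = x[n - 1, :-hop_delay] + rng.normal(0, 0.1, T - hop_delay)

# The S-shaped correlation: node 4 at time t tracks node 1 at time t - delay.
for lag in range(6):
    c = np.corrcoef(x[1, lag:], x[0, :T - lag])[0, 1]
    print(f"corr(node4[t], node1[t-{lag}]) = {c:.3f}")
```

The correlation peaks at the hop delay (lag 3 here) rather than at lag 0, which is exactly the ordering information that same-time spatial models miss.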
Such time-resolved flow data therefore tells us not only who is related to whom, but also when and in which order the relations occur. This forms the S-shaped data correlation pattern exemplified in Fig. 3a. Accurate modelling of the flow data requires attention to such domain-specific data properties, which are absent from the majority of existing learning models, if not all of them.

Fig. 3b plots a sample of flow traces at two neighboring path nodes from the WIDE dataset. We can observe that whenever the data rate at node 1 exceeds that at node 2 for some duration (e.g., T1), a subsequent duration T2 follows in which the data rate at node 2 is greater than that at node 1. Moreover, the cumulative amounts of data bits the two nodes forward during these two durations are almost the same, as indicated by the rate difference between the two nodes at the bottom of Fig. 3b. This illustrates that there is a stronger correlation among the data in these two durations, and that the future realizations in T2 are constrained by the states in T1 through local flow conservation between the two nodes. In analogy to the concept of Electromagnetic Induction in physics, in what follows we introduce the Spatio-Temporal Induction (STI) effect in IP traffic flows. Accordingly, a learning model is proposed to model network traffic flows at a fine-grained timescale.

Definition 1. Spatio-Temporal Induction is the production of the temporal evolution of a flow from the historical spatial patterns at its correlated locations.

STI builds a physical relationship between the spatial and temporal patterns of IP flow data. This provides a more accurate interpretation of the underlying data-generating process than the trend information manifested by the data itself. The induction effect is created by the IP network's structure and working mechanism, and it is preserved when the flow data is sampled at any timescale used in practice. Next, we develop an Induction Operation to concretely capture the S-shaped data correlation in IP flow data and propose what we call FlowNN to produce the desired traffic model for IP networks.
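The local flow-conservation argument behind Definition 1 can be checked with simple bookkeeping: over a window, the cumulative bits forwarded by two neighboring path nodes should nearly cancel, so the running integral of their rate difference stays close to zero. The sketch below (a synthetic trace of ours; the rates and capacity are assumptions) demonstrates the check.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 500
rate1 = np.abs(rng.normal(5.0, 1.0, T))        # rate at upstream node 1
rate2 = np.empty(T)
backlog = 0.0
for t in range(T):                              # node 2 drains what node 1 sent
    backlog += rate1[t]
    rate2[t] = min(backlog, 6.0)                # limited forwarding capacity
    backlog -= rate2[t]

diff = np.cumsum(rate1 - rate2)                 # cumulative rate difference
print("max cumulative imbalance:", diff.max())
print("final imbalance (= residual backlog):", diff[-1])
```

A burst where rate1 > rate2 (a T1-like duration) is always followed by a catch-up where rate2 > rate1 (a T2-like duration), and the cumulative imbalance returns toward zero, which is the constraint FlowNN exploits.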
The goal of this study is 1-step prediction of flow rate in flow networks. The authors first define a “spatio-temporal induction effect (STI)” and claim it to be a universal property of flow networks. Their main contribution is their proposed “flow neural network”, which is based on the STI effect and a combination of GCN and GRU architectures. According to the authors, the novelty of their work lies in the fact that they consider the spatial and temporal features of the flow network simultaneously, whereas previous works only consider them separately.
SP:c582c4634f7e343732bab5e9cc7024efbf6d88d0
ARMCMC: Online Model Parameters Full Probability Estimation in Bayesian Paradigm
1 INTRODUCTION. Bayesian methods are powerful tools for not only obtaining a numerical estimate of a parameter but also giving a measure of confidence (Kuśmierczyk et al., 2019; Bishop, 2006; Joho et al., 2013). In particular, Bayesian inference calculates the probability distribution of parameters rather than a point estimate, which is prevalent in frequentist paradigms (Tobar, 2018). One of the main advantages of probabilistic frameworks is that they enable decision making under uncertainty (Noormohammadi-Asl & Taghirad, 2019). In addition, knowledge fusion is significantly facilitated in probabilistic frameworks; different sources of data or observations can be combined according to their level of certainty in a principled manner (Agand & Shoorehdeli, 2019). Nonetheless, Bayesian inference requires high computational effort to obtain the whole probability distribution, as well as general prior knowledge of the noise distribution before estimation.

One of the most effective families of methods for Bayesian inference is Markov Chain Monte Carlo (MCMC). In the field of system identification, MCMC variants, such as the one recently proposed by Green (2015), are mostly focused on offline system identification. This is partly due to computational challenges which prevent real-time use (Kuindersma et al., 2012). The standard MCMC algorithm is not suitable for model variation, since different candidate models do not share the same parameter set. Green (1995) first introduced reversible jump Markov chain Monte Carlo (RJMCMC) as a method to address the model selection problem; in this method, an extra pseudo-random variable is defined to handle the dimension mismatch. There are further extensions of MCMC in the literature; however, an online implementation has yet to be reported.

Motion filtering and force prediction for robotic manipulators are important fields of study with interesting challenges that Bayesian inference is well suited to address (Saar et al., 2018). Here, measurements are inherently noisy, which is not desirable for control purposes. Likewise, inaccuracy, inaccessibility, and cost are typical challenges that make force measurement impractical (Agand et al., 2016). Different environment identification methods have been proposed in the literature for linear models with Gaussian noise (Wang et al., 2018); however, for nonlinear models such as Hunt-Crossley that do not have Gaussian noise (e.g., impulsive disturbances), there is no optimal solution to the identification problem. Diolaiti et al. (2005) proposed a double-stage bootstrapped method for online identification of the Hunt-Crossley model, which is sensitive to parameter initial conditions. Carvalho & Martins (2019) proposed a method to determine the damping term in the Hunt-Crossley model. A neural network-based approach was introduced to control the contact/non-contact Hunt-Crossley model in (Bhasin et al., 2008).

This paper proposes a new technique, Adaptive Recursive Markov Chain Monte Carlo (ARMCMC), to address certain weaknesses of traditional online identification methods, such as being applicable only to systems Linear in Parameters (LIP), having Persistent Excitation (PE) requirements, and assuming Gaussian noise. ARMCMC is an online method that takes advantage of the previous posterior distribution whenever there is no sudden change in the parameter distribution.
To achieve this, we define a new variable jump distribution that accounts for the degree of model mismatch using a temporal forgetting factor. The temporal forgetting factor is computed from a model mismatch index and determines whether ARMCMC employs modification or reinforcement, i.e., whether it restarts or refines the parameter distribution. As this factor is a function of the observed data rather than a user-defined constant, it can effectively adapt to the underlying dynamics of the system. We demonstrate our method on two different examples, a soft bending actuator and the Hunt-Crossley model, and show favorable performance compared to state-of-the-art baselines.

The rest of this paper is organized as follows: Sec. 2 presents introductory context on the Bayesian approach and MCMC. Sec. 3 is devoted to presenting the proposed ARMCMC approach as a step-by-step algorithm. Simulation results on a soft bending actuator, together with empirical results on a reality-based model of a soft contact environment capturing Hunt-Crossley dynamics, are presented in Sec. 4. Lastly, final remarks and future directions conclude in Sec. 5.

2 PRELIMINARIES.

2.1 PROBLEM STATEMENT. In the Bayesian paradigm, estimates of parameters are given in the form of the posterior probability density function (pdf); this pdf can be continuously updated as new data points are received. Consider the following general model:

Y = F(X, θ) + ν,   (1)

where Y, X, θ, and ν are the concurrent output, input, model parameter, and noise vectors, respectively. To calculate the posterior probability, the observed data along with a prior distribution are combined via Bayes' rule (Khatibisepehr et al., 2013). The data consist of input/output pairs (X, Y). We apply updates to the posterior pdf using batches of data points; hence, it is convenient to partition the data as follows:

D^t = {(X, Y)_{t_m}, (X, Y)_{t_m+1}, · · ·, (X, Y)_{t_m+N_s+1}},   (2)

where N_s = T_s/T is the number of data points in each data pack, with T and T_s being the data and algorithm sampling times, respectively. This partitioning is convenient for online applications: D^{t−1} must already have been collected so that the algorithm can execute from t_m to t_m+N_s+1, i.e., algorithm time step t. Ultimately, inferences are completed at t_m+N_s+2. Fig. 1 illustrates the timeline of the data and the algorithm. It is worth mentioning that the computation can be parallelized across adjacent algorithm steps (e.g., phase A of algorithm step t, phase B of step t−1, and phase C of step t−2 can all run simultaneously).

According to Bayes' rule, and assuming the data points in equation 1 are independent and identically distributed (i.i.d.), we have

P(θ^t | [D^{t−1}, D^t]) = P(D^t | θ^t, D^{t−1}) P(θ^t | D^{t−1}) / ∫ P(D^t | θ^t, D^{t−1}) P(θ^t | D^{t−1}) dθ^t,   (3)

where θ^t denotes the parameters at the current time step. P(θ^t | D^{t−1}) is the prior distribution over the parameters, which is also the posterior distribution at the previous algorithm sampling time. P(D^t | θ^t, D^{t−1}) is the likelihood function, obtained from the one-step-ahead prediction:

Ŷ^{t|t−1} = F(D^{t−1}, θ^t),   (4)

where Ŷ^{t|t−1} is the prediction of the output in (1). If the model in (4) is valid, then the difference between the real and predicted outputs should be the measurement noise, i.e., Y^{t|t−1} − Ŷ^{t|t−1} = ν.
Therefore, the model parameters may be updated as follows:

P(D^t | θ^t, D^{t−1}) = ∏_{t=t_m+1}^{t_m+N_s+1} P_ν(Y^{t|t−1} − Ŷ^{t|t−1}),   (5)

where P_ν is the probability distribution of the noise. Note that there is no restriction on the type of noise probability distribution.

Remark 1: As mentioned before, there is no need to know the exact probability distribution of the noise. It can simply be substituted with a Gaussian distribution if one has minimal knowledge of the mean and variance of the data, which can easily be obtained by preprocessing (Bishop, 2006).

2.2 MARKOV CHAIN MONTE CARLO. MCMC is often employed to compute the posterior pdf numerically. The multidimensional integral in (3) is approximated by samples drawn from the posterior pdf. The samples are first drawn from a different distribution called the proposal distribution, denoted q(·), which is easier to sample than the posterior. Brooks et al. (2011) discuss different types of MCMC implementations, which may employ various proposal distributions and corresponding acceptance criteria. The main steps of the Metropolis-Hastings algorithm are as follows (Ninness & Henriksen, 2010):

1. Set an initial guess θ_0 with P(θ_0|Y) > 0 for iteration k = 1,
2. Draw a candidate parameter θ^{cnd} at iteration k from the proposal distribution q(θ^{cnd}|θ_{k−1}),
3. Compute the acceptance probability
α(θ^{cnd}|θ_{k−1}) = min{1, [P(θ^{cnd}|D) q(θ_{k−1}|θ^{cnd})] / [P(θ_{k−1}|D) q(θ^{cnd}|θ_{k−1})]},   (6)
4. Generate a uniform random number γ in [0, 1],
5. 'Accept' the candidate if γ ≤ α and 'ignore' it if γ > α,
6. Set the iteration to k + 1 and go to step 2.

2.3 PRECISION AND RELIABILITY. Two important notions for comparing results in a probabilistic framework are precision (ε) and reliability (δ). The former represents the proximity of a sample to the ground truth, and the latter represents the probability that an accepted sample lies within ε of the ground truth.

Lemma: Let P_k be k samples from MCMC, and let E(P_k) denote their expected value. According to the Chernoff bound (Tempo et al., 2012), given ε, δ ∈ [0, 1], if the number of samples k satisfies

k ≥ (1 / (2ε²)) log(2 / (1 − δ)),   (7)

then Pr{|P_k − E(P_k)| ≤ ε} ≥ δ.

Algorithm 1 ARMCMC
Assumptions: 1) rough noise mean (μ_ν); 2) rough noise variance (σ_ν); 3) desired precision and reliability (ε_0, δ_0); 4) desired threshold for model mismatch (ζ_th)
Goal: Online calculation of the parameter posterior distribution given the consecutive t-th pack of data (P(θ^t|D^t))
Initialization: prior knowledge for θ^0_1, n = 0; set desired precision and reliability (ε, δ)
repeat
  Put t_0 = n·N_s + 1 from (2), n++
  Add the new data pack to dataset D^t
  Model mismatch index: ζ^t from (10)
  if ζ^t < ζ_th then
    Reinforcement: set the prior equal to the latest posterior of the previous pack
    Temporal forgetting factor: λ^t from (9)
  else
    Modification: set the prior to θ^n_1
    Temporal forgetting factor: λ^t = 0
  end if
  Set the minimum iteration count k_min from (12)
  for k = 1 to k_max do
    Proposal distribution: draw λ_k ~ U(0, 1); variable jump distribution q^t_k(·) from (8)
    Draw θ^{t*}_k ~ q^t_k(·)
    Acceptance probability α(·) from (6)
    Draw γ ~ U(0, 1)
    if γ ≤ α then 'accept' the proposal end if
  end for
  Wait to build D^{t_m+N_s+1}_{t_0} (algorithm sample time)
until no data is obtained
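To ground steps 1-6 of Sec. 2.2, here is a minimal Metropolis-Hastings sampler with a symmetric Gaussian random-walk proposal (so the q-ratio in Eq. (6) cancels), together with the Chernoff sample count from Eq. (7). The target posterior, step size and (ε, δ) values are illustrative assumptions of ours; ARMCMC replaces the fixed proposal with the variable jump distribution of Sec. 3.

```python
import numpy as np

rng = np.random.default_rng(4)

def log_post(theta):                       # toy unnormalized log-posterior
    return -0.5 * ((theta - 2.0) / 0.5) ** 2

def metropolis_hastings(n_iter=5000, step=0.5, theta0=0.0):
    theta, samples = theta0, []
    for _ in range(n_iter):
        cand = theta + step * rng.standard_normal()      # symmetric q(.|.)
        alpha = min(1.0, np.exp(log_post(cand) - log_post(theta)))  # Eq. (6)
        if rng.uniform() <= alpha:                       # step 5: accept
            theta = cand                                 # else keep old theta
        samples.append(theta)
    return np.array(samples)

s = metropolis_hastings()
print(f"posterior mean ~ {s[1000:].mean():.3f}, std ~ {s[1000:].std():.3f}")

# Minimum number of samples from the Chernoff bound, Eq. (7).
eps, delta = 0.05, 0.9
k_min = int(np.ceil(np.log(2 / (1 - delta)) / (2 * eps**2)))
print("Chernoff minimum samples:", k_min)
```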
3 ARMCMC ALGORITHM. At each time interval, ARMCMC recursively estimates the posterior distribution by drawing samples. The number of samples drawn is constrained by the desired precision and reliability and by the real-time requirement. The maximum number of data points in each data pack, N_s, is in turn limited by the frequency of model variation, while the minimum is set by the shortest time needed for the algorithm to run in real time. We propose a variable jump distribution that enables both refining the previous posterior and exploring the parameter space. This necessitates the definition of the temporal forgetting factor as a measure reflecting the current underlying dynamics of the data; in other words, this parameter indicates the validity of the previous model for the current data. We also prove that ARMCMC achieves the same precision and reliability with fewer samples than traditional MCMC. Algorithm 1 summarizes ARMCMC.
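The following is a heavily hedged sketch of the variable jump distribution just described: with probability λ^t the candidate is drawn near the previous pack's posterior samples (reinforcement), and otherwise from a broad exploratory proposal (modification). Eq. (8) is not reproduced in this excerpt, so the concrete densities, scales and the stand-in posterior below are our assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(5)

def variable_jump(prev_posterior_samples, lam, wide_scale=5.0, local_scale=0.1):
    if rng.uniform() < lam:                 # reinforcement: refine old posterior
        center = rng.choice(prev_posterior_samples)
        return center + local_scale * rng.standard_normal()
    return wide_scale * rng.standard_normal()   # modification: broad restart

prev = rng.normal(2.0, 0.2, size=1000)      # stand-in for the last posterior
lam_t = 0.9                                 # high when model mismatch is low
cands = np.array([variable_jump(prev, lam_t) for _ in range(10000)])
print(f"fraction of local proposals ~ {np.mean(np.abs(cands - 2.0) < 1.0):.2f}")
```

Setting λ^t = 0, as in the modification branch of Algorithm 1, recovers a pure restart, while λ^t near 1 concentrates proposals on the previous posterior.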
The paper introduces a new Markov chain Monte Carlo (MCMC) algorithm to obtain and track the posterior distribution over unknown parameters in a non-linear system. Despite its simple elegance, i.e., the introduction of a data-driven _temporal forgetting factor_ into the usual Metropolis-Hastings algorithm, the approach is, to my knowledge, novel. Its discovery seems to result from the intersection of two fields, system identification and Bayesian sampling techniques, and it builds new bridges between them.
SP:d9610d460905f545ccdd7524b9efc049ecdc0f25
MiCE: Mixture of Contrastive Experts for Unsupervised Image Clustering
1 INTRODUCTION . Unsupervised clustering is a fundamental task that aims to partition data into distinct groups of similar ones without explicit human labels . Deep clustering methods ( Xie et al. , 2016 ; Wu et al. , 2019 ) exploit the representations learned by neural networks and have made large progress on high-dimensional data recently . Often , such methods learn the representations for clustering by reconstructing data in a deterministic ( Ghasedi Dizaji et al. , 2017 ) or probabilistic manner ( Jiang et al. , 2016 ) , or maximizing certain mutual information ( Hu et al. , 2017 ; Ji et al. , 2019 ) ( see Sec . 2 for the related work ) . Despite the recent advances , the representations learned by existing methods may not be discriminative enough to capture the semantic similarity between images . The instance discrimination task ( Wu et al. , 2018 ; He et al. , 2020 ) in contrastive learning has shown promise in pre-training representations transferable to downstream tasks through fine-tuning . Given that the literature ( Shiran & Weinshall , 2019 ; Niu et al. , 2020 ) shows improved representations can lead to better clustering results , we hypothesize that instance discrimination can improve the performance as well . A straightforward approach is to learn a classical clustering model , e.g . spherical k-means ( Dhillon & Modha , 2001 ) , directly on the representations pre-trained by the task . Such a two-stage baseline can achieve excellent clustering results ( please refer to Tab . 1 ) . However , because of the independence of the two stages , the baseline may not fully explore the semantic structures of the data when learning the representations and lead to a sub-optimal solution for clustering . To this end , we propose Mixture of Contrastive Experts ( MiCE ) , a unified probabilistic clustering method that utilizes the instance discrimination task as a stepping stone to improve clustering . In particular , to capture the semantic structure explicitly , we formulate a mixture of conditional models by introducing latent variables to represent cluster labels of the images , which is inspired by the mixture of experts ( MoE ) formulation . In MiCE , each of the conditional models , also called an expert , learns to discriminate a subset of instances , while an input-dependent gating function partitions the dataset into subsets according to the latent semantics by allocating weights among experts . Further , we develop a scalable variant of the Expectation-Maximization ( EM ) algorithm ( Dempster et al. , ∗Corresponding author . 1Code is available at : https : //github.com/TsungWeiTsai/MiCE 1977 ) for the nontrivial inference and learning problems . In the E-step , we obtain the approximate inference of the posterior distribution of the latent variables given the observed data . In the M-step , we maximize the evidence lower bound ( ELBO ) of the log conditional likelihood with respect to all parameters . Theoretically , we show that the ELBO is bounded and the proposed EM algorithm leads to the convergence of ELBO . Moreover , we carefully discuss the algorithmic relation between MiCE and the two-stage baseline and show that the latter is a special instance of the former in a certain extreme case . Compared with existing clustering methods , MiCE has the following advantages . 
( i ) Methodologically unified : MiCE conjoins the benefits of both the discriminative representations learned by contrastive learning and the semantic structures captured by a latent mixture model within a unified probabilistic framework . ( ii ) Free from regularization : MiCE trained by EM optimizes a single objective function , which does not require auxiliary loss or regularization terms . ( iii ) Empirically effective : Evaluated on four widely adopted natural image datasets , MiCE achieves significantly better results than a strong contrastive baseline and extensive prior clustering methods on several benchmarks without any form of pre-training . 2 RELATED WORK . Deep clustering . Inspired by the success of deep learning , many researchers propose to learn the representations and cluster assignments simultaneously ( Xie et al. , 2016 ; Yang et al. , 2016 ; 2017 ) based on data reconstruction ( Xie et al. , 2016 ; Yang et al. , 2017 ) , pairwise relationship among instances ( Chang et al. , 2017 ; Haeusser et al. , 2018 ; Wu et al. , 2019 ) , multi-task learning ( Shiran & Weinshall , 2019 ; Niu et al. , 2020 ) , etc . The joint training framework often ends up optimizing a weighted average of multiple loss functions . However , given that the validation dataset is barely provided , tuning the weights between the losses may be impractical ( Ghasedi Dizaji et al. , 2017 ) . Recently , several methods also explore probabilistic modeling , and they introduce latent variables to represent the underlying classes . On one hand , deep generative approaches ( Jiang et al. , 2016 ; Dilokthanakul et al. , 2016 ; Chongxuan et al. , 2018 ; Mukherjee et al. , 2019 ; Yang et al. , 2019 ) attempt to capture the data generation process with a mixture of Gaussian prior on latent representations . However , the imposed assumptions can be violated in many cases , and capturing the true data distribution is challenging but may not be helpful to the clustering ( Krause et al. , 2010 ) . On the other hand , discriminative approaches ( Hu et al. , 2017 ; Ji et al. , 2019 ; Darlow & Storkey , 2020 ) directly model the mapping from the inputs to the cluster labels and maximize a form of mutual information , which often yields superior cluster accuracy . Despite the simplicity , the discriminative approaches discard the instance-specific details that can benefit clustering via improving the representations . Besides , MIXAE ( Zhang et al. , 2017 ) , DAMIC ( Chazan et al. , 2019 ) , and MoE-Sim-VAE ( Kopf et al. , 2019 ) combine the mixture of experts ( MoE ) formulation ( Jacobs et al. , 1991 ) with the data reconstruction task . However , either pre-training , regularization , or an extra clustering loss is required . Contrastive learning . To learn discriminative representations , contrastive learning ( Wu et al. , 2018 ; Oord et al. , 2018 ; He et al. , 2020 ; Tian et al. , 2019 ; Chen et al. , 2020 ) incorporates various contrastive loss functions with different pretext tasks such as colorization ( Zhang et al. , 2016 ) , context autoencoding ( Pathak et al. , 2016 ) , and instance discrimination ( Dosovitskiy et al. , 2015 ; Wu et al. , 2018 ) . The pre-trained representations often achieve promising results on downstream tasks , e.g. , depth prediction , object detection ( Ren et al. , 2015 ; He et al. , 2017 ) , and image classification ( Kolesnikov et al. , 2019 ) , after fine-tuning with human labels . In particular , InstDisc ( Wu et al. 
, 2018 ) learns from instance-level discrimination using NCE ( Gutmann & Hyvärinen , 2010 ) , and maintains a memory bank to compute the loss function efficiently . MoCo replaces the memory bank with a queue and maintains an EMA of the student network as the teacher network to encourage consistent representations . A concurrent work called PCL ( Li et al. , 2020 ) also explores the semantic structures in contrastive learning . They add an auxiliary cluster-style objective function on top of the MoCo ’ s original objective , which differs from our method significantly . PCL requires an auxiliary k-means ( Lloyd , 1982 ) algorithm to obtain the posterior estimates and the prototypes . Moreover , their aim of clustering is to induce transferable embeddings instead of discovering groups of data that correspond to underlying semantic classes . 3 PRELIMINARY . We introduce the contrastive learning methods based on the instance discrimination task ( Wu et al. , 2018 ; Ye et al. , 2019 ; He et al. , 2020 ; Chen et al. , 2020 ) , with a particular focus on the recent state-of-the-art method , MoCo ( He et al. , 2020 ) . Let X = { xn } Nn=1 be a set of images without the ground-truth labels , and each of the datapoint xn is assigned with a unique surrogate label yn ∈ { 1 , 2 , ... , N } such that yn 6= yj , ∀j 6= n2 . To learn representations in an unsupervised manner , instance discrimination considers a discriminative classifier that maps the given image to its surrogate label . Suppose that we have two encoder networks fθ and fθ′ that generate ` 2-normalized embeddings vyn ∈ Rd and fn ∈ Rd , respectively , given the image xn with the surrogate label yn . We show the parameters of the networks in the subscript , and images are transformed by a stochastic data augmentation module before passing to the networks ( please see Appendix D ) . We can model the probability classifier with : p ( Y|X ) = N∏ n=1 p ( yn|xn ) = N∏ n=1 exp ( v > ynfn/τ ) ∑N i=1 exp ( v > i fn/τ ) , ( 1 ) where τ is the temperature hyper-parameter controlling the concentration level ( Hinton et al. , 2015 ) 3 . The recent contrastive learning methods mainly differ in : ( 1 ) The contrastive loss used to learn the network parameters , including NCE ( Wu et al. , 2018 ) , InfoNCE ( Oord et al. , 2018 ) , and the margin loss ( Schroff et al. , 2015 ) . ( 2 ) The choice of the two encoder networks based on deep neural networks ( DNNs ) in which θ′ can be an identical ( Ye et al. , 2019 ; Chen et al. , 2020 ) , distinct ( Tian et al. , 2019 ) , or an exponential moving average ( EMA ) ( He et al. , 2020 ) version of θ . In particular , MoCo ( He et al. , 2020 ) learns by minimizing the InfoNCE loss : log exp ( v > ynfn/τ ) exp ( v > ynfn/τ ) + ∑ν i=1 exp ( q > i fn/τ ) , ( 2 ) where q ∈ Rν×d is a queue of size ν ≤ N storing previous embeddings from fθ′ . While it adopts the EMA approach to avoid rapidly changing embeddings in the queue that adversely impacts the performance ( He et al. , 2020 ) . For convenience , we refer fθ and fθ′ as the student and teacher network respectively ( Tarvainen & Valpola , 2017 ; Tsai et al. , 2019 ) . In the following , we propose a unified latent mixture model based on contrastive learning to tackle the clustering task . 4 MIXTURE OF CONTRASTIVE EXPERTS . Unsupervised clustering aims to partition a dataset X with N observations into K clusters . We introduce the latent variable zn ∈ { 1 , 2 , ... , K } to be the cluster label of the image xn and naturally extend Eq . 
( 1 ) to Mixture of Contrastive Experts ( MiCE ) : p ( Y , Z|X ) = N∏ n=1 K∏ k=1 p ( yn , zn = k|xn ) 1 ( zn=k ) = N∏ n=1 K∏ k=1 p ( zn = k|xn ) 1 ( zn=k ) p ( yn|xn , zn = k ) 1 ( zn=k ) , ( 3 ) where 1 ( · ) is an indicator function . The formulation explicitly introduces a mixture model to capture the latent semantic structures , which is inspired by the mixture of experts ( MoE ) framework ( Jacobs et al. , 1991 ) . In Eq . ( 3 ) , p ( yn|xn , zn ) is one of the experts that learn to discriminate a subset of instances and p ( zn|xn ) is a gating function that partitions the dataset into subsets according to the latent semantics by routing the given input to one or a few experts . With a divide-and-conquer principle , the experts are often highly specialized in particular images that share similar semantics , which improves the learning efficiency . Notably , MiCE is generic to the choice of the underlying 2The value of the surrogate label can be regarded as the index of the image . 3Due to summation over the entire dataset in the denominator term , it can be computationally prohibitive to get Maximum likelihood estimation ( MLE ) of the parameters ( Ma & Collins , 2018 ) . contrastive methods ( Wu et al. , 2018 ; He et al. , 2020 ; Chen et al. , 2020 ) , while in this paper , we focus on an instance based on MoCo . Also , please see Fig . 1 for an illustration of MiCE with three experts . In contrast to the original MoE used in the supervised settings ( Jacobs et al. , 1991 ) , our experts learn from instance-wise discrimination instead of human labels . In addition , both gating and expert parts of MiCE are based on DNNs to fit the high-dimensional data . In the following , we will elaborate on how we parameterize the gating function and the experts to fit the clustering task . For simplicity , we omit the parameters in all probability distributions in this section . Gating function . The gating function organizes the instance discrimination task into K simpler subtasks by weighting the experts based on the semantics of the input image . We define gψ as an encoder network that outputs an embedding for each input image . We denote the output vector for image xn as gn ∈ Rd . The gating function is then parameterized as : p ( zn|xn ) = exp ( ω > zngn/κ ) ∑K k=1 exp ( ω > k gn/κ ) , ( 4 ) where κ is the temperature , and ω = { ωk } Kk=1 represent the gating prototypes . All prototypes and image embeddings are ` 2-normalized in the Rd space . Hence , the gating function performs a soft partitioning of the dataset based on the cosine similarity between the image embeddings and the gating prototypes . We can view it as a prototype-based discriminative clustering module , whereas we obtain the cluster labels using posterior inference to consider additional information in the experts . Experts . In MiCE , every expert learns to solve the instance discrimination subtask arranged by the gating function . We define the expert in terms of the unnormalized model Φ ( · ) following Wu et al . ( 2018 ) ; He et al . ( 2020 ) . Therefore , the probability of the image xn being recognized as the yn-th one by the zn-th expert is formulated as follows : p ( yn|xn , zn ) = Φ ( xn , yn , zn ) Z ( xn , zn ) , ( 5 ) where Z ( xn , zn ) = ∑N i=1 Φ ( xn , yi , zn ) is a normalization constant that is often computationally intractable . Similar to MoCo , we have the student network fθ that maps the image xn into K continuous embeddings fn = { fn , k } Kk=1 ∈ RK×d . 
Likewise , the teacher network fθ′ outputs vyn = { vyn , k } Kk=1 ∈ RK×d given xn . To be specific , fn , zn ∈ Rd and vyn , zn ∈ Rd are the student embedding and the teacher embedding for images xn under the zn-th expert , respectively . We then parameterize the unnormalized model as : Φ ( xn , yn , zn ) = exp ( v > yn , zn ( fn , zn + µzn ) /τ ) , ( 6 ) where τ is the temperature and µ = { µk } Kk=1 represent the cluster prototypes for the experts . In Eq . ( 6 ) , the first instance-wise dot product explores the instance-level information to induce discriminative representations within each expert . The second instance-prototype dot product incorporates the class-level information into representation learning , encouraging a clear cluster structure around the prototype . Overall , the learned embeddings are therefore encoded with semantic structures while being discriminative enough to represent the instances . Eq . ( 6 ) is built upon MoCo with the EMA approach , while in principle , many other potential solutions exist to define the experts , which are left for future studies . Besides , the parameters θ and ψ are partially shared , please refer to the Appendix D for more details on the architecture .
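To make Eqs. (4) and (6) concrete, the sketch below computes the gating log-probabilities and the unnormalized expert scores for a toy batch, then combines them into a soft cluster posterior. This is our illustration under stated simplifications: the embeddings are random stand-ins for the DNN encoders of Appendix D, the shapes and temperatures are assumed, and the intractable normalizer Z(x_n, z_n) from Eq. (5) is dropped, so the "posterior" is only an unnormalized-evidence approximation.

```python
import torch
import torch.nn.functional as F

N, K, d = 8, 3, 16                      # batch, clusters/experts, embed dim
kappa, tau = 0.3, 0.5                   # illustrative temperatures

g = F.normalize(torch.randn(N, d), dim=-1)           # gating embeddings g_n
omega = F.normalize(torch.randn(K, d), dim=-1)       # gating prototypes w_k
gate_logp = F.log_softmax(g @ omega.t() / kappa, dim=-1)  # log p(z|x), Eq. (4)

f = F.normalize(torch.randn(N, K, d), dim=-1)        # student f_{n,k}
v = F.normalize(torch.randn(N, K, d), dim=-1)        # teacher v_{y_n,k}
mu = F.normalize(torch.randn(K, d), dim=-1)          # cluster prototypes mu_k
# Unnormalized expert score log Phi(x_n, y_n, z_n=k), Eq. (6); Z is dropped.
log_phi = (v * (f + mu)).sum(-1) / tau               # shape (N, K)

# Soft cluster assignment combining gate and expert evidence.
post = F.softmax(gate_logp + log_phi, dim=-1)
print(post.sum(-1))                                  # each row sums to 1
```

The instance-wise term (v·f) keeps the representation discriminative within each expert, while the prototype term (v·mu) pulls same-cluster instances toward a shared direction, matching the paper's reading of Eq. (6).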
Authors present “mixture of experts” type of method to solve a clustering with unsupervised learning problem. Method is called as Mixture of Contrastive Experts (MiCE) which uses contrastive learning as a base module and combines it with latent mixture models. Authors develop a scalable algorithm for MiCE and empirically evaluate the proposed method for image clustering.
SP:c3995e4d2f6dcf282fa8312606a43471c82f629f
PDE-regularized Neural Networks for Image Classification
1 INTRODUCTION . It had been discovered that interpreting neural networks as differential equations is possible by several independent research groups ( Weinan , 2017 ; Ruthotto & Haber , 2019 ; Lu et al. , 2018 ; Ciccone et al. , 2018 ; Chen et al. , 2018 ; Gholami et al. , 2019 ) . Among them , the seminal neural ordinary differential equation ( neural ODE ) research work , which considers the general architecture in Figure 1 ( a ) , is to learn a neural network approximating ∂h ( t ) ∂t , where h ( t ) is a hidden vector at layer ( or time ) t ( Chen et al. , 2018 ) . As such , a neural network is described by a system of ODEs , each ODE of which describes a dynamics of a hidden element . While neural ODEs have many good characteristics , they also have limitations , which are listed as follows : Pros . Neural ODEs can interpret t as a continuous variable and we can have hidden vectors at any layer ( or time ) l by h ( l ) = h ( 0 ) + ∫ l 0 o ( h ( t ) , t ; θo ) dt , where o ( h ( t ) , t ; θo ) = ∂h ( t ) ∂t is a neural network parameterized by θo . Pros . Neural ODEs sometimes have smaller numbers of parameters than those of other conven- tional neural network designs , e.g. , ( Pinckaers & Litjens , 2019 ) . Cons . Neural ODEs , which use an adaptive step-size ODE solver , sometimes show numerical instability ( i.e. , the underflow error of the step-size ) or their forward-pass inference can take a long time ( i.e. , too many steps ) in solving integral problems , e.g , a forward-pass time of 37.6 seconds of ODE-Net vs. 9.8 seconds of PR-Net in Table 2 . Several countermeasures have been proposed but it is unavoidable to solve integral problems ( Zhuang et al. , 2020 ; Finlay et al. , 2020 ; Daulbaev et al. , 2020 ) . To tackle the limitation , we propose the concept of partial differential equation-regularized neural network ( PR-Net ) to directly learn a hidden element , denoted h ( d , t ) at layer ( or time ) t ∈ [ 0 , T ] and dimension d ∈ Rm . Under general contexts , a PDE consists of i ) an initial condition at t = 0 , ii ) a boundary condition at a boundary location of the spatial domain Rm , and iii ) a governing equation describing ∂h ( d , t ) ∂t . As such , learning a PDE from data can be reduced to a regression-like problem to predict h ( d , t ) that meets its initial/boundary conditions and governing equation . In training our proposed PR-Net , h ( 0 ) is provided by an earlier feature extraction layer , which is the same as neural ODEs . However , an appropriate governing equation is unknown for downstream machine learning tasks . Therefore , we propose to train a regression model for predicting h ( d , t ) and its governing equation simultaneously ( see Figure 1 ( b ) ) . In other words , neural ODEs directly learn a governing equation ( i.e. , ∂h ( t ) ∂t ) , whereas PR-Net learns a governing equation in conjunction with a regression model that conforms with the learned governing equation . The key advantage in our approach is that we can eliminate the necessity of solving integral problems — in neural ODEs , where we learn a governing equation only , solving integral problems is mandatory . Such forward and inverse problems ( i.e. , solving PDEs for h ( d , t ) and identifying governing equations , respectively ) arise in many important computational science problems and there have been many efforts applying machine learning/deep learning techniques to those problems ( e.g. , in earth science ( Reichstein et al. , 2019 ; Bergen et al. 
, 2019 ) and climate science ( Rolnick et al. , 2019 ) ) . Recently , physics-informed or physics-aware approaches ( Battaglia et al. , 2016 ; Chang et al. , 2017 ; de Bezenac et al. , 2018 ; Raissi et al. , 2019 ; Sanchez-Gonzalez et al. , 2018 ; Long et al. , 2018 ) have demonstrated that designing neural networks to incorporate prior scientific knowledge ( e.g. , by enforcing physical laws described in governing equations ( Raissi et al. , 2019 ) ) greatly helps avoiding over-fitting and improving generalizability of the neural networks . There also exist several approaches to incorporate various ideas of classical mechanics in designing neural-ODE-type networks ( Greydanus et al. , 2019 ; Chen et al. , 2020 ; Cranmer et al. , 2020 ; Zhong et al. , 2020 ; Lee & Parish , 2020 ) . However , all these works are interested in solving either forward or inverse problems whereas we solve the two different problem types at the same time for downstream tasks . The most similar existing work to our work is in ( Long et al. , 2018 ) . However , this work studied scientific PDEs and do not consider t as a continuous variable but use a set of discretized points of t. Compared to previous approaches , the proposed method has a distinct feature that forward and inverse problems are solved simultaneously with a continuous variable t. Due to this unique feature , the method can be applied to general machine learning downstream tasks , where we do not have a priori knowledge on governing equations , such as image classification . Our proposed PR-Net had the following characteristics : Pros . PR-Net trains a regression model that outputs a scalar element h ( d , t ) ( without solving any integral problems ) , and we can consider both d and t as continuous variables . Therefore , it is possible to construct flexible hidden dimension vectors . Pros . PR-Net does not require solving integral problems . As such , there is no numerical instability and their forward-pass time is much shorter than that of neural ODEs . Pros . By learning a governing equation , we can regularize the overall behavior of PR-Net . Cons . PR-Net sometimes requires a larger number of parameters than that of neural ODEs or conventional neural networks . 2 PARTIAL DIFFERENTIAL EQUATIONS . The key difference between ODEs and PDEs is that PDEs can have derivatives of multiple variables whereas ODEs should have only one such variable ’ s derivative . Therefore , our PDE-based method interprets both the layer of neural network and the dimension of hidden vector as continuous variables , which can not be done in neural ODEs . In our context , h ( d , t ) means a hidden scalar element at layer t ∈ R and dimension d ∈ Rm , e.g. , m = 1 if h ( t ) is a vector , m = 3 if h ( t ) is a convolutional feature map , and so on . h ( d,0 ) Governing Equation Solution h ( d , t ) Neural Network Figure 2 : A neural network predicts solution values at d , t given initial conditions , denoted h ( d , 0 ) for various d , and a governing equation . Table 1 : Two types of PDE problems related to our work Type Data What to infer Forward Problem - Initial condition- Governing equation Solution h ( d , t ) Inverse Problem - Solution h ( d , t ) - Initial condition Governing equation In this section , we first introduce the forward and inverse problems of PDEs in general contexts ( see Table 1 ) . Then , we extend them to design our proposed method in deep-learning contexts . 2.1 FORWARD PROBLEM OF PDES IN GENERAL CONTEXTS . 
The forward PDE problem in general contexts is to find a solution h(d, t), where d lies in a spatial domain R^m and t in a time domain [0, T], given i) an initial condition h(d, 0), ii) a boundary condition h(d_bc, t), where d_bc is a boundary location of the spatial domain R^m, and iii) a governing equation g (Raissi et al., 2019). We note that the boundary condition can be missing in some cases (Kim, 2018). The governing equation is typically in the following form, with particular choices of α_{i,j} (Raissi, 2018; Peng et al., 2020):

g(d, t; h) := h_t − (α_{0,0} + α_{1,0} h + α_{2,0} h² + α_{3,0} h³ + α_{0,1} h_d + α_{1,1} h h_d + α_{2,1} h² h_d + α_{3,1} h³ h_d + α_{0,2} h_{dd} + α_{1,2} h h_{dd} + α_{2,2} h² h_{dd} + α_{3,2} h³ h_{dd} + α_{0,3} h_{ddd} + α_{1,3} h h_{ddd} + α_{2,3} h² h_{ddd} + α_{3,3} h³ h_{ddd}),   (1)

where h_t = ∂h(d,t)/∂t, h_d = ∂h(d,t)/∂d, h_{dd} = ∂²h(d,t)/∂d², and h_{ddd} = ∂³h(d,t)/∂d³. We also note that g is identically zero for all PDEs, i.e., g(d, t; h) = 0. In many cases it is hard to solve the forward problem, and hence general-purpose PDE solvers do not exist. Nevertheless, one can use the following optimization to train a neural network f(d, t; θ) to approximate the solution function h(d, t), as shown in Fig. 2 (Raissi et al., 2019):

argmin_θ  L_I + L_B + L_G,   (2)
L_I := (1/N_I) Σ_d (f(d, 0; θ) − h(d, 0))²,   (3)
L_B := (1/N_B) Σ_{(d_bc, t)} (f(d_bc, t; θ) − h(d_bc, t))²,   (4)
L_G := (1/N_G) Σ_{(d, t)} g(d, t; f, θ)²,   (5)

where N_I, N_B, N_G are the numbers of training samples, L_I trains θ for the initial condition, L_B for the boundary condition, and L_G for the governing equation. Because the governing equation is always zero, we simply minimize its squared value. Note that i) f_t, f_d, f_{dd}, f_{ddd} can be constructed easily using the automatic differentiation implemented in TensorFlow or PyTorch, and ii) only h(d, 0) and h(d_bc, t), which are known a priori, are needed to train the parameters θ.

2.2 INVERSE PROBLEM OF PDES IN GENERAL CONTEXTS. The inverse problem is to find a governing equation given i) an initial condition h(d, 0) and ii) a solution function h(d, t) (Raissi, 2018). It learns the α_{i,j} in Eq. 1 with the following loss (where possible, reference solutions are used as well):

argmin_{α_{i,j}}  (1/N_G) Σ_{(d, t)} g(d, t; h)².

Given a solution function h and its partial derivative terms, we train α_{i,j} by minimizing this objective. Note that here h is known, so the objective is defined with h rather than f, unlike Eq. 5. The optimal solution for α_{i,j} is sometimes not unique; however, no trivial solutions, e.g., α_{i,j} = 0 for all i, j, exist for the inverse problem.
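The sketch below shows the mechanics of the physics-informed objective in Eqs. (2)-(5) with PyTorch autograd. It is our illustration, not the paper's code: for brevity we fix a known toy governing equation h_t + h·h_d = 0 (one particular setting of the α_{i,j}), omit the boundary term L_B, and use assumed network sizes and collocation points.

```python
import torch

# f(d, t; theta): a small MLP taking (d, t) and returning a scalar.
net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))

def pinn_loss(d0, h0, d_col, t_col):
    # L_I: fit the initial condition, f(d, 0) ~ h(d, 0).
    t0 = torch.zeros_like(d0)
    li = ((net(torch.cat([d0, t0], -1)) - h0) ** 2).mean()
    # L_G: squared residual of the governing equation at collocation points,
    # with f_t and f_d from automatic differentiation.
    d_col = d_col.requires_grad_(True)
    t_col = t_col.requires_grad_(True)
    f = net(torch.cat([d_col, t_col], -1))
    ft = torch.autograd.grad(f.sum(), t_col, create_graph=True)[0]
    fd = torch.autograd.grad(f.sum(), d_col, create_graph=True)[0]
    lg = ((ft + f * fd) ** 2).mean()        # residual of h_t + h*h_d = 0
    return li + lg

d0 = torch.linspace(-1, 1, 64).unsqueeze(-1)
loss = pinn_loss(d0, -torch.sin(torch.pi * d0),
                 torch.rand(256, 1) * 2 - 1, torch.rand(256, 1))
loss.backward()                              # ready for an optimizer step
print(float(loss))
```

Making the α_{i,j} themselves trainable tensors inside the residual would turn this forward solver into the joint forward-inverse setup PR-Net uses, where the governing equation is learned alongside the regression model.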
The paper proposes the method of neural PDEs as an improvement over neural ODEs. Specifically, a neural PDE treats both the layer and the hidden dimension as continuous variables of a PDE. The new ingredient of neural PDEs relative to neural ODEs is essentially solving PDE inverse problems (learning a PDE from data), a well-studied topic in the computational mathematics and engineering community, and the way the PDE is learned (by embedding the PDE and initial condition into the loss function via automatic differentiation) is the physics-informed neural network (PINN) approach proposed in [Raissi et al., JCP, 2019]. The experiments show that, compared to neural ODEs, neural PDEs achieve comparable accuracy with less forward-pass inference time, but these experiments are not convincing enough.
SP:6357b56f8b11f6eb3ccd152460b4aff5ab9ff6d4
Combining Ensembles and Data Augmentation Can Harm Your Calibration
1 INTRODUCTION. Many success stories in deep learning (Krizhevsky et al., 2012; Sutskever et al., 2014) are in restricted settings where predictions are only made for inputs similar to the training distribution. In real-world scenarios, neural networks can face truly novel data points during inference, and in these settings it can be valuable to have good estimates of the model's uncertainty. For example, in healthcare, reliable uncertainty estimates can prevent over-confident decisions for rare or novel patient conditions (Dusenberry et al., 2019).

We highlight two recent trends that obtain state-of-the-art results on uncertainty and robustness benchmarks. Ensemble methods are a simple approach to improving a model's calibration and robustness (Lakshminarayanan et al., 2017). The same network architecture optimized with different initializations can converge to different functional solutions, leading to decorrelated prediction errors. By averaging predictions, ensembles can rule out individual mistakes (Lakshminarayanan et al., 2017; Ovadia et al., 2019). Additional work has gone into efficient ensembles such as MC-dropout (Gal and Ghahramani, 2016), BatchEnsemble, and its variants (Wen et al., 2020; Dusenberry et al., 2020; Wenzel et al., 2020). These methods significantly improve calibration and robustness while adding few parameters to the original model.

Data augmentation is an approach that is in principle orthogonal to ensembles, encoding additional priors in the form of invariant feature transformations. Intuitively, data augmentation lets the model train on more data, encouraging it to capture certain invariances of its inputs and outputs; data augmentation may also produce data closer to an out-of-distribution target task. It has been a key factor driving state-of-the-art results: for example, Mixup (Zhang et al., 2018; Thulasidasan et al., 2019a), AugMix (Hendrycks et al., 2020), and test-time data augmentation (Ashukha et al., 2020).

A common wisdom in the community suggests that ensembles and data augmentation should combine naturally. For example, the majority of uncertainty models in vision with strong performance are built upon baselines leveraging standard data augmentation (He et al., 2016; Hendrycks et al., 2020) (e.g., random flips, cropping); Hafner et al. (2018) cast data augmentation as an explicit prior for Bayesian neural networks, treating it as beneficial when ensembling; and Hendrycks et al. (2020) highlight further improved results for AugMix when combined with Deep Ensembles (Hansen and Salamon, 1990; Krogh and Vedelsby, 1995). However, we find that the complementary benefits of data augmentation and ensembles are not universally realized. Section 3.1 illustrates the poor calibration of combining ensembles (MC-dropout, BatchEnsemble and Deep Ensembles) with Mixup on CIFAR: the model outputs excessively low confidence. Motivated by this pathology, we investigate in more detail why this happens and propose a method to resolve it.

Contributions. In contrast to prior work, which finds individually that ensembles and Mixup improve calibration, we find that combining ensembles and Mixup consistently degrades calibration performance across three ensembling techniques.

Contact: ywen@utexas.edu. Code: https://github.com/google/edward2/tree/master/experimental/marginalization_mixup.
From a detailed analysis, we identify a compounding under-confidence: the soft labels in Mixup introduce a negative confidence bias that hinders its combination with ensembles. We further find this to be true for other label-based strategies such as label smoothing. Finally, we propose CAMixup to correct this bias, which pairs well with ensembles. CAMixup produces new state-of-the-art calibration on CIFAR-10/100 (e.g., 0.4% and 2.3% on CIFAR-10 and CIFAR-10C), building on Wide ResNet 28-10 with competitive accuracy (e.g., 97.5% and 89.8%), and on ImageNet (1.5%), building on ResNet-50 with competitive accuracy (77.4%).

2 BACKGROUND ON CALIBRATION, ENSEMBLES AND DATA AUGMENTATION.

2.1 CALIBRATION. Uncertainty estimation is critical, but ground truth for measuring its quality is difficult to obtain. Fortunately, calibration error, which assesses how reliably a model forecasts its predictions over a population, helps address this. Let (Ŷ, P̂) denote the class prediction and associated confidence (predicted probability) of a classifier.

Expected Calibration Error (ECE): One notion of miscalibration is the expected difference between confidence and accuracy (Naeini et al., 2015): E_{P̂}[ |P(Ŷ = Y | P̂ = p) − p| ]. ECE approximates this by binning the predictions in [0, 1] into M equally-spaced intervals and taking a weighted average of each bin's accuracy/confidence difference. Let B_m be the set of examples in the m-th bin, whose predicted confidences fall in the interval ((m−1)/M, m/M]. Bin B_m's accuracy and confidence are

Acc(B_m) = (1/|B_m|) Σ_{x_i ∈ B_m} 1(ŷ_i = y_i),   Conf(B_m) = (1/|B_m|) Σ_{x_i ∈ B_m} p̂_i,   (1)

where ŷ_i and y_i are the predicted and true labels and p̂_i is the confidence for example x_i. Given n examples, ECE is Σ_{m=1}^{M} (|B_m|/n) |Acc(B_m) − Conf(B_m)|.

2.2 ENSEMBLES. Aggregating the predictions of multiple models into an ensemble is a well-established strategy to improve generalization (Hansen and Salamon, 1990; Perrone and Cooper, 1992; Dietterich, 2000).

BatchEnsemble: BatchEnsemble takes a network architecture and shares its parameters across ensemble members, adding only a rank-1 perturbation to each layer in order to decorrelate member predictions (Wen et al., 2020). For a given layer, define the weight matrix shared among K ensemble members as W ∈ R^{m×d}. A tuple of trainable vectors r_k ∈ R^m and s_k ∈ R^d is associated with each ensemble member k. The weight matrix of ensemble member k in BatchEnsemble is

W'_k = W ◦ F_k, where F_k = r_k s_k^T ∈ R^{m×d},   (2)

where ◦ denotes the element-wise product. Applying rank-1 perturbations via r and s adds few additional parameters to the overall model. We use an ensemble size of 4 in all experiments.

MC-Dropout: Gal and Ghahramani (2016) interpret Dropout (Srivastava et al., 2014) as an ensemble model, leading to its use for uncertainty estimation by sampling multiple dropout masks at test time and ensembling the resulting predictions. We use an ensemble size of 20 in all experiments.

Deep Ensembles: Composing an ensemble of models, each trained from a different random initialization, provides diverse predictions (Fort et al., 2019) which have been shown to outperform strong baselines on uncertainty estimation tasks (Lakshminarayanan et al., 2017). We use an ensemble size of 4 in all experiments.
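Before moving on, here is a direct implementation of the binned ECE from Sec. 2.1: predictions are bucketed by confidence into M equal intervals and the weighted |Acc − Conf| gap is averaged. The inputs below are random stand-ins we generate for illustration; the binning-by-floor is the standard approximation of the ((m−1)/M, m/M] intervals.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=15):
    conf = probs.max(axis=1)                 # predicted confidence p_hat
    pred = probs.argmax(axis=1)              # predicted class y_hat
    correct = (pred == labels).astype(float)
    bins = np.minimum((conf * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for m in range(n_bins):
        mask = bins == m
        if mask.any():
            gap = abs(correct[mask].mean() - conf[mask].mean())
            ece += mask.mean() * gap         # |B_m| / n weighting
    return ece

rng = np.random.default_rng(6)
logits = rng.normal(size=(1000, 10))
probs = np.exp(logits) / np.exp(logits).sum(1, keepdims=True)
labels = rng.integers(0, 10, 1000)
print(f"ECE = {expected_calibration_error(probs, labels):.3f}")
```

Tracking the signed per-bin gap (Acc − Conf) instead of its absolute value yields the reliability-diagram variant used in Fig. 2 below.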
In this work , we focus on the interaction between data augmentation strategies and BatchEnsemble , MC-Dropout , and deep ensembles . Other popular ensembling approaches leverage weight averaging such as Polyak-Ruppert ( Ruppert , 1988 ) , checkpointing ( Huang et al. , 2017 ) , and stochastic weight averaging ( Izmailov et al. , 2018 ) to collect multiple sets of weights during training and aggregate them to make predictions with only a single set . 2.3 DATA AUGMENTATION . Data augmentation encourages a model to make invariant predictions under desired transformations which can greatly improve generalization performance . For example , in computer vision , random leftright flipping and cropping are de-facto approaches ( He et al. , 2016 ) . We highlight two state-of-the-art techniques which we study . Mixup : Mixup ( Zhang et al. , 2018 ) manipulates both the features and the labels in order to encourage linearly interpolating predictions . Given an example ( xi , yi ) , Mixup applies x̃i = λxi + ( 1− λ ) xj , ỹi = λyi + ( 1− λ ) yj . ( 3 ) Here , xj is sampled from the training dataset ( taken from the minibatch ) , and λ ∼ Beta ( a , a ) for a fixed hyperparameter a > 0 . Mixup was shown to be effective for generalization and calibration of deep neural networks ( Zhang et al. , 2018 ; Thulasidasan et al. , 2019b ) . Recent work has investigated why Mixup improves generalization ( Guo et al. , 2018 ; Shimada et al. , 2019 ) and adversarial robustness ( Beckham et al. , 2019 ; Pang et al. , 2020 ; Mangla et al. , 2020 ) . Given Mixup ’ s simplicity , many extensions have been proposed with further improvements ( Yun et al. , 2019 ; Berthelot et al. , 2019 ; Verma et al. , 2019 ; Roady et al. , 2020 ; Chou et al. , 2020 ) . AugMix : Searching or sampling over a set of data augmentation operations can lead to significant improvement on both generalization error and calibration ( Cubuk et al. , 2019b ; a ) . AugMix ( Hendrycks et al. , 2020 ) applies a sum of augmentations , each with random weighting , with a Jensen-Shannon consistency loss to encourage similarity across the augmentations . AugMix achieves state-of-the-art calibration across in- and out-of-distribution tasks . Let O be the set of data augmentation operations and k be the number of AugMix iterations . AugMix samples w1 , . . . , wk ∼ Dirichlet ( a , . . . , a ) for a fixed hyperparameter a > 0 and op1 , . . . , opk from O . Given an interpolation parameter m , sampled from Beta ( a , a ) , the augmented input x̃augmix is : x̃augmix = mxorig + ( 1−m ) xaug , xaug = k∑ i=1 wiopi ( xorig ) . ( 4 ) 3 MIXUP-ENSEMBLE PATHOLOGY . We seek to understand the effect of data augmentations on ensembles . In particular , we hope to verify the hypothesis of compounding improvements when combining the seemingly orthogonal techniques of data augmentation and ensembles . To our surprise , we find that augmentation techniques can be detrimental to ensemble calibration . 3.1 THE SURPRISING MISCALIBRATION OF ENSEMBLES WITH MIXUP . Ensembles are the most known and simple approaches to improving calibration ( Ovadia et al. , 2019 ; Lakshminarayanan et al. , 2017 ) , and Thulasidasan et al . ( 2019b ) showed that Mixup improves calibration in a single network . Motivated by this , Fig . 1 applies Mixup to each ensemble member on CIFAR-10/CIFAR-100 with WideResNet 28-10 ( Zagoruyko and Komodakis , 2016 ) . Here , we searched over Mixup ’ s optimal hyperparameter α ( Eq . 
3) and found that α = 1 gives the best result, which corroborates the finding in Zhang et al. (2018). All data points in Fig. 1 are averaged over 5 random seeds. Figs. 1a and 1b demonstrate improved test accuracy (red: ensembles without Mixup; blue: ensembles with Mixup). However, shifting focus to the calibration error in Figs. 1c and 1d, it is evident that combining Mixup with ensembles leads to worse calibration (red to blue). This is counterintuitive, as we would expect Mixup, which improves the calibration of individual models (Thulasidasan et al., 2019a), to also improve the calibration of their ensemble. Fig. 1 confirms this pattern across BatchEnsemble (BE), MC-dropout (MC), and deep ensembles (DE). This pathology also occurs on ImageNet, as seen in Table 1.

Why do Mixup ensembles degrade calibration? To investigate this in more detail, Fig. 2 plots a variant of reliability diagrams (DeGroot and Fienberg, 1983) for BatchEnsemble. We bin the predictions into M = 15 equally spaced intervals based on their confidence (softmax probabilities) and compute the difference between the average accuracy and the average confidence, as in Eq. 1, for each bin. Fig. 2 tracks this difference over varying confidence levels. A positive difference (Acc − Conf) implies under-confidence with respect to the true frequencies; negative implies over-confidence; and zero implies perfect calibration. The backbone model in Fig. 2 is BatchEnsemble with an ensemble size of 4 (we found this consistent for MC-Dropout and Deep Ensembles as well). The figure presents 4 methods: Single, a vanilla WideResNet 28-10; MixupSingle, a WideResNet 28-10 trained with Mixup; BatchEnsemble, a vanilla BatchEnsemble WideResNet 28-10; and MixupBE, a BatchEnsemble WideResNet 28-10 trained with Mixup. Fig. 2 shows that only models trained with Mixup have positive (Acc − Conf) values on the test set, which suggests that Mixup encourages under-confidence. The Mixup ensemble's under-confidence is also greater in magnitude than that of the individual Mixup models. This suggests that Mixup ensembles suffer from compounding under-confidence, producing worse calibration for the ensemble than for the individual Mixup models. This is contrary to the intuition that ensembling always improves calibration.

To further visualize this issue, Fig. 8 in Appendix C investigates the confidence (softmax probability) surface of deep ensembles and Mixup trained on a toy dataset consisting of 5 clusters, each with a different radius. We ensemble 4 independently trained copies of 3-layer MLPs. The deep ensemble's predictive confidence over the entire input space is plotted in Fig. 8c: the resulting predictions are extremely confident except at the decision boundaries, and the deep ensemble still displays high confidence in the area nearest the origin, which should have a lower confidence level. On the other hand, Fig. 8d shows that the Mixup ensemble is only confident in a very constrained area around the training clusters, leading to an overall under-confident classifier, which confirms our postulation of compounding under-confidence.
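For reference, the Mixup step of Eq. (3) is only a few lines; applying it independently to each member's batches is the Mixup-ensemble setup whose compounding under-confidence Fig. 2 illustrates. The sketch below is a minimal version of ours (the batch shapes and α are illustrative), pairing each example with a shuffled partner and interpolating both inputs and one-hot labels with λ ~ Beta(α, α).

```python
import torch

def mixup_batch(x, y, num_classes, alpha=1.0):
    lam = torch.distributions.Beta(alpha, alpha).sample()
    idx = torch.randperm(x.size(0))                 # shuffled partners x_j, y_j
    x_tilde = lam * x + (1 - lam) * x[idx]          # interpolate inputs
    y1 = torch.nn.functional.one_hot(y, num_classes).float()
    y_tilde = lam * y1 + (1 - lam) * y1[idx]        # soft labels
    return x_tilde, y_tilde

x = torch.randn(8, 3, 32, 32)                       # e.g. a CIFAR-10 batch
y = torch.randint(0, 10, (8,))
x_t, y_t = mixup_batch(x, y, num_classes=10)
print(x_t.shape, y_t.sum(dim=1))                    # soft labels still sum to 1
```

The soft targets are what inject the negative confidence bias: a model fitting them perfectly never outputs probability 1 on mixed examples, and averaging several such under-confident members compounds the effect.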
This work analyses the interaction, with regard to calibration performance, between data-augmentation strategies such as MixUp and model ensembles. The authors note how strategies such as MixUp and label smoothing, which reduce a single model's over-confidence, degrade calibration performance when such models are combined into an ensemble. Specifically, each technique taken individually improves calibration by reducing over-confidence; in combination, however, they lead to under-confident models and, therefore, worse calibration. Based on this analysis, the authors provide a simple technique which yields SOTA calibration performance on CIFAR-10, CIFAR-10-C, CIFAR-100, CIFAR-100-C, and ImageNet. The authors propose to dynamically enable and disable MixUp based on whether the model is over- or under-confident on a particular class, as judged on a validation dataset.
SP:8079cb72ef8db9b5ab9275770ade605746840832
Do Deeper Convolutional Networks Perform Better?
1 INTRODUCTION. Traditional statistical learning theory argues that over-parameterized models will overfit the training data and thus generalize poorly to unseen data (Hastie et al., 2001). This is explained through the bias-variance tradeoff: as model complexity increases, so will variance, and thus more complex models will generalize poorly. Modern deep learning models, however, have been able to achieve state-of-the-art test accuracy by using an increasing number of parameters (Krizhevsky et al., 2012; Simonyan & Zisserman, 2015; He et al., 2016). In fact, while over-parameterized neural networks have enough capacity to interpolate randomly labeled training data (Zhang et al., 2017), in practice training often leads to interpolating solutions that generalize well. To reconcile this apparent conflict, Belkin et al. (2019a) proposed the double descent risk curve, where beyond the interpolation threshold, the risk decreases as model complexity increases. In neural networks, model complexity has thus far mainly been analyzed by varying network width. Indeed, in line with double descent, Yang et al. (2020), Nakkiran et al. (2020), and Belkin et al. (2019a) demonstrated that increasing width beyond the interpolation threshold while holding depth constant can decrease test loss. However, model complexity in neural networks can also be increased through depth. In this work, we study the effect of depth on test performance while holding network width constant. In particular, we focus on analyzing the effect of increasing depth in convolutional networks. These networks form the core of state-of-the-art models used for image classification and serve as a prime example of a network with layer constraints.

In this paper we answer the following question: What is the role of depth in convolutional networks? In contrast to what has been shown for increasing model complexity through width, we demonstrate that the test performance of convolutional networks worsens when increasing network depth beyond a critical point, suggesting that double descent does not happen through depth. Figure 1 demonstrates the difference between increasing width and depth in ResNets (He et al., 2016) trained on CIFAR10. In particular, Figure 1a shows that increasing width leads to a decrease in test error even when training accuracy is 100%. This effect is captured by the double descent curve. On the other hand, Figure 1b demonstrates that training ResNets of increasing depth but fixed width leads to an increase in test error. Since network depth is a form of model complexity, this behavior contradicts what is expected based on double descent. It is therefore critical to carefully analyze and understand this phenomenon.

The main contributions of our work are as follows: 1. We conduct a range of experiments in the classification setting on CIFAR10 and ImageNet32 using ResNets, fully-convolutional networks, and convolutional neural tangent kernels, and consistently demonstrate that test performance worsens beyond a critical depth (Section 3). In particular, in several settings, we observe that the test accuracy of convolutional networks is even worse than that of fully connected networks as depth increases. 2. To gain intuition for this phenomenon we analyze linear neural networks. We demonstrate that increasing depth in linear neural networks with layer constraints (e.g.
convolutional networks or Toeplitz networks) leads to a decrease in the Frobenius norm and stable rank of the resulting linear operator. This implies that increasing depth leads to poor generalization when solutions of lower Frobenius norm (e.g. solutions learned by linear fully connected networks) do not generalize (Section 4). 3. Against conventional wisdom, our findings indicate that increasing depth does not always lead to better generalization. Namely, our results provide evidence that the driving force behind the success of deep learning is not the depth of the models, but rather their width.

2 RELATED WORK. We begin with a discussion of recent works analyzing the role of depth in convolutional networks (CNNs). Yang et al. (2020) study the bias-variance decomposition of deep CNNs and show that as depth increases, bias decreases and variance increases. This work observes that generally the magnitude of bias is greater than that of variance, and thus overall risk decreases. However, the focus of their analysis of depth is not on the interpolating regime. In fact, they posit that it is possible for deeper networks to have increased risk. We extend their experimental methodology for training ResNets and demonstrate that, indeed, deeper networks have increased risk. Neyshabur (2020) studied the role of convolutions, but focused on the benefit of sparsity in weight sharing. Their work analyzed the effect of depth on fully-convolutional networks, but only considered models of two depths. Urban et al. (2017) analyzed the role of depth in student-teacher CNNs, specifically by training shallow CNNs to fit the logits of an ensemble of deep CNNs. This differs from our goal of understanding the effect of depth on CNNs trained from scratch on CIFAR10; furthermore, the ensemble of CNNs they consider only has eight convolutional layers, which is much smaller than the deep ResNets we consider in our experiments. Xiao et al. (2018) provide initial evidence that the performance of a CNN may degrade with depth; however, it is unclear whether this phenomenon is universal across CNNs used in practice or simply an artifact of their specific initialization designed to train deep CNNs. In fact, Xiao et al. (2020) establish that the convolutional neural tangent kernel (CNTK) solution approaches that of the neural tangent kernel (NTK) as depth increases. In our work, we analyze the generalization of the CNTK as a function of depth in Section 3.3. We show that as depth increases, test error monotonically decreases and then increases. Lastly, Figure 4a of Xiao et al. (2018) and Figures 2a,b of Xiao et al. (2020) provide examples of accuracy worsening with increasing depth in CNNs, but we demonstrate this phenomenon systematically across a number of settings. Other works have aimed to understand the role of depth in CNNs by characterizing implicit regularization in over-parameterized deep CNNs. Radhakrishnan et al. (2019) characterized the inductive bias of over-parameterized autoencoders and demonstrated that with sufficient depth, these networks become locally contractive around training examples. Zhang et al. (2020) similarly studied the role of depth in autoencoders in the more restrictive setting of a single training example. Nguyen & Hein (2018) studied optimization in deep CNNs and showed that increasing depth increases representational power, while increasing width smooths the optimization landscape.
While each of these works identified forms of implicit regularization which occur with depth in CNNs, they did not provide an explicit connection to generalization in CNNs used for classification, which is the focus of our work. On the other hand, previous works studying generalization via double descent have primarily focused on over-parameterization through increasing width. In particular, Belkin et al. (2019a) and Nakkiran et al. (2020) demonstrated that double descent occurs when increasing the width of neural networks trained on MNIST (LeCun et al., 1998) and CIFAR10, respectively. Several works demonstrated double descent theoretically (Hastie et al., 2019; Belkin et al., 2019b; Mitra, 2019; Muthukumar et al., 2020; Bibas et al., 2019; Bartlett et al., 2020), but analyzed linear or shallow non-linear models with an increasing number of features. Our work performs an empirical analysis similar to that of Nakkiran et al. (2020), but on the impact of depth instead of width in CNNs, thereby identifying contrasting behaviors between the two different ways of increasing model complexity.

3 EMPIRICAL EVIDENCE IN NON-LINEAR CLASSIFIERS. We now present our main set of experiments demonstrating that the test accuracy of convolutional networks decreases when increasing depth past a critical threshold. We begin with a demonstration of this phenomenon for fully-convolutional networks applied to CIFAR10 and ImageNet32. We then demonstrate that this phenomenon also holds for ResNets applied to CIFAR10. Lastly, we show that this phenomenon occurs for the convolutional neural tangent kernel (CNTK) on subsets of CIFAR10. Our training methodology is outlined in Appendix C.

3.1 IMAGE CLASSIFICATION WITH FULLY-CONVOLUTIONAL NETWORKS. To understand the role of depth in convolutional networks, we begin with a simplified model of a convolutional network, which we call the Fully-Conv Net. The architecture of a Fully-Conv Net of depth d and width w for a classification problem with c classes is depicted in Figure 9 of the Appendix and consists of the following layers (a code sketch follows below): • A convolutional layer with stride 1, 3 input filters, and w output filters, followed by batch norm (Ioffe & Szegedy, 2015) and a LeakyReLU activation (Xu et al., 2015). • d − 1 convolutional layers with stride 1, w input filters, and w output filters, each followed by batch norm and LeakyReLU activation. • 1 convolutional layer with stride 1, w input filters, and c output filters. This is followed by an average pool of each of the output filters to produce a c-dimensional prediction. Crucially, this network depends only on convolutional layers, a nonlinear activation, and batch norm; it does not depend on other components commonly found in deep learning architectures such as residual connections, dropout, downsampling, or fully connected layers. We note that this model is not designed to necessarily perform well, but rather to isolate and understand the effect of increasing the number of convolutional layers. We trained the Fully-Conv Net on 2, 5, and 10 classes from CIFAR10 (Krizhevsky, 2009). All experiments were performed using 5 random seeds to reduce the impact of random initialization. Models were trained using Adam (Kingma & Ba, 2015) with learning rate 10^-4 for 2000 epochs, and we selected the model with the best training accuracy over the course of training.
We used the cross-entropy loss, and down-sampled images to 16 × 16 resolution to reduce the computational burden. See Appendix C for a list of all classes used. The resulting train and test accuracies are shown in Figure 2. As expected, as depth increases, training accuracy reaches 100%. However, beyond a critical depth threshold, the test accuracy begins to degrade sharply. Furthermore, the value of this critical depth appears to increase as the number of training classes increases. In addition to CIFAR10, we also applied the Fully-Conv Net to subsets of ImageNet32 (Chrabaszcz et al., 2017), which is ImageNet downsampled to size 32 × 32. We again trained on 2, 5, and 10 classes, using the same training procedure as for CIFAR10. Training and test accuracies for ImageNet32 are shown in Figure 3. Again, we observe that as depth increases past a critical value, test performance degrades. Remarks. When training to classify between 2 and 5 classes, the test accuracy continues to decrease even when increasing depth past the interpolation threshold, i.e. even after achieving 100% training accuracy. This is in contrast to double descent, where increasing model complexity beyond the interpolation threshold leads to an increase in test accuracy. Interestingly, as depth increases, the test accuracy approaches that of a fully connected network. While the Fully-Conv Nets were at or before the interpolation threshold for the 10-class setting in Figures 2 and 3, Figure 4 demonstrates that a similar decrease in test accuracy also occurs after the interpolation threshold for wider models which can interpolate the data.
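For concreteness, the following sketch shows how the Fully-Conv Net described above could be assembled in PyTorch. It is our own illustrative reconstruction from the text (the class name and the kernel size of 3 are assumptions, since the paper's Figure 9 is not reproduced here), not the authors' released code.

```python
import torch.nn as nn

class FullyConvNet(nn.Module):
    """Depth-d, width-w fully-convolutional classifier for c classes (Section 3.1)."""
    def __init__(self, d, w, c, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        layers = [nn.Conv2d(3, w, kernel_size, stride=1, padding=pad),
                  nn.BatchNorm2d(w), nn.LeakyReLU()]
        for _ in range(d - 1):                      # d-1 width-preserving conv blocks
            layers += [nn.Conv2d(w, w, kernel_size, stride=1, padding=pad),
                       nn.BatchNorm2d(w), nn.LeakyReLU()]
        layers.append(nn.Conv2d(w, c, kernel_size, stride=1, padding=pad))
        self.features = nn.Sequential(*layers)
        self.pool = nn.AdaptiveAvgPool2d(1)         # average-pool each output filter

    def forward(self, x):
        return self.pool(self.features(x)).flatten(1)  # c-dimensional prediction
```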
This paper answers a fundamental question: what is the role of depth in convolutional networks? Specifically, the authors present an empirical analysis of the impact of depth on generalization in CNNs. Experiments on CIFAR10 and ImageNet32 demonstrate that test performance degrades beyond a critical depth. My detailed comments are as follows.
SP:4f6e5411e0d5a017100c74a3842fed4ff323d883
Machine Learning Algorithms for Data Labeling: An Empirical Evaluation
1 INTRODUCTION. Supervised learning is the most commonly used machine learning paradigm. There are problems with supervised learning, and with machine learning in general. The first problem is that machine learning requires huge amounts of data. Secondly, supervised learning needs labels in the data. In a case study performed with industry, several labeling issues were found (Anonymous, 2020a). A recent systematic literature review investigated which types of machine learning algorithms exist to make labeling easier, specifically the use of semi-supervised learning and active learning for automatic labeling of data (Anonymous, 2020b). From those results the authors concluded which active and semi-supervised learning algorithms were the most popular and which data types they can be used on. However, even though there has been work done on active and semi-supervised learning, these learning paradigms are still very new for many companies and consequently seldom used. Using a simulation study, we evaluated seven semi-supervised and active learning algorithms on six datasets of different types: numerical, text, and image data. Implementing a Bayesian Bradley-Terry model, we ranked the algorithms according to accuracy and effort. The contribution of this paper is to provide a taxonomy of automatic labeling algorithms and an empirical evaluation of the algorithms in the taxonomy across two dimensions: Performance, how accurate the algorithm is, and Effort, how much manual work has to be done by the data scientist. The remainder of this paper is organized as follows. In the upcoming section we provide an overview of semi-supervised and active learning algorithms and how they work. In Section 3 we describe our study: how we performed the simulations, what datasets and source code we used, and what kind of metrics we used to evaluate performance, effort, and applicability. In Section 4 we provide the results from the simulation study, and finally, we interpret the results and conclude the paper in Section 5.

2 BACKGROUND. 2.1 ACTIVE LEARNING. Suppose a large unlabeled dataset is to be used for training a classification algorithm. Active Learning (AL) poses query strategies on the data and selects points to be labeled according to a measure of informativeness called a query strategy. After the instances have been labeled with the help of the oracle, the machine learning algorithm is trained with this newly labeled data. If the learner thinks that the accuracy of the algorithm is too low and that the accuracy can be improved, the learner will request new labels and/or replace some of the old ones. The algorithm will then be re-trained and evaluated once again. This procedure continues iteratively until some stopping criterion has been reached. As a reference on AL, the reader is recommended to consult sources such as (Settles, 2012). We now present the query strategies used in this text. Uncertainty Sampling is, according to (Anonymous, 2020b), the most commonly used active learning strategy. The idea of this approach is to query the instances that we are least certain about and then label these. Uncertainty sampling strategies are very commonly used and work especially well for probabilistic algorithms such as logistic regression, according to (Lewis & Catlett, 1994).
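As a minimal illustration of uncertainty sampling (the least-confidence variant, one of several possible uncertainty measures), the sketch below queries the pool instances whose predicted class probability is lowest. It is our own sketch rather than code from the paper, and it assumes a scikit-learn-style classifier exposing `predict_proba`.

```python
import numpy as np

def uncertainty_query(clf, X_pool, n_queries=1):
    """Return indices of the pool instances the classifier is least certain about."""
    proba = clf.predict_proba(X_pool)           # shape: (n_pool, n_classes)
    confidence = proba.max(axis=1)               # probability of the predicted class
    return np.argsort(confidence)[:n_queries]    # lowest confidence first
```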
(Lewis & Catlett, 1994) concluded that uncertainty sampling can outperform random sampling by evaluating and comparing the two on a text classification dataset, and (Joshi et al., 2009) concluded the same on image data by comparing the accuracy scores of two uncertainty-sampling-based methods and random sampling. Query-by-Committee (QBC) means that we train a committee of classifiers and then query the instances on which the committee disagrees. We add the newly labeled instance to the labeled training data, retrain the algorithm on the new training set, and repeat this procedure. What is important here is the way we measure disagreement. Some ways to measure disagreement are entropy, vote entropy, and KL divergence (Settles, 2012). QBC is relatively straightforward to implement and applicable to any basic machine learning model. (Seung et al., 1992) and (Freund et al., 1997) were the first to formulate QBC. In Seung et al. (1992) they use Monte Carlo simulation to show that QBC can outperform random sampling. Random sampling is when the learner chooses to query the instances randomly and not according to any strategy. If a learner does not choose the query strategy carefully with respect to the data and machine learning algorithm, then active learning might not outperform choosing instances randomly.

2.2 SEMI-SUPERVISED LEARNING. Semi-supervised machine learning is a class of machine learning algorithms that utilizes both labeled and unlabeled data. Semi-supervised algorithms are trained on both the unlabeled and the labeled data, and in some cases they even outperform supervised classifiers. For more information on semi-supervised learning we refer the reader to (Zhu, 2005). According to (Anonymous, 2020b), the second most popular semi-supervised learning algorithms are the graph-based algorithms. The idea of these algorithms is to build a graph from the training data. These graphs contain both labeled and unlabeled instances. Let each pair (x_i, y_i) and (x_j, y_j) represent a vertex and its corresponding label. Let the edge weight w_ij represent the weight of the edge between vertex i and vertex j. The larger w_ij becomes, the more similar the labels of both vertices are. The question is then how to compute the weight w_ij. Two examples of graph-based methods are Label Propagation and Label Spreading (Zha et al., 2009). Label Propagation was first introduced in (Zhu & Ghahramani, 2002) and presented as follows. Given labeled and unlabeled data, define the weight matrix w_ij. The probabilistic transition matrix T is defined as the probability of jumping from vertex j to vertex i:

T_ij := P(j → i) = w_ij / Σ_{k=1}^{l+u} w_kj.

The matrix Y is called the label matrix, and its ith row represents the label probability distribution of vertex x_i. The label propagation algorithm consists of the following steps: 1. All nodes propagate for one step: Y ← TY. 2. Row-normalize Y. 3. Clamp the labeled data. Repeat steps 1-2 until Y converges. (Zhu & Ghahramani, 2002) evaluate the label propagation algorithm on both synthetic data and real-world classification data (Hull, 1994) by comparing its error rates to those of kNN with k = 1. The results show that label propagation can outperform kNN when the number of labeled instances is greater than 40. Label propagation algorithms have been used and evaluated in image annotation (Tang et al., 2011; Chua et al., 2009) and text classification (Pawar et al.
, 2016). Label Spreading was first introduced in (Zhou et al., 2004). Given a partially labeled dataset with c different labels, let F be the set of all n × c matrices with non-negative entries, and let F ∈ F. Each entry F_ij in F depends on how we label x_i. We have that y_i = argmax_{j≤c} F_ij. Define a matrix Y ∈ F such that Y_ij = 1 if y_i = j and Y_ij = 0 otherwise. The label spreading algorithm is: 1. Define the affinity matrix W with W_ij = w_ij if i ≠ j and W_ii = 0. 2. Define the matrix S = D^(−1/2) W D^(−1/2), where D is a diagonal matrix with D_ii = Σ_k W_ik. 3. Iterate F(t+1) = αSF(t) + (1 − α)Y until convergence, with α ∈ (0, 1). 4. Label x_i as y_i = argmax_{j≤c} F*_ij, where F* is the limit of the sequence {F(t)}. (Zhou et al., 2004) evaluate the label spreading algorithm on a toy dataset, images in the form of handwritten digits, and text classification, and conclude that it outperforms the baseline models kNN with k = 1 and SVM with RBF kernel.

2.3 THE BRADLEY-TERRY MODEL. The Bradley-Terry model (Bradley & Terry, 1952; Cattelan, 2012) is one of the most commonly used models for the analysis of paired comparison data between two objects i and j, for i, j = 1, ..., n. The comparison can be done by several subjects s = 1, ..., S, and the total number of possible paired comparisons is equal to n(n − 1)/2. Let y_s = (y_{s,1,2}, ..., y_{s,n−1,n}) be the vector of outcomes of all paired comparisons; we will assume that the outcomes are independent. Let µ_i ∈ R, i = 1, 2, ..., n denote a latent "strength" of the algorithm being compared. If the paired comparison can have only two outcomes and ties are randomly resolved, the probability of i beating j can be represented by:

P[i beats j] = e^{µ_i} / (e^{µ_i} + e^{µ_j}).

Reducing the expression to a logistic regression (Bradley & Terry, 1952): P(i over j) = logit^{−1}(µ_i − µ_j). By estimating the latent strength variable µ, we can infer the probability of one algorithm beating another and use this information to rank the algorithms.

3 RESEARCH METHOD. In this section we present the details of the datasets that we used for our simulations, the experimental conditions, and the algorithms used. The goal of this study is to show in detail how machine learning algorithms can be used to help with data labeling and to provide an in-depth comparison of how these different algorithms perform on different types of data. To achieve this we performed an empirical evaluation of seven different active learning and semi-supervised learning algorithms and evaluated them on six datasets under different conditions. The main research questions that we use to evaluate the machine learning algorithms are the following. • RQ1: How can we rank different active learning and semi-supervised learning algorithms in terms of accuracy? • RQ2: How does the ranking of these algorithms change with the amount of manual labeling effort invested prior to applying these methods?

3.1 SIMULATIONS. As recognized in (Anonymous, 2020b), co-training/multi-view learning algorithms are the most popular, but they are based on the assumption that we can view an instance from multiple views. Graph-based algorithms are the second most common type of semi-supervised learning algorithm. Uncertainty sampling methods are very popular active learning query strategies, followed by QBC. Furthermore, we have included two graph-based algorithms, Label Spreading and Label Propagation.
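To make Section 2.2's propagation step concrete before describing the experimental setup, here is a minimal NumPy sketch of the Y ← TY iteration with row normalization and clamping. It is an illustrative reconstruction of (Zhu & Ghahramani, 2002), not the paper's experimental code, and it assumes a precomputed weight matrix W and one-hot labels for the first l instances.

```python
import numpy as np

def label_propagation(W, Y_labeled, l, n_iter=1000, tol=1e-6):
    """W: (n, n) symmetric weights; Y_labeled: (l, c) one-hot labels for first l points."""
    n, c = W.shape[0], Y_labeled.shape[1]
    T = W / W.sum(axis=0, keepdims=True)   # column-normalize: T_ij = w_ij / sum_k w_kj
    Y = np.full((n, c), 1.0 / c)           # uniform initial guess for unlabeled points
    Y[:l] = Y_labeled
    for _ in range(n_iter):
        Y_new = T @ Y                                  # step 1: propagate one step
        Y_new /= Y_new.sum(axis=1, keepdims=True)      # step 2: row-normalize
        Y_new[:l] = Y_labeled                          # step 3: clamp labeled data
        if np.abs(Y_new - Y).max() < tol:
            break
        Y = Y_new
    return Y.argmax(axis=1)                            # predicted labels
```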
Both methods are easy to implement in Python: • Label Spreading using k-NN is implemented with w_ij = kNN, k = 7, α = 0.2 (Pyt, b). • Label Spreading using RBF is implemented with w_ij = exp(−γ|x_i − x_j|²), γ = 20, α = 0.2. • Label Propagation using k-NN is implemented with w_ij = kNN, k = 7 (Pyt, a). • Label Propagation using RBF is implemented with w_ij = exp(−γ|x_i − x_j|²), γ = 20. • Random Sampling, Uncertainty Sampling, and QBC: each dataset was randomly split into training and test sets, and into unlabeled and labeled sets. 80% of the data was allocated for training and 20% for testing. As a stopping criterion we chose to stop after 50 instances had been queried.

We chose six benchmark datasets to be used in our experiments: two numerical datasets, two text datasets, and two image datasets. Due to the size of some datasets and the limited time and computational resources available, we had to reduce the number of images used in our experiments. However, we made sure we used the same ratio between the classes to get a fair estimate. • Image data: – Cifar-10: This dataset originally contains 60000 32x32 colored images that can be divided into ten classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck (cif). – Digits: This dataset contains 1797 samples of 8x8 images containing one digit each. There are ten classes that represent which digit is contained in each image (dig). • Text data: – Fake and true news: This is a dataset containing 44594 instances and 5 features. The features are: "title", the title of the news article; "text", the text of the article; "subject", the article subject; and a column representing the label classes, "False" or "Truthful". From this dataset we only extracted the "text" column and used it as a feature to predict the labels. The dataset can be downloaded from Kaggle (fak). – 20news: This dataset contains 18846 instances divided into 20 classes that describe 20 different types of news (20n). • Numerical data: – Iris: This dataset is a classic example of multi-class classification. It contains 150 instances across three classes (iri). – Wine: The wine dataset is also a classic example of multi-class classification. It contains 178 instances across three classes (win).

For each dataset we ran each iteration ten times with different random seeds. Furthermore, the only parameter that we change is the number of labeled instances. To answer RQ2 we have to vary the number of instances in the dataset that are already labeled. In our experiments we chose 10% to represent a small amount of manual effort required and 50% for a large amount of effort required. From each iteration we logged the F1-score to measure the accuracy of our predicted labels.
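The graph-based configurations listed above map directly onto scikit-learn's implementations. A sketch of how they might be instantiated with the stated hyperparameters is shown below, using the Digits dataset in the 10%-labels setting; this is our own illustration of the setup, not the authors' source code.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.semi_supervised import LabelPropagation, LabelSpreading

X, y = load_digits(return_X_y=True)
rng = np.random.default_rng(0)
y_train = y.copy()
unlabeled = rng.random(len(y)) > 0.10        # keep ~10% of labels (small-effort setting)
y_train[unlabeled] = -1                      # -1 marks unlabeled points in scikit-learn

models = {
    "spreading_knn": LabelSpreading(kernel="knn", n_neighbors=7, alpha=0.2),
    "spreading_rbf": LabelSpreading(kernel="rbf", gamma=20, alpha=0.2),
    "propagation_knn": LabelPropagation(kernel="knn", n_neighbors=7),
    "propagation_rbf": LabelPropagation(kernel="rbf", gamma=20),
}
for name, model in models.items():
    model.fit(X, y_train)
    acc = (model.transduction_[unlabeled] == y[unlabeled]).mean()
    print(f"{name}: accuracy on unlabeled points = {acc:.3f}")
```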
This paper aims to evaluate the performance of seven automated labeling algorithms in terms of accuracy. The authors conducted a set of experiments on six datasets from different domains under two typical settings where 10% and 50% of the labels in the datasets are available. Experimental results show that the label spreading algorithm with KNN performs better in the aggregated results, while the active learning algorithms QBC and uncertainty sampling perform better when 10% of labels are available.
SP:975e5116fe8c4160a6e0c875044d95ee569208a9
IALE: Imitating Active Learner Ensembles
1 INTRODUCTION. The high performance of deep learning on various tasks, from computer vision (Voulodimos et al., 2018) to natural language processing (NLP) (Barrault et al., 2019), also comes with disadvantages. One of the main drawbacks is the large amount of labeled training data deep models require. Obtaining such data is expensive and time-consuming and often requires domain expertise. Active Learning (AL) is an iterative process where during every iteration an oracle (e.g. a human) is asked to label the most informative unlabeled data sample(s). In pool-based AL all data samples are available (while most of them are unlabeled). In batch-mode pool-based AL, we select unlabeled data samples from the pool in acquisition batches greater than 1. Batch-mode AL decreases the number of AL iterations required and makes it easier for an oracle to label the data samples (Settles, 2009). As a selection criterion we usually need to quantify how informative a label for a particular sample is. Well-known criteria include heuristics such as model uncertainty (Gal et al., 2017; Roth & Small, 2006; Wang & Shang, 2014; Ash et al., 2020), data diversity (Sener & Savarese, 2018), query-by-committee (Beluch et al., 2018), and expected model change (Settles et al., 2008). As we ideally label the most informative data samples at each iteration, the performance of a machine learning model trained on a labeled subset of the available data selected by an AL strategy is better than that of a model trained on a randomly sampled subset of the data.

Besides the above, several other data-driven AL approaches have emerged in the recent past. Some model the data distributions (Mahapatra et al., 2018; Sinha et al., 2019; Tonnaer, 2017; Hossain et al., 2018) as a pre-processing step, or similarly use metric-based meta-learning (Ravi & Larochelle, 2018; Contardo et al., 2017) as a clustering algorithm. Others focus on the heuristics and predict the best suitable one using a multi-armed bandits approach (Hsu & Lin, 2015). Recent approaches that use reinforcement learning (RL) directly learn strategies from data (Woodward & Finn, 2018; Bachman et al., 2017; Fang et al., 2017). Instead of pre-processing data or dealing with the selection of a suitable heuristic, they aim to learn an optimal selection sequence on a given task. However, these pure RL approaches not only require a huge number of samples, they also do not resort to existing knowledge, such as potentially available AL heuristics. Moreover, training the RL agents is usually very time-intensive as they are trained from scratch. Hence, imitation learning (IL) helps in settings where very little labeled training data and a potent algorithmic expert are available. IL aims to train, i.e., clone, a policy that transfers the expert to the related few-data problem. While IL mitigates some of the previously mentioned issues of RL, current approaches (including that of Liu et al. (2018)) are still limited with respect to their algorithmic expert and their acquisition size, i.e., some only pick one sample per iteration, and were so far only evaluated on NLP tasks. We propose a batch-mode AL approach that enables larger acquisition sizes and allows us to make use of a more diverse set of experts from different heuristic families, i.e., uncertainty, diversity, expected model-change, and query-by-committee.
Our policy extends previous work (see Section 2) by learning at which stage of the AL cycle which of the available strategies performs best. We use Dataset Aggregation (DAGGER) to train a robust policy and apply it to other problems from similar domains (see Section 3). We show that we can (1) train a policy on image datasets such as MNIST, Fashion-MNIST, Kuzushiji-MNIST, and CIFAR-10, (2) transfer the policy between them, and (3) transfer the policy between different classifier architectures (see Section 4).

2 RELATED WORK. In addition to the AL approaches for traditional ML models (Settles, 2009), approaches applicable to deep learning have been proposed (Gal et al., 2017; Sener & Savarese, 2018; Beluch et al., 2018; Settles et al., 2008; Ash et al., 2020). Below we discuss AL strategies that are trained on data. Generative Models. Explicitly modeled data distributions capture the informativeness that can be used to select samples based on diversity. Sinha et al. (2019) propose a pool-based semi-supervised AL method where a discriminator discriminates between labeled and unlabeled samples using the latent representations of a variational autoencoder. The representations are used to pick data points that are most diverse and representative (Tonnaer, 2017). Mirza & Osindero (2014) use a conditional generative adversarial network to generate samples with different characteristics, from which the most informative are selected using the uncertainty measured by a Bayesian neural network (Kendall & Gal, 2017; Mahapatra et al., 2018). Such approaches are similar to ours (as they capture dataset properties), but instead we model the dataset implicitly and infer a selection heuristic via imitation. Metric Learning. Metric learners such as Ravi & Larochelle (2018) use a set of statistics calculated from the clusters of un-/labeled samples in a Prototypical Network's (Snell et al., 2017) embedding space, or learn to rank (Li et al., 2020) large batches. Such statistics use distances (e.g. Euclidean distance) or are otherwise converted into class probabilities. Two MLPs predict either a quality or diversity query selection using backpropagation and the REINFORCE gradient (Mnih & Rezende, 2016). However, while they rely on statistics over the classifier's embedding and explicitly learn two strategies (quality and diversity), we use a richer state and are not constrained to specific strategies. Reinforcement Learning (RL). The AL cycle can be modeled as a sequential decision making problem. Woodward & Finn (2018) propose a stream-based AL agent based on memory-augmented neural networks, where an LSTM-based agent learns to decide whether to predict a class label or to query the oracle. Matching Network (Bachman et al., 2017) extensions allow for pool-based AL. Fang et al. (2017) use Deep Q-Learning in a stream-based AL scenario for sentence segmentation. In contrast to them, we consider batch-mode AL with acquisition sizes ≥ 1, and work in a pool- instead of a stream-setting. While Bachman et al. (2017) propose a strategy to extend the RL-based approaches to a pool setting, they still do not work on batches. Instead, we allow batches of arbitrary acquisition sizes. Fan et al. (2018) propose a meta-learning approach that trains a student-teacher pair via RL. The teacher also optimizes data teaching by selecting labeled samples from a minibatch that lets the student learn faster.
In contrast, our method learns to select samples from an unlabeled pool, i.e., in a missing-target scenario. The teacher-student analogy is related; however, the objective, method, and available (meta-)data for learning a good teacher (policy) are different. Multi-armed Bandit (MAB). Baram et al. (2004) treat the online selection of AL heuristics from an ensemble as the choice in a multi-armed bandit problem. COMB uses the known EXP4 algorithm to solve it, and ranks AL heuristics according to a semi-supervised maximum entropy criterion (Classification Entropy Maximization) over the samples in the pool. Building on this, Hsu & Lin (2015) learn to select an AL strategy for an SVM classifier, and use an importance-weighted accuracy extension to EXP4 that better estimates each AL heuristic's performance improvement, as an unbiased estimator of the test accuracy. Furthermore, they reformulate the MAB setting so that the heuristics are the bandits and the algorithm selects the one with the largest performance improvement, in contrast to COMB's formulation where the unlabeled samples are the bandits. Chu & Lin (2016) extend Hsu & Lin (2015) to a setting where the selection of AL heuristics is done through a linear weighting, aggregating experience over multiple datasets. They adapt the semi-supervised reward scheme from Hsu & Lin (2015) to work with their deterministic queries. In our own work, we learn a unified AL policy instead of selecting from a set of available heuristics. This allows our policy to interpolate between batches of samples proposed by single heuristics and, furthermore, to exploit the classifier's internal state, so that it is especially suited for deep learning models. Imitation Learning (IL). Liu et al. (2018) propose a neural network that learns an AL strategy based on the classifier's loss on a validation set using Dataset Aggregation (DAGGER) (Ross et al., 2011). One of their key limitations is that only a single sample is labeled during every acquisition. As the DL model is trained from scratch after every acquisition, this results in a very slow active learning process, and expensive expert time is requested less efficiently (Kirsch et al., 2019; Sener & Savarese, 2018). Hence, we extend this work to batch-mode AL using a top-k-like loss function, and select more samples to increase the suitability to deep learning and its efficiency (as we do not retrain after each sample). We also incorporate recent ideas (Ash et al., 2020) to extend the state and imitate multiple AL heuristics. This is computationally more efficient and leads to better results.

3 IALE: IMITATING AN ENSEMBLE OF ACTIVE LEARNERS. IALE learns an AL sampling strategy for similar tasks from multiple experts in a pool-based setting. We train a policy with data consisting of states (which include an encoding of the labeled data samples) and best expert actions (i.e., samples selected for labeling) collected over the AL cycles. The policy is then used on a similar (but different) task. To see states that are unlikely to be produced by the experts, DAGGER (Ross et al., 2011) collects a large set of states and actions over AL iterations. The policy network is trained on all the previous states and actions after each iteration.

3.1 BACKGROUND. In pool-based AL we train a model M on a dataset D by iteratively labeling data samples.
Initially, M is trained on a small amount of labeled data Dlab randomly sampled from the dataset. The rest of the data is considered the unlabeled data pool Dpool, i.e., D = Dlab ∪ Dpool. From that point onwards, during the AL iterations, a subset Dsel is selected from Dpool by using an acquisition function a(M, Dpool). The data is labeled and then removed from Dpool and added to Dlab. The size of Dsel is given by the acquisition size acq (> 1 for batch-mode AL). The AL cycle continues until a labeling budget of B is reached. M is retrained after each acquisition to evaluate the performance boost with respect to the increased labeled dataset only (and not the additional training time). The acquisition function a is a heuristic that uses the trained model M to decide which of the data samples in Dpool are most informative. For deep AL, popular heuristics include uncertainty-based MC-Dropout (Gal et al., 2017), query-by-committee-based Ensembles (Beluch et al., 2018), data-diversity-based CoreSet (Sener & Savarese, 2018), gradient-based BADGE (Ash et al., 2020), and soft-max-based Confidence- or Entropy-sampling (Wang & Shang, 2014). MC-Dropout uses a Monte-Carlo inference scheme based on a dropout layer to approximate the model's predictive uncertainty (Gal & Ghahramani, 2016). The heuristic (Gal et al., 2017) then uses these values to select the most uncertain samples. Ensembles (Beluch et al., 2018) model predictive uncertainty using a committee of N classifiers initialized with different random seeds. However, while at inference time we need to run only N forward passes per sample (compared to MC-Dropout performing two dozen or more Monte-Carlo passes), the training of N − 1 additional deep models can become prohibitively expensive in many use-cases. CoreSet (Sener & Savarese, 2018) aims to select diverse samples by solving the k-centers problem on the classifier's embeddings. This involves minimizing the distance between each of the unlabeled data samples and its nearest labeled sample. BADGE uses (via pseudo-labels) the magnitudes of the gradients in a batch to select samples by uncertainty, and the gradient directions together with a k-means++ clustering to select samples by diversity. Soft-max-based heuristics (Confidence- and Entropy-sampling) use predictive uncertainty and are computationally lightweight, at lower AL performance (Gal & Ghahramani, 2016; Ash et al., 2020) (Confidence selects the samples with the lowest class probability and Entropy the ones with the largest entropy of their probability distribution).
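The pool-based batch-mode cycle described in this subsection can be summarized in a few lines; the sketch below pairs it with the entropy acquisition function mentioned above. It is our own schematic of the generic AL loop (the function names and the `fit`/`predict_proba` interface are assumptions), not IALE itself.

```python
import numpy as np

def entropy_acquisition(model, X_pool, acq):
    """Score pool samples by predictive entropy and return the top-acq indices."""
    p = model.predict_proba(X_pool)
    entropy = -(p * np.log(p + 1e-12)).sum(axis=1)
    return np.argsort(entropy)[-acq:]              # highest entropy last

def al_cycle(model, X_lab, y_lab, X_pool, y_oracle, acq=10, budget=100):
    """Generic batch-mode pool-based AL loop: acquire, label, retrain."""
    while len(y_lab) < budget and len(X_pool) > 0:
        model.fit(X_lab, y_lab)                        # retrain after each acquisition
        idx = entropy_acquisition(model, X_pool, acq)  # select most informative batch
        X_lab = np.concatenate([X_lab, X_pool[idx]])   # the "oracle" labels the batch
        y_lab = np.concatenate([y_lab, y_oracle[idx]])
        X_pool = np.delete(X_pool, idx, axis=0)        # remove from the pool
        y_oracle = np.delete(y_oracle, idx, axis=0)
    return model.fit(X_lab, y_lab)
```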
In this work, an imitation learning (IL) approach is proposed to imitate multiple active learning algorithms, in order to combine their advantages and learn a better active learning algorithm. The main idea is to treat the active learning algorithms as experts and utilize the DAGGER algorithm for imitation learning. The proposed approach is evaluated on MNIST, Fashion-MNIST, and Kuzushiji-MNIST, showing that the learned active learner outperforms baseline active learners and, meanwhile, is transferable to other datasets.
SP:4063187f00775058a7d47814b0062648d88f0b8d
Neural networks with late-phase weights
1 INTRODUCTION. Neural networks trained with SGD generalize remarkably well on a wide range of problems. A classic technique to further improve generalization is to ensemble many such models (Lakshminarayanan et al., 2017). At test time, the predictions made by each model are combined, usually through a simple average. Although largely successful, this technique is costly both during learning and inference. This has prompted the development of ensembling methods with reduced complexity, for example by collecting models along an optimization path generated by SGD (Huang et al., 2017), by performing interpolations in weight space (Garipov et al., 2018), or by tying a subset of the weights over the ensemble (Lee et al., 2015; Wen et al., 2020). An alternative line of work explores the use of ensembles to guide the optimization of a single model (Zhang et al., 2015; Pittorino et al., 2020). We join these efforts and develop a method that fine-tunes the behavior of SGD using late-phase weights: late in training, we replicate a subset of the weights of a neural network and randomly initialize them in a small neighborhood. Together with the stochasticity inherent to SGD, this initialization encourages the late-phase weights to explore the loss landscape. As the late-phase weights explore, the shared weights accumulate gradients. After training, we collapse this implicit ensemble into a single model by averaging in weight space. Building upon recent work on ensembles with shared parameters (Wen et al., 2020), we explore a family of late-phase weight models involving multiplicative interactions (Jayakumar et al., 2020). We focus on low-dimensional late-phase models that can be ensembled with negligible overhead. Our experiments reveal that replicating the ubiquitous batch normalization layers (Ioffe & Szegedy, 2015) is a surprisingly simple and effective strategy for improving generalization (we provide code to reproduce our experiments at https://github.com/seijin-kobayashi/late-phase-weights). Furthermore, we find that late-phase weights can be combined with stochastic weight averaging (Izmailov et al., 2018), a complementary method that has been shown to greatly improve generalization.

2 METHODS AND MODELS. 2.1 LEARNING WITH LATE-PHASE WEIGHTS. Late-phase weights. To apply our learning algorithm to a given neural network model f_w, we first specify its weights w in terms of two components, base and late-phase (θ and φ, resp.). The two components interact according to a weight interaction function w = h(θ, φ). Base weights are learned throughout the entire training session, and until time step T0 both θ and φ are learned and treated on equal grounds. At time step T0, a hyperparameter of our algorithm, we introduce K late-phase components Φ = {φ_k}_{k=1}^K, which are learned together with θ until the end. This procedure yields a late-phase ensemble of K neural networks with parameter sharing: reusing the base weights θ, each late-phase weight φ_k defines a model with parameters w_k = h(θ, φ_k). Late-phase weight averaging at test time. Our ensemble, defined by the K late-phase weight configurations in Φ, is kept only during learning. At test time, we discard the ensemble and obtain a single model by averaging over the K late-phase weight components.
That is, given some input pattern x, we generate a prediction y(x) using the averaged model, computed once after learning:

y(x) = f_w(x),  w ≡ h(θ, (1/K) Σ_{k=1}^{K} φ_k).  (1)

Hence, the complexity of inference is independent of K, and equivalent to that of the original model. Late-phase weight initialization. We initialize our late-phase weights from a reference base weight. We first learn a base parameter φ_0 from time step t = 0 until T0, treating φ_0 as any other base parameter in θ. Then, at time t = T0, each configuration φ_k is initialized in the vicinity of φ_0. We explore perturbing φ_0 using a symmetric Gaussian noise model,

φ_k = φ_0 + σ_0 Z(φ_0) ε_k,  (2)

where ε_k is a standard normal variate of appropriate dimension and σ_0 is a hyperparameter controlling the noise amplitude. We allow for a φ_0-dependent normalization factor, which we set so as to ensure layerwise scale-invariance; this helps in finding a single σ_0 that governs the initialization of the entire network. More concretely, for a given neural network layer l with weights φ_0^(l) of dimension D^(l), we choose Z(φ_0^(l)) = √D^(l) / ‖φ_0^(l)‖. Our perturbative initialization (Eq. 2) is motivated by ongoing studies of the nonconvex, high-dimensional loss functions that arise in deep learning. Empirical results and theoretical analyses of simplified models point to the existence of dense clusters of connected solutions with a locally-flat geometry (Hochreiter & Schmidhuber, 1997a) that are accessible by SGD (Huang et al., 2017; Garipov et al., 2018; Baldassi et al., 2020). Indeed, the eigenspectrum of the loss Hessian evaluated at weight configurations found by SGD reveals a large number of directions of low curvature (Keskar et al., 2017; Chaudhari et al., 2019; Sagun et al., 2018). For not yet completely understood reasons, this appears to be a recurring phenomenon in overparameterized nonlinear problems (Brown & Sethna, 2003; Waterfall et al., 2006). Based on these observations, we assume that the initial parameter configuration φ_0 can be perturbed in a late phase of learning without leading to mode hopping across the different models w_k. While mode coverage is usually a sought-after property when learning neural network ensembles (Fort et al., 2020), here it would preclude us from taking the averaged model at the end of learning (Eq. 1). Stochastic learning algorithm. Having decomposed our weights into base and late-phase components, we now present a stochastic algorithm which learns both θ and Φ. Our algorithm works in the standard stochastic (minibatch) neural network optimization setting (Bottou, 2010). Given a loss function L(D, w) = (1/|D|) Σ_{x∈D} L(x, w) to be minimized with respect to the weights w on a set of data D, at every round we randomly sample a subset M from D and optimize instead the stochastic loss L(M, w). However, in contrast to the standard setting, in late stages of learning (t > T0) we simultaneously optimize K parameterizations W := {w_k | w_k = h(θ, φ_k)}_{k=1}^K, instead of one. We proceed by iterating over W. At each step k, we sample a minibatch M_k and immediately update the late-phase weights φ_k, while accumulating gradients over the shared base weights θ. Such gradient accumulation has been previously used when learning ensembles (Lee et al., 2015; Wen et al., 2020) and multi-task models (Rebuffi et al., 2017) with shared base parameters.
A single iteration is finally concluded by changing the base weights in the direction opposite to the accumulated gradient. We scale the accumulated gradient by γ_θ; setting γ_θ = 1/K recovers the original step size in θ, but other choices are possible. In particular, we find that a large γ_θ of unit size is in practice often tolerated, resulting in accelerated learning.

Algorithm 1: Late-phase learning
Require: Base weights θ, late-phase weight set Φ, dataset D, gradient scale factor γ_θ, loss L
Require: Training iteration t > T0
for 1 ≤ k ≤ K do
  M_k ← Sample minibatch from D
  Δθ_k ← ∇_θ L(M_k, θ, φ_k)
  φ_k ← U_φ(φ_k, ∇_{φ_k} L(M_k, θ, φ_k))
θ ← U_θ(θ, γ_θ Σ_{k=1}^{K} Δθ_k)

We summarize an iteration of our method in Algorithm 1, where the loss L(M, θ, φ) is now seen as a function of θ and φ. We opt for a general presentation using unspecified gradient-based update operators U_φ and U_θ. These operators can be set to optimizers of choice. For instance, our method might benefit from additional noise injection onto parameter updates (Welling & Teh, 2011). Furthermore, late-phase optimizers need not coincide with the optimizer used in the early phase. In our work we typically set U_φ and U_θ to a single step of SGD with Nesterov momentum (Nesterov, 2004), and explore Adam (Kingma & Ba, 2015) and plain SGD in a smaller set of experiments.

2.2 LATE-PHASE WEIGHT MODELS. As detailed next, we consider a number of distinct late-phase weight models in our experiments. In particular, we explore weight interaction functions h in which late-phase weights have low dimensionality, to avoid a large increase in complexity with the ensemble size K. To counteract this reduced dimensionality, we make extensive use of multiplicative base-late weight interactions. This design choice is motivated by the large expressive power of multiplicative interactions despite low dimensionality, which has been demonstrated in a wide range of settings (Jayakumar et al., 2020). Late-phase batch normalization layers. Batch normalization layers (BatchNorm; Ioffe & Szegedy, 2015) are a staple of current deep neural network models. Besides standardizing the activity of the layer they are applied to, BatchNorm units introduce a learnable multiplicative (scale) parameter γ and an additive (shift) parameter β. While being low-dimensional, these additional parameters have large expressive power: it has been shown that learning only γ and β while keeping the remaining weights frozen can lead to significantly lower loss than learning random subsets of other weights of matching dimensionality (Frankle et al., 2020; Mudrakarta et al., 2019). We take the scale and shift parameters of BatchNorm layers as our first choice of late-phase weights; the base weights are the remaining parameters of the model. Batch statistics are also individually estimated for each model in W. This late-phase weight parameterization is motivated by (i) the expressive power of γ and β discussed above, and by (ii) practical considerations, as BatchNorm layers are generally already present in feedforward neural network models, and are otherwise easy to implement efficiently. More concretely, let us consider an affine transformation layer l which maps an input vector r^(l−1) to θ_w^(l) r^(l−1) + θ_b^(l), where the early-phase weight matrix θ_w^(l) and bias vector θ_b^(l) are already standardized using the respective batch statistics.
For this standard layer, our model introduces a multiplicative interaction between base and late-phase weights, diag(γ^(l)) θ_w^(l), and an additive interaction between base and late-phase bias parameters, θ_b^(l) + β^(l). Late-phase rank-1 matrix weights. We also study a closely related late-phase weight model, where existing weight matrices (the base components, as before) are multiplied elementwise by rank-1 matrices (Wen et al., 2020). For a given affine layer l, we define a late-phase weight matrix with resort to a pair of learnable vectors, φ^(l) = u^(l) v^(l)T. Taking the Hadamard product with the base weight matrix yields the effective weights W^(l) = φ^(l) ◦ θ^(l). With this parameterization, we recover the ensemble proposed by Wen et al. (2020), except that here it is generated late in training using our perturbative initialization (Eq. 2). Unlike BatchNorm layers, which include the shift parameter, rank-1 late-phase weights interact in a purely multiplicative manner with base weights. We study this model since it is easy to implement in neural networks which do not feature BatchNorm layers, such as standard long short-term memories (LSTMs; Hochreiter & Schmidhuber, 1997b). Hypernetworks with late-phase weight embeddings. Additionally, we generalize the late-phase weight models described above using hypernetworks (Ha et al., 2017). A hypernetwork generates the parameters w of a given target neural network f_w based on a weight embedding. In our framework, we can use a hypernetwork to implement the interaction function w = h(θ, φ) directly, with parameters θ corresponding to base weights and embeddings φ to late-phase weights. We experiment with linear hypernetworks and use the same hypernetwork to produce the weights of multiple layers, following Savarese & Maire (2019); Ha et al. (2017); von Oswald et al. (2020). In this scheme, the weight embedding input specifies the target layer whose parameters are being generated. More specifically, the weight matrix for some layer l belonging to a group of layers g which share a hypernetwork is given by W^(g,l) = θ^(g) φ^(g,l), where θ^(g) and φ^(g,l) are appropriately-sized tensors. Sharing θ^(g) over a layer group g allows countering an increase in the overall number of parameters. We parameterize our hypernetworks such that the weight embedding vectors φ^(g,l) are small, and therefore cheap to ensemble. Late-phase classification layers. Finally, inspired by Lee et al. (2015), in classification experiments we take the weights of the last linear layer as late-phase weights by default. In modern neural network architectures these layers do not usually comprise large numbers of parameters, and our architecture explorations indicated that it is typically beneficial to ensemble them. We therefore include W^(L) in our late-phase weights φ, where W^(L) denotes the weights of the final layer L.
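As an illustration of the BatchNorm variant, the sketch below creates K perturbed copies of a layer's scale/shift parameters at time T0 (Eq. 2) and averages them back into a single set at test time (Eq. 1). This is a minimal PyTorch sketch of ours, not the released implementation; it handles only a single BatchNorm layer for clarity, and the values K=10 and σ_0=0.5 are illustrative.

```python
import torch

def init_late_phase(bn, K, sigma0):
    """Eq. 2: K perturbed copies of a BatchNorm layer's (gamma, beta)."""
    copies = []
    for _ in range(K):
        late = {}
        for name in ("weight", "bias"):              # gamma and beta
            phi0 = getattr(bn, name).detach()
            Z = phi0.numel() ** 0.5 / phi0.norm()    # layerwise scale-invariant factor
            late[name] = (phi0 + sigma0 * Z * torch.randn_like(phi0)).requires_grad_()
        copies.append(late)
    return copies

def average_late_phase(bn, copies):
    """Eq. 1: collapse the late-phase ensemble into a single model by averaging."""
    with torch.no_grad():
        for name in ("weight", "bias"):
            stacked = torch.stack([c[name] for c in copies])
            getattr(bn, name).copy_(stacked.mean(dim=0))

bn = torch.nn.BatchNorm2d(16)
copies = init_late_phase(bn, K=10, sigma0=0.5)   # train these copies after T0
average_late_phase(bn, copies)                   # single model at test time
```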
This work suggests a variant of ensembling that is more compute-efficient. Specifically, it involves forking an ensemble only in the late stage of training, and forming this ensemble via a "low-dimensional" family. That is, instead of maintaining independent networks, maintain only "low-rank"-style perturbations of the base network (for various instantiations of "low-rank").
SP:651166f4bdf2eb56689f790d3c697a43be974521
Multi-Agent Collaboration via Reward Attribution Decomposition
1 INTRODUCTION. In recent years, multi-agent deep reinforcement learning (MARL) has drawn increasing interest from the research community. MARL algorithms have shown super-human level performance in various games like Dota 2 (Berner et al., 2019), Quake 3 Arena (Jaderberg et al., 2019), and StarCraft (Samvelyan et al., 2019). However, the algorithms (Schulman et al., 2017; Mnih et al., 2013) are far less sample-efficient than humans. For example, in Hide and Seek (Baker et al., 2019), it takes agents 2.69−8.62 million episodes to learn a simple strategy of door blocking, while it takes humans only several rounds to learn this behavior. One of the key reasons for the slow learning is that the number of joint states grows exponentially with the number of agents. Moreover, many real-world situations require agents to adapt to new configurations of teams. This can be modeled as the ad hoc multi-agent reinforcement learning (Stone et al., 2010) (Ad-hoc MARL) setting, in which agents must adapt to different team sizes and configurations at test time. In contrast to the MARL setting, where agents can learn a fixed and team-dependent policy, in the Ad-hoc MARL setting agents must assess and adapt to the capabilities of others to behave optimally. Existing work in ad hoc team play either requires sophisticated online learning at test time (Barrett et al., 2011) or prior knowledge about teammate behaviors (Barrett and Stone, 2015). As a result, these methods do not generalize to complex real-world scenarios. Most existing works either focus on improving generalization towards different opponent strategies (Lanctot et al., 2017; Hu et al., 2020) or on simple ad-hoc settings like a varying number of test-time teammates (Schwab et al., 2018; Long et al., 2020). We consider a more general setting where test-time teammates may have different capabilities. The need to reason about different team configurations in Ad-hoc MARL results in an additional exponential increase (Stone et al., 2010) in representational complexity compared to the MARL setting.

In the situation of collaboration, one way to address the complexity of the ad hoc team play setting is to explicitly model and address how agents collaborate. In this paper, one key observation is that when collaborating with different agents, an agent changes her behavior because she realizes that the team could function better if she focuses on some of the rewards while leaving other rewards to other teammates. Inspired by this principle, we formulate multi-agent collaboration as a joint optimization over an implicit reward assignment among agents. Because the rewards are assigned differently for different team configurations, the behavior of an agent changes and adaptation follows. While solving this optimization directly requires centralization at test time, we make an interesting theoretical finding that each agent has a decentralized policy that is (1) approximately optimal for the joint optimization, and (2) only depends on the local configuration of other agents. This enables us to learn a direct mapping from the states of nearby agents (or the "observation" of agent i) to its Q-function using a deep neural network. Furthermore, this finding also suggests that the Q-function of agent i should be decomposed into two terms: Q_i^alone, which only depends on agent i's own state s_i, and Q_i^collab, which depends on nearby agents but vanishes if no other agents are nearby.
To enforce these semantics, we regularize Q_i^collab(s_i, ·) = 0 during training via a novel Multi-Agent Reward Attribution (MARA) loss. The resulting algorithm, Collaborative Q-learning (CollaQ), achieves a 40% improvement in win rate over state-of-the-art techniques on the StarCraft multi-agent challenge. We show that (1) the MARA loss is critical for strong performance and (2) both Q^alone and Q^collab are interpretable via visualization (a minimal code sketch of this decomposition appears at the end of Section 2.2 below). Furthermore, CollaQ agents achieve ad hoc team play without retraining or fine-tuning. We propose three tasks to evaluate ad hoc team play performance: at test time, (a) assign a new VIP unit whose survival matters, (b) swap different units in and out, and (c) add or remove units. Results show that CollaQ outperforms baselines by an average of 30% in all these settings. Related Work. The most straightforward way to train such a MARL task is to learn each agent's value function Q_i independently (IQL) (Tan, 1993). However, the environment then becomes non-stationary from the perspective of an individual agent, so this performs poorly in practice. Recent works, e.g., VDN (Sunehag et al., 2017), QMIX (Rashid et al., 2018), and QTRAN (Son et al., 2019), adopt centralized training with decentralized execution to address this problem. They write the joint value function as Q^π(s, a) = φ(s, Q_1(o_1, a_1), ..., Q_K(o_K, a_K)), where the form of φ differs across methods. These methods successfully use centralized training to alleviate the non-stationarity issue. However, none of them generalizes well to ad hoc team play, since the learned Q_i functions depend heavily on the existence of other agents. 2 COLLABORATIVE MULTI-AGENT REWARD ASSIGNMENT. Basic Setting. A multi-agent extension of the Markov decision process, called collaborative partially observable Markov games (Littman, 1994), is defined by a set of states S describing the possible configurations of all K agents, sets of possible actions A_1, ..., A_K, and sets of possible observations O_1, ..., O_K. At every step, each agent i chooses its action a_i according to a stochastic policy π_i : O_i × A_i → [0, 1]. The joint action a produces the next state via a transition function P : S × A_1 × ... × A_K → S. All agents share the same reward r : S × A_1 × ... × A_K → R, with a joint value function Q^π = E_{s_{t+1:∞}, a_{t+1:∞}}[R_t | s_t, a_t], where R_t = Σ_{j=0}^∞ γ^j r_{t+j} is the discounted return. In Sec. 2.1, we first model multi-agent collaboration as a joint optimization over a reward assignment: instead of acting based on the joint state s, each agent i acts independently on its own state s_i, following its own optimal value V_i, which is a function of the perceived reward assignment r_i. While the optimal perceived reward assignment r_i^*(s) depends on the joint state of all agents and requires centralization, in Sec. 2.2 we prove that there exists an approximately optimal solution r̂_i that depends only on the local observation s_i^local of agent i, thus enabling decentralized execution. Lastly, in Sec. 2.3, we distill the theoretical insights into a practical algorithm, CollaQ, by directly learning the compositional mapping s_i^local ↦ r̂_i ↦ V_i in an end-to-end fashion, while keeping the decomposition into self state and local observations. 2.1 BASIC ASSUMPTION. A naive approach to modeling multi-agent collaboration is to estimate a joint value function
V_joint := V_joint(s_1, s_2, ..., s_K) and to find the best action for agent i by maximizing V_joint given the current joint state s = (s_1, s_2, ..., s_K). However, this has three fundamental drawbacks: (1) V_joint generally requires an exponential number of samples to learn; (2) evaluating it requires a full observation of the states of all agents, which disallows decentralized execution, a key desideratum of multi-agent RL; and (3) for any environment or team change (e.g., teaming with different agents), V_joint needs to be relearned for all agents, which renders ad hoc team play impossible. Our CollaQ addresses these three issues with a novel theoretical framework that decouples the interactions between agents. Instead of using a V_joint that bundles all agent interactions together, we consider the underlying mechanism of how they interact: in a fully collaborative setting, agent i moves toward a state not only because that state is rewarding to agent i, but also because, from agent i's point of view, it is more rewarding to agent i than to the other agents in the team. This is the concept of the perceived reward of agent i. Each agent then acts independently following its own value function V_i, which is the optimal solution of the Bellman equation conditioned on the assigned perceived reward, and is a function of it. This naturally leads to collaboration. We build a mathematical framework to model such behaviors. Specifically, we make the following assumption on the behavior of each agent: Assumption 1. Each agent i has a perceived reward assignment r_i ∈ R_+^{|S_i||A_i|} that may depend on the joint state s = (s_1, ..., s_K). Agent i acts according to its own state s_i and its individual optimal value V_i = V_i(s_i; r_i) (with associated Q_i(s_i, a_i; r_i)), which is a function of r_i. Note that the perceived reward assignment r_i ∈ R_+^{|S_i||A_i|} is a non-negative vector containing the assigned scalar reward at each state-action pair (hence its length is |S_i||A_i|). We may equivalently write it as a function r_i(x, a) : S_i × A_i ↦ R, where x ∈ S_i and a ∈ A_i. Here x is a dummy variable that runs through all states of agent i, while s_i refers to its current state. Given the perceived reward assignments {r_i}, the values and actions of the agents become decoupled. Due to the fully collaborative nature, a natural choice of {r_i} is the optimal solution of the following objective J(r_1, r_2, ..., r_K), where r_e is the external reward of the environment, w_i ≥ 0 is the preference of agent i, and ∘ is the Hadamard (element-wise) product:

J(r_1, ..., r_K) := Σ_{i=1}^K V_i(s_i; r_i)   s.t.   Σ_{i=1}^K w_i ∘ r_i ≤ r_e   (1)

Note that the constraint ensures that the objective has a bounded solution. Without this constraint, we could simply take each perceived reward r_i to +∞, since each value function V_i(s_i; r_i) increases monotonically with respect to r_i. Intuitively, Eqn. 1 means that we "assign" the external rewards r_e optimally to the K agents as perceived rewards, so that their overall values are the highest. In the sparse-reward case, r_e(x, a) = 0 for most state-action pairs (x, a), and by Eqn. 1 the perceived rewards then satisfy r_i(x, a) = 0 there for every agent i. We therefore only focus on the nonzero entries of each r_i. Define M to be the number of state-action pairs with positive reward: M = Σ_{a_i ∈ A_i} 1{r_i(x, a_i) > 0}. Discarding zero entries, we can regard each r_i as an M-dimensional vector.
Finally, we define the reward matrix R = [r_1, ..., r_K] ∈ R^{M×K}. Clarification on Rewards. There are two kinds of rewards here: the external reward r_e and the perceived reward r_i of each agent. r_e is the environmental reward shared by all agents: r_e : S × A_1 × ... × A_K → R. Given this external reward and a specific reward assignment, each agent receives a perceived reward r_i that drives its behavior. If the reward assignment is properly defined and optimized, then all agents can act based on their perceived rewards to jointly maximize the shared external reward. 2.2 LEARN TO PREDICT THE OPTIMAL ASSIGNED REWARD r_i^*(s). The optimal reward assignment R^* of Eqn. 1, as well as its i-th column r_i^*, is a function of the joint state s = (s_1, s_2, ..., s_K). Once the optimization is done, each agent can obtain the best action a_i^* = argmax_{a_i} Q_i(s_i, a_i; r_i^*(s)) independently from the reconstructed Q-function. The formulation V_i(s_i; r_i) avoids learning a value function V_i(s) over statistically infeasible joint states. Since an agent acts solely based on r_i, ad hoc team play becomes possible if the correct r_i is assigned. However, issues remain. First, since each V_i is a convex function of r_i, Eqn. 1 maximizes a sum of convex functions under linear constraints, which is computationally hard. Furthermore, to obtain actions for each agent, we would need to solve Eqn. 1 at every step, which still requires centralization at test time and prevents decentralized execution. To overcome the optimization complexity and enable decentralized execution, we consider learning a direct mapping from the joint state s to the optimally assigned reward r_i^*(s). However, since s is a joint state, learning such a mapping can be as hard as modeling V_i(s). Fortunately, V_i(s_i; r_i(s)) is not an arbitrary function but the optimal value function satisfying the Bellman equation. Owing to this special structure, we can find an approximate assignment r̂_i for each agent i such that r̂_i depends only on a local observation s_i^local of the states of the other nearby agents observed by agent i: r̂_i(s) = r̂_i(s_i^local). At the same time, these approximate reward assignments {r̂_i} are approximately optimal for the joint optimization (Eqn. 1) with bounded error: Theorem 1. For all i ∈ {1, ..., K} and all s_i ∈ S_i, there exists a reward assignment r̂_i that (1) depends only on s_i^local and (2) is the i-th column of a feasible global reward assignment R̂ such that

J(R̂) ≥ J(R^*) − (γ^C + γ^D) R_max M K,   (2)

where C and D are constants related to the distances between agents and rewards (details in the Appendix). Since r̂_i depends only on the local observation of agent i (i.e., the agent's own state s_i and the states of nearby agents), it enables decentralized execution: for each agent i, the local observation is sufficient to act near-optimally. Limitation. One limitation of Theorem 1 is that the optimality gap of r̂_i depends heavily on the size of s_i^local. If the local observation of agent i covers more agents, the gap is smaller, but the cost of learning such a mapping is higher, since the mapping has more input states and becomes higher-dimensional. In practice, we found that letting the observation o_i of agent i cover s_i^local works sufficiently well, as shown in the experiments (Sec. 4).
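To make the decomposition concrete, the following is a minimal PyTorch sketch of a CollaQ-style agent. It shows only the Q_i = Q_i^alone + Q_i^collab split and a MARA-style regularizer that pushes Q_i^collab towards zero when the other agents are masked out of the observation; the layer sizes, the loss weight lam, the batch fields, and the construction of the masked observation o_alone are illustrative assumptions, not the paper's exact architecture (which also involves attention modules and a QMIX-style mixing network).

```python
import torch
import torch.nn as nn


class CollaQAgent(nn.Module):
    """Decomposed Q-network: Q_i(s_i, o_i) = Q_alone(s_i) + Q_collab(o_i)."""

    def __init__(self, state_dim, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.q_alone = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_actions))
        self.q_collab = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_actions))

    def forward(self, s_i, o_i):
        return self.q_alone(s_i) + self.q_collab(o_i)


def collaq_loss(agent, batch, lam=1.0):
    """TD loss on the full Q plus the MARA regularizer.

    batch["o_alone"] is assumed to hold the observation with all other
    agents masked out, so that Q_collab evaluated on it should vanish.
    """
    q = agent(batch["s"], batch["o"]).gather(1, batch["a"])  # a: (B, 1) long
    td = (q - batch["target"]).pow(2).mean()
    mara = agent.q_collab(batch["o_alone"]).pow(2).mean()  # Q_collab(s_i, .) -> 0
    return td + lam * mara
```

In this sketch the TD target would come from a standard target network; the point is only that the collaborative part of the Q-function is explicitly regularized to vanish in the absence of teammates, so that Q_alone carries all of the single-agent behavior.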
To address ad hoc team play, the authors propose a residual term for the Q-function that additionally accounts for the states of nearby agents. A novel MARA loss is applied to this residual term as a regularizer, achieving the reward assignment implicitly. The proposed CollaQ can easily be built on top of QMIX and trained end-to-end. CollaQ outperforms the other baselines on a variety of tasks in the ad hoc team play setting.
SP:6adf73371c97da34bca974dbffb5b7dd211b9e44
Statistical inference for individual fairness
1 INTRODUCTION. The problem of bias in machine learning systems is at the forefront of contemporary ML research. Numerous media outlets have scrutinized machine learning systems deployed in practice for violations of basic societal equality principles (Angwin et al., 2016; Dastin, 2018; Vigdor, 2019). In response, researchers have developed many formal definitions of algorithmic fairness along with algorithms for enforcing these definitions in ML models (Dwork et al., 2011; Hardt et al., 2016; Berk et al., 2017; Kusner et al., 2018; Ritov et al., 2017; Yurochkin et al., 2020). Despite this flurry of ML fairness research, the basic question of assessing the fairness of a given ML model in a statistically principled way remains largely unexplored. In this paper we propose a statistically principled approach to assessing the individual fairness (Dwork et al., 2011) of ML models. One of the main benefits of our approach is that it allows the investigator to calibrate the method; i.e., it allows the investigator to prescribe a Type I error rate. Passing a test with a guaranteed small Type I error rate is the usual standard of proof in scientific investigations because it guarantees that the results are reproducible (to a certain degree). This is also highly desirable in detecting bias in ML models because it allows us to certify whether an ML model will behave fairly at test time. Our method for auditing ML models abides by this standard. There are two main challenges in developing a hypothesis test for individual fairness. First, how do we formalize the notion of individual fairness in an interpretable null hypothesis? Second, how do we devise a test statistic and calibrate it so that auditors can control the Type I error rate? In this paper we propose a test motivated by the relation between individual fairness and adversarial robustness (Yurochkin et al., 2020). At a high level, our approach consists of two parts: 1. generating unfair examples: by an unfair example we mean an example that is similar to a training example but treated differently by the ML model. Such examples are similar to adversarial examples (Goodfellow et al., 2014), except that they are only allowed to differ from a training example in certain protected or sensitive ways. 2. summarizing the behavior of the ML model on unfair examples: we propose a loss-ratio based approach that is not only scale-free but also interpretable. For classification problems, we propose a variant of our test based on the ratio of error rates. 1.1 RELATED WORK. At a high level, our approach uses the difference between the empirical risk and the distributionally robust risk as a test statistic. The distributionally robust risk is the maximum risk of the ML model on similar training examples. Here, similarity is measured by a fair metric that encodes our intuition of which inputs should be treated similarly by the ML model. We note that DRO has been studied extensively in the recent literature (Duchi et al., 2016; Blanchet & Murthy, 2016; Hashimoto et al., 2018), though mostly outside the fairness context, with the exceptions of Yurochkin et al. (2020) and Xue et al. (2020). Yurochkin et al. (2020) focus on training fair or robust ML models rather than auditing them. Xue et al. (2020) also use the difference between the empirical and distributionally robust risks as a test statistic, but their test is only applicable to ML problems with finite feature spaces.
This limitation severely restricts the applicability of their test. Our test, in contrast, is suitable for ML problems with continuous feature spaces. We note that the technical exposition in Xue et al. (2020) depends on the finite feature space assumption, and in this work we develop a novel perspective on the problem that allows us to handle continuous feature spaces. 2 GRADIENT FLOW FOR FINDING UNFAIR EXAMPLES. In this section, we describe a gradient flow-based approach to finding the unfair examples that form the basis of our suite of inferential tools. Imagine an auditor assessing whether an ML model is fair or not. The auditor aims to detect violations of individual fairness in the ML model. Recall Dwork et al. (2011)'s definition of individual fairness. Let X ⊂ R^d and Y ⊂ R^d be the input and output spaces respectively, and let f : X → Y be the ML model to audit. The ML model f is individually fair if

d_y(f(x_1), f(x_2)) ≤ L_fair d_x(x_1, x_2) for all x_1, x_2 ∈ X   (2.1)

for some Lipschitz constant L_fair > 0. Here d_x and d_y are metrics on X and Y respectively. Intuitively, an individually fair ML model treats similar samples similarly, and the fair metric d_x encodes our intuition of which samples should be treated similarly. We point out that d_x(x_1, x_2) being small does not imply that x_1 and x_2 are similar in all aspects. Even if d_x(x_1, x_2) is small, x_1 and x_2 may differ substantially in certain attributes, e.g., protected/sensitive attributes. Before moving on, we comment on the choice of the fair metric d_x. This metric is picked by the auditor and reflects the auditor's intuition about what is fair and what is unfair for the ML task at hand. It can be provided by a subject expert (Dwork et al. (2011)'s original recommendation) or learned from data (a recent approach advocated by Ilvento (2019); Wang et al. (2019); Mukherjee et al. (2020)). Section 4 provides details on picking a fair metric in our empirical studies. To motivate our approach, we recall the distributionally robust optimization (DRO) approach to training individually fair ML models (Yurochkin et al., 2020). Let f : X → Y be an ML model and ℓ(f(x), y) : Z → R_+ be any smooth loss (e.g., the cross-entropy loss). To search for differential treatment by the ML model, Yurochkin et al. (2020) solve the optimization problem

max_{P : W(P, P_n) ≤ ε} ∫_Z ℓ(f(x), y) dP(z),   (2.2)

where W is the Wasserstein distance on probability distributions over the feature space induced by the fair metric, P_n is the empirical distribution of the training data, and ε is a moving budget that ensures the adversarial examples are close to the (original) training examples in the fair metric. Formally, this search for differential treatment checks for violations of distributionally robust fairness. Definition 2.1 (distributionally robust fairness (DRF) (Yurochkin et al., 2020)). An ML model h : X → Y is (ε, δ)-distributionally robustly fair (DRF) with respect to the fair metric d_x iff

sup_{P : W(P, P_n) ≤ ε} ∫_Z ℓ(z, h) dP(z) − ∫_Z ℓ(z, h) dP_n(z) ≤ δ.   (2.3)

The optimization problem (2.2) is infinite-dimensional, but its dual is more tractable. Blanchet & Murthy (2016) show that the dual of (2.2) is

max_{P : W(P, P_n) ≤ ε} E_P[ℓ(f(x), y)] = min_{λ ≥ 0} { λε + E_{P_n}[ℓ_λ^c(x, y)] },   (2.4)

ℓ_λ^c(x_i, y_i) := max_{x ∈ X} { ℓ(f(x), y_i) − λ d_x^2(x, x_i) }.   (2.5)
In practice, since (2.5) is highly non-convex in general, auditors use a gradient-based optimization algorithm to solve it and terminate the algorithm after a few iterations. As a result, one cannot guarantee optimality of the solution. However, optimality is required to establish convergence guarantees for DRO algorithms. This issue is typically ignored in practice when developing training algorithms, e.g., as in Yurochkin et al. (2020), but it should be treated with care when considering the limiting distribution of the quantities required to calibrate a test. We note that Xue et al. (2020) needed the discrete feature space assumption precisely because of this concern: when the feature space is discrete, it is possible to solve (2.5) optimally by simply comparing the objective value at all points of the sample space. In this paper we adapt theory to practice, i.e., we analyze the limiting distribution of (2.5) optimized for a fixed number of gradient steps. The effect of early termination can be characterized by a continuous-time approximation of the adversarial dynamics, which we call the gradient flow attack. Given a sample (x_0, y_0), the gradient flow attack solves the continuous-time ordinary differential equation (ODE)

Ẋ(t) = ∇_x { ℓ(f(X(t)), y_0) − λ d_x^2(X(t), x_0) },   X(0) = x_0,   (2.6)

over time t ≥ 0. For a fixed penalty parameter λ and stopping time T > 0, the unfair map Φ : X × Y → X is

Φ(x_0, y_0) := X(T).   (2.7)

The map Φ is well-defined as long as g(x) := ∇_x { ℓ(f(x), y_0) − λ d_x^2(x, x_0) } is Lipschitz, i.e., ‖g(x_1) − g(x_2)‖_2 ≤ L‖x_1 − x_2‖_2 for some L > 0. Under this assumption, the autonomous Cauchy problem (2.6) has a unique solution, and thus Φ : X × Y → X is a one-to-one function. We call Φ an unfair map because it maps samples in the data to similar areas of the sample space on which the ML model performs poorly. The data in this case is an audit dataset chosen by the auditor to evaluate the individual fairness of the given model. The audit data does not need to be picked carefully and can simply be an iid sample (e.g., testing data). The unfair map plays the key role, as it allows us to identify areas of the sample space where the model violates individual fairness, even if the audit samples themselves reveal no such violations. In the rest of the paper, we define the test statistic in terms of the unfair map instead of the optimal point of (2.5). This has two main benefits: 1. computational tractability: evaluating the unfair map is computationally tractable because integrating initial value problems (IVPs) is a well-developed area of scientific computing (Heath, 2018). Auditors may appeal to any globally stable method for solving IVPs to evaluate the unfair map. 2. reproducibility: the non-convex nature of (2.5) means that the output of any attempt at solving (2.5) depends heavily on the algorithm and the initial iterate. By defining the test statistic algorithmically, we avoid ambiguity in the algorithm and the initial iterate, thereby ensuring reproducibility. Of course, the tractability and reproducibility of the resulting statistical tests come at a cost: power. Because we are not exactly maximizing (2.5), the ability of the test statistic to detect violations of individual fairness is limited by the ability of (2.7) to find (unfair) adversarial examples.
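As a concrete illustration, here is a minimal sketch of the unfair map Φ computed by forward-Euler integration of the ODE (2.6) in PyTorch. Forward Euler is chosen only for brevity; any globally stable IVP solver can be substituted, as the text notes. The callable fair_dist2, returning d_x^2(x, x_0), and all step-size settings are assumptions for illustration.

```python
import torch

def unfair_map(model, loss_fn, fair_dist2, x0, y0, lam=1.0, T=1.0, n_steps=50):
    """Approximate Phi(x0, y0) = X(T) for the gradient flow attack (2.6).

    Assumes loss_fn returns a scalar loss for the single audit example
    (x0, y0); fair_dist2(x, x0) returns the squared fair distance.
    """
    x, dt = x0.clone(), T / n_steps
    for _ in range(n_steps):
        x = x.detach().requires_grad_(True)
        # Penalized objective: loss at x minus lambda * fair distance to x0.
        obj = loss_fn(model(x), y0) - lam * fair_dist2(x, x0)
        (grad,) = torch.autograd.grad(obj, x)
        x = x + dt * grad  # ascend: flow towards a nearby high-loss point
    return x.detach()
```

The test statistic then compares the loss at Φ(x_0, y_0) with the loss at x_0, e.g., through the loss ratio described in the introduction.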
The paper introduces a framework for statistically testing whether a given model is individually fair. In particular, given a model, a distance metric over individuals, and a data point z, the authors propose an algorithm that finds a new data point z' such that z' is similar to z but, when the model is not individually fair, the corresponding losses differ under the model. They provide experimental results showing how the proposed method can detect unfairness in practice.
SP:85843d0456fb7791c3edfc1f81dec00be5abc41f
Revisiting the Train Loss: an Efficient Performance Estimator for Neural Architecture Search
1 INTRODUCTION. Reliably estimating the generalisation performance of a proposed architecture is crucial to the success of Neural Architecture Search (NAS) but has always been a major bottleneck in NAS algorithms (Elsken et al., 2018). The traditional approach of training each architecture for a large number of epochs and evaluating it on validation data (full evaluation) provides a reliable performance measure, but requires prohibitively large computational resources on the order of thousands of GPU days (Zoph & Le, 2017; Real et al., 2017; Zoph et al., 2018; Real et al., 2019; Elsken et al., 2018). This motivates the development of methods that speed up performance estimation to make NAS practical for limited computing budgets. A popular simple approach is early-stopping, which offers a low-fidelity approximation of generalisation performance by training for fewer epochs (Li et al., 2016; Falkner et al., 2018; Li & Talwalkar, 2019). However, if we stop the training early, after a small number of epochs, and evaluate the model on validation data, the resulting performance ranking may not correlate well with the ranking under full evaluation (Zela et al., 2018). Another line of work focuses on learning curve extrapolation (Domhan et al., 2015; Klein et al., 2016b; Baker et al., 2017), which trains a surrogate model to predict the final generalisation performance based on the initial learning curve and/or meta-features of the architecture. However, training the surrogate often requires hundreds of fully evaluated architectures to achieve satisfactory extrapolation performance, and the hyper-parameters of the surrogate also need to be optimised. Alternatively, the idea of weight sharing is adopted in one-shot NAS methods to speed up evaluation (Pham et al., 2018; Liu et al., 2019; Xie et al., 2019b). Despite leading to significant cost savings, weight sharing heavily underestimates the true performance of good architectures and is unreliable in predicting the relative ranking among architectures (Yang et al., 2020; Yu et al., 2020). In view of the above limitations, we propose a simple model-free method that provides a reliable yet computationally cheap estimate of the generalisation performance ranking of architectures: the Sum over Training Losses (SoTL). Our method harnesses the training losses of the commonly used SGD optimiser during training, and is motivated by recent empirical and theoretical results linking training speed and generalisation (Hardt et al., 2016; Lyle et al., 2020). We ground our method in the Bayesian update setting, where we show that the SoTL estimator computes a lower bound on the model evidence, a quantity with sound theoretical justification for model selection (MacKay, 1992). We show empirically that our estimator can outperform a number of strong existing approaches at predicting the relative performance ranking among architectures, while speeding up different NAS approaches significantly. 2 METHOD. We propose a simple metric that estimates the generalisation performance of a deep neural network via the Sum of its Training Losses (SoTL). After training a deep neural network with prediction function f_θ(·) for T epochs (T can be far from the total number of training epochs T_end used in complete training), we sum the training losses collected so far:

SoTL = Σ_{t=1}^T [ (1/B) Σ_{i=1}^B l(f_{θ_{t,i}}(X_i), y_i) ]   (1)

where l is the training loss of mini-batch (X_i, y_i) at epoch t and B is the number of training steps within an epoch.
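As a concrete reference, the following is a minimal sketch of the estimator; the array layout of the recorded losses is an assumption for illustration, and the E argument anticipates the SoTL-E variant defined in the next paragraph.

```python
import numpy as np

def sotl(train_losses, E=None):
    """Sum over Training Losses (Eq. 1).

    train_losses: array of shape (T, B) holding the mini-batch training
    losses from the first T epochs, with B batches per epoch. If E is
    given, only the last E epochs are summed (the SoTL-E variant);
    E=None recovers plain SoTL.
    """
    losses = np.asarray(train_losses)
    if E is not None:
        losses = losses[-E:]
    return losses.mean(axis=1).sum()

# Ranking candidate architectures by SoTL-E with E=1 (lower is better):
# ranking = sorted(histories, key=lambda arch: sotl(histories[arch], E=1))
```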
If we use the first few epochs as a burn-in phase for θ_{t,i} to converge to a certain distribution P(θ), and start the sum from epoch t = T − E + 1 instead of t = 1, we obtain the variant SoTL-E. In the case E = 1, we start the sum at t = T and our estimator corresponds to the sum over training losses within epoch t = T. We discuss SoTL's theoretical interpretation based on the Bayesian marginal likelihood and training speed in Section 3, and we empirically show in Section 5 that SoTL, despite its simple form, can reliably estimate the generalisation performance of neural architectures. If the sum over training losses is a useful indicator of generalisation performance, one might expect the sum over validation losses to be a similarly effective performance estimator. The sum over validation losses (SoVL), however, lacks the link to the Bayesian model evidence, so its theoretical motivation differs from that of our SoTL. Instead, summing the validation losses can be viewed as performing a bias-variance trade-off: the parameters at epoch t can be viewed as a potentially high-variance sample from a noisy SGD trajectory, and summation reduces the variance of the validation loss estimate at the expense of incorporating some bias, because the relative ranking of the models' test performance changes during training. We show in Section 5 that SoTL clearly outperforms SoVL in estimating the true test performance. 3 THEORETICAL MOTIVATION. The SoTL metric is a direct measure of training speed and draws inspiration from two lines of work: the first is a Bayesian perspective that connects training speed with the marginal likelihood in the model selection setting, and the second is the link between training speed and generalisation (Hardt et al., 2016). In this section, we summarize recent results that demonstrate the connection between SoTL and generalisation, and further show that in Bayesian updating regimes, the SoTL metric corresponds to an estimate of a lower bound on the model's marginal likelihood, under certain assumptions. 3.1 TRAINING SPEED AND THE MARGINAL LIKELIHOOD. We motivate the SoTL estimator by a connection to the model evidence, also called the marginal likelihood, which is the basis for Bayesian model selection. The model evidence quantifies how likely a dataset D is to have been generated by a model, and so can be used to update a prior belief over which model from a given set is most likely to have generated D. Given a model with parameters θ, prior π(θ), and likelihood P(D|θ) for a training data set D = {D_1, ..., D_n} with data points D_i = (x_i, y_i), the (log) marginal likelihood can be expressed as follows:

log P(D) = log E_{π(θ)}[P(D|θ)] ⇔ log P(D) = Σ_{i=1}^n log P(D_i | D_{<i}) = Σ_{i=1}^n log [ E_{P(θ|D_{<i})}[P(D_i|θ)] ].

Interpreting the negative log posterior predictive probability −log P(D_i | D_{<i}) of each data point as a 'loss', the log evidence corresponds to the area under a training loss curve in which each training step is computed by sampling a data point D_i, taking the log expected likelihood under the current posterior P(θ|D_{<i}) as the current loss, and then updating the posterior by incorporating the newly sampled data point: D_{<i+1} := D_{<i} ∪ {D_i}. One can therefore interpret the marginal likelihood as a measure of training speed in a Bayesian updating procedure.
In the setting where we cannot compute the posterior analytically and only samples θ̂ from the posterior over parameters are available, Jensen's inequality yields an unbiased estimator of a lower bound L(D) = Σ_i E_{P(θ|D_{<i})}[log P(D_i|θ)] on the log marginal likelihood, which again corresponds to minimizing a sum over training losses:

Σ_i log P(D_i|θ̂) ≈ Σ_i E_{P(θ|D_{<i})}[log P(D_i|θ)] ≤ Σ_i log [ E_{P(θ|D_{<i})}[P(D_i|θ)] ] = log P(D),

with ≈ denoting equality in expectation. A full analysis of the Bayesian setting is outside the scope of this work; we refer the reader to (Lyle et al., 2020) for more details on the properties of this estimator in Bayesian models. Although the NAS setting does not yield the same interpretation of SoTL as model evidence estimation, we argue that the SoTL metric is still plausibly useful for model selection. Just as the marginal likelihood measures the utility of updates based on early data points in predicting later data points, the SoTL of a model trained with SGD will be lower for models whose mini-batch gradient descent updates improve the loss of later mini-batches seen during optimisation. We refer the reader to Appendix B for a demonstration of the SoTL metric in the Bayesian linear regression setting. We emphasize that the Bayesian connection thus justifies the sum over training losses as a tool for model selection, but not the training loss from a single parameter update. 3.2 TRAINING SPEED AND GENERALISATION. Independent of the accuracy of SoTL in estimating the Bayesian model evidence, it is also possible to motivate our method by its relationship with training speed: models that quickly achieve low training loss will have low SoTL. Both empirical and theoretical lines of work illustrate a deep connection between training speed and generalisation. On the theoretical front, models that train quickly can attain tighter generalisation bounds. Training speed and generalisation can be related via stability-based generalisation bounds (Hardt et al., 2016; Liu et al., 2017), which characterize the dependence of the solution found by a learning algorithm on its training data. In networks of sufficient width, Arora et al. (2019) propose a neural-tangent-kernel-based data complexity measure that bounds both the convergence rate of SGD and the generalisation error of the model obtained by optimisation. A similar generalisation bound and complexity measure are obtained by Cao & Gu (2019). While theoretical work has largely focused on ranking bounds on the test error, current results do not guarantee consistency between the ranking of different models' test set performance and the ranking of their generalisation bounds. The empirical work of Jiang* et al. (2020) demonstrates that many complexity measures are uncorrelated or negatively correlated with the relative performance of models on their test data, but, notably, a particular measure of training speed, the number of steps required to reach a cross-entropy loss of 0.1, was highly correlated with the test set performance ranking of different models. The connection between training speed and generalisation is also observed by Zhang et al. (2016), who find that models trained on true labels converge faster than models trained on random labels and attain better generalisation performance.
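To illustrate the sequential-loss view of the evidence, here is a self-contained toy in the spirit of the Bayesian linear regression demonstration deferred to Appendix B (which is not reproduced here); the prior precision alpha, noise variance sigma2, and data sizes are illustrative assumptions. It accumulates the log evidence as a sum of sequential log posterior predictive "losses" and checks it against the closed form.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, sigma2, alpha = 3, 50, 0.25, 1.0
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w_true + rng.normal(scale=np.sqrt(sigma2), size=n)

# Sequential Bayesian linear regression with prior N(0, alpha^{-1} I):
# log P(D) accumulates log P(D_i | D_<i), one posterior predictive per point.
precision, mean, log_evidence = alpha * np.eye(d), np.zeros(d), 0.0
for xi, yi in zip(X, y):
    S = np.linalg.inv(precision)
    mu, var = mean @ xi, sigma2 + xi @ S @ xi      # predictive moments
    log_evidence += -0.5 * (np.log(2 * np.pi * var) + (yi - mu) ** 2 / var)
    b = precision @ mean + xi * yi / sigma2        # posterior update
    precision = precision + np.outer(xi, xi) / sigma2
    mean = np.linalg.solve(precision, b)

# Closed form: y ~ N(0, sigma2 * I + X X^T / alpha); the two values agree.
C = sigma2 * np.eye(n) + X @ X.T / alpha
direct = -0.5 * (n * np.log(2 * np.pi) + np.linalg.slogdet(C)[1]
                 + y @ np.linalg.solve(C, y))
print(log_evidence, direct)
```

A slower-converging model, e.g., one with a mismatched prior, accumulates larger sequential losses and hence a lower evidence; this is the quantity that SoTL lower-bounds in the sampled-posterior setting above.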
4 RELATED WORK. Various approaches have been developed to speed up architecture performance estimation and thus improve the efficiency of NAS. Low-fidelity estimation methods accelerate NAS by using the validation accuracy obtained after training architectures for fewer epochs (namely early-stopping) (Li et al., 2016; Falkner et al., 2018; Zoph et al., 2018; Zela et al., 2018), by training a down-scaled model with fewer cells during the search phase (Zoph et al., 2018; Real et al., 2019), or by training on a subset of the data (Klein et al., 2016a). However, low-fidelity estimates underestimate the true performance of an architecture and can change the relative ranking among architectures (Elsken et al., 2018). This undesirable effect on the relative ranking is more prominent when the cheap approximation set-up is too dissimilar to the full evaluation (Zela et al., 2018). As shown in our Fig. 2 below, the validation accuracy at early epochs of training suffers from low rank correlation with the final test performance. Another way to cheaply estimate architecture performance is to train a regression model that extrapolates the learning curve from what is observed in the initial phase of training. Regression model choices that have been explored include Gaussian processes with a tailored kernel function (Domhan et al., 2015), an ensemble of parametric functions (Domhan et al., 2015), a Bayesian neural network (Klein et al., 2016b), and, more recently, a ν-support vector machine regressor (ν-SVR) (Baker et al., 2017), which achieves state-of-the-art prediction performance. Although these model-based methods can often predict the performance ranking better than their model-free early-stopping counterparts, they require a relatively large amount of fully evaluated architecture data (e.g., 100 fully evaluated architectures in Baker et al. (2017)) to train the regression surrogate properly, and the model hyper-parameters must be optimised to achieve good prediction performance. The high computational cost of collecting the training set makes such model-based methods less favourable for NAS unless the practitioner has already evaluated hundreds of architectures on the target task. Moreover, both low-fidelity estimates and learning curve extrapolation estimators are developed empirically and lack theoretical motivation. Finally, one-shot NAS methods employ weight sharing to reduce computational costs (Pham et al., 2018; Liu et al., 2019; Xie et al., 2019b). In the one-shot setting, all architectures are considered subgraphs of a supergraph. Only the weights of the supergraph are trained, while the architectures (subgraphs) inherit the corresponding weights from the supergraph. Weight sharing removes the need to retrain each architecture during the search and thus achieves a significant speed-up. However, the weight-sharing ranking among architectures often correlates very poorly with the true performance ranking (Yang et al., 2020; Yu et al., 2020; Zela et al., 2020), meaning that architectures chosen by one-shot NAS are likely to be sub-optimal when evaluated independently (Zela et al., 2020). Moreover, one-shot methods are often outperformed by sample-based NAS methods (Dong & Yang, 2020; Zela et al., 2020). Apart from the above-mentioned performance estimators used in NAS, many complexity measures have been proposed to analyse the generalisation performance of deep neural networks.
Jiang* et al. (2020) provide a rigorous empirical analysis of over 40 such measures. This investigation finds that sharpness-based measures (McAllester, 1999; Keskar et al., 2016; Neyshabur et al., 2017; Dziugaite & Roy, 2017), including PAC-Bayesian bounds, correlate well with test set performance, but estimating them requires adding randomly generated perturbations to the network parameters, and the magnitude of the perturbations needs to be carefully optimised with additional training, making them unsuitable as performance estimators for NAS. Optimisation-based complexity measures also perform well in predicting generalisation. In particular, the number of steps required to reach a loss of 0.1, mentioned in Section 3.2, is closely related to our approach, as both quantities measure the training speed of architectures. To our knowledge, though, this measure has never been used in the NAS context before.
This paper proposes a simple model-free method for estimating the generalization performance of deep neural architectures from their training losses in the early epochs. The proposed method uses the sum of the training losses collected during training as the performance estimate, and it is motivated by recent empirical and theoretical results linking training speed to generalization. Experimental results show that the proposed estimator outperforms existing methods at predicting the performance ranking among architectures.
SP:e7bd23e8d01a469909890d06581882da634a3e0f
Augmenting Physical Models with Deep Networks for Complex Dynamics Forecasting
1 INTRODUCTION. Modeling and forecasting complex dynamical systems is a major challenge in domains such as environment and climate (Rolnick et al., 2019), health science (Choi et al., 2016), and many industrial applications (Toubeau et al., 2018). Model-based (MB) approaches typically rely on partial or ordinary differential equations (PDE/ODE) and stem from a deep understanding of the underlying physical phenomena. Machine learning (ML) and deep learning methods are more prior-agnostic, yet have become state-of-the-art for several spatio-temporal prediction tasks (Shi et al., 2015; Wang et al., 2018; Oreshkin et al., 2020; Donà et al., 2020), and connections have been drawn between deep architectures and numerical ODE solvers, e.g., neural ODEs (Chen et al., 2018; Ayed et al., 2019b). However, modeling complex physical dynamics is still beyond the scope of pure ML methods, which often cannot properly extrapolate to new conditions as MB approaches do. Combining the MB and ML paradigms is an emerging trend for developing the interplay between the two. For example, Brunton et al. (2016); Long et al. (2018b) learn the explicit form of PDEs directly from data, Raissi et al. (2019); Sirignano & Spiliopoulos (2018) use NNs as implicit methods for solving PDEs, Seo et al. (2020) learn spatial differences with a graph network, Ummenhofer et al. (2020) introduce continuous convolutions for fluid simulations, de Bézenac et al. (2018) learn the velocity field of an advection-diffusion system, and Greydanus et al. (2019); Chen et al. (2020) enforce conservation laws in the network architecture or in the loss function. The large majority of the aforementioned MB/ML hybrid approaches assume that the physical model adequately describes the observed dynamics. This assumption, however, is commonly violated in practice. This may be due to various factors, e.g., idealized assumptions and the difficulty of explaining processes from first principles (Gentine et al., 2018), computational constraints preventing a fine-grained modeling of the system (Ayed et al., 2019a), or unknown external factors, forces, and sources (Large & Yeager, 2004). In this paper, we aim to leverage prior dynamical ODE/PDE knowledge in situations where this physical model is incomplete, i.e., unable to represent the whole complexity of the observed data. To handle this case, we introduce a principled learning framework to Augment incomplete PHYsical models for ideNtIfying and forecasTing complex dYnamics (APHYNITY). The rationale of APHYNITY, illustrated in Figure 1 on the pendulum problem, is to augment the physical model when, and only when, it falls short. Designing a general method for combining MB and ML approaches is still a widely open problem, and a clear problem formulation for the latter is lacking (Reichstein et al., 2019). Our contributions towards these goals are the following: • We introduce a simple yet principled framework for combining both approaches. We decompose the dynamics into a physical and a data-driven term such that the data-driven component models only the information that cannot be captured by the physical model. We provide existence and uniqueness guarantees (Section 3.1) for the decomposition under mild conditions, and show that this formulation ensures interpretability and benefits generalization.
• We propose a trajectory-based training formulation (Section 3.2) along with an adaptive optimization scheme (Section 3.3) enabling end-to-end learning of both the physical and deep learning components. This allows APHYNITY to automatically adjust the complexity of the neural network to different approximation levels of the physical model, paving the way to flexible learned hybrid models. • We demonstrate the generality of the approach on three use cases (reaction-diffusion, wave equations, and the pendulum) representative of different PDE families (parabolic, hyperbolic) with a wide spectrum of application domains, e.g., acoustics, electromagnetism, chemistry, biology, and physics (Section 4). We show that APHYNITY is able to achieve performance close to that of complete physical models by augmenting incomplete ones, both in terms of forecasting accuracy and physical parameter identification. Moreover, APHYNITY can also be successfully extended to the partially observable setting (see the discussion in Section 5). 2 RELATED WORK. Correction in data assimilation. Prediction under approximate physical models has been tackled by traditional statistical calibration techniques, which often rely on Bayesian methods (Pernot & Cailliez, 2017). In data assimilation techniques, e.g., the Kalman filter (Kalman, 1960; Becker et al., 2019) or 4D-var (Courtier et al., 1994), prediction errors are modeled probabilistically and a correction using observed data is applied after each prediction step. Similar residual correction procedures are commonly used in robotics and optimal control (Chen, 2004; Li et al., 2014). However, these sequential (two-stage) procedures prevent cooperation between prediction and correction. Besides, in model-based reinforcement learning, model deficiencies are typically handled by considering only short-term rollouts (Janner et al., 2019) or by model predictive control (Nagabandi et al., 2018). The originality of APHYNITY is to leverage model-based prior knowledge by augmenting it with neurally parametrized dynamics, while ensuring optimal cooperation between the prior model and the augmentation. Augmented physical models. Combining physical models with machine learning (gray-box or hybrid modeling) was first explored in the 1990s: Psichogios & Ungar (1992); Thompson & Kramer (1994); Rico-Martinez et al. (1994) use neural networks to predict the unknown parameters of physical models. The challenge of proper MB/ML cooperation was already raised as a limitation of gray-box approaches, but it was not addressed. Moreover, these methods were evaluated on specific applications, with a residual tailored to the form of the equation. In the last few years, there has been renewed interest in deep hybrid models bridging data assimilation techniques and machine learning to identify complex PDE parameters using carefully constrained forward models (Long et al., 2018b; de Bézenac et al., 2018), as discussed in the introduction. Recently, some approaches have specifically targeted MB/ML cooperation. HybridNet (Long et al., 2018a) and PhICNet (Saha et al., 2020) both use data-driven networks to learn additive perturbations or source terms for a given PDE. The former considers the favorable context where the perturbations can be accessed, and the latter the special case of additive noise on the input.
Wang et al. (2019) and Mehta et al. (2020) propose several empirical fusion strategies with deep neural networks, but these lack theoretical grounding. PhyDNet (Le Guen & Thome, 2020) tackles augmentation in partially observed settings, but with specific recurrent architectures dedicated to video prediction. Crucially, none of the aforementioned approaches addresses the uniqueness of the decomposition or the proper cooperation needed for correct parameter identification. Besides, we found experimentally that this vanilla cooperation is inferior to the APHYNITY learning scheme in terms of forecasting and parameter identification performance (see the experiments in Section 4.2). 3 THE APHYNITY MODEL. In the following, we study dynamics driven by an equation of the form

dX_t / dt = F(X_t)   (1)

defined over a finite time interval [0, T], where the state X is either vector-valued, i.e., X_t ∈ R^d for every t (pendulum equations in Section 4), or X_t is a d-dimensional vector field over a spatial domain Ω ⊂ R^k with k ∈ {2, 3}, i.e., X_t(x) ∈ R^d for every (t, x) ∈ [0, T] × Ω (reaction-diffusion and wave equations in Section 4). We suppose that we have access to a set of observed trajectories D = { X_· : [0, T] → A | ∀t ∈ [0, T], dX_t/dt = F(X_t) }, where A is the set of X values (either R^d or a vector field). In our case, the unknown F has A as its domain, and we only assume that F ∈ F, with (F, ‖·‖) a normed vector space. 3.1 DECOMPOSING DYNAMICS INTO PHYSICAL AND AUGMENTED TERMS. As introduced in Section 1, we consider the common situation where incomplete information is available on the dynamics, in the form of a family of ODEs or PDEs characterized by their temporal evolution F_p ∈ F_p ⊂ F. The APHYNITY framework leverages the knowledge of F_p while mitigating the approximations induced by this simplified model through the combination of physical and data-driven components. F being a vector space, we can write

F = F_p + F_a,

where F_p ∈ F_p encodes the incomplete physical knowledge and F_a ∈ F is the data-driven augmentation term complementing F_p. The incomplete physical prior is assumed to belong to a known family, but the physical parameters (e.g., the propagation speed for the wave equation) are unknown and need to be estimated from data. The parameters of both F_p and F_a are estimated by fitting the trajectories from D. The decomposition F = F_p + F_a is in general not unique. For example, all the dynamics could be captured by the F_a component. Such a decomposition is thus ill-defined, which hampers the interpretability and the extrapolation abilities of the model. In other words, one wants the estimated parameters of F_p to be as close as possible to the true parameter values of the physical model, and F_a to play only a complementary role w.r.t. F_p, i.e., to model only the information that cannot be captured by the physical prior. For example, when F ∈ F_p, the data can be fully described by the physical model, and in this case it is sensible to require F_a to vanish; this is of central importance when one wishes to identify physical quantities and wants the model to generalize and extrapolate to new conditions. In the more general setting where the physical model is incomplete, the action of F_a on the dynamics, as measured through its norm, should be as small as possible.
This general idea is embedded in the following optimization problem:

min_{F_p ∈ F_p, F_a ∈ F} ‖F_a‖   subject to   ∀X ∈ D, ∀t, dX_t/dt = (F_p + F_a)(X_t)   (2)

A first key question is whether the minimum in Eq. (2) is well-defined, in other words whether there indeed exists a decomposition with a minimal-norm F_a. The answer depends on the geometry of F_p, and it is formulated in the following proposition, proven in Appendix B: Proposition 1 (Existence of a minimizing pair). If F_p is a proximinal set, there exists a decomposition minimizing Eq. (2). Proximinality is a mild condition which, as shown through the proof of the proposition, cannot be weakened. It is a property satisfied by any boundedly compact set; in particular, it holds for closed subsets of finite-dimensional spaces. However, if only existence is guaranteed, forecasts may still be accurate, but non-uniqueness of the decomposition would hamper the interpretability of F_p and would mean that the identified physical parameters are not uniquely determined. It is then natural to ask under which conditions solving problem Eq. (2) leads to a unique decomposition into a physical and a data-driven component. The following result provides guarantees on the existence and uniqueness of the decomposition under mild conditions; the proof is given in Appendix B: Proposition 2 (Uniqueness of the minimizing pair). If F_p is a Chebyshev set, Eq. (2) admits a unique minimizer. The F_p in this minimizing pair is the metric projection of the unknown F onto F_p. The Chebyshev condition is strictly stronger than proximinality but is still quite mild and necessary. Indeed, many sets of practical interest are Chebyshev, including all closed convex sets in strictly normed spaces; and if F = L^2, F_p can be any closed convex set, including all finite-dimensional subspaces. In particular, all examples considered in the experiments are Chebyshev sets. Propositions 1 and 2 provide, under mild conditions, the theoretical guarantees for the APHYNITY formulation to infer the correct MB/ML decomposition, enabling both recovery of the proper physical parameters and accurate forecasting.
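To make the decomposition concrete, here is a minimal PyTorch sketch of an augmented dynamics model and a penalized surrogate of Eq. (2) for the pendulum case. The frictionless-pendulum prior, the explicit-Euler rollout, and all hyper-parameters (hidden width, lam, dt) are illustrative assumptions; the paper's actual method uses a trajectory-based formulation with an adaptive optimization scheme (Sections 3.2-3.3) rather than a fixed penalty weight.

```python
import torch
import torch.nn as nn


class AugmentedDynamics(nn.Module):
    """dX/dt = F_p(X) + F_a(X) with a learnable physical parameter."""

    def __init__(self, hidden=64):
        super().__init__()
        self.omega2 = nn.Parameter(torch.tensor(2.0))  # unknown physical parameter
        self.f_a = nn.Sequential(
            nn.Linear(2, hidden), nn.Tanh(), nn.Linear(hidden, 2))

    def f_p(self, x):
        # Incomplete prior: frictionless pendulum on state x = (theta, dtheta).
        theta, dtheta = x[..., :1], x[..., 1:]
        return torch.cat([dtheta, -self.omega2 * torch.sin(theta)], dim=-1)

    def forward(self, x):
        return self.f_p(x) + self.f_a(x)


def aphynity_loss(model, traj, dt, lam=1.0):
    """Trajectory fit via explicit-Euler rollout plus the norm penalty on F_a,
    a simple penalized surrogate of the constrained problem (2).
    traj: tensor of shape (batch, time, 2) of observed states."""
    x, fit = traj[:, 0], 0.0
    for t in range(1, traj.shape[1]):
        x = x + dt * model(x)                       # one integration step
        fit = fit + (x - traj[:, t]).pow(2).mean()  # match observed states
    reg = model.f_a(traj.reshape(-1, 2)).pow(2).mean()  # proxy for ||F_a||^2
    return fit + lam * reg
```

If the true dynamics additionally include, e.g., friction, F_a only has to absorb the friction term while omega2 converges towards the physical frequency, which is the behavior the uniqueness result above is meant to guarantee.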
This paper presents a method for forecasting and parameter estimation when one has a partial physics model (possibly with unknown parameters) and time series data. It is a hybrid approach in which the data-driven (deep learning) component learns only the parts not accounted for by the physical model. A key feature is decomposing the problem in such a way that the data-driven model captures only what the physical model cannot. The parameters of the two models are fit jointly so that the physical model's parameters are identified correctly. The authors prove existence and uniqueness of this decomposition.
SP:ddf5fcf80d3a1d2c18cf4432d29c0eda32dbbef3
Winning the L2RPN Challenge: Power Grid Management via Semi-Markov Afterstate Actor-Critic
1 INTRODUCTION. The power grid, an interconnected network for delivering electricity from producers to consumers, has become an essential component of modern society. For safe and reliable transmission of electricity, it is constantly monitored and managed by human experts in the control room. There has therefore been growing interest in automatically controlling and managing the power grid. As we make the transition to sustainable power sources such as solar, wind, and hydro (Rolnick et al., 2019), power grid management is becoming a very complex task beyond human expertise, calling for data-driven optimization. Yet automatic control of a large-scale power grid is challenging, since it requires complex yet reliable decision-making. While most approaches have focused on controlling the generation or the load of electricity (Venkat et al., 2008; Zhao et al., 2014; Huang et al., 2020), managing the power grid through topology control (changing the connectivity of power lines and the bus assignments in substations) would be the ultimate goal. Reconfiguring the topology of the power grid reroutes the flow of electricity, which enables efficient transmission from producers to consumers and thus prevents surplus production. There are preliminary studies of grid topology control in the power systems literature (Fisher et al., 2008; Khodaei & Shahidehpour, 2010), but due to the large, combinatorial, and non-linear nature of the problem, these methods do not provide a practical solution that can be deployed in the real world. On the other hand, deep reinforcement learning (RL) has shown significant progress in complex sequential decision-making tasks, such as Go (Silver et al., 2016) and arcade video games (Mnih et al., 2015), purely from data. RL is also perceived as a promising candidate for addressing the challenges of power grid management (Ernst et al., 2004; Dimeas & Hatziargyriou, 2010; Duan et al., 2020; Zhang et al., 2020; Hua et al., 2019). In this regard, we present Semi-Markov Afterstate Actor-Critic (SMAAC), an RL algorithm that effectively tackles the challenges in power grid management. One of the main challenges in RL for real-world-scale power grid management lies in the massive state and action space. We address this problem by adopting a goal-conditioned hierarchical policy with an afterstate representation. First, we represent state-action pairs as afterstates (Sutton & Barto, 2018), the state after the agent has made its decision but before the environment has responded, to efficiently cover the large state-action space. The afterstate representation can be much more succinct than the state-action pair representation when multiple state-action pairs lead to an identical afterstate. For example, when controlling the topology of the power grid, a pair of a current topology and a topology-modifying action can be represented as the reconfigured topology, since the topology is deterministically reconfigured by the action. The next state is then determined by random external factors, such as changes in the power demand of the loads. Second, we extend this idea to a hierarchical framework, where the high-level policy produces a desirable topology for the current situation, and the low-level policy takes care of figuring out an appropriate sequence of primitive topology changes.
Taken together, our hierarchical policy architecture with afterstates facilitates effective exploration of good topologies during training. Our algorithm ranked first in the latest international competition on training RL agents to manage power grids, Learning To Run a Power Network (L2RPN) WCCI 2020. In this paper, we further evaluate our approach using Grid2Op, the open-source power grid simulation platform used in the competition, by training and testing the agent on three power grids of different sizes. We show that the agent significantly outperforms all of the baselines on all grids except the small one, where the task is easy for all algorithms. 2 BACKGROUND. 2.1 GRID2OP ENVIRONMENT. We briefly overview Grid2Op, the open-source simulation platform for power grid operation used in the L2RPN WCCI 2020 challenge. Grid2Op models realistic concepts found in real-world operations and is used to test advanced control algorithms, following real-world power system operational constraints and distributions (Kelly et al., 2020). The power grid is essentially a graph composed of nodes corresponding to substations, which are connected to loads, generators, and power lines. A generator produces electricity, a load consumes electricity, and a power line transmits electricity between substations. A substation can be regarded as a router in the network, determining where to transmit electricity. Grid2Op considers two conductors per substation, known as the double busbar system. This means that the elements connected to a substation, i.e., loads, generators, and power lines, can be assigned to one of the two busbars, and power travels only over elements on the same busbar. Thus, each substation can be regarded as being split into two nodes. The state of the power grid consists of various features such as the topology configuration (the connectivity of each power line and the bus assignment in each substation), as well as the amount of power provided by each generator, required by each load, transmitted on each line, and so on. The power supplied by generators and demanded by loads changes over time, and the power transmitted on the lines changes according to the current topology configuration together with supply and demand. In addition, each line has its own transmission capacity and can be automatically disconnected when there is an overflow of electricity. The agent applies actions to substations and lines to manage the power grid. The action on a substation, called bus assignment, assigns the elements in the substation to a busbar. The action on a line, called line switch, disconnects a line (both ends of the line are then assigned to neither bus) or reconnects a disconnected line. The agent is allowed to perform one line switch or one bus assignment per step, and cannot perform actions on the same line or substation in successive steps. The power grid is simulated for a given period, typically several days at a 5-minute interval. The simulation can terminate prematurely when the agent fails to manage the grid, i.e., (1) the power required by the loads is not delivered, which can happen if there are too many disconnected lines, or (2) a disconnected subgraph is formed as a result of applying an action. This is reflected in the failure penalty when measuring the performance of the agent, given by the number of remaining simulation time steps upon termination.
Another important performance metric is the power loss penalty , given by the amount of power lost during transmission due to resistive loss . Thus , the goal of the agent is to operate the power grid both safely and efficiently by minimizing the failure penalty and the power loss penalty . Figure 1 illustrates how the actions affect the state of the power grid using the bus assignment action as an example . The simulator provides 3 different sizes of power grids : ( 1 ) IEEE-5 is the power grid with 5 substations , ( 2 ) IEEE-14 is the power grid with 14 substations , and ( 3 ) L2RPN WCCI 2020 is the power grid with 36 substations . See Appendix A.1 for more details on the environment . 2.2 AFTERSTATES IN RL . Grid2Op provides a natural framework to use RL for operating power grids : we assume a Markov decision process ( MDP ) defined by $( \mathcal{S} , \mathcal{A} , p , r , \gamma )$ to represent the RL task , where $\mathcal{S}$ is the state space , $\mathcal{A}$ is the action space , $p ( s_{t+1} | s_t , a_t )$ is the ( unknown ) state transition probability , $r_t = r ( s_t , a_t ) \in \mathbb{R}$ is the immediate reward , and $\gamma \in ( 0 , 1 )$ is the discount factor . We assume learning a stochastic policy $\pi ( a_t | s_t )$ , which is a probability distribution over actions conditioned on states . The state and action value functions under $\pi$ are $V^\pi ( s ) = \mathbb{E}_\pi [ \sum_{l \geq 0} \gamma^l r_{t+l} \mid s_t = s ]$ and $Q^\pi ( s , a ) = \mathbb{E}_\pi [ \sum_{l \geq 0} \gamma^l r_{t+l} \mid s_t = s , a_t = a ]$ respectively . As shown in Figure 1 in the previous section , the transition in Grid2Op comprises two steps : the topological change that results directly from the action , and then the rest of the state changes that arise from exogenous events . This motivates the use of the afterstate ( Sutton & Barto , 2018 ) , also known as the post-decision state in Approximate Dynamic Programming ( ADP ) ( Powell , 2007 ) , which refers to the state after the agent has made its decision but before the arrival of new information . Let us define the state $s$ as $( \tau , x )$ where $\tau$ is the part of the state that is deterministically changed by an action , and $x$ is the part that is independent of or only indirectly affected by an action . Following the modeling in ( Powell , 2007 ) , the transition is decomposed into two parts using $f^A$ and $f^E$ :

$$s_{t+1} = [ \tau_{t+1} , x_{t+1} ] = f^E ( [ \tau_{t+1} , x_t ] , e_{t+1} ) , \qquad s^{a_t}_t = [ \tau_{t+1} , x_t ] = f^A ( [ \tau_t , x_t ] , a_t ) , \quad ( 1 )$$

where $\tau_{t+1}$ , the deterministic part of $s_{t+1}$ , is given by the function $f^A ( s_t , a_t )$ , and $x_{t+1}$ , the stochastic part , is given by the function $f^E ( s^{a_t}_t , e_{t+1} )$ where $e_{t+1}$ is the source of the randomness in the transition sampled from some unknown distribution $p^E$ . Note that $e_{t+1}$ itself can be included as a part of $x_{t+1}$ . Using the afterstate has a number of advantages . For example , if the state and the action spaces are very large but the set of unique afterstates is relatively small , learning the value function of afterstates would be much more efficient . The value of an afterstate $s^a$ under policy $\pi$ is defined as $V^\pi ( s^a ) = \mathbb{E}_\pi [ \sum_{l \geq 0} \gamma^l r_{t+l} \mid s^a = f^A ( s_t , a_t ) ]$ and its recursive form can be written as :

$$V^\pi ( s^{a_t}_t ) = \mathbb{E}_{e_{t+1} \sim p^E , a_{t+1} \sim \pi} \left[ r ( s_t , a_t ) + \gamma V^\pi ( f^A ( s_{t+1} , a_{t+1} ) ) \mid s_{t+1} = f^E ( s^{a_t}_t , e_{t+1} ) \right] \quad ( 2 )$$
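To make the decomposition in Eq. (1) concrete, here is a minimal Python sketch of the two transition halves. The `State` container and the toy action/exogenous-noise encodings are our own illustrative assumptions, not the paper's or Grid2Op's actual interfaces.

```python
from typing import NamedTuple, Tuple

class State(NamedTuple):
    tau: Tuple[int, ...]    # topology: bus assignment per element (deterministic part)
    x: Tuple[float, ...]    # exogenous part: loads, generation, etc.

def f_A(s: State, action: Tuple[int, int]) -> State:
    """Deterministic half of Eq. (1): the action reconfigures the topology."""
    elem, bus = action
    tau = list(s.tau)
    tau[elem] = bus
    return State(tuple(tau), s.x)                    # afterstate s^{a_t}_t

def f_E(afterstate: State, e: Tuple[float, ...]) -> State:
    """Stochastic half of Eq. (1): exogenous demand/supply change e_{t+1}."""
    x_next = tuple(xi + ei for xi, ei in zip(afterstate.x, e))
    return State(afterstate.tau, x_next)             # next state s_{t+1}
```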
The optimal afterstate value function and the optimal policy can be obtained by iteratively alternating between the policy evaluation by Eq . ( 2 ) and policy improvement :

$$\pi_{\text{new}} ( s_t ) = \arg\max_{a_t} \left[ V^{\pi_{\text{old}}} ( f^A ( s_t , a_t ) ) \right] \quad ( 3 )$$

Note that we can not gain much from the afterstate representation when using the individual power grid operations as actions since they result in unique changes in the grid topology . However , we shall see that the afterstate becomes very powerful when we consider the sequences of grid operations as the action space , where their permutations result in identical changes in the final topology .
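As a sketch, the policy improvement step of Eq. (3) reduces to ranking candidate actions by the value of the afterstate they induce; `V` here stands for any learned afterstate value function and `f_A` for the deterministic transition half, both assumed given.

```python
def greedy_action(state, actions, V, f_A):
    """Eq. (3): pi_new(s_t) = argmax_a V^{pi_old}(f_A(s_t, a))."""
    return max(actions, key=lambda a: V(f_A(state, a)))
```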
This paper proposes an effective method for managing power grid topology to increase efficiency. The authors use Transformer attention over a Graph Neural Network as the basic architecture, then propose a hierarchical technique in which the upper level learns to output goal network topologies, which are then implemented by a lower-level policy or a rule-based algorithm. An ablation study reveals that one of the most important components of the algorithm is the use of an "afterstate" representation, which learns a value function for the state after the agent changes the topology, but before the network is affected by random external factors, including supply and demand.
SP:839dcc82412b1e77aa5e3f267ef421dae1bc0cfc
A spherical analysis of Adam with Batch Normalization
A SPHERICAL ANALYSIS OF ADAM WITH BATCH NORMALIZATION . Batch Normalization ( BN ) is a prominent deep learning technique . In spite of its apparent simplicity , its implications for optimization are yet to be fully understood . While previous studies mostly focus on the interaction between BN and stochastic gradient descent ( SGD ) , we develop a geometric perspective which allows us to precisely characterize the relation between BN and Adam . More precisely , we leverage the radial invariance of groups of parameters , such as filters for convolutional neural networks , to translate the optimization steps on the L2 unit hypersphere . This formulation and the associated geometric interpretation shed new light on the training dynamics . Firstly , we use it to derive the first effective learning rate expression of Adam . Then we show that , in the presence of BN layers , performing SGD alone is actually equivalent to a variant of Adam constrained to the unit hypersphere . Finally , our analysis outlines phenomena that previous variants of Adam act on and we experimentally validate their importance in the optimization process . 1 INTRODUCTION The optimization process of deep neural networks is still poorly understood . Their training involves minimizing a high-dimensional non-convex function , which has been proved to be an NP-hard problem ( Blum & Rivest , 1989 ) . Yet , elementary gradient-based methods show good results in practice . To improve the quality of reached minima , numerous methods have emerged in recent years and become common practice . One of the most prominent is Batch Normalization ( BN ) ( Ioffe & Szegedy , 2015 ) , which improves significantly both the optimization stability and the prediction performance ; it is now used in most deep learning architectures . However , the interaction of BN with optimization and its link to regularization remain open research topics . Previous studies highlighted mechanisms of the interaction between BN and SGD , both empirically ( Santurkar et al. , 2018 ) and theoretically ( Arora et al. , 2019 ; Bjorck et al. , 2018 ; Hoffer et al. , 2018b ) . None of them studied the interaction between BN and one of the most common adaptive schemes for Neural Networks ( NN ) , Adam ( Kingma & Ba , 2015 ) , except van Laarhoven ( 2017 ) , which tackled it only in the asymptotic regime . In this work , we provide an extensive analysis of the relation between BN and Adam during the whole training procedure . One of the key effects of BN is to make NNs invariant to positive scalings of groups of parameters . The core idea of this paper is precisely to focus on these groups of radially-invariant parameters and analyze their optimization projected on the L2 unit hypersphere ( see Fig . 1 ) , which is topologically equivalent to the quotient manifold of the parameter space by the scaling action . One could directly optimize parameters on the hypersphere as in Cho & Lee ( 2017 ) ; yet , most optimization methods are still performed successfully in the original parameter space . Here we propose to study an optimization scheme for a given group of radially-invariant parameters through its image scheme on the unit hypersphere . This geometric perspective sheds light on the interaction between normalization layers and Adam , and also outlines an interesting link between standard SGD and a variant of Adam adapted and constrained to the unit hypersphere : AdamG ( Cho & Lee , 2017 ) .
We believe this kind of analysis is an important step towards a better understanding of the effect of BN on NN optimization . Please note that , although our discussion and experiments focus on BN , our analysis could be applied to any radially-invariant model . The paper is organized as follows . In Section 2 , we introduce our spherical framework to study the optimization of radially-invariant models . We also define a generic optimization scheme that encompasses methods such as SGD with momentum ( SGD-M ) and Adam . We then derive its image step on the unit hypersphere , leading to definitions and expressions of effective learning rate and effective learning direction . This new definition is explicit and has a clear interpretation , whereas the definition of van Laarhoven ( 2017 ) is asymptotic and the definitions of Arora et al . ( 2019 ) and of Hoffer et al . ( 2018b ) are variational . In Section 3 , we leverage the tools of our spherical framework to demonstrate that in the presence of BN layers , SGD has an adaptive behaviour . Formally , we show that SGD is equivalent to AdamG , a variant of Adam adapted and constrained to the hypersphere , without momentum . In Section 4 , we analyze the effective learning direction for Adam . The spherical framework highlights phenomena that previous variants of Adam ( Loshchilov & Hutter , 2017 ; Cho & Lee , 2017 ) act on . We perform an empirical study of these phenomena and show that they play a significant role in the training of convolutional neural networks ( CNNs ) . In Section 5 , these results are put in perspective with related work . Our main contributions are the following : • A framework to analyze and compare order-1 optimization schemes of radially-invariant models ; • The first explicit expression of the effective learning rate for Adam ; • The demonstration that , in the presence of BN layers , standard SGD has an adaptive behaviour ; • The identification and study of geometrical phenomena that occur with Adam and impact significantly the training of CNNs with BN . 2 SPHERICAL FRAMEWORK AND EFFECTIVE LEARNING RATE . In this section , we provide background on radial invariance and introduce a generic optimization scheme . Projecting the scheme update on the unit hypersphere leads to the formal definitions of effective learning rate and learning direction . This geometric perspective leads to the first explicit expression of the effective learning rate for Adam . The main notations are summarized in Figure 1 . 2.1 RADIAL INVARIANCE . We consider a family of parametric functions $\phi_x : \mathbb{R}^{in} \to \mathbb{R}^{out}$ parameterized by a group of radially-invariant parameters $x \in \mathbb{R}^d \setminus \{ 0 \}$ , i.e. , $\forall \rho > 0 , \phi_{\rho x} = \phi_x$ ( possible other parameters of $\phi_x$ are omitted for clarity ) , a dataset $\mathcal{D} \subset \mathbb{R}^{in} \times \mathbb{R}^{out}$ , a loss function $\ell : \mathbb{R}^{out} \times \mathbb{R}^{out} \to \mathbb{R}$ and a training loss function $\mathcal{L} : \mathbb{R}^d \to \mathbb{R}$ defined as :

$$\mathcal{L} ( x ) \stackrel{\text{def}}{=} \frac{1}{| \mathcal{D} |} \sum_{( s , t ) \in \mathcal{D}} \ell ( \phi_x ( s ) , t ) . \quad ( 1 )$$

It verifies : $\forall \rho > 0 , \mathcal{L} ( \rho x ) = \mathcal{L} ( x )$ . In the context of NNs , the group of radially-invariant parameters $x$ can be the parameters of a single neuron in a linear layer or the parameters of a whole filter in a convolutional layer , followed by BN ( see Appendix A for details , and Appendix B for the application to other normalization schemes such as InstanceNorm ( Ulyanov et al. , 2016 ) , LayerNorm ( Ba et al. , 2016 ) or GroupNorm ( Wu & He , 2018 ) ) . The quotient of the parameter space by the equivalence relation associated to radial invariance is topologically equivalent to a sphere .
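The radial invariance induced by BN can be checked numerically. The snippet below is a small PyTorch sketch (ours, not the paper's) showing that rescaling a bias-free convolution followed by BN in training mode leaves the output unchanged, up to the BN epsilon.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
conv = nn.Conv2d(3, 8, kernel_size=3, bias=False)
bn = nn.BatchNorm2d(8).train()            # uses batch statistics, as during training
x = torch.randn(4, 3, 16, 16)

y1 = bn(conv(x))
with torch.no_grad():
    conv.weight.mul_(7.3)                 # positive rescaling rho of all filters
y2 = bn(conv(x))
print(torch.allclose(y1, y2, atol=1e-3))  # True: phi_{rho x} = phi_x
```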
We consider here the L2 sphere $\mathcal{S}^{d-1} = \{ u \in \mathbb{R}^d : \| u \|_2 = 1 \}$ whose canonical metric corresponds to angles : $d_{\mathcal{S}} ( u_1 , u_2 ) = \arccos ( \langle u_1 , u_2 \rangle )$ . This choice of metric is relevant to study NNs since filters in CNNs or neurons in MLPs are applied through scalar product to input data . Besides , normalization in BN layers is also performed using the L2 norm . Our framework relies on the decomposition of vectors into radial and tangential components . During optimization , we write the radially-invariant parameters at step $k \geq 0$ as $x_k = r_k u_k$ where $r_k = \| x_k \|$ and $u_k = x_k / \| x_k \|$ . For any quantity $q_k \in \mathbb{R}^d$ at step $k$ , we write $q_k^\perp = q_k - \langle q_k , u_k \rangle u_k$ its tangential component relatively to the current direction $u_k$ . The following lemma states that the gradient of a radially-invariant loss function is tangential and $-1$ homogeneous : Lemma 1 ( Gradient of a function with radial invariance ) . If $\mathcal{L} : \mathbb{R}^d \to \mathbb{R}$ is radially invariant and almost everywhere differentiable , then , for all $\rho > 0$ and all $x \in \mathbb{R}^d$ where $\mathcal{L}$ is differentiable :

$$\langle \nabla \mathcal{L} ( x ) , x \rangle = 0 \quad \text{and} \quad \nabla \mathcal{L} ( x ) = \rho \nabla \mathcal{L} ( \rho x ) . \quad ( 2 )$$

2.2 GENERIC OPTIMIZATION SCHEME . There is a large body of literature on optimization schemes ( Sutskever et al. , 2013 ; Duchi et al. , 2011 ; Tieleman & Hinton , 2012 ; Kingma & Ba , 2015 ; Loshchilov & Hutter , 2019 ) . We focus here on two of the most popular ones , namely SGD and Adam ( Kingma & Ba , 2015 ) . Yet , to establish general results that may apply to a variety of other schemes , we introduce here a generic optimization update :

$$x_{k+1} = x_k - \eta_k \, a_k \oslash b_k , \quad ( 3 )$$
$$a_k = \beta a_{k-1} + \nabla \mathcal{L} ( x_k ) + \lambda x_k , \quad ( 4 )$$

where $x_k \in \mathbb{R}^d$ is the group of radially-invariant parameters at iteration $k$ , $\mathcal{L}$ is the group's loss estimated on a batch of input data , $a_k \in \mathbb{R}^d$ is a momentum , $b_k \in \mathbb{R}^d$ is a division vector that can depend on the trajectory $( x_i , \nabla \mathcal{L} ( x_i ) )_{i \in \{ 0 , \ldots , k \}}$ , $\eta_k \in \mathbb{R}$ is the scheduled trajectory-independent learning rate , $\oslash$ denotes the Hadamard element-wise division , $\beta$ is the momentum parameter , and $\lambda$ is the L2-regularization parameter . We show how it encompasses several known optimization schemes . Stochastic gradient descent ( SGD ) has proven to be an effective optimization method in deep learning . It can include L2 regularization ( also called weight decay ) and momentum . Its updates are :

$$x_{k+1} = x_k - \eta_k m_k , \quad ( 5 )$$
$$m_k = \beta m_{k-1} + \nabla \mathcal{L} ( x_k ) + \lambda x_k , \quad ( 6 )$$

where $m_k$ is the momentum , $\beta$ is the momentum parameter , and $\lambda$ is the L2-regularization parameter . It corresponds to our generic scheme ( Eqs . 3-4 ) with $a_k = m_k$ and $b_k = [ 1 \cdots 1 ]^\top$ . Adam is likely the most common adaptive scheme for NNs . Its updates are :

$$x_{k+1} = x_k - \eta_k \frac{ m_k / ( 1 - \beta_1^{k+1} ) }{ \sqrt{ v_k / ( 1 - \beta_2^{k+1} ) } + \epsilon } , \quad ( 7 )$$
$$m_k = \beta_1 m_{k-1} + ( 1 - \beta_1 ) ( \nabla \mathcal{L} ( x_k ) + \lambda x_k ) , \quad v_k = \beta_2 v_{k-1} + ( 1 - \beta_2 ) ( \nabla \mathcal{L} ( x_k ) + \lambda x_k )^2 , \quad ( 8 )$$

where $m_k$ is the momentum with parameter $\beta_1$ , $v_k$ is the second-order moment with parameter $\beta_2$ , and $\epsilon$ prevents division by zero . ( Here and in the following , the square and the square root of a vector are to be understood as element-wise . ) It corresponds to our generic scheme ( Eqs . 3-4 ) with $\beta = \beta_1$ and :

$$a_k = \frac{ m_k }{ 1 - \beta_1 } , \quad b_k = \frac{ 1 - \beta_1^{k+1} }{ 1 - \beta_1 } \left( \sqrt{ \frac{ v_k }{ 1 - \beta_2^{k+1} } } + \epsilon \right) . \quad ( 9 )$$

2.3 IMAGE OPTIMIZATION ON THE HYPERSPHERE . The radial invariance implies that the radial part of an update of the parameters $x$ does not change the function $\phi_x$ encoded by the model , nor does it change the loss $\mathcal{L} ( x )$ . The goal of training is to find the best possible function encodable by the network .
Due to radial invariance , the parameter space projected on the unit hypersphere is topologically closer to the functional space of the network than the full parameter space . It hints that looking at optimization behaviour on the unit hypersphere might be interesting . Thus , we need to separate the quantities that can ( tangential part ) and can not ( radial part ) change the model function . Theorem 2 formulates the spherical decomposition ( Eqs . 3-4 ) in simple terms . It relates the update of radially-invariant parameters in the parameter space $\mathbb{R}^d$ and their update on $\mathcal{S}^{d-1}$ through an exponential map . Theorem 2 ( Image step on $\mathcal{S}^{d-1}$ ) . The update of a group of radially-invariant parameters $x_k$ at step $k$ corresponds to an update of its projection $u_k$ on $\mathcal{S}^{d-1}$ through an exponential map at $u_k$ with velocity $\eta_k^e c_k^\perp$ , at order 3 :

$$u_{k+1} = \text{Exp}_{u_k} \left( - \left[ 1 + O \left( ( \eta_k^e \| c_k^\perp \| )^2 \right) \right] \eta_k^e c_k^\perp \right) , \quad ( 10 )$$

where $\text{Exp}_{u_k}$ is the exponential map on $\mathcal{S}^{d-1}$ , and with

$$c_k \stackrel{\text{def}}{=} \frac{ r_k \, a_k \oslash b_k }{ d^{-1/2} \| b_k \| } , \qquad \eta_k^e \stackrel{\text{def}}{=} \frac{ \eta_k }{ r_k^2 \, d^{-1/2} \| b_k \| } \left( 1 - \frac{ \eta_k \langle c_k , u_k \rangle }{ r_k^2 \, d^{-1/2} \| b_k \| } \right)^{-1} . \quad ( 11 )$$

More precisely :

$$u_{k+1} = \frac{ u_k - \eta_k^e c_k^\perp }{ \sqrt{ 1 + ( \eta_k^e \| c_k^\perp \| )^2 } } . \quad ( 12 )$$

The proof is given in Appendix C.1.1 and the theorem is illustrated in the case of SGD in Figure 1 . Note that with typical values in CNN training we have $1 - \frac{ \eta_k \langle c_k , u_k \rangle }{ r_k^2 \, d^{-1/2} \| b_k \| } > 0$ , which is a property needed for the proof . Another hypothesis is that steps on the hypersphere are shorter than $\pi$ . These hypotheses are discussed and empirically verified in Appendix C.1.2 .
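One can verify numerically that the explicit step of Eq. (12) indeed stays on the unit hypersphere, since the displacement is purely tangential; a minimal numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10
u = rng.standard_normal(d); u /= np.linalg.norm(u)   # u_k on S^{d-1}
c = rng.standard_normal(d)                           # some update direction c_k
c_perp = c - (c @ u) * u                             # tangential component
eta_e = 0.05                                         # effective learning rate

u_next = (u - eta_e * c_perp) / np.sqrt(1 + (eta_e * np.linalg.norm(c_perp)) ** 2)
print(np.linalg.norm(u_next))                        # 1.0: Eq. (12) preserves the norm
```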
This work studies optimization dynamics for neural network models that are scaling invariant with respect to parameters. A general formulation of optimization algorithms is considered, covering many widely used algorithms like SGD and Adam. The projected dynamics (to the unit sphere) is studied, and the effective learning rate and update direction on the unit sphere are derived. Focusing on the projected dynamics, the equivalence is built between SGD and a type of "Adam". Then, different factors in the Adam dynamics that can potentially influence the optimization performance are identified, and empirically studied.
SP:2d1b5b2da4802fb7f229112fb841bc194ba47204
Sliced Kernelized Stein Discrepancy
1 INTRODUCTION . Discrepancy measures for quantifying differences between two probability distributions play key roles in statistics and machine learning . Among many existing discrepancy measures , Stein discrepancy ( SD ) is unique in that it only requires samples from one distribution and the score function ( i.e . the gradient up to a multiplicative constant ) from the other ( Gorham & Mackey , 2015 ) . SD , a special case of integral probability metric ( IPM ) ( Sriperumbudur et al. , 2009 ) , requires finding an optimal test function within a given function family . This optimum is analytic when a reproducing kernel Hilbert space ( RKHS ) is used as the test function family , and the corresponding SD is named kernelized Stein discrepancy ( KSD ) ( Liu et al. , 2016 ; Chwialkowski et al. , 2016 ) . Variants of SDs have been widely used in both Goodness-of-fit ( GOF ) tests ( Liu et al. , 2016 ; Chwialkowski et al. , 2016 ) and model learning ( Liu & Feng , 2016 ; Grathwohl et al. , 2020 ; Hu et al. , 2018 ; Liu & Wang , 2016 ) . Although theoretically elegant , KSD , especially with the RBF kernel , suffers from the "curse-of-dimensionality" issue , which leads to significant deterioration of test power in GOF tests ( Chwialkowski et al. , 2016 ; Huggins & Mackey , 2018 ) and mode collapse in particle inference ( Zhuo et al. , 2017 ; Wang et al. , 2018 ) . A few attempts have been made to address this problem ; however , they are either limited to specific applications with strong assumptions ( Zhuo et al. , 2017 ; Chen & Ghattas , 2020 ; Wang et al. , 2018 ) or require significant approximations ( Singhal et al. , 2019 ) . As an alternative , in this work we present our solution to this issue by adopting the idea of "slicing" . Here the key idea is to project the score function and test inputs onto multiple one-dimensional slicing directions , resulting in a variant of SD that only requires test functions with one-dimensional inputs . Specifically , our contributions are as follows . • We propose a novel theoretically validated family of discrepancies called sliced Stein discrepancy ( SSD ) , along with its scalable variant called max sliced kernelized Stein discrepancy ( maxSKSD ) using kernel tricks and the optimal test directions . • A GOF test is derived based on an unbiased estimator of maxSKSD with optimal test directions . MaxSKSD achieves superior performance on benchmark problems and restricted Boltzmann machine models ( Liu et al. , 2016 ; Huggins & Mackey , 2018 ) . • We evaluate the maxSKSD in model learning by two schemes . First , we train an independent component analysis ( ICA ) model in high dimensions by directly minimising maxSKSD , which results in faster convergence compared to baselines ( Grathwohl et al. , 2020 ) . Further , we propose a particle inference algorithm based on maxSKSD called the sliced Stein variational gradient descent ( S-SVGD ) as a novel variant of the original SVGD ( Liu & Wang , 2016 ) . It alleviates the posterior collapse of SVGD when applied to training variational autoencoders ( Kingma & Welling , 2013 ; Rezende et al. , 2014 ) . 2 BACKGROUND . 2.1 KERNELIZED STEIN DISCREPANCY . For two probability distributions $p$ and $q$ supported on $\mathcal{X} \subseteq \mathbb{R}^D$ with continuous differentiable densities $p ( x )$ and $q ( x )$ , we define the score $s_p ( x ) = \nabla_x \log p ( x )$ and $s_q ( x )$ accordingly .
For a test function $f : \mathcal{X} \to \mathbb{R}^D$ , the Stein operator is defined as

$$\mathcal{A}_p f ( x ) = s_p ( x )^\top f ( x ) + \nabla_x^\top f ( x ) . \quad ( 1 )$$

For a function $f_0 : \mathbb{R}^D \to \mathbb{R}$ , the Stein class $\mathcal{F}_q$ of $q$ is defined as the set of functions satisfying Stein's identity ( Stein et al. , 1972 ) : $\mathbb{E}_q [ s_q ( x ) f_0 ( x ) + \nabla_x f_0 ( x ) ] = 0$ . This can be generalized to a vector function $f : \mathbb{R}^D \to \mathbb{R}^D$ where $f = [ f_1 ( x ) , \ldots , f_D ( x ) ]^\top$ by letting $f_i$ belong to the Stein class of $q$ for each $i$ . Then the Stein discrepancy ( Liu et al. , 2016 ; Gorham & Mackey , 2015 ) is defined as

$$D ( q , p ) = \sup_{f \in \mathcal{F}_q} \mathbb{E}_q [ \mathcal{A}_p f ( x ) ] = \sup_{f \in \mathcal{F}_q} \mathbb{E}_q [ ( s_p ( x ) - s_q ( x ) )^\top f ( x ) ] . \quad ( 2 )$$

When $\mathcal{F}_q$ is sufficiently rich , and $q$ vanishes at the boundary of $\mathcal{X}$ , the supremum is obtained at $f^* ( x ) \propto s_p ( x ) - s_q ( x )$ with some mild regularity conditions on $f$ ( Hu et al. , 2018 ) . Thus , the Stein discrepancy focuses on the score difference of $p$ and $q$ . Kernelized Stein discrepancy ( KSD ) ( Liu et al. , 2016 ; Chwialkowski et al. , 2016 ) restricts the test functions to be in a D-dimensional RKHS $\mathcal{H}^D$ with kernel $k$ to obtain an analytic form . By defining

$$u_p ( x , x' ) = s_p ( x )^\top s_p ( x' ) k ( x , x' ) + s_p ( x )^\top \nabla_{x'} k ( x , x' ) + s_p ( x' )^\top \nabla_x k ( x , x' ) + \text{Tr} ( \nabla_{x , x'} k ( x , x' ) )$$

the analytic form of KSD is :

$$D^2 ( q , p ) = \left( \sup_{f \in \mathcal{H}^D , \| f \|_{\mathcal{H}^D} \leq 1} \mathbb{E}_q [ \mathcal{A}_p f ( x ) ] \right)^2 = \mathbb{E}_{q ( x ) q ( x' )} [ u_p ( x , x' ) ] . \quad ( 3 )$$

2.2 STEIN VARIATIONAL GRADIENT DESCENT . Although SD and KSD can be directly minimized for variational inference ( VI ) ( Ranganath et al. , 2016 ; Liu & Feng , 2016 ; Feng et al. , 2017 ) , Liu & Wang ( 2016 ) alternatively proposed a novel particle inference algorithm called Stein variational gradient descent ( SVGD ) . It applies a sequence of deterministic transformations to a set of points such that each of the mappings maximally decreases the Kullback-Leibler ( KL ) divergence from the particles' underlying distribution $q$ to the target $p$ . To be specific , we define the mapping $T : \mathbb{R}^D \to \mathbb{R}^D$ as $T ( x ) = x + \epsilon \phi ( x )$ where $\phi$ characterises the perturbations . The result from Liu & Wang ( 2016 ) shows that the optimal perturbation inside the RKHS is exactly the optimal test function in KSD . Lemma 1 . ( Liu & Wang , 2016 ) Let $T ( x ) = x + \epsilon \phi ( x )$ and $q_{[ T ]} ( z )$ be the density of $z = T ( x )$ when $x \sim q ( x )$ . If the perturbation $\phi$ is in the RKHS $\mathcal{H}^D$ and $\| \phi \|_{\mathcal{H}^D} \leq D ( q , p )$ , then the steepest descent direction $\phi^*_{q , p}$ is

$$\phi^*_{q , p} ( \cdot ) = \mathbb{E}_q [ \nabla_x \log p ( x ) k ( x , \cdot ) + \nabla_x k ( x , \cdot ) ] \quad ( 4 )$$

and $\nabla_\epsilon \text{KL} [ q_{[ T ]} \| p ] |_{\epsilon = 0} = - D^2 ( q , p )$ . The first term in Eq . ( 4 ) is called drift , which drives the particles towards a mode of $p$ . The second term controls the repulsive force , which spreads the particles around the mode . When particles stop moving , the KL decrease magnitude $D^2 ( q , p )$ is 0 , which means the KSD is zero and $p = q$ a.e .
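For reference, a minimal PyTorch sketch of one SVGD step with an RBF kernel, implementing the direction of Eq. (4) above; the fixed lengthscale is a simplifying assumption (implementations commonly use the median heuristic instead).

```python
import torch

def svgd_step(x, score_p, lengthscale=1.0, step=0.1):
    """x: (n, d) particles; score_p(x) returns grad_x log p(x), shape (n, d)."""
    n = x.shape[0]
    diff = x.unsqueeze(1) - x.unsqueeze(0)                # diff[j, i] = x_j - x_i
    k = torch.exp(-diff.pow(2).sum(-1) / (2 * lengthscale ** 2))
    grad_k = -diff / lengthscale ** 2 * k.unsqueeze(-1)   # grad of k(x_j, x_i) wrt x_j
    phi = (k @ score_p(x) + grad_k.sum(0)) / n            # Eq. (4), empirical mean
    return x + step * phi

# usage: particles drift toward N(0, I), whose score is -x
x = torch.randn(100, 2) + 3.0
for _ in range(500):
    x = svgd_step(x, lambda p: -p)
```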
3 SLICED KERNELIZED STEIN DISCREPANCY . We propose the sliced Stein discrepancy ( SSD ) and its kernelized version named maxSKSD . Theoretically , we prove their correctness as discrepancy measures . Methodology-wise , we apply maxSKSD to GOF tests , and develop two ways for model learning . 3.1 SLICED STEIN DISCREPANCY . Before moving to the details , we give a brief overview of the intuition on how to tackle the curse-of-dimensionality issue of SD ( the right figure of Figure 1 ) . For a detailed explanation , refer to Appendix B.1 . This issue of Stein discrepancy ( Eq . 2 ) comes from two sources : the score function $s_p ( x )$ and the test function $f ( x )$ defined on $\mathcal{X} \subset \mathbb{R}^D$ . First , we notice that comparing $s_p$ and $s_q$ is equivalent to comparing the projected scores $s_p^r = s_p^\top r$ and $s_q^r$ for all $r \in \mathcal{S}^{D-1}$ on a hyper-sphere ( green square in Figure 1 ( right ) ) . This operation reduces the test function's output from $\mathbb{R}^D$ to $\mathbb{R}$ ( green circle in Figure 1 ( right ) ) . However , its input dimension is not affected . Reducing the input dimension of test functions is non-trivial , as directly removing input dimensions results in a test power decrease . This is because less information is accessed by the test function ( see examples in Appendix B.1 ) . Our solution to this problem uses the Radon transform , which is inspired by CT-scans . It projects the original test function $f ( x )$ in Stein discrepancy ( Eq . 2 ) ( as an $\mathbb{R}^D \to \mathbb{R}$ mapping ) to a group of $\mathbb{R} \to \mathbb{R}$ functions along a set of directions ( $g \in \mathcal{S}^{D-1}$ ) . Then , this group of functions is used as the new test functions to define the proposed discrepancy . The invertibility of the Radon transform ensures that testing with input in the original space $\mathbb{R}^D$ is equivalent to the test using a group of low dimensional functions with input in $\mathbb{R}$ . Thus , the above two steps not only reduce the dimensions of the test function's output and input , but also maintain the validity of the resulting discrepancy as each step is either equivalent or invertible . In detail , assume two distributions $p$ and $q$ supported on $\mathbb{R}^D$ with differentiable densities $p ( x )$ and $q ( x )$ , and define the test functions $f ( \cdot ; r , g ) : \mathbb{R}^D \to \mathbb{R}$ such that $f ( x ; r , g ) = f_{rg} \circ h_g ( x ) = f_{rg} ( x^\top g )$ , where $h_g ( \cdot )$ is the inner product with $g$ and $f_{rg} : \mathbb{R} \to \mathbb{R}$ . One should note that the $r$ and $g$ in $f ( \cdot ; r , g )$ should not just be treated as parameters in a test function $f$ . In fact , they are more like an index to indicate that for each pair $r , g$ , we need a new $f ( \cdot ; r , g )$ , i.e . a new $f_{rg}$ , which is completely independent of the other test functions . The proposed sliced Stein discrepancy ( SSD ) , defined using two uniform distributions $p_r ( r )$ and $p_g ( g )$ over the hypersphere $\mathcal{S}^{D-1}$ , is given by the following , with $f_{rg} \in \mathcal{F}_q$ meaning $f ( \cdot ; r , g ) \in \mathcal{F}_q$ :

$$S ( q , p ) = \mathbb{E}_{p_r , p_g} \left[ \sup_{f_{rg} \in \mathcal{F}_q} \mathbb{E}_q \left[ s_p^r ( x ) f_{rg} ( x^\top g ) + r^\top g \, \nabla_{x^\top g} f_{rg} ( x^\top g ) \right] \right] . \quad ( 5 )$$

We verify the proposed SSD is a valid discrepancy measure , namely , $S ( q , p ) = 0$ iff . $q = p$ a.e . Theorem 1 . ( SSD Validity ) If assumptions 1-4 in Appendix A are satisfied , then for two probability distributions $p$ and $q$ , $S ( q , p ) \geq 0$ , and $S ( q , p ) = 0$ if and only if $p = q$ a.e . Despite this attractive theoretical result , SSD is difficult to compute in practice . Specifically , the expectations over $r$ and $g$ can be approximated by Monte Carlo but this typically requires a very large number of samples in high dimensions ( Deshpande et al. , 2019 ) . We propose to relax such limitations by using only a finite number of slicing directions $r$ from an orthogonal basis $O_r$ of $\mathbb{R}^D$ , e.g . the standard basis of one-hot vectors , and the corresponding optimal test direction $g_r$ for each $r$ . We call this variant maxSSD , which is defined as follows and validated in Corollary 1.1 :

$$S_{\max} ( q , p ) = \sum_{r \in O_r} \sup_{f_{r g_r} \in \mathcal{F}_q , \, g_r \in \mathcal{S}^{D-1}} \mathbb{E}_q \left[ s_p^r ( x ) f_{r g_r} ( x^\top g_r ) + r^\top g_r \, \nabla_{x^\top g_r} f_{r g_r} ( x^\top g_r ) \right] . \quad ( 6 )$$

Corollary 1.1 . ( maxSSD ) Assume the conditions in Theorem 1 , then $S_{\max} ( q , p ) = 0$ iff . $p = q$ a.e .
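The sliced Stein operator inside Eq. (5) satisfies a Stein identity when p = q, which is easy to check by Monte Carlo; a small numpy sketch with q = N(0, I) (so s_q(x) = -x) and tanh as the 1-D test function:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 200_000
x = rng.standard_normal((n, d))                 # samples from q = N(0, I)
r = rng.standard_normal(d); r /= np.linalg.norm(r)
g = rng.standard_normal(d); g /= np.linalg.norm(g)

proj = x @ g                                    # x^T g
f, df = np.tanh, lambda t: 1.0 - np.tanh(t) ** 2

# E_q[ s_q^r(x) f_rg(x^T g) + r^T g f'_rg(x^T g) ] vanishes when p = q
stein_term = (-(x @ r)) * f(proj) + (r @ g) * df(proj)
print(stein_term.mean())                        # approximately 0
```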
This paper tries to solve the curse-of-dimensionality problem of KSD and the corresponding mode-collapse problem of SVGD by projecting both the input and output of the test function onto 1D slices. By doing so, the paper proposes new discrepancies called SSD and maxSKSD, and a new variant of SVGD called S-SVGD. Experiments on goodness-of-fit testing (synthetic high-dimensional Gaussians & RBMs) and model learning (ICA on synthetic data & amortized SVGD on MNIST) are reported in the main body of the paper.
SP:23124b43054b8f3b0cf5860a1fa0728f7edf8e63
Synthesising Realistic Calcium Traces of Neuronal Populations Using GAN
1 INTRODUCTION . The ability to record accurate neuronal activities from behaving animals is essential for the study of information processing in the brain . Electrophysiological recording , which measures the rate of change in voltage by microelectrodes inserted in the cell membrane of a neuron , has high temporal resolution and is considered the most accurate method to measure spike activities ( Dayan & Abbott , 2001 ) . However , this method is not without shortcomings ( Harris et al. , 2016 ) . For instance , a single microelectrode can only detect activity from a few neurons in close proximity , and extensive pre-processing is required to infer single-unit activity from a multi-unit signal . Disentangling circuit computations in neuronal populations of a large scale remains a difficult task ( Rey et al. , 2015 ) . On the other hand , calcium imaging monitors the calcium influx in the cell as a proxy of an action potential ( Berridge et al. , 2000 ) . Contrary to electrophysiological recordings , this technique yields data with high spatial resolution and low temporal resolution ( Grienberger & Konnerth , 2012 ) , and has become a powerful imaging technique to monitor large neuronal populations . With the advancements in these recording technologies , it has become increasingly easier to obtain high-quality neuronal activity data in vivo from live animals . However , due to ethical considerations , the acquired datasets are often limited by the number of trials or the duration of each trial on a live animal . This poses a problem for assessing analysis techniques that take into account higher-order correlations ( Brown et al. , 2004 ; Staude et al. , 2010 ; Stevenson & Kording , 2011 ; Saxena & Cunningham , 2019 ) . Even for linear decoders , the number of trials can be more important for determining coding accuracy than the number of neurons ( Stringer et al. , 2019 ) . Generative models of neuronal activity hold the promise of alleviating the above problem by enabling the synthesis of an unlimited number of realistic samples for assessing advanced analysis methods . Popular modelling approaches such as the maximum entropy framework ( Schneidman et al. , 2006 ; Tkačik et al. , 2014 ) and the latent variable model ( Macke et al. , 2009 ; Lyamzin et al. , 2010 ) have shown ample success in modelling spiking activities , though many of these models require strong assumptions on the data and can not generalize to different cortical areas . To this end , GANs have shown tremendous success in synthesizing data across a vast variety of domains and data-types ( Karras et al. , 2017 ; Gomez et al. , 2018 ; Donahue et al. , 2019 ) , and are good candidates for modelling neuronal activities . Spike-GAN ( Molano-Mazon et al. , 2018 ) demonstrated that GANs can model neural spikes that accurately match the statistics of real recorded spiking behaviour from a small number of neurons . Moreover , the discriminator in Spike-GAN is able to learn to detect which population activity pattern is the relevant feature , and this can provide insights into how a population of neurons encodes information . Ramesh et al . ( 2019 ) trained a conditional GAN ( Mirza & Osindero , 2014 ) , conditioned on the stimulus , to generate multivariate binary spike trains .
They fitted the generative model with data recorded in the V1 area of macaque visual cortex , and the GAN generated spike trains were able to capture the firing rate and pairwise correlation statistics better than the dichotomized Gaussian model ( Macke et al. , 2009 ) and a deep supervised convolution model . Nevertheless , the aforementioned deep generative models operate on spike trains which are discrete in nature , and back-propagation on discrete data remains a difficult task ( Caccia et al. , 2018 ) . For instance , Ramesh et al . ( 2019 ) used the REINFORCE gradient estimate ( Williams , 1992 ) to train the generator in order to perform back-propagation on discrete data . Still , gradient estimation with the REINFORCE approach yields large variance , which is known to be challenging for optimization ( Maddison et al. , 2016 ; Zhang et al. , 2017 ) . In addition , generating and training on binary spike trains directly introduces uncertainty as the generator has to learn the deconvolution process as well , making it an even more difficult task . In this work , we investigate the possibility of synthesising continuous calcium fluorescent signals using the GAN framework , as a method to scale-up or augment the amount of population activity data . In addition , modelling the calcium signals directly has several advantages : ( a ) the generator needs to learn the deconvolution process when synthesising directly on binary spike trains , hence there is additional uncertainty , which is not present for calcium signals . ( b ) Calcium imaging signals have inherently more information about the neuronal activities than binary spike trains . ( c ) Based on calcium signals with known ground-truth , calcium deconvolution algorithms can be evaluated . Hence , we devised a workflow to synthesize and evaluate calcium imaging signals , then validate the method on artificial data with known ground-truth as well as mimicking real two-photon calcium ( Ca2+ ) imaging data as recorded from the primary visual cortex of a behaving mouse ( Pakan et al. , 2018 ; Henschke et al. , 2020 ) . 2 METHODS . 2.1 NETWORK ARCHITECTURE . The original GAN framework , introduced in Goodfellow et al . ( 2014 ) , plays a min-max game where the generator G attempts to generate convincing samples from the latent space $Z$ , and the discriminator D learns to distinguish between generated samples and real samples $X$ . In this work , we use the WGAN-GP ( Gulrajani et al. , 2017 ) formulation of the loss function without the need of incorporating any information of the neural activities into the training objective :

$$\mathcal{L}_D = \mathbb{E}_{z \sim Z} [ D ( G ( z ) ) ] - \mathbb{E}_{x \sim X} [ D ( x ) ] + \lambda \, \mathbb{E}_{\tilde{x} \sim \tilde{X}} \left[ ( \| \nabla_{\tilde{x}} D ( \tilde{x} ) \|_2 - 1 )^2 \right] \quad ( 1 )$$

where $\lambda$ denotes the gradient penalty coefficient , and $\tilde{x} = \epsilon x + ( 1 - \epsilon ) \hat{x}$ are samples taken between the real and generated data distribution . For learning calcium signal generation , we adapted the WaveGAN architecture ( Donahue et al. , 2019 ) , which has shown promising results in audio signal generation . In the generator , we used 1-dimensional transposed convolution layers to up-sample the input noise . We added Layer Normalization ( Ioffe & Szegedy , 2015 ) in between each convolution and activation layer , in order to stabilize training as well as to make the operation compatible with the WGAN-GP framework . To improve the model learning performance and stability , the calcium signals were scaled to the range between 0 and 1 by normalizing with the maximum value of the calcium signal in the data .
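A standard PyTorch sketch of the gradient penalty term in Eq. (1), in our own minimal form, for 1-D signal batches of shape (batch, channels, time):

```python
import torch

def gradient_penalty(D, real, fake, lam=10.0):
    """WGAN-GP penalty of Eq. (1): (||grad D(x_tilde)||_2 - 1)^2 on interpolates."""
    eps = torch.rand(real.size(0), 1, 1, device=real.device)
    x_tilde = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    grads = torch.autograd.grad(D(x_tilde).sum(), x_tilde, create_graph=True)[0]
    return lam * ((grads.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()
```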
Correspondingly , we chose sigmoid activation in the output layer of the generator and then re-scaled the signals to their original range before inferring their spike trains . The architecture of the discriminator in our model is largely a mirror of the generator , with the exception that Layer Normalization is removed and , instead of up-sampling the input with transposed convolution , we used a simple convolution layer . Samples generated using transposed convolution often exhibit the "checkerboard" artifacts described by Odena et al . ( 2016 ) , where the output exhibits repeated patterns ( usually very subtle to the eye ) due to a filter being applied unevenly to the receptive field . In the context of signal generation , the discriminator could exploit the periodic artifacts pattern and learn a naive policy to reject generated samples . Donahue et al . ( 2019 ) proposed the Phase Shuffle mechanism in the discriminator to address the aforementioned issue . The Phase Shuffle layer randomly shifts the activated units after each convolution layer within $[ -n , n ]$ , in order to distort the periodic pattern . Hence , the resulting samples constitute a more challenging task for the discriminator . Figure A.4 shows a simple illustration of the Phase Shuffle operation . In our network , we incorporated the Phase Shuffle operation , as well as using a kernel size that is divisible by the stride size , as suggested in Odena et al . ( 2016 ) . We apply the Phase Shuffle operation after each convolution layer , which has led to a noticeable improvement in the generated samples . Table A.1 shows the exact architecture of our model . 2.2 MODEL PIPELINE . We devised a consistent model analysis pipeline to evaluate the quality of samples generated by the model , as well as its ability to generalize , in the context of neuronal population spiking activities . The complete model analysis pipeline is shown in Figure A.2 . As calcium imaging is largely being used as a proxy to monitor spiking activities , we have decided to evaluate and present the inferred spike trains instead of raw calcium signals . We used the Online Active Set method to Infer Spikes ( OASIS ) AR1 deconvolution algorithm ( Friedrich et al. , 2017 ) to infer spiking activities from calcium fluorescent signals . We apply OASIS to both the training data and generated data to ensure the potential bias in the deconvolution process applies to the two sets of data . We then trained both the generator and discriminator with the WGAN-GP framework ( Gulrajani et al. , 2017 ) , with 5 discriminator update steps for each generator update step . We used the Adam optimizer ( Kingma & Ba , 2014 ) to optimize both networks , with a learning rate of $\lambda = 10^{-4}$ , $\beta_1 = 0.9$ and $\beta_2 = 0.9999$ . To speed up the training process , we incorporated Mixed Precision training ( Micikevicius et al. , 2017 ) in our codebase . The exact hyper-parameters being used in this work can be found in Table A.2 . After inferring the spike trains from the generated calcium signals , we then measure the spike train statistics and similarities using the Electrophysiology Analysis Toolkit ( Denker et al. , 2018 ) .
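A minimal sketch of the Phase Shuffle operation described above, assuming inputs of shape (batch, channels, time) and reflection padding at the borders; the exact padding mode is an assumption on our part.

```python
import torch
import torch.nn.functional as F

def phase_shuffle(x, n=2):
    """Randomly shift activations in time by an integer in [-n, n]."""
    shift = int(torch.randint(-n, n + 1, (1,)))
    if shift == 0:
        return x
    if shift > 0:   # shift right, reflect-pad on the left
        return F.pad(x[:, :, :-shift], (shift, 0), mode="reflect")
    return F.pad(x[:, :, -shift:], (0, -shift), mode="reflect")
```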
Following some of the previous works in spike generation ( Macke et al. , 2009 ; Molano-Mazon et al. , 2018 ; Ramesh et al. , 2019 ) , we evaluate the performance of our model with the following statistics and similarities : ( a ) mean firing rate for evaluating single neuron statistics ; ( b ) pairwise Pearson correlation coefficient for evaluating pairwise statistics ; ( c ) pairwise van-Rossum distance ( Rossum , 2001 ) for evaluating general spike train similarity . Importantly , we evaluate these quantities across the whole population for each neuron or neuron pair and each short time interval ( 100 ms ) and compare the resulting distributions over these quantities obtained from training data as well as generated data . We therefore validate the whole spatiotemporal first- and second-order statistics as well as general spike train similarities . 2.3 DATA . 2.3.1 DICHOTOMIZED GAUSSIAN ARTIFICIAL DATA . In order to verify that CalciumGAN is able to learn the underlying distribution and statistics of the training data , we generated our own ground-truth dataset with pre-defined mean and covariance using the dichotomized Gaussian ( DG ) model ( Macke et al. , 2009 ) . The model uses a multivariate normal distribution to generate latent continuous random variables which are then thresholded to generate binary variables representing spike trains . The DG model has the mean vector and covariance matrix as free parameters . To generate data from this model , we used the sample means and sample covariances obtained from real recorded data ( see Section 2.3.2 ) . In alignment with the recorded data , we generated correlated spike trains for $N = 102$ neurons with a duration of 899 seconds at 24Hz , hence a matrix with shape ( 21576 , 102 ) . In order to obtain calcium-like signals $c$ from spike trains $s$ with length $T$ , we convolved the generated spike trains with a calcium response kernel and added noise , as described in Friedrich et al . ( 2017 ) :

$$\tilde{s}_t = g \, \tilde{s}_{t-1} + s_t \quad 1 \leq t \leq T \quad ( 2 )$$
$$c = b + \tilde{s} + \sigma u \quad u \sim \mathcal{N} ( 0 , 1 ) \quad ( 3 )$$

where $g$ denotes a finite impulse response filter , $b$ is the baseline value of the signal and $\sigma$ is the noise standard deviation . In our work , we set $g = 0.95$ , $\sigma = 0.3$ and $b = 0$ . We scale the signal range to the unit interval . The data is then segmented using a sliding window along the time dimension with a stride of 2 and a window size of $T = 2048$ ( around 85 seconds in experiment time ) . We apply the segmentation procedure to both the signal and spike data , hence resulting in two matrices with shape ( 9754 , 2048 , 102 ) . Examples of signals and spikes generated from the DG model can be found in Figure A.1a .
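A numpy sketch of Eqs. (2)-(3) for turning binary spike trains into calcium-like signals with the stated parameters; the unit-interval scaling at the end follows the preprocessing described above.

```python
import numpy as np

def spikes_to_calcium(s, g=0.95, sigma=0.3, b=0.0, seed=0):
    """s: (T, N) binary spike trains -> noisy calcium-like signals (Eqs. 2-3)."""
    rng = np.random.default_rng(seed)
    s_f = np.zeros_like(s, dtype=float)          # filtered spikes, tilde-s
    s_f[0] = s[0]
    for t in range(1, len(s)):
        s_f[t] = g * s_f[t - 1] + s[t]           # AR(1) calcium response kernel
    c = b + s_f + sigma * rng.standard_normal(s.shape)
    return c / c.max()                           # scale to the unit interval
```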
The paper proposes to use a GAN framework to generate realistic neuronal calcium signals, enabling the scale-up of neuronal population activity data. The solution is based on the WaveGAN architecture with the Wasserstein distance to train on calcium fluorescent signals. The experiments show that, on artificial calcium signals with known ground truth, the generated data closely resembles the underlying data distribution. The accuracy of the approach and the robustness of the signals generated by the model are evaluated.
SP:3f164a85f782ec9beeb00b19638f98d0cb6a6265
Episodic Memory for Learning Subjective-Timescale Models
1 INTRODUCTION . An agent endowed with a model of its environment has the ability to predict the consequences of its actions and perform planning into the future before deciding on its next move . Models can allow agents to simulate the possible action-conditioned futures from their current state , even if the state was never visited during learning . As a result , model-based approaches can provide agents with better generalization abilities across both states and tasks in an environment , compared to their model-free counterparts ( Racanière et al. , 2017 ; Mishra et al. , 2017 ) . The most popular framework for developing agents with internal models is model-based reinforcement learning ( RL ) . Model-based RL has seen great progress in recent years , with a number of proposed architectures attempting to improve both the quality and the usage of these models ( Kaiser et al. , 2020 ; Racanière et al. , 2017 ; Kansky et al. , 2017 ; Hamrick , 2019 ) . Nevertheless , learning internal models poses a number of unsolved problems . The central one of them is model-bias , in which the imperfections of the learned model result in unwanted over-optimism and sequential error accumulation for long-term predictions ( Deisenroth & Rasmussen , 2011 ) . Long-term predictions are additionally computationally expensive in environments with slow temporal dynamics , given that all intermediary states must be predicted . Moreover , slow world dynamics ( i.e . worlds with small change in state given an action ) can inhibit the learning of dependencies between temporally-distant events , which can be crucial for environments with sparse rewards . Finally , the temporal extent of future predictions is limited to the objective timescale of the environment over which the transition dynamics has been learned . This leaves little room for flexible and context-dependent planning over varying timescales , which is characteristic of animals and humans ( Clayton et al. , 2003 ; Cheke & Clayton , 2011 ; Buhusi & Meck , 2005 ) . The final issue exemplifies the disadvantage of the classical view on internal models , in which they are considered to capture the ground-truth transition dynamics of the environment . Furthermore , in more complex environments with first-person observations , this perspective does not take into account the apparent subjectivity of first-person experiences . In particular , the agent 's learned representations of the environment 's transition dynamics implicitly include information about time . Little work has been done to address the concept of time perception in model-based agents ( Deverett et al. , 2019 ) . Empirical evidence from the studies of human and animal cognition suggests that intelligent biological organisms do not perceive time precisely and do not possess an explicit clock mechanism responsible for keeping track of time ( Roseboom et al. , 2019 ; Sherman et al. , 2020 ; Hills , 2003 ) . For instance , humans tend to perceive time slower in environments rich in perceptual content ( e.g . busy city ) , and faster in environments with little perceptual change ( e.g . empty field ) . The mechanisms of subjective time perception still remain unknown ; however , recent computational models based on episodic memory were able to closely model the deviations of human time perception from veridical perception ( Fountas et al. , 2020b ) .
Inspired by this account , in this work we propose the subjective-timescale model ( STM ) , an alternative approach to learning a transition dynamics model , by replacing the objective timescale with a subjective one . The latter represents the timescale by which an agent perceives events in an environment and predicts future states , and is defined by the sequences of episodic memories . These memories are accumulated on the basis of saliency ( i.e . how poorly an event was predicted by the agent 's transition model ) , which attempts to mimic the way humans perceive time , and results in the agent 's ability to plan over varying timescales and construct novel future scenarios . We employ active inference as the agent 's underlying cognitive framework . Active inference is an emerging framework within computational neuroscience , which attempts to unify perception and action under the single objective of minimising the free-energy functional . Similar to model-based RL , an active inference agent relies almost entirely on the characteristics and the quality of its internal model to make decisions . Thus , it is naturally susceptible to the previously mentioned problems associated with imperfect , objective-timescale models . The selection of active inference for the purposes of this paper is motivated by its biological plausibility as a normative framework for understanding intelligent behaviour ( Friston et al. , 2017a ; 2006 ) , which is in line with the general theme of this work . Furthermore , being rooted in variational inference , the free energy objective generates a distinct separation between the information-theoretic quantities that correspond to the different components of the agent 's model , which is crucial to define the memory formation criterion . We demonstrate that the resulting characteristics of STM allow the agent to automatically perform both short- and long-term planning using the same computational resources and without any explicit mechanism for adjusting the temporal extent of its predictions . Furthermore , for long-term predictions STM systematically performs temporal jumps ( skipping intermediary steps ) , thus providing more informative future predictions and reducing the detrimental effects of one-step prediction error accumulation . Lastly , being trained on salient events , STM much more frequently imagines futures that contain epistemically-surprising events , which incentivises exploratory behaviour .
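A minimal sketch of the saliency-based accumulation of episodic memories described above; the scalar `prediction_error` and the fixed `threshold` are our own illustrative simplifications of the free-energy-based memory formation criterion.

```python
class EpisodicBuffer:
    """Keep only poorly-predicted (salient) observations; the stored sequences
    define the subjective timescale over which STM would be trained."""

    def __init__(self, threshold: float):
        self.threshold = threshold
        self.episodes = [[]]

    def observe(self, obs, prediction_error: float, done: bool = False):
        if prediction_error > self.threshold:    # saliency criterion
            self.episodes[-1].append(obs)
        if done:                                 # start a new memory sequence
            self.episodes.append([])
```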
These are particularly attractive due to their recent proven success in a variety of domains , including deep model-free RL ( Silver et al. , 2017 ) , ability to deal with high-dimensional data , and existence of methods for uncertainty quantification ( Blundell et al. , 2015 ; Gal & Ghahramani , 2016 ) . Different deep learning architectures have been utilised including fully-connected neural networks ( Nagabandi et al. , 2018 ; Feinberg et al. , 2018 ; Kurutach et al. , 2018 ) and autoregressive models ( Ha & Schmidhuber , 2018 ; Racanière et al. , 2017 ; Ke et al. , 2019 ) , showing promising results in environments with relatively high-dimensional state spaces . In particular , autoregressive architectures , such as Long Short-Term Memory ( LSTM ) ( Hochreiter & Schmidhuber , 1997 ) , are capable of modelling non-Markovian environments and of learning temporal dependencies . Nevertheless , LSTMs are still limited in their ability to learn relations between temporally-distant events , which is exacerbated in environments where little change occurs given an action . Uncertainty quantification using ensemble methods ( Kalweit & Boedecker , 2017 ; Clavera et al. , 2020 ; Buckman et al. , 2018 ) or Bayesian neural networks ( McAllister & Rasmussen , 2016 ; Depeweg et al. , 2017 ) has been proposed to tackle model bias and sequential error accumulation . Other works have focused on techniques to create more accurate long-term predictions . Mishra et al . ( 2017 ) used a segment-based approach to predict entire trajectories at once in an attempt to avoid one-step prediction error accumulation . A work by Ke et al . ( 2019 ) used an autoregressive model and introduced a regularising auxiliary cost with respect to the encodings of future observations , thus forcing the latent states to carry useful information for long-horizon predictions . In contrast , the work presented in this paper re-focuses the objective from attempting to create better parametrisation techniques or mitigating methods to simply transforming the timescale over which the dynamics of an environment is learned . As will be seen , our approach can lead to more accurate and efficient long-term predictions without compromising the agent 's ability to plan over short time-horizons . Episodic Memory . In neuroscience , episodic memory is used to describe autobiographical memories that link a collection of first-person sensory experiences at a specific time and place ( Tulving , 1972 ) . Past studies in the field suggest that episodic memory plays an important role in human learning ( Mahr & Csibra , 2017 ) , and may capture a wide range of potential functional purposes , such as construction of novel future scenarios ( Schacter et al. , 2007 ; 2012 ; Hassabis et al. , 2007 ) , mental time-travel ( Michaelian , 2016 ) or assisting in the formation of new semantic memories ( Greenberg & Verfaellie , 2010 ) . A recent computational model of episodic memory ( Fountas et al. , 2020b ) also relates it to the human ability to estimate time durations . The application of episodic memory in reinforcement learning has been somewhat limited . Some works have employed simple forms of memory to improve the performance of a deep model-free RL agent via experience replay ( Mnih et al. , 2015 ; Espeholt et al. , 2018 ; Schaul et al. , 2016 ) . However , these methods do not incorporate information about associative or temporal dependencies between the memories ( Hansen et al. , 2018 ) .
Read-write memory banks have also been implemented alongside gradient-based systems ( memory-augmented neural networks ) for assisting in learning and prediction ( Graves et al. , 2014 ; 2016 ; Hung et al. , 2019 ; Oh et al. , 2016 ; Jung et al. , 2018 ) . Further , episodic memory has been used for non-parametric Q-function approximation ( Blundell et al. , 2016 ; Pritzel et al. , 2017 ; Hansen et al. , 2018 ; Zhu et al. , 2020 ) . It has also been proposed to be used directly for control as a faster and more efficient alternative to model-based and model-free approaches in RL , such as instance-based control ( Lengyel & Dayan , 2007 ; Botvinick et al. , 2019 ; Gershman & Daw , 2017 ) and one-shot learning ( Kaiser et al. , 2017 ) . In contrast , our paper considers a novel way of using episodic memories – in defining the agent ’ s subjective timescale of the environment and training a transition dynamics model over the sequences of these memories . Active Inference . Until now , most of the work on active inference has been done in low-dimensional and discrete state spaces ( Friston et al. , 2015 ; 2017b ; c ; d ) . Recently , however , there has been a rising interest in scaling active inference and applying it to environments with continuous and/or large state spaces ( Fountas et al. , 2020a ; Tschantz et al. , 2019 ; Çatal et al. , 2019 ; Millidge , 2019 ; Ueltzhöffer , 2018 ) . Although these works used deep learning techniques , their generative models have so far been designed to be Markovian and trained over the objective timescale of the environment .
Most model-based RL algorithms learn dynamics models that predict the next timestep. However, because of model bias, the frequency of timesteps, and objective timescales, these dynamics models can accumulate errors and are limited by the timescale they were trained on. The authors propose the subjective-timescale model (STM), which, instead of predicting every next timestep, finds the "surprising" subsequences of trajectories and learns temporal-skipping dynamics models over them. The paper shows improvements over single-step prediction baselines in a first-person navigation domain.
SP:3e360ec6c3c576d09fc38169789f9df9dada9bea
Efficient randomized smoothing by denoising with learned score function
1 INTRODUCTION . Deep image classifiers are susceptible to deliberate noise known as adversarial attacks ( Szegedy et al. , 2013 ; Goodfellow et al. , 2014 ; Carlini & Wagner , 2017 ) . Even though many works proposed heuristics that can annul or mitigate adversarial attacks , most of them were broken by stronger attacks ( Athalye et al. , 2018 ; Athalye & Carlini , 2018 ) . The vulnerability of empirical defenses has led researchers to scrutinize certified defenses , which ensure that models have constant output within an allowed set around a given input . Unfortunately , many provable defenses are not feasible for large-scale neural networks because of their constraints on the architecture . On the other hand , randomized smoothing is a practical method that does not restrain the choice of neural networks . Randomized smoothing converts any base classifier to a smoothed classifier by making predictions over randomly perturbed samples . Then the smoothed classifiers are guaranteed to have an $\ell_p$ certified radius , which is theoretically derived from the noise type used for smoothing . Since Cohen et al . ( 2019 ) derived a tight $\ell_2$ certified radius for Gaussian randomized smoothing , subsequent works studied the certification bounds for various distributions ( Teng et al. , 2020 ; Yang et al. , 2020 ) . As base classifiers are required to predict randomly perturbed samples , natural classifiers are not sufficient for randomized smoothing . Therefore , many works proposed training ensembles of base classifiers accustomed to randomized smoothing . However , since each trained classifier only applies to a specific noise distribution and level , it is expensive to protect against various $\ell_p$ adversaries and robustness strengths . In this work , we tackle the inefficiency of training a random ensemble of base classifiers by applying one universal image denoiser to the pre-trained classifier . The idea of using a denoiser for randomized smoothing was first introduced by Salman et al . ( 2020 ) and is referred to as denoised smoothing . One step further , we study the general image denoising problem for randomized smoothing with two different approaches : 1 ) direct training of an image denoiser , and 2 ) solving an optimization problem by using a generative model to project onto the learned data manifold . Then , we show that the score function , which is the gradient of log-density , is crucial for both approaches . We exploit multi-scale denoising score matching ( Song & Ermon , 2019 ) for score estimation , and propose an efficient algorithm , simulated annealing , for image denoising . Remark that we only require one score network to certify various noise distributions and levels . We provide experiments on ImageNet and CIFAR-10 datasets to show the efficacy of our methods . Specifically , our denoisers perform better than the original denoised smoothing , while they can be applied to various noise types without any re-training . Furthermore , we compare with the random-ensemble based method , which we refer to as white-box smoothing , and show that our method is comparable to them . In sum , we list our contributions : • We propose novel score-based image denoisers for randomized smoothing . • We improve denoised smoothing , which was originally proposed by Salman et al . ( 2020 ) , and generalize it to other distributions without training any neural networks . 2 RANDOMIZED SMOOTHING AND DENOISED SMOOTHING . 2.1 BACKGROUNDS ON RANDOMIZED SMOOTHING .
Let $f : \mathbb{R}^d \to \mathcal{Y}$ be a classifier and $q$ a distribution on $\mathbb{R}^d$. Randomized smoothing with $q$ converts the base classifier $f$ into the associated smoothed classifier $g$, where $g(x)$ returns the class most likely to be predicted by the base classifier $f$ when $x$ is perturbed by a random noise sampled from $q$, i.e.,
$$g(x) = \arg\max_{c \in \mathcal{Y}} \Pr_{u \sim q(u)} [f(x+u) = c]. \quad (1)$$
The noise distribution is usually a symmetric log-concave distribution, i.e., $q(u) = \exp(-\phi(u))$ for some even and convex $\phi$. Note that to control the robustness/accuracy tradeoff, we embed a noise level $\lambda$ into $q$, giving $q_\lambda(u) = \exp(-\phi(u/\lambda))$. We use the notations $q$ and $q_\lambda$ interchangeably throughout the paper. Robustness guarantee for smoothed classifiers. Suppose an adversary can perturb the input $x$ inside an allowed set $B$, which is usually an $\ell_p$ ball centered at $x$. When $B$ is an $\ell_2$ ball and $q$ is the Gaussian distribution $\mathcal{N}(0, \sigma^2 I)$, $g(x)$ is robust within the radius
$$R = \frac{\sigma}{2}\left(\Phi^{-1}(p_1) - \Phi^{-1}(p_2)\right), \quad (2)$$
where $\Phi^{-1}$ is the inverse of the standard Gaussian cumulative distribution function, $p_1 = \max_c \Pr[f(x+u) = c]$, and $p_2 = \max_{c \neq g(x)} \Pr[f(x+u) = c]$. Cohen et al. (2019) first derived this certified radius using the Neyman-Pearson lemma, and later Salman et al. (2019a) gave an alternative derivation using the Lipschitz property of the smoothed classifier. Furthermore, when $q$ is a centered Laplace distribution, the robustness certificate for the $\ell_1$ radius was derived by Teng et al. (2020). Later, the proof methods were generalized to various distributions (not necessarily log-concave) that can certify various $\ell_p$ radii (Yang et al., 2020). Remark that the robustness guarantee depends on the noise distribution $q_\lambda$ and the performance of the base classifier $f$ under random perturbation with $q_\lambda$. 2.2 RANDOMIZED SMOOTHING VIA IMAGE DENOISING. Even though randomized smoothing can convert any classifier into a provably robust classifier, smoothed classifiers obtained from naturally trained classifiers fall below the standard, as such classifiers are not capable of predicting randomly perturbed samples. Many previous studies therefore focused on training classifiers tailored to randomized smoothing, spanning from noisy data augmentation (Cohen et al., 2019; Li et al., 2019) to variants such as adversarial training (Salman et al., 2019a) or stability training (Lee et al., 2019; Zhai et al., 2019). However, such methods are computationally expensive and require a massive number of classifiers, one per noise type and level. The idea of prepending a denoiser to the classifier was first introduced by Salman et al. (2020). By training a denoiser $D_\theta : \mathbb{R}^d \to \mathbb{R}^d$, the smoothed classifier converted from $f \circ D_\theta$ outperforms the 'no-denoiser' baseline. They proposed training denoisers with a mean squared error (MSE) loss, a classification (CLF) loss, or a combination of both. Formally, these are
$$L_{\text{MSE}}(\theta) = \mathbb{E}_{x \sim p,\, u \sim q}\left[\|D_\theta(x+u) - x\|^2\right], \quad (3)$$
$$L_{\text{CLF}}(\theta) = \mathbb{E}_{x \sim p,\, u \sim q}\left[L_{\text{CE}}(F(D_\theta(x+u)), f(x))\right], \quad (4)$$
where $L_{\text{CE}}$ is the cross-entropy loss and $F$ is the soft version of the hard classifier $f$. They showed that training with the CLF loss performs better than a denoiser trained with the MSE loss alone. Alternatively, Saremi & Srivastava (2020) trained a neural empirical Bayes estimator that can remove white noise. Nonetheless, those methods still suffer from the expensive training of numerous denoisers, one for each noise type and level.
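To make the pipeline of §2.1 concrete, the following is a minimal sketch of Monte Carlo smoothing and the $\ell_2$ certificate of Eq. (2). It is not the authors' implementation: the base classifier, the sample size `n`, and the use of raw empirical class probabilities (rather than the confidence-corrected bounds used by Cohen et al. (2019)) are illustrative assumptions.

```python
# Sketch of Gaussian randomized smoothing and the l2 certificate of Eq. (2).
# `base_classifier` is any function mapping an input array to a class label.
import numpy as np
from scipy.stats import norm

def smoothed_predict(base_classifier, x, sigma, n=1000, num_classes=10):
    """Estimate g(x) = argmax_c Pr[f(x + u) = c] with u ~ N(0, sigma^2 I)."""
    counts = np.zeros(num_classes, dtype=int)
    for _ in range(n):
        u = sigma * np.random.randn(*x.shape)
        counts[base_classifier(x + u)] += 1
    return counts.argmax(), counts

def certified_radius(counts, sigma):
    """Eq. (2): R = (sigma / 2) * (Phi^{-1}(p1) - Phi^{-1}(p2)), using the
    empirical top-two class probabilities (no confidence correction)."""
    n = counts.sum()
    sorted_counts = np.sort(counts)[::-1]
    p1, p2 = sorted_counts[0] / n, sorted_counts[1] / n
    if p1 <= p2:
        return 0.0  # no margin between the top classes; abstain
    return 0.5 * sigma * (norm.ppf(p1) - norm.ppf(p2))
```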
3 SCORE-BASED IMAGE DENOISING. 3.1 FORMULATION OF THE IMAGE DENOISING PROBLEM. Image denoising is an instance of a linear inverse problem, which can be formulated as follows: given an observation $y = x + u$ with $u \sim q(u)$, find $\hat{x}(y)$ that is close to the original $x$. Let $x \sim p(x)$; then the distribution of $y$ is
$$p_q(y) = \int p(y, x)\, dx = \int p(y|x)\, p(x)\, dx = \int q(y - x)\, p(x)\, dx = (p * q)(y).$$
One-step denoiser. As in equation 3, the most common approach to obtain a denoiser is to train a denoising autoencoder (DAE) $D_\theta$ with the MSE loss (Zhang et al., 2017). Suppose $q$ is a Gaussian distribution $\mathcal{N}(0, \sigma^2 I)$ and denote the distribution of $y$ by $p_{\sigma^2}$. Then the following proposition (Robbins, 1956; Lu & Stephens, 2019; Saremi & Hyvarinen, 2020) reveals the relationship between the optimal denoiser $D_{\theta^*}$ and $p_{\sigma^2}$.
Proposition 3.1. Assume $\theta^* \in \arg\min_\theta L_{\text{MSE}}(\theta)$; then the following equation holds:
$$D_{\theta^*}(y) = y + \sigma^2 \nabla_y \log p_{\sigma^2}(y). \quad (5)$$
The proof of Proposition 3.1 is in Appendix A. Define the score function of a density $p(x)$ as $\nabla_x \log p(x)$; then the optimal DAE can be obtained by estimating the score of $p_{\sigma^2}$. Let $s_\theta(\cdot\,; \sigma)$ be a score network that estimates the score of the smoothed density $p_{\sigma^2}$. The denoiser derived from $s_\theta$ is then given by
$$\hat{x}(y) = y + \sigma^2 s_\theta(y; \sigma). \quad (6)$$
Remark that this is only valid when $q$ is a Gaussian distribution. Multi-step denoiser. Consider the maximum a posteriori (MAP) estimator that maximizes the conditional distribution $p(x|y)$. Formally, the MAP loss is given by
$$\arg\min_x L_{\text{MAP}}(x; y) = \arg\min_x -\log p(x|y) \quad (7)$$
$$= \arg\min_x -\log p(x) - \log p(y|x) + \log p(y) \quad (8)$$
$$= \arg\min_x -\log p(x) - \log q(y - x) \quad (9)$$
$$= \arg\min_x -\log p(x) + \phi(y - x). \quad (10)$$
Note that we simply drop the density term $p(y)$, which does not depend on $x$, rewrite $p(y|x)$ with $q$, and lastly rewrite $q$ with $\phi$. Since the density $p(x)$ is usually intractable for high-dimensional datasets, one may use an approximation to make the MAP loss tractable. Many recent works focused on using cutting-edge generative models such as generative adversarial networks (GANs) or invertible neural networks to approximate $p(x)$ in equation 9 (Ulyanov et al., 2018; Whang et al., 2020; Asim et al., 2020). However, GANs suffer from mode collapse, and invertible neural networks require extremely many steps to reach a local minimum, which makes them unsuitable for randomized smoothing. Instead, we aim to approximate the gradient of $L_{\text{MAP}}$ by the score of Gaussian-smoothed densities. Define the approximate MAP loss with $\tilde\sigma$ as
$$L_{\text{MAP}, \tilde\sigma}(x; y) = -\log p_{\tilde\sigma^2}(x) + \phi(y - x). \quad (11)$$
Then we can approximate the gradient of $L_{\text{MAP}, \tilde\sigma}(x; y)$ by the score network and perform gradient descent initialized with $x_0 = y$ as follows:
$$x_{t+1} = x_t - \alpha \nabla_{x_t} L_{\text{MAP}, \tilde\sigma}(x_t; y) \approx x_t + \alpha\left(s_\theta(x_t; \tilde\sigma) + \nabla\phi(y - x_t)\right). \quad (12)$$
Remark that the proposed method can be applied to any log-concave noise distribution. The following theorem shows the recovery guarantee of our method when $q$ is a Gaussian distribution.
Theorem 3.2. Let $x^*$ be a local optimum of $p(x)$, and $y = x^* + u$ where $u \sim \mathcal{N}(0, \sigma^2 I)$. Assume $-\log p$ is $\mu$-strongly convex within the neighborhood $B_r(x) = \{z : \|z - x\| \le r\}$. Then gradient descent on the approximate loss $L_{\text{MAP}, \tilde\sigma}(x; y)$ initialized at $x_0 = y$ converges to a local minimum $\hat{x}(y; \tilde\sigma) \in \arg\min L_{\text{MAP}, \tilde\sigma}(x; y)$ that satisfies:
$$\mathbb{E}\left\|\hat{x}(y; \tilde\sigma) - x^*\right\|_2 \le \frac{\sigma\sqrt{d}\,(1 + \mu\tilde\sigma^2)}{1 + \mu\tilde\sigma^2 + \mu\sigma^2} + \tilde\sigma\sqrt{d}. \quad (13)$$
The proof of Theorem 3.2 is in Appendix A.
Remark that the upper bound in equation 13 increases as $\sigma$ increases, which shows that recovery becomes harder as $\sigma$ becomes larger. Also, the upper bound is a strictly increasing function of $\tilde\sigma$ and attains its minimum at $\tilde\sigma = 0$.
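As a concrete illustration of the two denoisers, the sketch below implements the one-step estimator of Eq. (6) and the gradient-descent scheme of Eq. (12), assuming a trained score network `score_net(x, sigma)` is available. The step size `alpha`, the step count, and the `grad_phi` helper are illustrative assumptions, not the paper's hyperparameters.

```python
# Sketch of the score-based denoisers of Eqs. (6) and (12); `score_net` is
# assumed to estimate grad_x log p_{sigma^2}(x) for a given noise scale.
import torch

def one_step_denoise(score_net, y, sigma):
    """Eq. (6): x_hat = y + sigma^2 * s_theta(y; sigma); Gaussian noise only."""
    return y + (sigma ** 2) * score_net(y, sigma)

def multi_step_denoise(score_net, y, sigma_tilde, grad_phi, alpha=0.1, steps=100):
    """Gradient descent on the approximate MAP loss (Eq. (12)), from x0 = y.
    `grad_phi(v)` is the gradient of phi evaluated at v; for Gaussian noise
    with phi(u) = ||u||^2 / (2 sigma^2) it is v / sigma^2."""
    x = y.clone()
    for _ in range(steps):
        # The grad_phi(y - x) term pulls x toward the observation y,
        # while the score term pulls x toward high-density regions.
        x = x + alpha * (score_net(x, sigma_tilde) + grad_phi(y - x))
    return x
```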
This paper presents a denoising-based method for randomized smoothing that converts a base classifier into a smoothed one with certified $\ell_p$-robustness to adversarial examples. It considers a practical setting where retraining/finetuning of the base classifier is largely inapplicable (e.g., a commercial classification service with only an API provided to users). To do this, it adopts a recently proposed methodology termed denoised smoothing [1] by prepending a custom-trained denoiser to the pretrained classifier. The major novelty of this work lies in the proposed denoising method using a learned score function. The new denoising method only requires training one score network and is readily applicable to defending against various $\ell_p$ adversaries, which is a key feature not available in [1]. The experiments show the proposed method outperforms the previous denoising-based approach, and is sometimes on par with the white-box approach [2] that manipulates the classifier.
SP:074d113e06bfa79b8a5314560ef0b6669278abd5
Random Feature Attention
1 INTRODUCTION . Transformer architectures ( Vaswani et al. , 2017 ) have achieved tremendous success on a variety of sequence modeling tasks ( Ott et al. , 2018 ; Radford et al. , 2018 ; Parmar et al. , 2018 ; Devlin et al. , 2019 ; Parisotto et al. , 2020 , inter alia ) . Under the hood , the key component is attention ( Bahdanau et al. , 2015 ) , which models pairwise interactions of the inputs , regardless of their distances from each other . This comes with quadratic time and memory costs , making the transformers computationally expensive , especially for long sequences . A large body of research has been devoted to improving their time and memory efficiency ( Tay et al. , 2020c ) . Although better asymptotic complexity and prominent gains for long sequences have been achieved ( Lee et al. , 2019 ; Child et al. , 2019 ; Beltagy et al. , 2020 , inter alia ) , in practice , many existing approaches are less well-suited for moderatelength ones : the additional computation steps required by some approaches can overshadow the time and memory they save ( Kitaev et al. , 2020 ; Wang et al. , 2020 ; Roy et al. , 2020 , inter alia ) . This work proposes random feature attention ( RFA ) , an efficient attention variant that scales linearly in sequence length in terms of time and space , and achieves practical gains for both long and moderate length sequences . RFA builds on a kernel perspective of softmax ( Rawat et al. , 2019 ) . Using the well-established random feature maps ( Rahimi & Recht , 2007 ; Avron et al. , 2016 ; §2 ) , RFA approximates the dot-then-exponentiate function with a kernel trick ( Hofmann et al. , 2008 ) : exp ( x · y ) ≈ φ ( x ) · φ ( y ) . Inspired by its connections to gated recurrent neural networks ( Hochreiter & Schmidhuber , 1997 ; Cho et al. , 2014 ) and fast weights ( Schmidhuber , 1992 ) , we further augment RFA with an optional gating mechanism , offering a straightforward way of learning with recency bias when locality is desired . ∗The majority of this work was done while these authors were at DeepMind . RFA and its gated variant ( §3 ) can be used as a drop-in substitute for the canonical softmax attention , and increase the number of parameters by less than 0.1 % . We explore its applications in transformers on language modeling , machine translation , and long text classification ( §4 ) . Our experiments show that RFA achieves comparable performance to vanilla transformer baselines in all tasks , while outperforming a recent related approach ( Katharopoulos et al. , 2020 ) . The gating mechanism proves particularly useful in language modeling : the gated variant of RFA outperforms the transformer baseline on WikiText-103 . RFA shines in decoding , even for shorter sequences . In our head-to-head comparison on machine translation benchmarks , RFA decodes around 2× faster than a transformer baseline , without accuracy loss . Comparisons to several recent efficient transformer variants on three long text classification datasets show that RFA is competitive in terms of both accuracy and efficiency . Our analysis ( §5 ) shows that more significant time and memory efficiency improvements can be achieved for longer sequences : 12× decoding speedup with less than 10 % of the memory for 2,048-length outputs . 2 BACKGROUND . 2.1 ATTENTION IN SEQUENCE MODELING . The attention mechanism ( Bahdanau et al. , 2015 ) has been widely used in many sequence modeling tasks . 
Its dot-product variant is the key building block for the state-of-the-art transformer architectures (Vaswani et al., 2017). Let $\{q_t\}_{t=1}^N$ denote a sequence of $N$ query vectors that attend to sequences of $M$ key and value vectors. At each timestep, the attention linearly combines the values weighted by the outputs of a softmax:
$$\text{attn}(q_t, \{k_i\}, \{v_i\}) = \sum_i \frac{\exp(q_t \cdot k_i / \tau)}{\sum_j \exp(q_t \cdot k_j / \tau)}\, v_i^\top. \quad (1)$$
$\tau$ is the temperature hyperparameter determining how "flat" the softmax is (Hinton et al., 2015).¹ Calculating attention for a single query takes $O(M)$ time and space. For the full sequence of $N$ queries the space amounts to $O(MN)$. When the computation cannot be parallelized across the queries, e.g., in autoregressive decoding, the time complexity is quadratic in the sequence length. 2.2 RANDOM FEATURE METHODS. The theoretical backbone of this work is the unbiased estimation of the Gaussian kernel by Rahimi & Recht (2007). Based on Bochner's theorem (Bochner, 1955), Rahimi & Recht (2007) proposed random Fourier features to approximate a desired shift-invariant kernel. The method nonlinearly transforms a pair of vectors $x$ and $y$ using a random feature map $\phi$; the inner product between $\phi(x)$ and $\phi(y)$ approximates the kernel evaluation on $x$ and $y$. More precisely:
Theorem 1 (Rahimi & Recht, 2007). Let $\phi : \mathbb{R}^d \to \mathbb{R}^{2D}$ be a nonlinear transformation:
$$\phi(x) = \sqrt{1/D}\,\left[\sin(w_1 \cdot x), \ldots, \sin(w_D \cdot x), \cos(w_1 \cdot x), \ldots, \cos(w_D \cdot x)\right]^\top. \quad (2)$$
When the $d$-dimensional random vectors $w_i$ are independently sampled from $\mathcal{N}(0, \sigma^2 I_d)$,
$$\mathbb{E}_{w_i}\left[\phi(x) \cdot \phi(y)\right] = \exp\left(-\|x - y\|^2 / 2\sigma^2\right). \quad (3)$$
The variance of the estimate is inversely proportional to $D$ (Appendix A.2; Yu et al., 2016). Random feature methods proved successful in speeding up kernel methods (Oliva et al., 2015; Avron et al., 2017; Sun, 2019, inter alia), and more recently have been used to efficiently approximate softmax (Rawat et al., 2019). In §3.1, we use them to derive an unbiased estimate of $\exp(\langle\cdot, \cdot\rangle)$ and then an efficient approximation to softmax attention. 3 MODEL. This section presents RFA (§3.1) and its gated variant (§3.2). In §3.3 we lay out several design choices and relate RFA to prior work. We close by analyzing RFA's complexity in practice (§3.4). ¹$M = N$ in self-attention; they may differ, e.g., in the cross attention of a sequence-to-sequence model. 3.1 RANDOM FEATURE ATTENTION. RFA builds on an unbiased estimate of $\exp(\langle\cdot, \cdot\rangle)$ from Theorem 1, which we begin with:
$$\exp(x \cdot y / \sigma^2) = \exp\left(\|x\|^2 / 2\sigma^2 + \|y\|^2 / 2\sigma^2\right)\exp\left(-\|x - y\|^2 / 2\sigma^2\right) \approx \exp\left(\|x\|^2 / 2\sigma^2 + \|y\|^2 / 2\sigma^2\right)\phi(x) \cdot \phi(y). \quad (4)$$
The last line does not have any nonlinear interaction between $\phi(x)$ and $\phi(y)$, allowing for a linear time/space approximation to attention. For clarity we assume the queries and keys are unit vectors.²
$$\text{attn}(q_t, \{k_i\}, \{v_i\}) = \sum_i \frac{\exp(q_t \cdot k_i / \sigma^2)}{\sum_j \exp(q_t \cdot k_j / \sigma^2)}\, v_i^\top \approx \frac{\sum_i \phi(q_t)^\top \phi(k_i)\, v_i^\top}{\sum_j \phi(q_t) \cdot \phi(k_j)} = \frac{\phi(q_t)^\top \sum_i \phi(k_i) \otimes v_i}{\phi(q_t) \cdot \sum_j \phi(k_j)} = \text{RFA}(q_t, \{k_i\}, \{v_i\}). \quad (5)$$
$\otimes$ denotes the outer product between vectors, and $\sigma^2$ corresponds to the temperature term $\tau$ in Eq. 1. RFA can be used as a drop-in replacement for softmax attention. (a) The input is revealed in full to cross attention and encoder self-attention. Here RFA calculates attention using Eq. 5.
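Before turning to the causal case, a minimal sketch of the feature map of Eq. (2) and the non-causal RFA of Eq. (5) may help; all shapes and the feature size `D` are illustrative, and queries/keys are assumed to be unit-normalized as in the derivation.

```python
# Sketch of random-feature attention (Eqs. (2) and (5)); not the paper's code.
import numpy as np

def random_feature_map(x, W):
    """phi(x) of Eq. (2); W has shape (D, d), rows drawn from N(0, sigma^2 I),
    where sigma^2 plays the role of the temperature in Eq. (1)."""
    D = W.shape[0]
    proj = x @ W.T                                           # (..., D)
    return np.sqrt(1.0 / D) * np.concatenate(
        [np.sin(proj), np.cos(proj)], axis=-1)               # (..., 2D)

def rfa(queries, keys, values, W):
    """Eq. (5): linear time and space in the sequence lengths."""
    phi_q = random_feature_map(queries, W)                   # (N, 2D)
    phi_k = random_feature_map(keys, W)                      # (M, 2D)
    S = phi_k.T @ values        # (2D, d_v): sum_i phi(k_i) outer v_i
    z = phi_k.sum(axis=0)       # (2D,):     sum_j phi(k_j)
    return (phi_q @ S) / (phi_q @ z)[:, None]                # (N, d_v)
```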
(b) In causal attention RFA attends only to the prefix.³ This allows for a recurrent computation. The tuple $(S_t \in \mathbb{R}^{2D \times d}, z_t \in \mathbb{R}^{2D})$ is used as the "hidden state" at time step $t$ to keep track of the history, similar to those in RNNs. Then $\text{RFA}(q_t, \{k_i\}_{i \le t}, \{v_i\}_{i \le t}) = \phi(q_t)^\top S_t / (\phi(q_t) \cdot z_t)$, where
$$S_t = S_{t-1} + \phi(k_t) \otimes v_t, \quad z_t = z_{t-1} + \phi(k_t). \quad (6)$$
$2D$ denotes the size of $\phi(\cdot)$. Appendix A.1 summarizes the computation procedure of RFA, and Figure 1 compares it against the softmax attention. Appendix A.3 derives causal RFA in detail. Analogously to the softmax attention, RFA has its multiheaded variant (Vaswani et al., 2017). In our experiments we use causal RFA in a transformer language model (§4.1), and both cross and causal RFA in the decoder of a sequence-to-sequence machine translation model. 3.2 RFA-GATE: LEARNING WITH RECENCY BIAS. The canonical softmax attention does not have any explicit modeling of distance or locality. In learning problems where such inductive bias is crucial (Ba et al., 2016; Parmar et al., 2018; Miconi et al., 2018; Li et al., 2019, inter alia), transformers heavily rely on positional encodings. Answering to this, many approaches have been proposed, e.g., learning the attention spans (Sukhbaatar et al., 2019; Wu et al., 2020), and enhancing the attention computation with recurrent (Hao et al., 2019; Chen et al., 2019) or convolutional (Wu et al., 2019; Mohamed et al., 2019) components. ²This can be achieved by $\ell_2$-normalizing the queries and keys. See §3.3 for a related discussion. ³It is also sometimes called "decoder self-attention" or "autoregressive attention." RFA faces the same issue, but its causal attention variant (Eq. 6) offers a straightforward way of learning with recency bias. We draw inspiration from its connections to RNNs, and augment RFA with a learned gating mechanism (Hochreiter & Schmidhuber, 1997; Cho et al., 2014; Peng et al., 2018, inter alia):
$$g_t = \text{sigmoid}(w_g \cdot x_t + b_g), \quad S_t = g_t\, S_{t-1} + (1 - g_t)\, \phi(k_t) \otimes v_t, \quad z_t = g_t\, z_{t-1} + (1 - g_t)\, \phi(k_t). \quad (7)$$
$w_g$ and $b_g$ are learned parameters, and $x_t$ is the input representation at timestep $t$.⁴ By multiplying the learned scalar gates $0 < g_t < 1$ against the hidden state $(S_t, z_t)$, history is exponentially decayed, favoring more recent context. The gating mechanism shows another benefit of RFA: it would otherwise be more difficult to build similar techniques into the softmax attention, where there is no clear sense of "recurrence" (Appendix A.5). It proves useful in our language modeling experiments (§4.1).
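The recurrences of Eqs. (6) and (7) admit a direct implementation. The sketch below is illustrative: the features and gates are assumed precomputed (e.g., with `random_feature_map` from the previous sketch), and the zero initial state follows from the sums in Eq. (6).

```python
# Sketch of causal RFA (Eq. (6)) and its gated variant (Eq. (7)).
import numpy as np

def causal_rfa(phi_q, phi_k, values, gates=None):
    """phi_q, phi_k: (T, 2D) random features; values: (T, d_v);
    gates: optional (T,) array of sigmoid outputs g_t for RFA-GATE."""
    T, twoD = phi_k.shape
    d_v = values.shape[1]
    S = np.zeros((twoD, d_v))   # running sum of phi(k_t) outer v_t
    z = np.zeros(twoD)          # running sum of phi(k_t)
    out = np.zeros((T, d_v))
    for t in range(T):
        if gates is None:       # Eq. (6): plain prefix sums
            S = S + np.outer(phi_k[t], values[t])
            z = z + phi_k[t]
        else:                   # Eq. (7): exponentially decayed history
            g = gates[t]
            S = g * S + (1 - g) * np.outer(phi_k[t], values[t])
            z = g * z + (1 - g) * phi_k[t]
        out[t] = (phi_q[t] @ S) / (phi_q[t] @ z)
    return out
```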
The paper presents a linear time and space attention mechanism based on random features to approximate the softmax. The paper is clearly written and easy to follow. The results are convincing: not chasing SOTA, but comparing to sensible baselines, namely [Baevski & Auli 2019] for language modeling on Wikitext-103, and [Vaswani et al. 2017] for machine translation on WMT14 EN-DE/EN-FR and IWSLT14 DE-EN.
SP:e79752ff486049e2e9ec9f588aa918ca2399a5e2
Directed Acyclic Graph Neural Networks
1 INTRODUCTION. Graph-structured data is ubiquitous across various disciplines (Gilmer et al., 2017; Zitnik et al., 2018; Sanchez-Gonzalez et al., 2020). Graph neural networks (GNNs) use both the graph structure and node features to produce a vectorial representation, which can be used for classification, regression (Hu et al., 2020), and graph decoding (Li et al., 2018; Zhang et al., 2019). Most popular GNNs update node representations through iterative message passing between neighboring nodes, followed by pooling (either flat or hierarchical (Lee et al., 2019; Ranjan et al., 2020)), to produce a graph representation (Li et al., 2016; Kipf & Welling, 2017; Gilmer et al., 2017; Veličković et al., 2018; Xu et al., 2019). The relational inductive bias (Santoro et al., 2017; Battaglia et al., 2018; Xu et al., 2020)—neighborhood aggregation—empowers GNNs to outperform graph-agnostic neural networks. To facilitate subsequent discussions, we formalize a message-passing neural network (MPNN) architecture, which computes representations $h_v^\ell$ for all nodes $v$ in a graph $G$ in every layer $\ell$ and a final graph representation $h_G$, as (Gilmer et al., 2017):
$$h_v^\ell = \text{COMBINE}^\ell\left(h_v^{\ell-1}, \text{AGGREGATE}^\ell\left(\{h_u^{\ell-1} \mid u \in N(v)\}\right)\right), \quad \ell = 1, \ldots, L, \quad (1)$$
$$h_G = \text{READOUT}\left(\{h_v^L, v \in V\}\right), \quad (2)$$
where $h_v^0$ is the input feature of $v$, $N(v)$ denotes a neighborhood of node $v$ (sometimes including $v$ itself), $V$ denotes the node set of $G$, $L$ is the number of layers, and $\text{AGGREGATE}^\ell$, $\text{COMBINE}^\ell$, and READOUT are parameterized neural networks. For notational simplicity, we omit edge attributes; but they can be straightforwardly incorporated into the framework (1)–(2). Directed acyclic graphs (DAGs) are a special type of graph, yet broadly seen across domains. Examples include parsing results of source code (Allamanis et al., 2018), logical formulas (Crouse et al., 2019), and natural language sentences, as well as probabilistic graphical models (Zhang et al., 2019), neural architectures (Zhang et al., 2019), and automated planning problems (Ma et al., 2020). ∗To whom correspondence should be addressed. A directed graph is a DAG if and only if the edges define a partial ordering over the nodes. The partial order is an additional strong inductive bias one naturally desires to incorporate into the neural network. For example, a neural architecture seen as a DAG defines the acyclic dependency of computation, an important piece of information when comparing architectures and predicting their performance. Hence, this information should be incorporated into the architecture representation for higher predictive power. In this work, we propose DAGNNs—directed acyclic graph neural networks—that produce a representation for a DAG driven by the partial order. In particular, the order allows for updating node representations based on those of all their predecessors sequentially, such that nodes without successors digest the information of the entire graph. Such a processing manner substantially differs from that of MPNNs, where the information landed on a node is limited by a multi-hop local neighborhood and thus restricted by the depth $L$ of the network. Modulo details to be elaborated in sections that follow, the DAGNN framework reads
$$h_v^\ell = F^\ell\left(h_v^{\ell-1}, G^\ell\left(\{h_u^\ell \mid u \in P(v)\}, h_v^{\ell-1}\right)\right), \quad \ell = 1, \ldots, L, \quad (3)$$
$$h_G = R\left(\{h_v^\ell, \ \ell = 0, 1, \ldots, L, \ v \in T\}\right), \quad (4)$$
where $P(v)$ denotes the set of direct predecessors of $v$, $T$ denotes the set of nodes without (direct) successors, and $G^\ell$, $F^\ell$, and $R$ are parameterized neural networks that play similar roles to $\text{AGGREGATE}^\ell$, $\text{COMBINE}^\ell$, and READOUT, respectively. A notable difference between (3)–(4) and (1)–(2) is that the superscript $\ell - 1$ inside the underlined part of (1) is advanced to $\ell$ in the counterpart in (3). In other words, MPNN aggregates neighborhood information from the past layer, whereas DAGNN uses the information in the current layer. An advantage is that DAGNN always uses more recent information to update node representations. Equations (3)–(4) outline several other subtle but important differences between DAGNN and MPNNs, such as the use of only direct predecessors for aggregation and the pooling on only nodes without successors. All these differences are unique to the special structure a DAG enjoys. Exploiting this structure properly should yield a more favorable vectorial representation of the graph. In Section 2, we will elaborate the specifics of (3)–(4). The technical details include (i) attention for node aggregation, (ii) multiple layers for expressivity, and (iii) topological batching for efficient implementation, all of which yield an instantiation of the DAGNN framework that is state of the art. For theoretical contributions, we study topological batching and justify that this technique yields maximal parallel concurrency in processing DAGs. Furthermore, we show that the mapping defined by DAGNN is invariant to node permutation and injective under mild assumptions. This result reassures that the graph representation extracted by DAGNN is discriminative. Because DAGs appear in many different fields, neural architectures for DAGs (including, notably, D-VAE (Zhang et al., 2019)) or special cases (e.g., trees) are scattered around the literature over the years. Generally, they are less explored compared to MPNNs; and some are rather application-specific. In Section 3, we unify several representative architectures as special cases of the framework (3)–(4). We compare the proposed architecture to them and point out the differences that lead to its superior performance. In Section 4, we detail our comprehensive empirical evaluation on datasets from three domains: (i) source code parsed to DAGs (Hu et al., 2020); (ii) neural architecture search (Zhang et al., 2019), where each architecture is a DAG; and (iii) score-based Bayesian network learning (Zhang et al., 2019). We show that DAGNN outperforms many representative DAG architectures and MPNNs. Overall, this work contributes a specialized graph neural network, a theoretical study of its properties, an analysis of a topological batching technique for enhancing parallel concurrency, a framework interpretation that encompasses prior DAG architectures, and comprehensive evaluations. Supporting code is available at https://github.com/vthost/DAGNN. 2 THE DAGNN MODEL. A DAG is a directed graph without cycles. Denote by $G = (V, E)$ a DAG, where $V$ and $E \subset V \times V$ are the node set and the edge set, respectively. A (strong) partial order over a set $S$ is a binary relation $\le$ that is transitive and asymmetric. Some authors use reflexivity versus irreflexivity to distinguish weak partial order from strong partial order.
To unify concepts, we forbid self-loops (which otherwise are considered cycles) in the DAG and mean strong partial order throughout. A set $S$ with partial order $\le$ is called a poset and denoted by a tuple $(S, \le)$. A DAG $(V, E)$ and a poset $(S, \le)$ are closely related. For any DAG, one can define a unique partial order $\le$ on the node set $V$, such that for all pairs of elements $u, v \in V$, $u \le v$ if and only if there is a directed path from $u$ to $v$. On the other hand, for any poset $(S, \le)$, there exists (possibly more than) one DAG that uses $S$ as the node set and that admits a directed path from $u$ to $v$ whenever $u \le v$. In a DAG, all nodes without (direct) predecessors are called sources and we collect them in the set $S$. Similarly, all nodes without (direct) successors are called targets and we collect them in the set $T$. Additionally, we let $X = \{h_v^0, v \in V\}$ be the set of input node features. 2.1 MODEL. The main idea of DAGNN is to process nodes according to the partial order defined by the DAG. Using the language of MPNN, at every node $v$, we "aggregate" information from its neighbors and "combine" this aggregated information (the "message") with $v$'s information to update the representation of $v$. The main differences to MPNN are that (i) we use the current-layer, rather than the past-layer, information to compute the current-layer representation of $v$ and that (ii) we aggregate from the direct-predecessor set $P(v)$ only, rather than the entire (or randomly sampled) neighborhood $N(v)$. They lead to a straightforward difference in the final "readout" also. In the following, we propose an instantiation of Equations (3)–(4). See Figure 1 for an illustration of the architecture. One layer. We use the attention mechanism to instantiate the aggregate operator $G^\ell$. For a node $v$ at the $\ell$-th layer, the output message $m_v^\ell$ computed by $G^\ell$ is a weighted combination of $h_u^\ell$ for all nodes $u \in P(v)$ at the same layer $\ell$:
$$m_v^\ell := G^\ell\left(\{h_u^\ell \mid u \in P(v)\}, h_v^{\ell-1}\right) = \sum_{u \in P(v)} \alpha_{vu}^\ell\left(h_v^{\ell-1}, h_u^\ell\right) h_u^\ell, \quad (5)$$
where $m_v^\ell$ is the message, $h_v^{\ell-1}$ plays the role of the query, and the $h_u^\ell$ play the roles of the keys and values. The weighting coefficients $\alpha_{vu}^\ell$ follow the query-key design in usual attention mechanisms, whereby the representation of $v$ in the past layer, $h_v^{\ell-1}$, serves as the query. Specifically, we define
$$\alpha_{vu}^\ell\left(h_v^{\ell-1}, h_u^\ell\right) = \mathop{\text{softmax}}_{u \in P(v)}\left(w_1^{\ell\,\top} h_v^{\ell-1} + w_2^{\ell\,\top} h_u^\ell\right), \quad (6)$$
where $w_1^\ell$ and $w_2^\ell$ are model parameters. We use the additive form, as opposed to the usual dot-product form,¹ since it involves fewer parameters. An additional advantage is that it is straightforward to incorporate edge attributes into the model, as will be discussed soon. The combine operator $F^\ell$ combines the message $m_v^\ell$ with the previous representation of $v$, $h_v^{\ell-1}$, and produces an updated representation $h_v^\ell$. We employ a recurrent architecture, which is usually used for processing data in sequential order but similarly suits processing in partial order:
$$h_v^\ell = F^\ell\left(h_v^{\ell-1}, m_v^\ell\right) = \text{GRU}^\ell\left(h_v^{\ell-1}, m_v^\ell\right), \quad (7)$$
where $h_v^{\ell-1}$, $m_v^\ell$, and $h_v^\ell$ are treated as the input, past state, and updated state/output of a GRU, respectively. This design differs from most MPNNs that use simple summation or concatenation to combine the representations. It further differs from GG-NN (Li et al., 2016) (which also employs a GRU), wherein the roles of the two arguments are switched.
In GG-NN, the message is treated as the input and the node representation is treated as the state. In contrast, we start from node features and naturally use them as inputs. The message tracks the processed part of the graph and serves better the role of a hidden state, being recurrently updated. By convention, we define $G^\ell(\emptyset, \cdot) = 0$ for the aggregator so that for nodes with an empty direct-predecessor set, the message (or, equivalently, the initial state of the GRU) is zero. Bidirectional processing. Just like in sequence models, where a sequence may be processed in either the natural order or the reversed order, we optionally invert the directions of the edges in $G$ to create a reverse DAG $\tilde{G}$. We will use the tilde notation for all terms related to the reverse DAG. For example, the representation of node $v$ in $\tilde{G}$ at the $\ell$-th layer is denoted by $\tilde{h}_v^\ell$. Readout. After $L$ layers of (bidirectional) processing, we use the computed node representations to produce the graph representation. We follow a common practice—concatenate the representations across layers, perform a max-pooling across nodes, and apply a fully-connected layer to produce the output. Different from the usual practice, however, we pool across only the target nodes and concatenate the pooling results from the two directions. Recall that the target nodes contain information of the entire graph following the partial order. Mathematically, the readout $R$ produces
$$h_G = \text{FC}\left(\mathop{\text{Max-Pool}}_{v \in T}\left(\big\Vert_{\ell=0}^{L}\, h_v^\ell\right) \,\Big\Vert\, \mathop{\text{Max-Pool}}_{u \in S}\left(\big\Vert_{\ell=0}^{L}\, \tilde{h}_u^\ell\right)\right). \quad (8)$$
Note that the target set $\tilde{T}$ of $\tilde{G}$ is the same as the source set $S$ of $G$. If the processing is unidirectional, the right pooling in (8) is dropped. Edge attributes. The instantiation of the framework so far has not considered edge attributes. It is in fact simple to incorporate them. Let $\tau(u, v)$ be the type of an edge $(u, v)$ and let $y_\tau$ be a representation of edges of type $\tau$. We insert this information during message calculation in the aggregator. Specifically, we replace the attention weights $\alpha_{vu}^\ell$ defined in (6) by
$$\alpha_{vu}^\ell\left(h_v^{\ell-1}, h_u^\ell\right) = \mathop{\text{softmax}}_{u \in P(v)}\left(w_1^{\ell\,\top} h_v^{\ell-1} + w_2^{\ell\,\top} h_u^\ell + w_3^{\ell\,\top} y_{\tau(u,v)}\right). \quad (9)$$
In practice, we experiment with slightly fewer parameters by setting $w_3^\ell = w_1^\ell$ and find that the model performs equally well. The edge representations $y_\tau$ are trainable embeddings of the model. Alternatively, if input edge features are provided, $y_{\tau(u,v)}$ can be replaced by a neural-network-transformed embedding for the edge $(u, v)$.
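A single-graph sketch of one DAGNN layer (Eqs. (5)-(7)) may clarify the sequential processing. It omits topological batching, bidirectional processing, and edge attributes; the use of `torch.nn.GRUCell` with the message as the recurrent state follows Eq. (7), and all names are ours.

```python
# Sketch of one DAGNN layer over a single DAG, processed in topological order.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DAGNNLayer(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.w1 = nn.Linear(d, 1, bias=False)   # query term in Eq. (6)
        self.w2 = nn.Linear(d, 1, bias=False)   # key term in Eq. (6)
        self.gru = nn.GRUCell(d, d)             # Eq. (7): input h^{l-1}_v, state m_v

    def forward(self, h_prev, topo_order, predecessors):
        """h_prev: (|V|, d) layer l-1 features; topo_order: node ids with every
        predecessor listed first; predecessors: dict node id -> list of ids."""
        h = torch.zeros_like(h_prev)
        for v in topo_order:
            preds = predecessors[v]
            if preds:  # Eqs. (5)-(6): additive attention over P(v)
                keys = h[preds]                                   # (|P(v)|, d)
                scores = self.w1(h_prev[v]) + self.w2(keys).squeeze(-1)
                alpha = F.softmax(scores, dim=0)
                m_v = (alpha.unsqueeze(-1) * keys).sum(0)
            else:
                m_v = torch.zeros_like(h_prev[v])                 # G(empty, .) = 0
            h[v] = self.gru(h_prev[v].unsqueeze(0),
                            m_v.unsqueeze(0)).squeeze(0)
        return h
```

In a full implementation, nodes within the same topological level share no dependencies, so they can be processed as one batched `GRUCell` call, which is exactly the topological batching the paper analyzes.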
This paper introduces a model, Directed Acyclic Graph Neural Network (DAGNN), which processes information according to the flow defined by partial order. DAGNN can be regarded as a special case of previous GNN models, but specific to directed acyclic graph structures. The authors prove that the model satisfies the properties desired by DAG-based graph representation learning.Then they study topology batching on the proposed model to maximize parallel concurrency in processing DAGs. A comprehensive empirical evaluation is conducted on datasets from three domains to verify its effectiveness.
SP:3d2faa84203e50f95080e9d2de9660affe58e157
Synthesizer: Rethinking Self-Attention for Transformer Models
The dot product self-attention is known to be central and indispensable to stateof-the-art Transformer models . But is it really required ? This paper investigates the true importance and contribution of the dot product-based self-attention mechanism on the performance of Transformer models . Via extensive experiments , we find that ( 1 ) random alignment matrices surprisingly perform quite competitively and ( 2 ) learning attention weights from token-token ( query-key ) interactions is useful but not that important after all . To this end , we propose SYNTHESIZER , a model that learns synthetic attention weights without token-token interactions . In our experiments , we first show that simple Synthesizers achieve highly competitive performance when compared against vanilla Transformer models across a range of tasks , including machine translation , language modeling , text generation and GLUE/SuperGLUE benchmarks . When composed with dot product attention , we find that Synthesizers consistently outperform Transformers . Moreover , we conduct additional comparisons of Synthesizers against Dynamic Convolutions , showing that simple Random Synthesizer is not only 60 % faster but also improves perplexity by a relative 3.5 % . Finally , we show that simple factorized Synthesizers can outperform Linformers on encoding only tasks . 1 INTRODUCTION . Transformer models ( Vaswani et al. , 2017 ) have demonstrated success across a wide range of tasks . This has resulted in Transformers largely displacing once popular auto-regressive and recurrent models in recent years . At the heart of Transformer models lies the query-key-value dot product attention . The success of Transformer models is widely attributed to this self-attention mechanism since fully connected token graphs , which are able to model long-range dependencies , provide a robust inductive bias . But is the dot product self-attention really so important ? Do we need it ? Is it necessary to learn attention weights via pairwise dot products ? This paper seeks to develop a deeper understanding of the role that the dot product self-attention mechanism plays in Transformer models . The fundamental role of dot product self-attention is to learn self-alignment , i.e. , to determine the relative importance of a single token with respect to all other tokens in the sequence . To this end , there have been memory metaphors and analogies constructed to support this claim . Indeed , the terms query , keys , and values imply that self-attention emulates a content-based retrieval process which leverages pairwise interactions at its very core . Moving against convention , this paper postulates that we can not only do without dot product self-attention but also content-based memory-like self-attention altogether . Traditionally , attention weights are learned at the instance or sample level , where weights are produced by instance-level pairwise interactions . As a result , these instance-specific interactions often fluctuate freely across different instances as they lack a consistent global context . This paper proposes SYNTHESIZER , a new model that learns to synthesize the self-alignment matrix instead of manually computing pairwise dot products . We propose a diverse suite of synthesizing functions and extensively evaluate them . We characterize the source information that these synthesizing functions receive , i.e. , whether they receive information from individual tokens , token-token interactions , and/or global task information . 
Intuitively , different source inputs to the synthesizing functions should capture diverse views , which may be useful when employed in conjunction . Aside from generalizing the standard Transformer model , we show that it is possible to achieve competitive results with fully global attention weights that do not consider token-token interactions or any instance-level ( local ) information at all . More specifically , a random matrix SYNTHESIZER model achieves a 27.27 BLEU score on WMT 2014 English-German1 . Via a set of rigorous experiments , we observe that the popular and well-established dot-product content-based attention can be approximated with simpler variants such as random matrices or dense layers without sacrificing much performance in some cases . In our experiments , we also show that our relatively simple Synthesizer models also outperform Dynamic Convolutions ( Wu et al. , 2019 ) with a +3.5 % relative improvement in perplexity while being 60 % faster . On encoding tasks , our factorized Synthesizers can outperform other low-rank efficient Transformer models such as Linformers ( Wang et al. , 2020 ) . While simple Synthesizer models are able to perform competitively , our experiments show that the pairwise dot product is still ultimately helpful . When composing our synthesizing functions with dot products , we find that they consistently improve the performance of Transformers . In general , we believe our findings will spur further investigation and discussion about the true role and utility of the self-attention mechanism in Transformer models . Our Contributions Our key contributions are described as follows : • We propose Synthetic Attention , a new way of learning to attend without explicitly attending ( i.e. , without dot product attention or content-based attention ) . Instead , we generate the alignment matrix independent of token-token dependencies and explore a potpourri of parameterized functions for synthesizing attention matrices . • We propose SYNTHESIZER , a new model that leverages Synthetic Attention . The model performs competitive to state-of-the-art Transformer models on a wide range of language tasks , including machine translation and language modeling . • Moreover , we show that ( 1 ) random learnable alignment matrices perform competitively and ( 2 ) token-token dependencies are not necessary to achieve good performance with Transformer models on certain tasks . • On large-scale masked language modeling on the C4 dataset ( Raffel et al. , 2019 ) and finetuning on SuperGLUE and GLUE benchmarks , we show that simple random Synthesizers can outperform/match Lightweight Dynamic convolutions ( Wu et al. , 2019 ) along with outperforming Transformers and Universal Transformers ( Dehghani et al. , 2018 ) . On two encoding tasks , factorized random Synthesizers outperform low-rank Linformers ( Wang et al. , 2020 ) . 2 RELATED WORK . Attention-based models are used across a wide spectrum of problem domains . Such models are especially popular , due to their effectiveness , in the language and vision domains . Attention models can be traced back to the machine translation models of ( Bahdanau et al. , 2014 ) and ( Luong et al. , 2015 ) , where attention is employed to learn soft word alignments between language pairs . The intuition behind the attention mechanism is deeply-rooted in the notion of memory-based retrieval ( Graves et al. , 2014 ; Weston et al. , 2014 ) , in which soft differentiable addressing of memory was initially proposed . 
The paradigm of learning self-alignments , also known as self-attention , has been largely popularized by Transformer models ( Vaswani et al. , 2017 ) . This technical narrative has also been explored by a number of other recent studies , including those on intra-attention ( Parikh et al. , 2016 ) , selfmatching networks ( Wang et al. , 2017 ) , and LSTMN ( Cheng et al. , 2016 ) . To this end , Transformer models , which function primarily based on self-attention and feed-forward layers , generally serve as a reliable replacement for autoregressive recurrent models . 1The originally reported result is 27.30 . The self-attention layer itself has been the subject of many recent technical innovations . For example , recent studies have investigated improving the layer ’ s overall efficiency via sparsification and reducing the complexity of computing the alignment matrix ( Child et al. , 2019 ; Kitaev et al. , 2020 ; Huang et al. , 2018 ; Tay et al. , 2020 ; Beltagy et al. , 2020 ) . These methods are tightly coupled with the query-key-value paradigm , employing a form of memory-based content retrieval as an attention mechanism . On the other end of the spectrum , there have been studies that advocate for replacing self-attention with convolution ( Wu et al. , 2019 ) . The recent surge in interest in simplifying the attention mechanism raises important questions about the role and utility of the pairwise dot products , which are one the defining characteristics of self-attention models . Meanwhile , in the image domain , ( Cordonnier et al. , 2019 ) shows connection of Transformers with CNNs . Our work is a new take on the self-attention mechanism in Transformer models . We delve deeper , starting with replacing the pairwise dot products with what we call synthesizing functions that learn attention matrices that may or may not depend on the input tokens . The most closely related work is ( ( Raganato et al. , 2020 ) ) , in which the authors propose using fixed ( i.e. , not learned ) attention patterns in Transformer encoders . However , the scope of their work is limited to encoders and relies on manually defined handcrafted patterns that seem to work well . Our work takes this intuition further and expands on this narrative . 3 THE PROPOSED METHOD . This section introduces our proposed SYNTHESIZER model . At its core , our model is essentially a Transformer model with self-attention modules replaced with our Synthetic Attention modules . Figure 3.1 illustrates the key ideas behind ( a ) Transformer ( b ) Dense Synthesizers and ( c ) Random Synthesizers . 3.1 SYNTHESIZER MODEL . This section introduces Synthetic Attention , our proposed self-attention module . Our model removes the notion of query-key-values in the self-attention module and directly synthesizes the alignment matrix instead . Dense Synthesizer Let us consider the simplest variation of the SYNTHESIZER model which is conditioned on each input token . Overall , our method accepts an input X ∈ R ` ×d and produces an output of Y ∈ R ` ×d . Here , ` refers to the sequence length and d refers to the dimensionality of the model . We first adopt F ( . ) , a parameterized function , for projecting input Xi from d dimensions to ` dimensions . Bi = F ( Xi ) ( 1 ) where F ( . ) is a parameterized function that maps Rd to R ` and i is the i-th token of X and is applied position-wise ( to each vector in the sequence of length ` ) . Intuitively , this can be interpreted as learning a token-wise projection to the sequence length ` . 
Essentially , with this model , each token predicts weights for each token in the input sequence . In practice , we adopt a simple two layered feed-forward layer with ReLU activations for F ( . ) : F ( Xi ) =W2 ( σR ( W1 ( Xi ) + b1 ) ) + b2 ( 2 ) where σR is the ReLU activation function and W1 ∈ Rd×d and W2 ∈ Rd× ` . Hence , Bi is now of R ` . Given B ∈ R ` × ` , we now compute : Y = Softmax ( B ) G ( X ) ( 3 ) where G ( . ) is another parameterized function of X that is analogous to V ( value ) in the standard Transformer model . This approach eliminates the dot product attention Y = Softmax ( QK > ) V altogether by replacing QK > in standard Transformers with the synthesizing function F ( . ) . Random Synthesizer The previous variant learns synthetic attention by conditioning on each input of X and projecting to ` dimensions . Hence , the Dense Synthesizer conditions on each token independently , as opposed to pairwise token interactions in the vanilla Transformer model . We consider another variation of SYNTHESIZER where the attention weights are not conditioned on any input tokens . Instead , the attention weights are initialized to random values . These values can then either be trainable or kept fixed ( denoted as Fixed ) . Let R be a randomly initialized matrix . The Random Synthesizer is defined as : Y = Softmax ( R ) G ( X ) . ( 4 ) where R ∈ R ` × ` . Notably , each head adds ` 2 parameters to the network . The basic idea2 of the Random Synthesizer is to not rely on pairwise token interactions or any information from individual token but rather to learn a task-specific alignment that works well globally across many samples . This is a direct generalization of the recently proposed fixed self-attention patterns Raganato et al . ( 2020 ) . Factorized Models The Dense Synthesizer adds d × ` parameters to the network . On the other hand , the Random Synthesizer adds ` × ` parameters . Here , note that we omit theQ , K projections in the standard Transformer which results in further parameter savings . Despite these savings , synthesized models can be cumbersome to learn when ` is large . Hence , we propose factorized variations of the SYNTHESIZER models and show that these variants perform comparably in practice . Factorized Dense Synthesizer Factorized outputs not only slightly reduce the parameter cost of the SYNTHESIZER but also aid in preventing overfitting . The factorized variant of the dense synthesizer can be expressed as follows : A , B = FA ( Xi ) , FB ( Xi ) ( 5 ) where FA ( . ) projects input Xi into a dimensions , FB ( . ) projects Xi to b dimensions , and a× b = ` . The output of the factorized module is now written as : Y = Softmax ( C ) G ( X ) . ( 6 ) where C = HA ( A ) ∗HB ( B ) where HA , HB are tiling functions and C ∈ R ` × ` . The tiling function simply duplicates the vector k times , i.e. , R ` → R ` ×k . In this case , HA ( · ) is a projection of Ra → Ra×b and HB ( · ) is a projection of Rb → Rb×a . To avoid having similar values within the same block , we compose the outputs of HA and HB . Factorized Random Synthesizer Similar to Factorized Synthesizers , we are also able to factorize R into low rank matrices R1 , R2 ∈ R ` ×k . Y = Softmax ( R1R > 2 ) G ( X ) . ( 7 ) 2We were not expecting this variation to work at all , but it turns out to be a strong baseline . Therefore , it is easy to see that , for each head , this reduces the parameter costs from ` 2 to 2 ( ` k ) where k < < ` and hence helps prevent overfitting . 
In practice, we use a small value of $k = 8$. Mixture of Synthesizers. Finally, we note that all of the proposed synthetic attention variants can be mixed in an additive fashion. This can be expressed as:
$$Y = \text{Softmax}(\alpha_1 S_1(X) + \cdots + \alpha_N S_N(X))\, G(X), \quad (8)$$
where $S(\cdot)$ is a parameterized synthesizing function and the $\alpha$ (where $\sum \alpha = 1$) are learnable weights. In the case of mixing Random Factorized with standard Dense Synthesizers, this is expressed as:
$$Y = \text{Softmax}(\alpha_1 R_1 R_2^\top + \alpha_2 F(X))\, G(X). \quad (9)$$
We investigate several Mixture of Synthesizers variants in our experiments. On Parameters Depending on Sequence Length. Random and Dense Synthesizers both rely on parameters that depend on the length $\ell$. In general, we define a maximum length and dynamically truncate to the actual length of each batch. We note that this is in a similar spirit to trainable positional encodings, which have been common practice in Transformer models. Hence, we do not foresee any issue here. In case this turns out to be a problem, one potential solution is to project to a smaller value $b$ and tile $b$ to the maximum sequence length. We leave this exploration to future work.
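For concreteness, the following is a minimal sketch of single-head Dense and Random Synthesizer modules (Eqs. (2)-(4)); the `max_len` truncation mirrors the practice described above, while the choice of `G` as a single linear map and the Gaussian initialization of `R` are illustrative assumptions.

```python
# Sketch of Dense and Random Synthesizer heads; not the paper's exact code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseSynthesizer(nn.Module):
    def __init__(self, d, max_len):
        super().__init__()
        self.f = nn.Sequential(                 # F(.) of Eq. (2): two-layer MLP
            nn.Linear(d, d), nn.ReLU(), nn.Linear(d, max_len))
        self.g = nn.Linear(d, d)                # G(.): value projection

    def forward(self, x):                       # x: (batch, l, d), l <= max_len
        l = x.size(1)
        b = self.f(x)[:, :, :l]                 # per-token synthesized scores
        return F.softmax(b, dim=-1) @ self.g(x) # Eq. (3)

class RandomSynthesizer(nn.Module):
    def __init__(self, d, max_len, trainable=True):
        super().__init__()
        self.r = nn.Parameter(torch.randn(max_len, max_len),
                              requires_grad=trainable)   # Eq. (4); Fixed if False
        self.g = nn.Linear(d, d)

    def forward(self, x):                       # attention independent of x
        l = x.size(1)
        return F.softmax(self.r[:l, :l], dim=-1) @ self.g(x)
```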
This paper challenges the common belief that self-attention with dot products is necessary to train good NLP models. Several variants of the Synthesizer model are proposed. The effectiveness of Synthesizer is surprisingly good, although it does not beat dot-product attention. The authors further show that mixing the Synthesizer with dot-product attention sometimes achieves better results. The idea is validated on Translation, NLU, Summarization, Dialogue, and Language Modeling.
SP:081c48c667eef561333c5b0d739e9dbebefa0f34
Learning Graph Normalization for Graph Neural Networks
1 INTRODUCTION. Graph Neural Networks (GNNs) have gained great popularity due to their efficiency in learning on graphs for various application areas, such as natural language processing (Yao et al., 2019; Zhang et al., 2018), computer vision (Li et al., 2020; Cheng et al., 2020), point clouds (Shi & Rajkumar, 2020), drug discovery (Lim et al., 2019), citation networks (Kipf & Welling, 2016), and social networks (Chen et al., 2018). A graph consists of nodes and edges, where nodes represent individual objects and edges represent relationships among those objects. In the GNN framework, the node or edge representations are alternately updated by propagating information along the edges of a graph via non-linear transformation and aggregation functions (Wu et al., 2020; Zhang et al., 2018). A GNN captures long-range node dependencies by stacking multiple message-passing layers, allowing the information to propagate over multiple hops (Xu et al., 2018). In essence, GNNs are a new kind of neural network that exploits neural network operations over graph structures. Among the numerous kinds of GNNs (Bruna et al., 2014; Defferrard et al., 2016; Maron et al., 2019; Xu et al., 2019), message-passing GNNs (Scarselli et al., 2009; Li et al., 2016; Kipf & Welling, 2016; Velickovic et al., 2018; Bresson & Laurent, 2017) have been the most widely used due to their ability to leverage the basic building blocks of deep learning such as batching, normalization and residual connections. To update the feature representation of a node, many approaches have been designed. For example, the Graph ConvNet (GCN) (Kipf & Welling, 2016) employs an averaging operation over the neighborhood nodes with the same weight value for each of its neighbors; GraphSage (Hamilton et al., 2017) samples a fixed-size neighborhood of each node and applies a mean aggregator or an LSTM-based aggregator over the neighbors; the Graph Attention Network (GAT) (Velickovic et al., 2018) incorporates an attention mechanism into the propagation step, which updates the feature representation of each node via a weighted sum of adjacent node representations; MoNet (Monti et al., 2017) designs a Gaussian kernel with learnable parameters to assign different weights to neighbors; GatedGCN (Bresson & Laurent, 2017) explicitly introduces edge features at each layer and updates edge features by considering the feature representations of the two connected nodes. It is well known that one of the critical ingredients to effectively train deep neural networks is the normalization technique; e.g., Batch Normalization (BN) (Ioffe & Szegedy, 2015) is widely used to accelerate the training of deep neural networks. Other than BN, several normalization methods have been developed from different perspectives, e.g., Layer Normalization (LN) (Ba et al., 2016) and Group Normalization (Wu & He, 2018), which operate along the channel dimension; Instance Normalization (Ulyanov et al., 2016), which performs a BN-like normalization for each sample; and Switchable Normalization (Luo et al., 2019), which utilizes three distinct scopes—channel, layer, and minibatch—to compute the first-order and second-order statistics. Each normalization method has its advantages and is suitable for particular tasks. For instance, BN has achieved excellent performance in computer vision, whereas LN outperforms BN in natural language processing (Vaswani et al., 2017).
Analogously, in Dwivedi et al. (2020), BN is utilized for each graph propagation layer when training GNNs. In Zhao & Akoglu (2020), a novel normalization layer, denoted PAIRNORM, is introduced to mitigate the over-smoothing problem and prevent all node representations from homogenizing, by differentiating the distances between different node pairs. Although the methods mentioned above have been demonstrated to be useful in training GNNs, the local structure and global structure of the graph are ignored in these existing methods. Moreover, in previous work, only one of the mentioned normalization methods is selected and used for all normalization layers. This may limit the potential performance improvement of the normalization method, and it is also hard to decide which normalization method is suitable for a specific task. Graph data contains rich structural information. By considering the structural information in the graph, in this paper, we propose two graph-aware normalization methods at different scales: a) adjacency-wise normalization, and b) graph-wise normalization. Unlike BN and LN, adjacency-wise normalization takes into account the local structure in the graph, whereas graph-wise normalization takes into account the global structure in the graph. On the other hand, while multiple normalization methods are available for training GNNs, it is still hard to know in advance which normalization method is the most suitable for a specific task. To tackle this deficiency, we further propose to learn attentive graph normalization by optimizing a weighted combination of multiple normalization methods. By optimizing the combination weights, we can automatically select the best normalization method, or the best combination of multiple normalization methods, for training GNNs on a specific task. The contributions of the paper are highlighted as follows. • We propose two graph-aware normalization methods: adjacency-wise normalization and graph-wise normalization. To the best of our knowledge, this is the first time that graph-aware normalization methods have been proposed for training GNNs. • We propose to learn attentive graph normalization by optimizing a weighted combination of different normalization methods. By learning the combination weights, we can automatically select the best normalization method, or the best combination of multiple normalization methods, for training GNNs on a specific task. • We conduct extensive experiments on benchmark datasets for different tasks and confirm that the graph-aware normalization methods lead to promising results and that the learned weights suggest the more appropriate normalization methods for specific tasks. 2 GRAPH-AWARE NORMALIZATION AT DIFFERENT SCALES. Suppose that we have $N$ graphs $\mathcal{G}_1, \mathcal{G}_2, \ldots, \mathcal{G}_N$ in a mini-batch. Let $\mathcal{G}_k = (\mathcal{V}_k, \mathcal{E}_k)$ be the $k$-th graph, where $\mathcal{V}_k$ is the set of nodes and $\mathcal{E}_k$ is the set of edges. We use $v_{k,i}$ to denote the $i$-th node of graph $\mathcal{G}_k$ and $e_{k,i,j}$ to denote the edge between nodes $v_{k,i}$ and $v_{k,j}$ of graph $\mathcal{G}_k$. Moreover, we use $h_{v_{k,i}} \in \mathbb{R}^d$ to represent the feature of node $v_{k,i}$ and $h^j_{v_{k,i}}$ to represent the $j$-th element of $h_{v_{k,i}}$. We use $N(v_{k,i})$ to represent the neighbors of node $v_{k,i}$ (including node $v_{k,i}$ itself).
For clarity, we formulate the normalization methods for training GNNs at different scales, as illustrated in Figure 1 (a)-(d): node-wise normalization, adjacency-wise normalization, graph-wise normalization, and batch-wise normalization. Node-wise Normalization. Node-wise normalization on a graph, denoted GNn, normalizes the feature vector $h_{v_{k,i}}$ of each node $v_{k,i}$, computing the first- and second-order statistics over the $d$ entries of the feature vector $h_{v_{k,i}}$ as follows:
$$\hat{h}^{(n)}_{v_{k,i}} = \frac{h_{v_{k,i}} - \mu^{(n)}_{k,i}\mathbf{1}}{\sigma^{(n)}_{k,i}}, \quad \mu^{(n)}_{k,i} = \frac{1}{d}\sum_{j=1}^{d} h^j_{v_{k,i}}, \quad \sigma^{(n)}_{k,i} = \sqrt{\frac{1}{d}\sum_{j=1}^{d}\left(h^j_{v_{k,i}} - \mu^{(n)}_{k,i}\right)^2}, \quad (1)$$
where $\mu^{(n)}_{k,i}$ and $\sigma^{(n)}_{k,i}$ are the mean and the standard deviation along the feature dimension for node $v_{k,i}$, and $\mathbf{1} \in \mathbb{R}^d$ represents a $d$-dimensional vector of all ones. Note that node-wise normalization is equivalent to applying LN to each node of the graph to reduce the "covariate shift" problem.¹ (¹The node-wise normalization method in Equation (1) can also be used to normalize the feature at each edge, as illustrated in Figure 1 (e).) Adjacency-wise Normalization. Each node in a graph has its neighbors. However, node-wise normalization normalizes each node individually and ignores the local structure in the graph. Here, we propose to take into account the adjacency structure in the graph and normalize the node features over the adjacent neighbors. We term this adjacency-wise normalization on graph, denoted GNa. For each node $v_{k,i}$ in graph $\mathcal{G}_k$, we consider its adjacent nodes $N(v_{k,i})$, as illustrated in Figure 1 (b). Specifically, the adjacency-wise normalization for node $v_{k,i}$ is defined as follows:
$$\hat{h}^{(a)}_{v_{k,i}} = \frac{h_{v_{k,i}} - \mu^{(a)}_{k,i}\mathbf{1}}{\sigma^{(a)}_{k,i}}, \quad (2)$$
$$\mu^{(a)}_{k,i} = \frac{1}{|N(v_{k,i})| \times d}\sum_{j' \in N(v_{k,i})}\sum_{j=1}^{d} h^j_{v_{k,j'}}, \quad (3)$$
$$\sigma^{(a)}_{k,i} = \sqrt{\frac{1}{|N(v_{k,i})| \times d}\sum_{j' \in N(v_{k,i})}\sum_{j=1}^{d}\left(h^j_{v_{k,j'}} - \mu^{(a)}_{k,i}\right)^2}, \quad (4)$$
where $\mu^{(a)}_{k,i}$ and $\sigma^{(a)}_{k,i}$ are the first- and second-order statistics over the adjacent nodes.² (²For an edge $e_{k,i,j}$, as in Figure 1 (f), the adjacent edges $N(e_{k,i,j})$ can be considered in a similar way.) Graph-wise Normalization. Note that the nodes belonging to graph $\mathcal{G}_k$ naturally form a group. In order to preserve the global structure of a graph, we propose to normalize the node feature based on the first- and second-order statistics computed over graph $\mathcal{G}_k$. Specifically, we define a graph-wise normalization on graph, denoted GNg, for node $v_{k,i}$ as follows:
$$\hat{h}^{(g)}_{v_{k,i}} = \left(h_{v_{k,i}} - \mu^{(g)}_{k}\right)\Lambda_k^{-1}, \quad (5)$$
$$\mu^{(g)}_{k} = \frac{1}{|\mathcal{G}_k|}\sum_{v_{k,i} \in \mathcal{G}_k} h_{v_{k,i}}, \quad (6)$$
where $\mu^{(g)}_{k}$ and $\Lambda_k$ are the first- and second-order statistics of graph $\mathcal{G}_k$, in which $\Lambda_k$ is a diagonal matrix with diagonal entries
$$\Lambda^{jj}_k = \sqrt{\frac{1}{|\mathcal{G}_k|}\sum_{v_{k,i} \in \mathcal{G}_k}\left(h^j_{v_{k,i}} - \mu^{(g),j}_{k}\right)^2}. \quad (7)$$
If the task has only a single graph, then graph-wise normalization is similar to BN. However, unlike BN, graph-wise normalization does not use a smoothing-average updater.³ Batch-wise Normalization. To keep training stable, BN is one of the most critical components. For a mini-batch, there are $N$ graphs.
We compute the mean and standard deviation over the graphs of a mini-batch, and each node feature $h_{v_{k,i}}$ is then normalized as follows:

$$\hat{h}^{(b)}_{v_{k,i}} = \left(h_{v_{k,i}} - \mu^{(b)}\right)\Lambda^{-1}, \qquad (8)$$

$$\mu^{(b)} = \frac{1}{T}\sum_{k=1}^{N}\sum_{i=1}^{|G_k|} h_{v_{k,i}}, \qquad (9)$$

where $T = \sum_{k=1}^{N}|G_k|$ is the total number of nodes in the $N$ graphs and $\Lambda$ is a diagonal matrix capturing the standard deviation of the node features over the $N$ graphs, whose diagonal entry $\Lambda^{jj}$ is defined as

$$\Lambda^{jj} = \sqrt{\frac{1}{T}\sum_{k=1}^{N}\sum_{i=1}^{|G_k|}\left(h^{j}_{v_{k,i}} - \mu^{(b),j}\right)^{2}}. \qquad (10)$$

Note that batch-wise normalization on a graph, denoted as GNb, is effectively BN (Ioffe & Szegedy, 2015), which performs normalization over all nodes of the $N$ graphs in a mini-batch. The normalization methods applied to node features $h_{v_{k,i}}$ can also be extended to edge features $h_{e_{k,i,j}}$, where $h_{e_{k,i,j}}$ denotes the feature of edge $e_{i,j}$ in graph $G_k$, as illustrated in Figure 1(e)-(h).

Remark. The properties of the four normalization methods are summarized as follows.

• Node-wise normalization normalizes the feature of each node individually and ignores both the adjacency structure and the whole-graph structure. It is equivalent to LN (Ba et al., 2016) in operation.

• Adjacency-wise normalization takes the adjacent nodes into account, whereas graph-wise normalization takes into account the features of all nodes in a graph.

• Batch-wise normalization is the same as standard batch normalization (Ioffe & Szegedy, 2015). If the task only involves a single graph, then batch-wise normalization is similar to graph-wise normalization, except that the momentum average used in batch-wise normalization is not used in graph-wise normalization.
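To make the definitions above concrete, the following is a minimal PyTorch sketch of adjacency-wise normalization (Eqs. 2-4), graph-wise normalization (Eqs. 5-7), and the learned attentive combination described in the introduction. Function and class names are illustrative assumptions, not the authors' implementation, and the code favors clarity over efficiency.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def adjacency_wise_norm(h, neighbors, eps=1e-5):
    # h: [num_nodes, d]; neighbors[i] lists the indices in N(v_i), self included
    rows = []
    for i, nbrs in enumerate(neighbors):
        patch = h[nbrs]                    # [|N(v_i)|, d]
        mu = patch.mean()                  # scalar mean over nodes and dims (Eq. 3)
        sigma = patch.std(unbiased=False)  # scalar std (Eq. 4)
        rows.append((h[i] - mu) / (sigma + eps))
    return torch.stack(rows)

def graph_wise_norm(h, graph_index, num_graphs, eps=1e-5):
    # graph_index: [num_nodes] graph id per node; per-dimension stats (Eqs. 6-7)
    mu = torch.stack([h[graph_index == k].mean(dim=0) for k in range(num_graphs)])
    var = torch.stack([h[graph_index == k].var(dim=0, unbiased=False)
                       for k in range(num_graphs)])
    return (h - mu[graph_index]) / torch.sqrt(var[graph_index] + eps)

class AttentiveGraphNorm(nn.Module):
    """Softmax-weighted combination of candidate normalizers, learned jointly
    with the GNN; candidates are assumed wrapped to share one signature."""
    def __init__(self, normalizers):
        super().__init__()
        self.normalizers = normalizers
        self.logits = nn.Parameter(torch.zeros(len(normalizers)))

    def forward(self, h):
        w = F.softmax(self.logits, dim=0)  # learned combination weights
        return sum(wi * f(h) for wi, f in zip(w, self.normalizers))

# usage sketch:
# agn = AttentiveGraphNorm([
#     lambda h: adjacency_wise_norm(h, neighbors),
#     lambda h: graph_wise_norm(h, graph_index, num_graphs),
# ])
# h = agn(h)
```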
This paper proposes and evaluates different normalization techniques for graph neural networks. The authors also argue that the best normalization technique is task dependent, so they propose to use a weighted average of different normalizations that is learned during training, called AGN. In the paper they propose four different normalizations, some of which are structure-dependent, and compare the performance of GCN, GAT and GatedGCN with and without these normalizations, as well as with the learned combination of all of them.
SP:e8de5995140c90ed95c915f5724c0a910a99cfb9
Iterative Graph Self-Distillation
1 INTRODUCTION. Graphs are ubiquitous representations encoding relational structures across various domains. Learning low-dimensional vector representations of graphs is critical in domains ranging from social science (Newman & Girvan, 2004) to bioinformatics (Duvenaud et al., 2015; Zhou et al., 2020). Many graph neural networks (GNNs) (Gilmer et al., 2017; Kipf & Welling, 2016; Xu et al., 2018) have been proposed to learn node and graph representations by aggregating information from every node's neighbors via non-linear transformation and aggregation functions. However, the key limitation of existing GNN architectures is that they often require a huge amount of labeled data to be competitive, but annotating graphs like drug-target interaction networks is challenging since it needs domain-specific expertise. Therefore, unsupervised learning on graphs has long been studied, e.g., graph kernels (Shervashidze et al., 2011) and matrix-factorization approaches (Belkin & Niyogi, 2002). Inspired by the recent success of unsupervised representation learning in various domains like images (Chen et al., 2020b; He et al., 2020) and texts (Radford et al., 2018), most related works in the graph domain either follow the pipeline of unsupervised pretraining (followed by fine-tuning) or the InfoMax principle (Hjelm et al., 2018). The former often needs meticulous designs of pretext tasks (Hu et al., 2019; You et al., 2020), while the latter is dominant in unsupervised graph representation learning, training encoders to maximize the mutual information (MI) between the representations of the global graph and local patches (such as subgraphs) (Veličković et al., 2018; Sun et al., 2019; Hassani & Khasahmadi, 2020). However, MI-based approaches usually need to sample subgraphs as local views to contrast with global graphs. They also usually require an additional discriminator for scoring local-global pairs and negative samples, which is computationally prohibitive (Tschannen et al., 2019). Besides, the performance is very sensitive to the choice of encoders and MI estimators (Tschannen et al., 2019). Moreover, MI-based approaches cannot be handily extended to the semi-supervised setting, since local subgraphs lack labels that can be utilized for training. Therefore, we seek an approach that learns the entire graph representation by contrasting whole graphs directly, without the need for MI estimation, a discriminator, or subgraph sampling. Motivated by recent progress on contrastive learning, we propose Iterative Graph Self-Distillation (IGSD), a teacher-student framework that learns graph representations by contrasting graph instances directly. The high-level idea of IGSD is based on graph contrastive learning, where we pull similar graphs together and push dissimilar graphs away. However, the performance of conventional contrastive learning largely depends on how negative samples are selected. To learn discriminative representations and avoid collapsing to trivial solutions, a large set of negative samples (He et al., 2020; Chen et al., 2020b) or a special mining strategy (Schroff et al., 2015; He et al., 2020) is necessary. In order to alleviate the dependency on negative-sample mining and still be able to learn discriminative graph representations, we propose to use self-distillation as a strong regularization to guide the graph representation learning.
In the IGSD framework, graph instances are augmented into several views, which are encoded and projected into a latent space where we define a similarity metric for consistency-based training. The parameters of the teacher network are iteratively updated as an exponential moving average of the student network parameters, allowing knowledge transfer between them. As only a small amount of labeled data is often available in many real-world applications, we further extend IGSD to the semi-supervised setting such that it can effectively utilize graph-level labels while considering arbitrary numbers of positive pairs belonging to the same class. Moreover, in order to leverage the information from high-confidence pseudo-labels, we develop a self-training algorithm based on the supervised contrastive loss for fine-tuning. We experiment with real-world datasets of various scales and compare the performance of IGSD with state-of-the-art graph representation learning methods. Experimental results show that IGSD achieves competitive performance in both unsupervised and semi-supervised settings with different encoders and data augmentation choices. With the help of self-training, our performance can exceed state-of-the-art baselines by a large margin. To summarize, we make the following contributions in this paper: • We propose a self-distillation framework called IGSD for unsupervised graph-level representation learning, where teacher-student distillation is performed by contrasting graph pairs under different augmented views. • We further extend IGSD to the semi-supervised scenario, where the labeled data are utilized effectively with the supervised contrastive loss and self-training. • We empirically show that IGSD surpasses state-of-the-art methods in semi-supervised graph classification and molecular property prediction tasks, and achieves performance competitive with state-of-the-art approaches in unsupervised graph classification tasks. 2 RELATED WORK. Contrastive Learning. Modern unsupervised learning in the form of contrastive learning can be categorized into two types: context-instance contrast and context-context contrast (Liu et al., 2020). The context-instance contrast, or so-called global-local contrast, focuses on modeling the belonging relationship between the local feature of a sample and its global context representation. Most unsupervised learning models on graphs, like DGI (Veličković et al., 2018), InfoGraph (Sun et al., 2019), and CMC-Graph (Hassani & Khasahmadi, 2020), fall into this category, following the InfoMax principle to maximize the mutual information (MI) between the input and its representation. However, estimating MI is notoriously hard in MI-based contrastive learning, and in practice a tractable lower bound on this quantity is maximized instead. Moreover, maximizing tighter bounds on MI can result in worse representations without stronger inductive biases in sampling strategies, encoder architecture and parametrization of MI estimators (Tschannen et al., 2019). Besides, the intricacies of negative sampling in MI-based approaches impose key research challenges, such as improper amounts of negative samples or biased negative sampling (Tschannen et al., 2019; Chuang et al., 2020). Another line of contrastive learning approaches, called context-context contrast, directly studies the relationships between the global representations of different samples, as metric learning does.
For instance, a recently proposed model, BYOL (Grill et al., 2020), bootstraps the representations of whole images directly. Focusing on global representations between samples and corresponding augmented views also allows instance-level supervision to be incorporated naturally, e.g., by introducing a supervised contrastive loss (Khosla et al., 2020) into the framework for learning powerful representations. Graph Contrastive Coding (GCC) (Qiu et al., 2020) is a pioneer in leveraging instance discrimination as the pretext task for structural-information pre-training. However, our work is fundamentally different from theirs. GCC focuses on structural similarity to find common and transferable structural patterns across different graph datasets, and the contrastive scheme is carried out through subgraph instance discrimination. On the contrary, our model aims at learning graph-level representations by directly contrasting graph instances, such that data augmentation strategies and graph labels can be utilized naturally and effectively. Knowledge Distillation. Knowledge distillation (Hinton et al., 2015) is a method for transferring knowledge from one architecture to another, allowing model compression and the transfer of inductive biases. Self-distillation (Furlanello et al., 2018) is a special case where the two architectures are identical; it can iteratively modify regularization and reduce over-fitting if performed for a suitable number of rounds (Mobahi et al., 2020). However, these works often focus on closing the gap between the predictive results of student and teacher, rather than defining a similarity loss in latent space for contrastive learning. Semi-supervised Learning. Modern semi-supervised learning can be categorized into two kinds: multi-task learning and consistency training between two separate networks. Most widely used semi-supervised learning methods take the form of multi-task learning, $\arg\min_\theta L_l(D_l, \theta) + w L_u(D_u, \theta)$, on labeled data $D_l$ and unlabeled data $D_u$. By regularizing the learning process with unlabeled data, the decision boundary becomes more plausible. Another mainstream of semi-supervised learning lies in introducing a student network and a teacher network and enforcing consistency between them (Tarvainen & Valpola, 2017; Miyato et al., 2019; Lee, 2013). It has been shown that semi-supervised learning performance can be greatly improved via unsupervised pre-training of a (big) model, supervised fine-tuning on a few labeled examples, and distillation with unlabeled examples for refining and transferring the task-specific knowledge (Chen et al., 2020c). However, whether task-agnostic self-distillation would benefit semi-supervised learning is still underexplored. 3 PRELIMINARIES. 3.1 FORMULATION. Unsupervised Graph Representation Learning. Given a set of unlabeled graphs $\mathcal{G} = \{G_i\}_{i=1}^{N}$, we aim at learning a low-dimensional representation of every graph $G_i \in \mathcal{G}$ favorable for downstream tasks like graph classification. Semi-supervised Graph Representation Learning. Consider a whole dataset $\mathcal{G} = \mathcal{G}_L \cup \mathcal{G}_U$ composed of labeled data $\mathcal{G}_L = \{(G_i, y_i)\}_{i=1}^{l}$ and unlabeled data $\mathcal{G}_U = \{G_i\}_{i=l+1}^{l+u}$ (usually $u \gg l$); our goal is to learn a model that can predict graph labels for unseen graphs. With $K$ augmentations, we get $\mathcal{G}'_L = \{(G'_k, y'_k)\}_{k=1}^{Kl}$ and $\mathcal{G}'_U = \{G'_k\}_{k=Kl+1}^{K(l+u)}$ as our training data. 3.2 GRAPH REPRESENTATION LEARNING. We represent a graph instance as $G(V, E)$ with the node set $V$ and the edge set $E$.
The dominant approach to graph representation learning is graph neural networks with neural message passing mechanisms (Hamilton et al., 2017): for every node $v \in V$, a node representation $h^k_v$ is iteratively computed from the features of its neighbor nodes $N(v)$ using a differentiable aggregation function. Specifically, at iteration $k$ we obtain the node embedding as:

$$h^{k}_{v} = \sigma\left(W^{k} \cdot \mathrm{CONCAT}\left(h^{k-1}_{v}, \mathrm{AGGREGATE}^{k}\left(\{h^{k-1}_{u}, \forall u \in N(v)\}\right)\right)\right) \qquad (1)$$

Then the graph-level representations can be obtained by aggregating all node representations using a readout function such as summation or set2set pooling (Vinyals et al., 2015). 3.3 GRAPH DATA AUGMENTATION. It has been shown that the learning performance of GNNs can be improved via graph diffusion, which serves as a homophily-based denoising filter on both features and edges in real graphs (Klicpera et al., 2019). The transformed graphs can also serve as effective augmented views in contrastive learning (Hassani & Khasahmadi, 2020). Inspired by that, we transform a graph $G$ with transition matrix $T$ via graph diffusion and sparsification, $S = \sum_{k=0}^{\infty} \theta_k T^k$, into a new graph with adjacency matrix $S$ as an augmented view in our framework. While there are many design choices for the coefficients $\theta_k$, such as the heat kernel, we employ Personalized PageRank (PPR) with $\theta^{\mathrm{PPR}}_k = \alpha(1-\alpha)^k$ due to its superior empirical performance (Hassani & Khasahmadi, 2020). As another augmentation choice, we randomly remove edges of graphs to obtain corrupted graphs as augmented views, which lets us validate the robustness of models to different augmentation choices.
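Two of the concrete mechanisms described above admit compact sketches: the EMA teacher update from the IGSD framework, and the PPR diffusion augmentation, whose series $\sum_{k \ge 0} \alpha(1-\alpha)^k T^k$ has the closed form $\alpha(I - (1-\alpha)T)^{-1}$. The sketch below assumes a symmetrically normalized transition matrix and a sparsification threshold of our own choosing; the momentum and threshold values are illustrative.

```python
import numpy as np
import torch

@torch.no_grad()
def ema_update(teacher, student, momentum=0.99):
    # teacher parameters track an exponential moving average of the student's;
    # the teacher is never updated by gradient descent
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(momentum).add_(p_s, alpha=1.0 - momentum)

def ppr_diffusion(adj, alpha=0.15, threshold=1e-4):
    # adj: dense [n, n] adjacency matrix; T = D^{-1/2} A D^{-1/2}
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    T = d_inv_sqrt @ adj @ d_inv_sqrt
    # closed form of the PPR series; valid since the spectral radius of
    # (1 - alpha) * T is below 1 for alpha > 0
    S = alpha * np.linalg.inv(np.eye(n) - (1.0 - alpha) * T)
    S[S < threshold] = 0.0  # sparsify the dense diffusion matrix
    return S                # adjacency of the augmented view
```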
Learning graph-level representations using labels alone has been explored by many works. However, it's not easy to annotate every graph. This paper applies ideas from the semi-supervised classification task to improve the representation quality learned by graph neural networks. Specifically, the proposed solution combines several kinds of existing techniques, including diffusion graph augmentation, mean-teacher consistency, a debiased contrastive loss and pseudo-class consistency. Finally, these are combined to act as a regularization term that utilizes the unlabelled data. From this point of view, the novelty of this work is incremental, but it's still an interesting work for improving graph-level representations.
SP:3647115d0449f579f5ad7305103ecb553046d613
How Important is Importance Sampling for Deep Budgeted Training?
1 INTRODUCTION. The availability of vast amounts of labeled data is crucial for training deep neural networks (DNNs) (Mahajan et al., 2018; Xie et al., 2020). Despite prompting considerable advances in many computer vision tasks (Yao et al., 2018; Sun et al., 2019a), this dependence poses two challenges: the generation of the datasets and the large computation requirements that arise as a result. Research addressing the former has experienced great progress in recent years via novel techniques that reduce the strong supervision required to achieve top results (Tan & Le, 2019; Touvron et al., 2019) by, e.g., improving semi-supervised learning (Berthelot et al., 2019; Arazo et al., 2020), few-shot learning (Zhang et al., 2018b; Sun et al., 2019b), self-supervised learning (He et al., 2020; Misra & Maaten, 2020), or training with noisy web labels (Arazo et al., 2019; Li et al., 2020a). The latter challenge has also experienced many advances from the side of network efficiency via DNN compression (Dai et al., 2018; Lin et al., 2019) or neural architecture search (Tan & Le, 2019; Cai et al., 2019), and from the side of optimization efficiency by better exploiting the embedding space (Khosla et al., 2020; Kim et al., 2020). All these approaches are designed under a common constraint: the large dataset size needed to achieve top results (Xie et al., 2020), which conditions the success of the training process on computational resources. Conversely, a smart reduction of the number of samples used during training can alleviate this constraint (Katharopoulos & Fleuret, 2018; Mirzasoleiman et al., 2020). The selection of samples plays an important role in the optimization of DNN parameters during training, where Stochastic Gradient Descent (SGD) (Dean et al., 2012; Bottou et al., 2018) is often used. SGD guides the parameter updates using an estimation of model error gradients over sets of samples (mini-batches) that are uniformly randomly selected in an iterative fashion. This strategy assumes equal importance across samples, whereas other works suggest that alternative strategies for revisiting samples are more effective in achieving better performance (Chang et al., 2017; Kawaguchi & Lu, 2020) and faster convergence (Katharopoulos & Fleuret, 2018; Jiang et al., 2019). Similarly, the selection of a unique and informative subset of samples (core-set) (Toneva et al., 2018; Coleman et al., 2020) can alleviate the computation requirements during training while reducing the performance drop with respect to training on all data. However, while removing data samples speeds up the training, a precise sample selection often requires a pretraining stage that hinders the ability to reduce computation (Mirzasoleiman et al., 2020; Sener & Savarese, 2018). A possible solution to this limitation is to dynamically change the important subset during training, as done by importance sampling methods (Amiri et al., 2017; Zhang et al., 2019b), which select the samples based on a sampling probability distribution that evolves with the model and often changes based on the loss or network logits (Loshchilov & Hutter, 2015; Johnson & Guestrin, 2018). An up-to-date importance estimation is key for current methods to succeed but, in practice, is infeasible to compute (Katharopoulos & Fleuret, 2018).
The real importance of a sample changes after every iteration, and estimations become outdated, yielding considerable drops in performance (Chang et al., 2017; Zhang et al., 2019b). Importance sampling methods, then, focus on selecting samples and achieve a speed-up during training only as a side effect. They do not, however, strictly study possible benefits to DNN training when restricting the number of iterations used for training, i.e., the budget. Budgeted training (Nan & Saligrama, 2017; Kachuee et al., 2019; Li et al., 2020b) imposes an additional constraint on the optimization of a DNN: a maximum number of iterations. Defining this budget provides a concise notion of the limited training resources. Li et al. (2020b) propose to address the budget limitation using specific learning rate schedules that better suit this scenario. Despite the standardized scenario that budgeted training poses for evaluating methods that reduce computation requirements, there are few works to date in this direction (Li et al., 2020b; Katharopoulos & Fleuret, 2018). As mentioned, importance sampling methods are closely related, but their avoidance of budget restrictions makes it difficult to understand their utility, given the sensitivity to hyperparameters that they often exhibit (Chang et al., 2017; Loshchilov & Hutter, 2015). In this paper, we overcome the limitations outlined above by analyzing the effectiveness of importance sampling methods when a budget restriction is imposed (Li et al., 2020b). Given a budget restriction, we study synergies between importance sampling and data augmentation (Takahashi et al., 2018; Cubuk et al., 2020; Zhang et al., 2018a). We find that the improvements of importance sampling approaches over uniform random sampling are not always consistent across budgets and datasets. We argue and experimentally confirm (see Section 4.4) that when using certain data augmentations (Takahashi et al., 2018; Cubuk et al., 2020; Zhang et al., 2018a), existing importance sampling techniques do not provide further benefits, making data augmentation the most effective strategy for exploiting a given budget. 2 RELATED WORK. Few works exploit a budgeted training paradigm (Li et al., 2020b). Instead, many approaches aim to speed up the training convergence to a given performance by computing a better sampling strategy or carefully organizing the samples to allow the CNN to learn faster and generalize better. Other works, however, explore how to improve model performance by labeling the most important samples from an unlabeled set of data (Yoo & Kweon, 2019; Ash et al., 2020; Ren et al., 2020), or how to better train DNNs when a limited number of samples per class is available (Chen et al., 2019; Zhou et al., 2020; Albert et al., 2020). This section reviews relevant works aiming to improve the efficiency of DNN training. Self-paced learning (SPL) and curriculum learning (CL) aim to optimize the training process and improve model performance by ordering the samples from easy to difficult (Weinshall et al., 2018; Bengio et al., 2009; Hacohen & Weinshall, 2019; Cheng et al., 2019). For instance, CL manages to speed up the convergence of training at the initial stages by focusing on samples whose gradients are better estimations of the real gradient (Weinshall et al., 2018).
The main drawback of these methods is that, in most cases, the order of the samples (the curriculum) has to be defined before training, which is itself a costly task that requires manually assessing the sample difficulty, transferring knowledge from a fully trained model, or pre-training the model on the given dataset. Some approaches remedy this drawback with a simple curriculum (Lin et al., 2017) or by learning the curriculum during training (Jiang et al., 2018); these methods, however, do not aim to speed up the training by ordering the samples, but to improve network convergence by weighting each sample's contribution to the loss. Core-set selection approaches aim to find the subset of samples that is most useful (Toneva et al., 2018; Coleman et al., 2020; Mirzasoleiman et al., 2020). By identifying the most useful samples from a dataset, these methods aim at maintaining accuracy despite training on a subset of the data. The ability of these methods to reduce the training cost is very limited, since they require pre-training the model. However, these methods demonstrate that DNNs only need a portion of the samples to achieve peak performance. For example, Toneva et al. (2018) define "forgetting events" as the number of times that samples are misclassified after being correctly predicted during training. They show that higher forgetting and importance are related, as removing samples with fewer forgetting events damages the model less than removing the more forgotten ones. Mirzasoleiman et al. (2020) build clusters with the features from the model and use the centroids as the most informative samples. Coleman et al. (2020) demonstrate that the difficulty of a sample is invariant to model capacity and show that they can speed up several sample selection tasks by reducing the size of the model. Importance sampling approaches lie in the middle ground between the previous two: they aim to speed up training convergence by leveraging the most useful samples at every training stage (Katharopoulos & Fleuret, 2018; Jiang et al., 2019; Zhang et al., 2019b), which correspond to the sample losses with the highest gradient magnitude (Needell et al., 2014; Zhao & Zhang, 2015; Alain et al., 2016). More recently, Johnson & Guestrin (2018) have shown that the last-layer gradients are a good approximation and are easier to obtain in deep learning frameworks. Alternative importance measures often used include the loss (Jiang et al., 2019), the probability predicted for the true class (Chang et al., 2017), or the ranking order of these probabilities (Loshchilov & Hutter, 2015). The approximation of the optimal distribution by importance sampling approaches avoids the cost of computing each sample's importance at every iteration. However, they face one main challenge: the optimal sampling distribution changes very rapidly between iterations, leading to outdated estimations. Initial attempts at addressing this challenge included several hyper-parameters to smooth the estimated distribution (Chang et al., 2017), more frequent distribution updates via additional forward passes (Loshchilov & Hutter, 2015), or different alternative measures for estimating the sampling distribution (Amiri et al., 2017).
Several works added complex support techniques to the training that aimed to estimate a better distribution: using robust optimization (Johnson & Guestrin, 2018), introducing repulsive point techniques (Zhang et al., 2019a), or adding a second network trained in parallel with the main model (Zhang et al., 2019b). More recent methods leverage the random-then-greedy technique (Lu & Mazumder, 2018), where a random initial batch of samples is selected, and then the probabilities of those samples are computed and used to select a secondary batch that is used for training. Within this scheme, Katharopoulos & Fleuret (2018) define a theoretical bound on the magnitude of the gradients that allows faster computation of the sampling probabilities, and Jiang et al. (2019) and Ioannou et al. (2019) use the loss as a measure of sample importance to keep the sampling distribution updated throughout training. Finally, Kawaguchi & Lu (2020) introduce the top-k loss from Fan et al. (2017) to perform the back-propagation step using only the samples with the highest losses. Note that none of these methods avoids doing a full forward pass every epoch to update the sampling probabilities. Learning rate schedules have proven to be useful alternatives for faster convergence. The authors of Smith & Topin (2019) and Smith (2017) propose a cyclic learning rate schedule to reach faster convergence by using larger learning rates at intermediate training stages and very low rates at the end. Li et al. (2020b) also study the importance of learning rate schedules for accelerating the training of DNNs. In particular, they explore budgeted training and propose a linearly decaying learning rate schedule that approaches zero at the end of training, which, without additional hyper-parameters, improves over standard learning rate schedules. Data augmentation techniques generally aim to increase the variance of the data to achieve better generalization. Recent approaches, however, go a step further and target specific weaknesses of CNNs: cutout (DeVries & Taylor, 2017) drops contiguous patches of data from the input to force the network to spread its attention over the entire object; mixup (Zhang et al., 2018a) proposes training on convex combinations of image and label pairs, which smooths class boundaries and improves model calibration (Thulasidasan et al., 2019); and RICAP (Takahashi et al., 2018) combines the advantages of the two previous techniques by training on images generated from joining multiple patches and applying the corresponding convex combination of labels. More generally, RandAugment (Cubuk et al., 2020) randomly combines commonly used data augmentation techniques, reducing the search space of recently proposed methods that find automated augmentation policies (Ho et al., 2019; Cubuk et al., 2019).
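Two of the measures reviewed above can be sketched compactly: the forgetting-events statistic of Toneva et al. (2018) used for core-set selection, and a loss-based sampling step in the spirit of the random-then-greedy scheme. This is a minimal NumPy sketch under our own simplifications; the smoothing constant is illustrative.

```python
import numpy as np

class ForgettingTracker:
    """Counts flips from correctly to incorrectly classified per sample."""
    def __init__(self, num_samples):
        self.prev_correct = np.zeros(num_samples, dtype=bool)
        self.forgetting = np.zeros(num_samples, dtype=np.int64)

    def update(self, indices, correct):
        # indices: sample ids in the batch; correct: bool array for this pass
        flipped = self.prev_correct[indices] & ~correct
        self.forgetting[indices] += flipped  # True counts as 1
        self.prev_correct[indices] = correct

def random_then_greedy(per_sample_loss, batch_size, smoothing=0.05):
    # score a randomly drawn candidate batch by its per-sample loss, then
    # sample the training batch proportionally to the (smoothed) scores
    scores = per_sample_loss + smoothing * per_sample_loss.mean() + 1e-12
    probs = scores / scores.sum()
    return np.random.choice(len(scores), size=batch_size,
                            replace=False, p=probs)
```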
This paper investigates the use of importance sampling in budgeted training. Four importance sampling techniques from prior works are applied within the context of fixed training budgets, and compared under different conditions of training set selection, learning rate schedule and data augmentation. Each aims to sample more useful examples more frequently, using the loss or gradient magnitude as an importance measure. Uniform sampling with and without replacement is used as a baseline, and experiments are performed on CIFAR-10 and CIFAR-100. The final conclusion is that with budgets as low as 20% of the original training schedule, importance sampling offers little if any improvement over uniform sampling, while additional data augmentations work well to make up the lost validation accuracy.
SP:1fc676213cbcfd690a3aea055066a3004f974325
VideoFlow: A Framework for Building Visual Analysis Pipelines
1 INTRODUCTION. The success of computer vision techniques is spawning intelligent visual analysis systems in real applications. Rather than serving individual models, these systems are often powered by a workflow of image/video decoding, several serial or parallel algorithm processing stages, and the sinking of analysis results. The varied visual analysis requirements in different real scenarios put forward a high demand for a framework offering fast algorithm development, flexible pipeline construction, efficient workflow execution, and secure model protection. There exist frameworks approaching some of the above-mentioned targets, like DeepStream (Purandare, 2018) and MediaPipe (Lugaresi et al., 2019). DeepStream is built on top of GStreamer (GSTREAMER, 1999), which primarily targets audio/video media editing rather than analysis. MediaPipe can be used to build anything from prototypes to polished cross-platform applications and to measure performance. Though it is flexible and extensible in its calculators, real online services in industry expect greater efficiency, model security, and extensibility in more aspects. In this paper, we present VideoFlow to meet the visual analysis requirements of both algorithm development and deployment in real systems, with the following highlights. Flexibility. VideoFlow is designed around the stateful Computation Graph and the stateless Resource. A computation graph abstracts the visual processing workflow into a stateful directed acyclic graph. Developers can focus on the implementation of processing units (graph nodes) and the construction of the whole workflow. A resource is a stateless shared computation module used by computation graphs. The most typical resource is deep learning model inference. Resources decouple the stateless visual processing components from the whole complicated visual analysis pipeline, helping developers focus on the optimization of these computation- or Input/Output (IO)-intensive implementations. Efficiency. VideoFlow is designed for better efficiency at four levels. (1) Resource level: resources can aggregate scattered computation requests from computation graph instances into intensive processing for better efficiency. (2) Video level: all videos are analyzed in parallel in a shared execution engine. (3) Frame level: video frames are parallelized over operations that do not depend on frame order. (4) Operator level: visual analysis is a multi-branch pipeline in most cases; the different branches, and the operators of each branch without sequential dependency, are analyzed in parallel. Extensibility. VideoFlow is designed from the beginning to be as modular as possible to allow easy extension of almost all its components. It can be extended to different hardware devices like Graphics Processing Units (GPU), Neural Processing Units (NPU), etc. It can be hosted on either x86 or ARM platforms. Developers can customize their own implementations with VideoFlow as a dependent library. The extended implementations can be registered back into VideoFlow as plugins at runtime. Security. Model protection is an important problem in industry. VideoFlow encodes model files into encrypted binary code as part of the compiled library. The secret key can be obscured in the same library, or exported to a separate key management service (KMS). At runtime, VideoFlow decrypts the models and periodically verifies authorization from a remote service.
VideoFlow has been incubated through more than three years of smart-city innovation practice. It is designed for computer vision practitioners, including engineers, researchers, students, and software developers. The targets of VideoFlow include: 1) free developers from exhausting data loading/sinking, parallel programming and debugging so they can focus on the optimization of algorithms; 2) enable easy extension of video decoding, deep model inference and algorithm implementation; 3) provide a highly efficient framework for large-scale visual processing in industry rather than just experimental prototypes; 4) protect the intellectual property of models and algorithms, ensuring that they can only work with authorization. 2 RELATED WORK. 2.1 DEEP LEARNING FRAMEWORKS. Almost all existing deep learning frameworks, like Caffe (Jia et al., 2014), TensorFlow (Abadi et al., 2016), PyTorch (Paszke et al., 2017), and MXNet (Chen et al., 2015), describe networks as directed graphs or even dynamic graphs. VideoFlow draws lessons from this successful design for visual analysis. The difference is that the basic units in deep networks are low-level operations like convolutions, compared to higher-level processing like object tracking in VideoFlow. The data transferred between operators in VideoFlow is also much more complex than the Tensor in deep learning. As to model inference, there are specially optimized engines by hardware manufacturers, like TensorRT (Vanholder, 2016) and MKL-DNN/oneAPI (Intel). In the open-source community, developers put forward TVM for easy extension to different hardware for more effective inference (Chen et al., 2017). On top of these engines, there are serving platforms for individual models rather than workflow construction, like TensorFlow Serving (Google, 2016) and the NVIDIA Triton Inference Server (Goodwin & Jeong, 2019). VideoFlow integrates these inference engines as Resources through their C++ interfaces. 2.2 VISUAL ANALYSIS FRAMEWORKS. Recent years have witnessed several visual analysis frameworks. Nvidia launched the DeepStream project early on for video analysis on GPU (Purandare, 2018). It is oriented toward and optimized for GPU and TensorRT, regardless of the growing variety of heterogeneous hardware devices. Besides, it is built on top of GStreamer (GSTREAMER, 1999), which primarily targets audio/video media editing rather than analysis, limiting its flexibility and extensibility. The gst-video-analytics project (Intel, 2019) is also built on top of GStreamer (Deuermeyer & Andrey). Google proposed MediaPipe, which likewise builds computation graphs for arbitrary streaming data processing (Lugaresi et al., 2019). MediaPipe can be used to build anything from prototypes to polished cross-platform applications and to measure performance. Though it is flexible and extensible in its calculators, real online visual analysis expects extensibility in more aspects, more efficiency optimization, and model security protection. Compared to MediaPipe, VideoFlow features these advantages for better application in both academia and industry. Another framework, also named Videoflow (de Armas, 2019), is designed to facilitate the easy and quick definition of computer vision stream processing pipelines. However, it is just a prototype experimental platform, with limitations in extensibility, efficiency, and security. 3 ARCHITECTURE.
VideoFlow is oriented around the stateful Computation Graph and the stateless Resource, with a well-optimized execution engine. A computation graph is a directed acyclic graph describing the whole workflow of video or image analysis. As the two main components of a graph, Node and Edge denote visual processing operators and the data flow between operators, respectively. Resources are shared for graph-independent computation. The architecture is shown in Figure 1. 3.1 OPERATOR. The operator is the basic unit of the visual analysis workflow. An operator depends on the outputs of its parent operators. Its own outputs can be consumed by an arbitrary number of child operators. According to the number of inputs and outputs, operators are categorized as follows: • Entrypoint: operators that have zero inputs. This is the start of a computation graph. Each graph can have only one entrypoint. • Processor: operators that have at least one input and at least one output. Processors make up most of the visual analysis workflow. They are also the kind of operator with the highest demand for easy extension. • Sinker: operators that have zero outputs. This is the end of a computation graph. A graph can have multiple sinkers. 3.2 DATA FLOW. A data flow is the edge connecting two operators (nodes). An operator may generate several pieces of data of different types for its child nodes. A data flow is a collection of an arbitrary number of data pointers of arbitrary types (vector<void*> in our C++ implementation). VideoFlow guarantees that the incoming data pointers are always safe to read. Developers do not need to care how many other operators are also consuming the data, or whether the data should be released during the workflow. 3.3 RESOURCE. A resource is a stateless computation unit shared by graphs. The most representative resource is deep model inference. Resources are abstracted for three main reasons. Firstly, many operations, like deep model inference and data sinking to databases, have their own independent semantics. They are irrelevant to whether they are used for video or image processing, which step of the whole pipeline invokes them, or how the outputs will be post-processed. Secondly, these operations are often computation or IO intensive. Leaving them in the operators would incur bottlenecks on CPU, memory or network bandwidth due to a large amount of resource competition. Gathering the scattered but shared requests from different graphs for uniform processing proves to be a good practice for improving efficiency. Thirdly, resources can be improved without affecting the visual analysis logic. For example, we can accelerate the inference speed of a PyTorch model by switching to a TensorRT worker, or change to a more efficient database connector for more real-time data sinking. Without the abstraction of these resources, all affected operators would have to be re-implemented to earn these benefits. 3.4 GRAPH CONSTRUCTION. A computation graph is described in json format with the following fields. "resource" describes the resources that will be used by operators. Each resource should have two fields: "type", to create the correct class instance, and "params", to specify the resource configurations. "ops" describes the operators that will be used to construct computation graphs. Operators can be used multiple times by different graphs. As with resources, each operator should have two fields: "type" and "params".
"graph" is the place to define computation graphs. Each graph definition is a json dictionary of key-value pairs. The key is an operator name; its value is a list of operator names denoting its child nodes. "subgraphs" [optional] is used to re-use resources, operators and graphs from other graph configuration files. "libs" [optional] specifies external dynamic libraries that should be loaded by VideoFlow, especially the extended libraries in Figure 3. "config" [optional] is for global settings, currently including the number of parallel image processing threads and the number of frames for parallel video processing. An example file is provided in the supplementary material to show the person re-identification workflow (Section 5). 3.5 EXECUTION SCHEDULING. With the graph defined and constructed, execution scheduling determines which operator should be calculated. In real cases, there can be multiple computation graph instances running in parallel, each with either shared or different structures. Figure 2 shows the execution scheduling of these graphs. Each graph has several replicas, with each replica called an order. Video frames are actually processed in these graph replicas/orders. The replicas are processed in parallel for frame-level parallelism. Each order starts from the Forward function of the entrypoint node. Forward. As Figure 2 shows, the forward function first checks whether the current operator is ready to be executed. The readiness check includes: 1) all parents of the current node have finished on this order; 2) the previous order of the current node has been executed, if the current node is an ordered operator. If ready, the forward function puts its own processing function into the task queue of the execution engine, waiting to be executed. The processing function first finishes the internal processing logic of the operator. After that, it calls the Forward function of its following operators. If it is a leaf operator, it calls its own Backward function. The Forward of entrypoints is specially implemented, with a separate thread retrieving and dispatching data to idle orders. Backward is the process of resetting the node so it is ready to process later frames. The backward function first checks whether all its children have been reset. If so, it resets its forward status. Then it continues to call the Backward of all its parents. Execution Engine. The processing functions of operators are put into a task queue of the execution engine. All processing units share the same interface. The execution engine does not know which order of which graph a processing function comes from. All orders and all graphs are executed concurrently once they are put into the queue. Inside the engine there is a thread pool, with all threads fetching and executing tasks from the queue.
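To make the configuration format concrete, below is a small hypothetical example mirroring the "resource", "ops" and "graph" fields described in Section 3.4 (written as a Python dict for readability; the actual files are json). All type names and parameters are invented for illustration and are not taken from the paper.

```python
# hypothetical three-stage pipeline: decode -> detect -> sink
config = {
    "resource": {
        "detector": {"type": "TensorRTModel", "params": {"max_batch": 8}},
    },
    "ops": {
        "source": {"type": "VideoEntrypoint", "params": {"codec": "h264"}},
        "detect": {"type": "DetectionProcessor",
                   "params": {"resource": "detector"}},
        "sink":   {"type": "DatabaseSinker", "params": {"table": "results"}},
    },
    # each key is an operator name; its value lists the child operators
    "graph": {"source": ["detect"], "detect": ["sink"], "sink": []},
}
```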
The paper presents a tutorial on a video analysis platform, VideoFlow, which represents a video analysis task as a computation graph, provides common functions like video decoding and database storage, integrates deep learning frameworks such as Caffe/PyTorch/MXNet as built-in inference engines, and supports heterogeneous hardware such as CPU/GPU/FPGA. VideoFlow also allows customers to develop operator, decoder, and model inference extensions. The paper presents an example application of person ReID using the VideoFlow platform, and claims this VideoFlow software could be used in both academic and industrial scenarios.
SP:b31d37adc24ddff6ef32dc607fe3c8c29341a81d
SOAR: Second-Order Adversarial Regularization
1 INTRODUCTION. Adversarial training (Szegedy et al., 2013) is the standard approach for improving the robustness of deep neural networks (DNNs), or any other model, against adversarial examples. It is a data augmentation method that adds adversarial examples to the training set and updates the network with the newly added data points. Intuitively, this procedure encourages the DNN not to make the same mistakes against an adversary. By adding sufficiently many adversarial examples, the network gradually becomes robust to the attack it was trained on. One of the challenges with such a data augmentation approach is the tremendous amount of additional data required for learning a robust model. Schmidt et al. (2018) show that under a Gaussian data model, the sample complexity of robust generalization is $\sqrt{d}$ times larger than that of standard generalization. They further suggest that current datasets (e.g., CIFAR-10) may not be large enough to attain higher adversarial accuracy. A data augmentation procedure, however, is an indirect way to improve the robustness of a DNN. Our proposed alternative is to define a regularizer that penalizes DNN parameters prone to attacks. Minimizing the regularized loss function leads to estimators robust to adversarial examples. Adversarial training and our proposal can both be formulated within the robust optimization framework for adversarial robustness (Ben-Tal et al., 2009; Madry et al., 2018; Wong & Kolter, 2018; Shaham et al., 2018; Sinha et al., 2018). In this formulation, one seeks to improve the worst-case performance of the model, where the performance is measured by a particular loss function $\ell$. Adversarial training can be understood as approximating such a worst-case loss by finding the corresponding worst-case data point, i.e., $x + \delta$, with some specific attack technique. Our proposed method is more direct. It is based on approximating the loss function $\ell(x+\delta)$ using its second-order Taylor series expansion, i.e.,

$$\ell(x+\delta) \approx \ell(x) + \nabla_x \ell(x)^{\top}\delta + \frac{1}{2}\delta^{\top}\nabla^2_x \ell(x)\,\delta,$$

and then upper bounding the worst-case loss using the expansion terms. By considering both the gradient and the Hessian of the loss function with respect to (w.r.t.) the input, we can provide a more accurate approximation to the worst-case loss. In our derivations, we consider both $\ell_2$ and $\ell_\infty$ attacks; the second-order expansion incorporates both the gradient and the Hessian of the loss function w.r.t. the input. We call the method Second-Order Adversarial Regularizer (SOAR) (not to be confused with the Soar cognitive architecture, Laird 2012). In the course of the development of SOAR, we make the following contributions: • We show that an over-parameterized linear regression model can be severely affected by an adversary, even though its population loss is zero. We robustify it with a regularizer that exactly mimics adversarial training. This suggests that regularization can be used instead of adversarial training (Section 2). • Inspired by such a possibility, we develop a regularizer which upper bounds the worst-case effect of an adversary under an approximation of the loss. In particular, we derive SOAR, which approximates the inner maximization of the robust optimization formulation based on the second-order Taylor series expansion of the loss function (Section 4).
• We study SOAR in the logistic regression setting and reveal challenges with regularization using the Hessian w.r.t. the input. We develop a simple initialization method to circumvent the issue (Section 4.1). • We empirically show that SOAR significantly improves the adversarial robustness of the network against $\ell_\infty$ and $\ell_2$ attacks on CIFAR-10 and SVHN. Specifically, we evaluate using a PGD1000 white-box attack (Madry et al., 2018), transferred PGD1000 attacks, AutoAttack (Croce & Hein, 2020), and SimBA (Guo et al., 2019).

2 LINEAR REGRESSION WITH AN OVER-PARAMETRIZED MODEL.

This section shows that for over-parameterized linear models, gradient descent (GD) finds a solution that has zero population loss, but is prone to attacks. It also shows that one can avoid this problem by defining an appropriate regularizer. Hence, we do not need adversarial training to robustify such a model. This simple illustration motivates the development of our method in the next sections. We only briefly report the main results here and defer the derivations to Appendix A. Consider a linear model $f_w(x) = \langle w, x\rangle$ with $x, w \in \mathbb{R}^d$. Suppose that $w^* = (1, 0, 0, \ldots, 0)^{\top}$ and the distribution of $x \sim p$ is such that it is confined to a 1-dimensional subspace $\{(x_1, 0, 0, \ldots, 0): x_1 \in \mathbb{R}\}$. This setup can be thought of as using an over-parameterized model that has many irrelevant dimensions, with data that only covers the relevant dimension of the input space. This is a simplified model of the situation where the data manifold has a dimension lower than the input space. We consider the squared-error pointwise loss $l(x; w) = \frac{1}{2}|\langle x, w\rangle - \langle x, w^*\rangle|^2$. Denote the residual by $r(x; w) = \langle x, w - w^*\rangle$ and the population loss by $L(w) = \mathbb{E}[l(X; w)]$. Suppose that we initialize the weights as $w(0) = W \sim \mathcal{N}(0, \sigma^2 I_{d\times d})$ and use GD on the population loss, i.e., $w(t+1) \leftarrow w(t) - \beta\nabla_w L(w)$. It is easy to see that the partial derivatives w.r.t. $w_2, \ldots, w_d$ are all zero, i.e., no weight adaptation happens there. With a proper choice of learning rate $\beta$, the asymptotic solution is $\bar{w} \triangleq \lim_{t\to\infty} w(t) = (w^*_1, w_2(0), w_3(0), \ldots, w_d(0))^{\top}$. That is, the initial random weights on dimensions $2, \ldots, d$ do not change. We make two observations. The first is that $L(\bar{w}) = 0$, i.e., the population loss is zero. So from the perspective of training under the original loss, we are finding the optimal solution. The second observation is that this model is vulnerable to adversarial examples. An FGSM-like attack that perturbs $x$ by $\Delta x = (0, \Delta x_2, \Delta x_3, \ldots, \Delta x_d)^{\top}$ with $\Delta x_i = \varepsilon\,\mathrm{sign}(w_i(0))$ (for $i = 2, \ldots, d$) yields a population loss of $\mathbb{E}_{X,W}[l(X + \Delta x; \bar{w})] \approx O(\varepsilon^2 d^2 \sigma^2)$ under the adversary at the asymptotic solution $\bar{w}$. When the dimension is large, this loss is quite significant. The culprit is obviously that GD does not force the initial weights to go to zero when there is no data from the irrelevant and unused dimensions. This simple problem illustrates how the optimizer and an over-parameterized model might interact and lead to a solution that is prone to attacks. An effective solution is to regularize the loss such that the weights of the irrelevant dimensions go to zero. Generic regularizers such as ridge and Lasso regression lead to a biased estimate of $w^*_1$, and thus one is motivated to define a regularizer that is specially designed for improving adversarial robustness.
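The failure mode above is easy to reproduce numerically. The following is a minimal NumPy sketch under the stated assumptions ($\mathbb{E}[x_1^2] = 1$, an FGSM-like perturbation restricted to the unused dimensions); all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, sigma, eps, beta = 100, 1.0, 0.1, 0.1
w_star = np.zeros(d); w_star[0] = 1.0
w = rng.normal(0.0, sigma, size=d)       # w(0) ~ N(0, sigma^2 I)

for _ in range(1000):                    # GD on the population loss; data
    # lives on dim 1 only, so only the first coordinate gets a gradient
    w[0] -= beta * (w[0] - 1.0)          # assumes E[x1^2] = 1

x = np.zeros(d); x[0] = rng.normal()     # clean point on the data manifold
delta = eps * np.sign(w); delta[0] = 0.0 # attack the unused dimensions only
clean_loss = 0.5 * ((x @ w) - (x @ w_star)) ** 2
adv_loss = 0.5 * (((x + delta) @ w) - ((x + delta) @ w_star)) ** 2
print(clean_loss, adv_loss)              # ~0 vs. O(eps^2 d^2 sigma^2)
```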
Bishop (1995) showed the close connection between training with random perturbations and Tikhonov regularization. Inspired by this idea, we develop a regularizer that mimics the adversary itself. For this FGSM-like adversary, the population loss at the perturbed point is

$$L_{\mathrm{robustified}}(w) \triangleq \mathbb{E}\left[l(X + \Delta x; w)\right] = L(w) + \varepsilon\,\mathbb{E}[r(X; w)]\,\|w_{2:d}\|_1 + \frac{\varepsilon^2}{2}\|w_{2:d}\|_1^2. \qquad (1)$$

Minimizing $L_{\mathrm{robustified}}(w)$ is equivalent to minimizing the loss of the model at the point $x' = x + \Delta x$. The regularizer $\varepsilon\,\mathbb{E}[r(X; w)]\,\|w_{2:d}\|_1 + \frac{\varepsilon^2}{2}\|w_{2:d}\|_1^2$ incorporates the effect of the adversary in exact form. Nonetheless, there are two limitations to this approach. The first is that it is designed for a particular choice of attack, an FGSM-like one; we would like a regularizer that is robust to a larger class of attacks. The second is that this regularizer is designed for a linear model and the squared-error loss. How can we design a regularizer for more complicated models, such as DNNs? We address these questions by formulating the problem of adversarial robustness within the robust optimization framework (Section 3) and proposing an approach to approximately solve it (Section 4).

3 ROBUST OPTIMIZATION FORMULATION.

Designing an adversarially robust estimator can be formulated as a robust optimization problem (Huang et al., 2015; Madry et al., 2018; Wong & Kolter, 2018; Shaham et al., 2018). To describe it, let us first introduce our notation. Consider an input space $\mathcal{X} \subset \mathbb{R}^d$, an output space $\mathcal{Y}$, and a parameter (or hypothesis) space $\mathcal{W}$ parameterizing a model $f: \mathcal{X} \times \mathcal{W} \to \mathcal{Y}$. In the supervised learning scenario, we are given a data distribution $D$ over pairs of examples $\{(X_i, Y_i)\}_{i=1}^{n}$. Given the prediction $f(x; w)$ and a target value $y$, the pointwise loss function of the model is denoted by $\ell(x, y; w) \triangleq \ell(f(x; w), y)$. Given the distribution of the data, one can define the population loss as $L(w) = \mathbb{E}[\ell(X, Y; w)]$. The goal of the standard supervised learning problem is to find a $w \in \mathcal{W}$ that minimizes the population loss. A generic approach to do this is through empirical risk minimization (ERM). Explicit or implicit regularization is often used to control the complexity of the hypothesis to avoid over- or under-fitting (Hastie et al., 2009). As shown in the previous section, it is possible to find a parameter $w$ that minimizes the loss through ERM but leads to a model that is vulnerable to adversarial examples. To incorporate the robustness notion into the model, defenders must reconsider the training objective. It is also important to formalize and constrain the power of the adversary, so that we understand the strength of the attack to which the model is resistant. This can be specified by requiring that the adversary may only modify an input $x$ to $x + \delta$ with $\delta \in \Delta \subset \mathcal{X}$. Commonly used constraints are $\varepsilon$-balls w.r.t. the $\ell_p$-norms, though other constraint sets have been used too (Wong et al., 2019b). This goal can be formulated as a robust optimization problem where the objective is to minimize the adversarial population loss given some perturbation constraint $\Delta$:

$$\min_{w}\ \mathbb{E}_{(X,Y)\sim D}\left[\max_{\delta \in \Delta} \ell(X + \delta, Y; w)\right] \qquad (2)$$

We have an interplay between two goals: 1) the inner max term looks for the worst-case loss around the input, while 2) the outer min term optimizes the hypothesis by minimizing such a loss.
Note that solving the inner-max problem is often computationally difficult, so one may approximate it with a surrogate loss obtained from a particular attack. Adversarial training and its variants (Szegedy et al., 2013; Goodfellow et al., 2014; Kurakin et al., 2016; Madry et al., 2018; Wong et al., 2019a) can be intuitively understood as approximations of this min-max problem via different choices of $\delta(x)$. As shown in Section 2, one can design a regularizer that provides the exact value of the loss function at the attacked point for a particular choice of model, loss function, and adversary, cf. (1). Under the robust optimization framework, the regularizer and adversarial training are two realizations of the inner-max objective in (2), but using such a regularizer relieves us from running a separate inner optimization procedure, as is done in adversarial training. Motivated by that example and the robust optimization framework discussed here, we develop a regularizer that can be understood as an upper bound on the worst-case value of the loss at an attacked point under a second-order approximation of the loss function.
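As a rough illustration of such a second-order regularizer, the sketch below penalizes the gradient norm plus a curvature term along the gradient direction, with the Hessian-vector product obtained by double backpropagation. This is a simplified instantiation for intuition only, not the exact SOAR bound derived in the paper; the input x must be created with requires_grad=True.

```python
import torch

def second_order_reg(loss, x, eps=0.1):
    # first-order term: gradient of the loss w.r.t. the input
    g = torch.autograd.grad(loss, x, create_graph=True)[0].reshape(-1)
    g_norm = g.norm() + 1e-12
    z = (g / g_norm).detach()        # probe along the gradient direction
    # Hessian-vector product H z via double backprop
    hz = torch.autograd.grad(g @ z, x, create_graph=True)[0].reshape(-1)
    curvature = z @ hz               # z^T H z
    # worst-case growth of the second-order expansion over an l2 ball
    return eps * g_norm + 0.5 * eps ** 2 * curvature.abs()

# usage sketch: total_loss = loss + lam * second_order_reg(loss, x)
```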
The paper proposes a regularizer loss as an alternative to adversarial training for improving the robustness of neural networks against adversarial attacks. The new regularizer is derived from a second-order Taylor series expansion of the loss function in the model robustness optimization problem. Clear mathematical derivations and thoughtful empirical experimental results are provided. The proposed method outperformed baseline adversarial training methods, with better or on-par robustness and higher standard accuracy.
SP:f67271e00a669e2b64580762c04eb7b88965061d
How to Design Sample and Computationally Efficient VQA Models
1 INTRODUCTION. Many real-world complex tasks require both perception and reasoning (or System I and System II intelligence (Sutton & Barto, 2018)), such as VQA. What is the best way to integrate perception and reasoning components in a single model? Furthermore, how would such an integration lead to accurate models while being sample and computationally efficient? Such questions are important to address when scaling reasoning systems to real-world use cases, where empirical computation bounds must be understood in addition to the final model performance. There is a spectrum of methods in the literature exploring different ways of integrating perception and reasoning. Nowadays, perception is typically carried out via neural models, such as CNNs for vision and LSTMs (Gers et al., 1999) or Transformers (Vaswani et al., 2017) for language. Depending on the representation of the perception input and the reasoning interface, a method can sit either more toward the neural end of the spectrum or more toward the symbolic end. For the vision part, models can use either pixel-level or object-level symbolic representations. For the language part, models can generate either textual attention or programs, where the text is decomposed into a sequence of functions. Within the program representations, models typically operate on a selected discrete program or on probabilistic programs. The reasoning part used to produce the final answer can use neural models, symbolic reasoning, or something in between, such as neural module networks (NMN) or soft logic blocks. Existing NMN methods leverage pixel-level representations and program representations, such as NMN (Hu et al., 2017), Prob-NMN (Vedantam et al., 2019), and Stack-NMN (Hu et al., 2018). Representative models that use object-level vision also leverage both neural and symbolic language and reasoning. Models that are more neural are LXMERT (Tan & Bansal, 2019) and NSM (Hudson & Manning, 2019), while those that are more symbolic are NS-VQA (Yi et al., 2018), NSCL (Mao et al., 2019) and NGS (Li et al., 2020). A systematic comparison across these models is illustrated in Table 1, with more details in Appendix A. Overall, neural models have more expressive power but more parameters, while more-symbolic models have more prior structure built into them but fewer parameters. There is an interesting bias-variance trade-off in the model design. By encoding as much bias into the model as possible, one can reduce sample requirements. The different choices of perception and reasoning components also limit how the QA models can be trained. If both components are chosen as neural modules, then the training can be done in a very efficient end-to-end fashion. If the reasoning is carried out using more discrete operations, then the perception model needs to sample discrete outputs or take discrete inputs to interface with the downstream reasoning. For instance, if symbolic reasoning is used, REINFORCE (Williams, 1992) is typically used to train the perception models, which may require many samples during the optimization process. Alternatively, one can also use expensive abduction (Li et al., 2020) to manipulate the perception models' outputs to provide the correct reasoning and then optimize these perception models using the resulting pseudo-labels.
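As a reference point for the optimization issue just described, here is a minimal PyTorch sketch of the REINFORCE estimator: the perception model samples a discrete symbol, a black-box symbolic reasoner returns a reward (e.g., 1 if the final answer is correct), and the surrogate loss yields the score-function gradient. The reasoner interface is a placeholder assumption.

```python
import torch

def reinforce_loss(logits, reasoner_reward_fn):
    # logits: [batch, num_symbols] from the perception model
    dist = torch.distributions.Categorical(logits=logits)
    sample = dist.sample()               # discrete inputs for the reasoner
    reward = reasoner_reward_fn(sample)  # [batch] rewards, no gradient path
    # minimizing this surrogate ascends E[reward] via reward * grad log-prob
    return -(reward.detach() * dist.log_prob(sample)).mean()
```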
Overall , more neural models will be easier to optimize , while more symbolic models will need additional expensive discrete sampling during optimization . To highlight this interesting fact , we call it the neuro-symbolic trade-off . This neuro-symbolic trade-off also affects sample efficiency and computational efficiency . To be more sample efficient , the model needs to be less neural ; yet , a more neural model can be more computationally efficient during training . Thus a method that can achieve an overall good performance in terms of both sample and computation efficiency will require systematically determining which perception and reasoning components should be used and how to integrate them . To design such a model , we first test which method within each perception and reasoning component works the most efficiently . From this neuro-symbolic trade-off exploration we can design a model that uses these most efficient components and compare its overall performance against existing models . 2 PROBLEM SETTING . Before the exploration , we formally define the different choices for the vision , language , and reasoning components . In the general VQA setting we are provided with an image I , a natural language question Q , and an answer A . We now define how these basic inputs are used in each component . 2.1 REPRESENTATION FOR VISION . Given the image I there are two predominant visual representations : pixel and object-level attention . Pixel Attention . Given an image one can leverage traditional deep learning architectures used for image representation and classification , such as ResNets ( He et al. , 2016 ) . Here the image is passed through many residual convolution layers before entering an MLP sub-network to perform a classification task . From one of these MLP linear layers , an intermediate dense image representation feature f_I ∈ R^D can be extracted , denoted by f_I = ResNet ( I ) . These features are used further down the VQA pipeline , where the downstream model computes attention over the relevant part of the feature based on the question asked . Object-level . Another paradigm is to leverage object detection models such as Faster R-CNNs ( Ren et al. , 2015 ) to identify individual objects within images . Given objects in the image , one can conduct more object-level or symbolic reasoning over the image , instead of reasoning on a pixel-by-pixel basis . In this object-level representation , a set of object location bounding boxes ( bbox ) can be detected and labeled directly by using the R-CNN as O = { ( bbox_1 , label_1 ) , · · · , ( bbox_T , label_T ) } = R-CNN ( I ) for a preset number of T objects . Here o ∈ O can be labeled as “ small red shiny ball ” or “ large green tray ” based on what is in the image . Another approach is to factor the joint bounding box and label prediction into individual components to be handled by separate models . First the bounding boxes { bbox_1 , . . . , bbox_T } = R-CNN ( I ) are extracted from the R-CNN . Then these can be passed into a separate MLP network to retrieve the labels , label_i = MLP ( ResNet ( I [ bbox_i ] ) ) , where I [ bbox ] is the cropped image at that bounding box location . These can be used to define the final set of objects O = { ( bbox_i , label_i ) } for i = 1 , . . . , T . In such a setup , the benefit is that the R-CNN can be trained just as an object detector for a generic object class versus the background , whose annotations are easier to obtain .
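As a rough sketch of this factored variant ( assuming PyTorch/torchvision ; the backbone choice , the label-MLP head , and the 48-way label space are illustrative assumptions , not the paper 's exact setup ) :

```python
import torch
import torchvision

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True).eval()
backbone = torchvision.models.resnet18(pretrained=True).eval()   # ResNet(.) features
label_mlp = torch.nn.Sequential(                                 # separately trained head
    torch.nn.Linear(1000, 256), torch.nn.ReLU(), torch.nn.Linear(256, 48),
)

def object_level(image, top_k=10):
    """image: float tensor (C, H, W) in [0, 1]; returns O = {(bbox_i, label_i)}."""
    with torch.no_grad():
        boxes = detector([image])[0]["boxes"][:top_k]            # {bbox_i} = R-CNN(I)
        objects = []
        for box in boxes:
            x0, y0, x1, y1 = box.int().tolist()
            crop = image[:, y0:y1, x0:x1].unsqueeze(0)           # I[bbox_i]
            crop = torch.nn.functional.interpolate(crop, size=(224, 224))
            label = label_mlp(backbone(crop)).argmax(-1).item()  # label_i = MLP(ResNet(...))
            objects.append((box.tolist(), label))
    return objects
```

The detector only needs generic object-versus-background supervision , while the label head can be trained on however much labeled data is available .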
Furthermore , the amount of supervised data the label MLP uses for training can be controlled separately . This is a useful mechanism for our model efficiency analysis , where we work under the assumption that object bounding boxes are almost perfect , while object labeling is imperfect and expensive to annotate . 2.2 REPRESENTATION FOR LANGUAGE . The language representations operate on the natural text question Q . Some data sets also provide intermediate representations of each Q through a function program layout FP . FP represents the question as a sequence of abstract functions F as FP = [ F_1 , F_2 , . . . , F_t ] for F_i ∈ F . These function programs are used jointly with the visual representations to conduct reasoning to arrive at answer A . Details about potential realizations of F are described in the following reasoning representation section 2.3 . Given the question Q and its representation FP we can define different approaches for representing the text . Text Attention . Using just the embedded text tokens E , a model can embed the question Q through a recurrent network to generate a final question representation h_T , where T is the maximum sequence length . Then h_T can be put through a recurrent decoder to obtain a latent function at each step t through an attentive combination of the hidden states , c_t = Σ_{i=1}^T a_i · h_i . Symbolic Program . If we want to explicitly produce an FP for a corresponding question Q , we similarly encode the text as done for text attention . During decoding , c_t is passed through an MLP to predict a valid function token . Then the most likely program is selected as arg max_FP P ( FP | Q ) . Soft Program . When choosing a discrete symbolic program , the uncertainty over other function program parses is thrown out . Instead , the probabilities for each program can be saved and an expected program can be computed as E [ P ( FP | Q ) ] . Intuitively , all possible programs have to be considered in this scenario , which can be intractable . Instead , soft program methods such as Stack-NMN factor this as E [ P ( FP | Q ) ] = E [ Π_t P ( F_t | Q ) ] = Π_t E [ P ( F_t | Q ) ] . This enables preserving a distribution over functions at each step instead of selecting a single one . 2.3 REPRESENTATION FOR REASONING . Given the visual and language representations , the reasoning component uses these representations to derive the final answer A . Here we discuss methods that are neural , symbolic , and soft logic based . Neural Reasoning . Reasoning can be made directly with the image feature f_I and encoded question h_T , such as A = MLP ( [ h_T ; f_I ] ) , in a purely neural fashion . Other approaches can leverage the object representations O . This is done by modulating the attention over which o ∈ O correspond to the final answer A , conditioned on h_T , as done in NSM or LCGN . LXMERT uses cross-modal attention between text embeddings E and O to predict the final answer . All these methods are more neural , but the FP can be leveraged as well to incorporate better biases through symbolic and soft programs . Symbolic Representations . From the question we can define abstract functions F to generate FP as described in the previous section . Representing F in a symbolic form enables encoding general knowledge or a certain dataset ' s domain specific language ( DSL ) into a model . This improves model interpretability and provides better inductive biases as well . Here we further describe two classes of these functions : fine grained and coarse .
A fine grained representation of FP is a sequence of n-ary predicates , functions with n arguments , composing F . For example , given the question Q = “ What is the shape of the thing left of the sphere ? ” , a sample fine grained program can be defined as FP = [ filter_shape ( sphere , O ) , relate ( left , O ) , query_shape ( O ) ] . Here the visual representation ( O or f_I ) and higher level argument concepts , such as sphere , are used as inputs to each function . We observe clear biases encoded into the function architecture : given a scene graph of objects O and their relations , one could walk along this graph using FP to get the final answer A . The trade-off is that the model has to deal with more complex functions , whose input arguments and output types can vary . For example , filter_shape and relate return a subset of objects , while query_shape returns a string . Such formulations are used by more neuro-symbolic methods such as NS-VQA and NS-CL . Coarse function types consist of simpler predicates whose arity is fixed , typically 1 , over F . Given the previous question Q , a coarse program can be defined as FP = [ filter_θ ( f_I ) , relate_θ ( f_I ) , query_θ ( f_I ) ] . Here less structure is required with respect to the language and visual representation , and each function can be parameterized as an NMN . These require more parameters than DSL functions but are syntactically easier to handle , as they typically just operate on a fixed dimensional image feature f_I , thus implicitly encoding the function arguments . Symbolic Reasoning . Using any coarse or fine representation type for F , the symbolic reasoning can take place over the selected symbolic program FP . We define the high level execution of the symbolic reasoner to arrive at the answer by executing over FP as A = 〈 FP , image representation 〉_S . In the fine grained and coarse cases this would look like :
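As a hedged stand-in illustration , a minimal fine grained executor over a toy scene graph might look like the following sketch ; the object encoding and function semantics are assumptions , not the paper 's DSL , and in the coarse case each step would instead be a neural module such as filter_θ ( f_I ) acting on the dense image feature .

```python
# Toy object-level scene O; each object is a dict (an assumed encoding).
scene = [
    {"shape": "sphere", "x": 3.0},
    {"shape": "cube", "x": 1.0},
]

def filter_shape(shape, O):
    # n-ary predicate: keep objects of the given shape
    return [o for o in O if o["shape"] == shape]

def relate(direction, O, full_scene):
    # objects of the scene standing in `direction` relation to some o in O
    if direction == "left":
        return [p for p in full_scene if any(p["x"] < o["x"] for o in O)]
    raise ValueError(f"unknown relation: {direction}")

def query_shape(O):
    # returns a string, unlike the set-valued functions above
    return O[0]["shape"] if O else None

# A = <FP, O>_S for FP = [filter_shape(sphere, O), relate(left, O), query_shape(O)]
O = filter_shape("sphere", scene)
O = relate("left", O, scene)
print(query_shape(O))  # -> "cube"
```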
The paper proposes a neuro-symbolic model for sample-efficient VQA, which turns each question into a probabilistic program that is then softly executed. The problem explored in the paper and its background and context are presented clearly, and it does a good job of motivating its importance and the trade-offs between possible solutions. While the use of a probabilistic program to represent the questions might be too stiff / inflexible in my opinion and may not generalize well to less constrained natural language, this direction is still of course important and interesting. It also does a great job in presenting the existing approaches and comparing their properties. The writing is good and the model is presented clearly with a very useful diagram.
SP:885d09e9fb6fa10be309dcbfe259ecf35ccabb82
A Chaos Theory Approach to Understand Neural Network Optimization
Despite the complicated structure of modern deep neural network architectures , they are still optimized with algorithms based on Stochastic Gradient Descent ( SGD ) . However , the reason behind the effectiveness of SGD is not well understood , making its study an active research area . In this paper , we formulate deep neural network optimization as a dynamical system and show that the rigorous theory developed to study chaotic systems can be useful to understand SGD and its variants . In particular , we first observe that the inverse of the instability timescale of SGD optimization , represented by the largest Lyapunov exponent , corresponds to the most negative eigenvalue of the Hessian of the loss . This observation enables the introduction of an efficient method to estimate the largest eigenvalue of the Hessian . Then , we empirically show that for a large range of learning rates , SGD traverses the loss landscape across regions where the largest eigenvalue of the Hessian is similar to the inverse of the learning rate . This explains why effective learning rates can be found within a large range of values and shows that SGD implicitly uses the largest eigenvalue of the Hessian while traversing the loss landscape . This sheds some light on the effectiveness of SGD over more sophisticated second-order methods . We also propose a quasi-Newton method that dynamically estimates an optimal learning rate for the optimization of deep learning models . We demonstrate that our observations and methods are robust across different architectures and loss functions on the CIFAR-10 dataset . 1 INTRODUCTION . An interesting observation from current deep learning research is that classification and regression accuracy gains seem to be achieved from the intricacy of the underlying models rather than the optimization algorithm used for their training . Actually , the de facto choice for the optimization algorithm is still the classic Stochastic Gradient Descent ( SGD ) algorithm ( Robbins & Monro , 1951 ) with minor modifications ( Duchi et al. , 2011 ; Sutskever et al. , 2013 ; Kingma & Ba , 2014 ) . Even though several sophisticated second-order and quasi-Newton methods ( Martens , 2010 ; Martens & Grosse , 2015 ; Berahas et al. , 2019 ) have been introduced , first-order methods remain popular and none of them seem to outperform SGD with a carefully tuned learning rate schedule ( Hardt et al. , 2016 ) . This indicates that SGD ( or , in general , first-order methods ) probably has some intrinsic properties that make it effective at optimizing over-parametrized deep neural networks . Despite various attempts to explain such a phenomenon ( Chaudhari & Soatto , 2018 ; Keskar et al. , 2016 ; Kleinberg et al. , 2018 ) , little is understood about the effectiveness of SGD over sophisticated second-order optimization methods . In this paper , we argue that chaos theory ( Sprott & Sprott , 2003 ) is a useful approach to understand neural network optimization based on SGD . The basic idea is to view neural network optimization as a dynamical system where the SGD update equation maps from the space of learnable parameters to itself and describes the evolution of the system over time . Once the evolution is defined , the rich theory developed to study chaotic dynamical systems can be leveraged to analyze and understand SGD and its variants . In essence , chaos theory enables us to study the evolution of the learnable parameters ( i.e.
, the optimization trajectory ) in order to understand the training behavior over large time scales ( i.e. , number of iterations ) . In particular , we focus on understanding the influence of the learning rate on the SGD optimization trajectory . First , by observing that the Lyapunov exponent of SGD is the most negative eigenvalue of the Hessian of the loss , we introduce an efficient and accurate method to estimate the loss curvature . Then , we empirically show that for a range of learning rate schedules , SGD traverses the optimization landscape across regions where the largest eigenvalue of the Hessian is similar to the inverse of the learning rate . This demonstrates that at a specific time step , performing an SGD update is similar to performing a quasi-Newton step , considering only the largest eigenvalue of the Hessian of the loss . This , for the first time , sheds some light on the effectiveness of SGD over more sophisticated second-order methods and corroborates the observation that SGD robustly converges for a variety of learning rate schedules ( Sun , 2019 ) . Furthermore , as pointed out in ( LeCun et al. , 1993 ) , the inverse of the estimated curvature can be used as the learning rate when applying SGD to a new dataset or architecture . Hence , we can set up a “ feedback ” system where the quasi-Newton optimal learning rate is calculated dynamically based on the current largest eigenvalue of the Hessian ( curvature ) , and the learning rate is consequently adjusted during training , allowing a “ parameter free ” stochastic gradient descent optimization . The experiments are conducted on the CIFAR-10 dataset to demonstrate that our observations are robust across a variety of models , including a simple linear regression model and more modern deep neural network architectures , trained with both cross entropy and mean square error loss functions . 2 CHAOS THEORY FOR NEURAL NETWORK OPTIMIZATION . In recent years , several papers have used dynamical systems to study theoretical aspects of deep learning optimization ( Liu & Theodorou , 2019 ) . Essentially , this is achieved by defining the optimization of deep neural networks as the evolution of parameters over time . In particular , a dynamical system progresses according to a map function that describes how the system evolves in a specific time step . In the case of deep neural network optimization , this map function is defined from the space of parameters into itself . By describing the system evolution using such a map function , it is possible to leverage the mathematical machinery of dynamical systems . For instance , viewing SGD as a discrete approximation of a continuous stochastic differential equation allowed Li et al . ( 2017 ) and An et al . ( 2018 ) to propose adaptive SGD algorithms . Furthermore , dynamical systems enabled LeCun et al . ( 1993 ) to relate the learning rate to the inverse of the local Hessian in a quasi-Newton optimization framework . Our paper also uses dynamical systems to study deep learning optimization , but differently from all the methods above , we rely on chaos theory . Chaos theory ( Sprott & Sprott , 2003 ) studies the evolution of dynamical systems over large time scales and can categorize systems into chaotic or non chaotic . Under some simplifying but still general assumptions , chaotic systems are bounded and have strong dependence on the initial conditions .
This means that chaotic systems evolving from different starting points that are within a relatively small region around a particular reference point will diverge exponentially during the evolution process , where the amount of time taken for this divergence to happen is defined as the chaotic timescale . This chaotic timescale imposes a limit on our ability to predict the future state distribution of a dynamical system . In fact , the distribution of a future state that has evolved for more than a few times the chaotic timescale cannot be distinguished from a random distribution , even when the system is fully deterministic . We apply concepts from chaos theory to improve our current understanding of the optimization of deep neural networks . More specifically , we describe how to use standard chaos theory techniques to efficiently calculate the leading ( positive and negative ) eigenvalues of the Hessian of the loss function . With these eigenvalues we measure , in turn , the loss function curvature , which can be used to study the behavior of first-order optimization methods , such as SGD ( Robbins & Monro , 1951 ) . In particular , with this technique we formulate an explanation for the empirical robustness of SGD to the choice of learning rate and its scheduling function , and we investigate a method ( based on a quasi-Newton second-order method ) for dynamically finding the optimal learning rate during the optimization of deep neural networks . Such automated and dynamic estimation of the optimal learning rate can lift a significant burden from the manual definition of learning rate schedules in deep learning optimization . 2.1 LYAPUNOV EXPONENTS . In chaos theory , the Lyapunov exponents define the divergence rate of infinitesimally close trajectories , and the inverse of the largest Lyapunov exponent is the timescale that corresponds to the onset of chaos in the system . Two arbitrarily close initial conditions generate two solutions that diverge with time . Under the assumption that the map function of the system is differentiable , if one observes this divergence for a short time window , it grows exponentially . If the initial divergence q ( 0 ) is made smaller , the time window can be made larger ( t → ∞ ) . The largest Lyapunov exponent λ is a measure of the growth of the divergence q ( t ) in the direction q̂ ( 0 ) = q ( 0 ) / ‖q ( 0 ) ‖ with the largest growth along the trajectory , as in λ = max_{q̂ ( 0 ) } lim_{t→∞} lim_{‖q ( 0 ) ‖→0} ( 1/t ) log ( ‖q ( t ) ‖ / ‖q ( 0 ) ‖ ) . ( 1 ) In this paper , we rely on the local finite size Lyapunov exponent . In this context , local in time means that there is no limit to infinity for t in equation 1 ; instead , it is an average over a constant time window t. Finite size means keeping the difference in parameter space fixed as a small constant with ‖q‖ = ∆q ( i.e. , no limit ‖q‖ → 0 in equation 1 ) . Using a finite size allows the study of the dynamical system at a specific spatial scale ( for a comprehensive review , see ( Cencini & Vulpiani , 2013 ) ) , corresponding to the eigenvalues of the Hessian of a spatially smoothed version of the loss ( or , equivalently , to the numerical second derivative with a finite delta ) . When this analysis is used to study the Hessian , this is equivalent to calculating the local numerical second derivative . We found empirically that the results do not depend on the ∆q parameter within a large range of values . We will show in Sec . 3 that this timescale ( i.e.
, the Lyapunov exponent ) corresponds to the most negative eigenvalue of the Hessian of the loss , when optimizing deep neural networks with SGD . Intuitively , a small difference in the initial condition will amplify exponentially in the directions with negative second derivatives and will dampen in directions with positive second derivatives . Empirically , we find that the chaotic timescales in effective training of deep neural networks are short ( in the order of tens of iterations ) when compared with the time of one epoch ( i.e. , the total number of iterations in one epoch ) . We also find that there are multiple unstable directions throughout the training , i.e. , the system is hyper-chaotic . 2.2 LYAPUNOV EXPONENTS FOR GD AND SGD . In this section we derive the formula to compute the largest Lyapunov exponents for Gradient Descent ( GD ) following ( Sprott & Sprott , 2003 ) . We first show that the largest Lyapunov exponent corresponds to the most negative eigenvalue of the Hessian of the loss and provide an algorithm to efficiently compute it . This will be later extended to calculate the largest ( or in general the top-k ) eigenvalue of the Hessian in section 3 . For simplicity of the exposition , in this section we initially consider the non-stochastic setting . Also , for the results of this section to hold , we assume that the Hessian of the loss does not change quickly through time , and it does not change quickly along the optimization trajectory compared to the chaotic time scale . These assumptions can easily be checked a posteriori , and we will show how to overcome this ( potential ) limitation in section 3 . Let θ be the vector of learnable parameters of the deep neural network , L ( · ) be the loss function , and α > 0 be the learning rate . The gradient descent step at iteration t is written as : θ_{t+1} = θ_t − α dL ( θ_t ) /dθ , ( 2 ) where the update step ∆θ = −α dL/dθ . In the limit of small steps the formulation is equivalent to a Partial Differential Equation ( PDE ) dθ/dt = −α ∂L ( θ ) /∂θ . ( 3 ) Integrating equation 3 gives the evolution of the system , which is equivalent to training the neural network . To compute the chaotic time scale ( i.e . the inverse of the Lyapunov exponent ) , one needs to analyze the difference in evolution of GD at two arbitrarily close initial points . To this end , we consider a small perturbation q_0 added to the initial weights θ_0 . For this perturbed starting point θ_0 + q_0 , the PDE becomes : d ( θ + q ) /dt = −α ∂L ( θ + q ) /∂θ . ( 4 ) In the limit of small q , considering the first order Taylor approximation of the above equation and subtracting equation 3 , we obtain : dq/dt = ∂ ( −α ∂L ( θ ) /∂θ ) /∂θ q = −α ( ∂²L ( θ ) /∂θ² ) q . ( 5 ) Then , integrating equation 5 , we obtain the evolution of the perturbation under GD : q ( t ) = exp ( −α ( ∂²L ( θ ) /∂θ² ) t ) q_0 . ( 6 ) This remains true as long as q ( t ) remains small , where the definition of small depends on the properties of L. We consider the decomposition of q_0 as a sum of its projections on the eigenspace of the Hessian of the loss ( with the Hessian being represented at the exponent of the formula in equation 6 ) . In this space , the projection of q_0 along the direction corresponding to the largest eigenvalue is the one growing the fastest . Starting with a random q_0 , the direction of q that becomes dominant after sufficient time is aligned with the eigenvector of the largest eigenvalue of the matrix at the exponent , and the growth rate of q is equal to the corresponding eigenvalue .
Measuring this growth rate provides a simple and linear ( in the number of parameters ) method to measure the leading eigenvalue . This procedure represents the calculation of the largest Lyapunov exponent , i.e. , the largest eigenvalue ( λ_0 ) of the matrix −α ∂²L/∂θ² . Due to the minus sign , this corresponds to the smallest eigenvalue ( h_N ) of the Hessian of the loss ( H = ∂²L/∂θ² ) . More precisely , the smallest eigenvalue of the Hessian and the largest Lyapunov exponent are related as h_N = −λ_0/α . For non-convex losses , h_N is the most negative eigenvalue and the matching eigenvector corresponds to the most unstable direction of the optimization of the loss . Once q ( t ) is aligned with the largest eigenvector , equation 6 becomes q ( t + ∆t ) = exp ( λ_0 ∆t ) q ( t ) . ( 7 ) The algorithm to calculate λ_0 requires normalizing the length of q at each step to keep the increment “ small ” . This reference distance is equivalent to the choice of the step size for the calculation of the finite difference based second derivative . In dynamical systems terminology this is called calculating the finite-size Lyapunov exponent . Now , the largest Lyapunov exponent is obtained by iterating the following two steps : λ_0 ← log ( ‖q ( t + ∆t ) ‖ / ‖q ( t ) ‖ ) / ∆t , ( 8 ) q ( t + ∆t ) ← q ( t + ∆t ) · ‖q ( t ) ‖ / ‖q ( t + ∆t ) ‖ , where ‖·‖ denotes the L2 norm and ∆t denotes the time step . One can see that the computation of the largest Lyapunov exponent is analogous to the power method to compute the largest eigenvalue of a given matrix . This idea can be easily extended to compute the top-k Lyapunov exponents following the idea of Benettin et al . ( 1980 ) ; please refer to Appendix C. SGD can be described with the same approach , with the loss function replaced by L ( θ , ω ) , where ω are random variables that describe which images are picked in each minibatch , the data augmentation used , and in principle any other random process engineered in the network . We note that chaos theory is fully applicable , with equivalent results , in such a general stochastic setting ( Arnold , 1988 ) . In the subsequent analysis we will leverage this and work with SGD . Finally , we demonstrate how to extend the method explained in the current section to compute the Lyapunov exponent for SGD with momentum in Appendix B .
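To make the iteration in equation 8 concrete , the following is a minimal sketch on a toy quadratic loss ( the loss , learning rate , finite size ∆q , and iteration count are illustrative assumptions ; ∆t is taken as one GD step ) :

```python
import numpy as np

rng = np.random.default_rng(0)
H = np.diag([2.0, 0.5, -1.5])        # Hessian; most negative eigenvalue h_N = -1.5
grad = lambda th: H @ th             # dL/dtheta for L(theta) = 0.5 theta^T H theta
alpha, dq, steps = 0.05, 1e-4, 200   # learning rate, finite size, iterations

theta = rng.normal(size=3)
q = rng.normal(size=3)
q *= dq / np.linalg.norm(q)          # start at the reference distance dq
lam = 0.0
for _ in range(steps):
    theta_p = theta + q
    theta = theta - alpha * grad(theta)      # GD step, eq. (2)
    theta_p = theta_p - alpha * grad(theta_p)  # same step on the perturbed copy
    q = theta_p - theta
    lam = np.log(np.linalg.norm(q) / dq)     # eq. (8) with dt = 1
    q *= dq / np.linalg.norm(q)              # renormalize to keep the increment small

print("largest Lyapunov exponent:", lam)
print("estimated h_N = -lam/alpha:", -lam / alpha)  # approaches -1.5 for small alpha
```

Exactly as with the power method , q aligns with the most unstable direction after a few iterations , and only gradient evaluations are required , so the cost is linear in the number of parameters .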
The authors use an insight from chaos theory to derive an efficient method of estimating the largest and smallest eigenvalues of the loss Hessian wrt the weights. To do that, they use nearby weight space positions, optimize for a bit (either gradient climbing or descending), check how quickly the points are departing from each other, and use that to estimate the extreme eigenvalues via a connection to Lyapunov exponents in chaos theory. Then they use the on-the-fly estimated largest eigenvalue to automatically tune the learning rate of SGD.
SP:fbb217eb911fc3b0d40b941281d08d0a399a459a
Deep Learning meets Projective Clustering
1 INTRODUCTION AND MOTIVATION . Deep Learning revolutionized Machine Learning by improving the accuracy by dozens of percentage points for fundamental tasks in Natural Language Processing ( NLP ) through learning representations of a natural language via a deep neural network ( Mikolov et al. , 2013 ; Radford et al. , 2018 ; Le and Mikolov , 2014 ; Peters et al. , 2018 ; Radford et al. , 2019 ) . Lately , it was shown that there is no need to train those networks from scratch each time we receive a new task or dataset ; instead , one can fine-tune a full pre-trained model on the specific task ( Dai and Le , 2015 ; Radford et al. , 2018 ; Devlin et al. , 2019 ) . However , in many cases , those networks are extremely large compared to classical machine learning models . For example , both BERT ( Devlin et al. , 2019 ) and XLNet ( Yang et al. , 2019 ) have more than 110 million parameters , and RoBERTa ( Liu et al. , 2019b ) consists of more than 125 million parameters . Such large networks have two main drawbacks : ( i ) they use too much storage , e.g . memory or disk space , which may be infeasible for small IoT devices , smartphones , or when a personalized network is needed for each user/object/task , and ( ii ) classification may take too much time , especially for real-time applications such as NLP tasks : speech recognition , translation or speech-to-text . Compressed Networks . To this end , many papers suggested different techniques to compress large NLP networks , e.g. , by low-rank factorization ( Wang et al. , 2019 ; Lan et al. , 2019 ) , pruning ( McCarley , 2019 ; Michel et al. , 2019 ; Fan et al. , 2019 ; Guo et al. , 2019 ; Gordon et al. , 2020 ) , quantization ( Zafrir et al. , 2019 ; Shen et al. , 2020 ) , weight sharing ( Lan et al. , 2019 ) , and knowledge distillation ( Sanh et al. , 2019 ; Tang et al. , 2019 ; Mukherjee and Awadallah , 2019 ; Liu et al. , 2019a ; Sun et al. , 2019 ; Jiao et al. , 2019 ) ; see more example papers and a comparison table in Gordon ( 2019 ) for compressing the BERT model . There is no consensus on which approach should be used in what contexts . However , in the context of compressing the embedding layer , the most common approach is low-rank factorization as in Lan et al . ( 2019 ) , and it may be combined with other techniques such as quantization and pruning . In this work , we suggest a novel low-rank factorization technique for compressing the embedding layer of a given model . This is motivated by the fact that in many networks , the embedding layer accounts for 20 % − 40 % of the network size . Our approach - MESSI : Multiple ( parallel ) Estimated SVDs for Smaller Intralayers - achieves a better accuracy for the same compression rate compared to the standard matrix factorization . To present it , we first describe an embedding layer , the known technique for compressing it , and the geometric assumptions underlying this technique . Then , we give our approach followed by geometric intuition , and a detailed explanation of the motivation and the architecture changes . Finally , we report our experimental results that demonstrate the strong performance of our technique . Embedding Layer . The embedding layer aims to represent each word from a vocabulary by a real-valued vector that reflects the word ’ s semantic and syntactic information that can be extracted from the language . One can think of the embedding layer as a simple matrix multiplication , as follows .
The layer receives a standard vector x ∈ R^n ( a row of the identity matrix , with exactly one nonzero entry , usually called a one-hot vector ) that represents a word in the vocabulary , and it multiplies x by a matrix A^T ∈ R^{d×n} to obtain the corresponding d-dimensional word embedding vector y = A^T x , which is the row in A that corresponds to the non-zero entry of x . The embedding layer has n input neurons , and the output has d neurons . The nd edges between the input and output neurons define the matrix A ∈ R^{n×d} . Here , the entry in the ith row and jth column of A is the weight of the edge between the ith input neuron and the jth output neuron ; see Figure 1 . Compressing by Matrix Factorization . A common approach for compressing an embedding layer is to compute the j-rank approximation A_j ∈ R^{n×d} of the corresponding matrix A via SVD ( Singular Value Decomposition ; see e.g. , Lan et al . ( 2019 ) ; Yu et al . ( 2017 ) and Acharya et al . ( 2019 ) ) , factor A_j into two smaller matrices U ∈ R^{n×j} and V ∈ R^{j×d} ( i.e . A_j = UV ) , and replace the original embedding layer that corresponds to A by a pair of layers that correspond to U and V . The number of parameters is then reduced to j ( n + d ) . Moreover , computing the output takes O ( j ( n + d ) ) time , compared to the O ( nd ) time for computing A^T x . As above , we continue to use A_j to refer to a rank-j approximation of a matrix A . Fine tuning . The layers that correspond to the matrices U and V above are sometimes used only as initial seeds for a training process that is called fine tuning . Here , the training data is fed into the network , and the error is measured with respect to the final classification . Hence , the structure of the data remains the same but the edges are updated in each iteration to give better accuracy . Observe that typically , the SVD takes the form A_j = U D Ṽ , where the columns of U ∈ R^{n×j} are orthogonal , the rows of Ṽ ∈ R^{j×d} are orthogonal , and D ∈ R^{j×j} is a diagonal matrix . In this paper and in others , we say that A_j = UV where V = DṼ . Furthermore , the orthogonalization is used only to obtain a low rank approximation A_j = UV using SVD . After that , this property is not kept in the network during the training process ( when applying the fine-tuning ) . Geometric intuition . The embedding layer can be encoded into a matrix A ∈ R^{n×d} as explained above . Hence , each of the n rows of A corresponds to a point ( vector ) in R^d , and the j-rank approximation A_j ∈ R^{n×d} represents the projection on the j-dimensional subspace that minimizes the sum of squared distances ( “ errors ” ) to the points . Projecting these points onto any j-dimensional subspace of R^d would allow us to encode every point only via its j coordinates on this subspace , and store only nj entries instead of the original nd entries of A . This is the matrix U ∈ R^{n×j} , where each row encodes the corresponding row in A by its j coordinates on this subspace . The subspace itself can be represented by its basis of j d-dimensional vectors ( jd entries ) , which is the column space of a matrix V^T ∈ R^{d×j} . Figure 2 illustrates the small pair of layers that corresponds to U and V ; those layers are a compression of the original big layer that corresponds to A . However , our goal is not only to compress the network or matrix , but also to approximate the original matrix operator A . To this end , among all the possible j-subspaces of R^d , we may be interested in the j-subspace that minimizes the sum of squared distances to the points , i.e.
, the sum of squared projected errors . This subspace can be computed easily via SVD . The corresponding projections of the rows of A on this subspace are the rows of the j-rank matrix A_j . The hidden or statistical assumption in this model is that the rows of the matrix A ( that represents the embedding layer ) were actually generated by adding i.i.d . Gaussian noise to each point in a set of n points on a j-dimensional subspace , which is spanned by what are called latent variables or factors . Given only the resulting matrix A , the j-subspace that maximizes the likelihood ( probability ) of generating the original points is spanned by the j largest singular vectors of A . Why a single distribution ? Even if we accept the assumption of Gaussian noise , e.g . due to simplicity of computations or the law of large numbers , it is not intuitively clear why we should assume that the rows of A were sampled from a single distribution . Natural questions that arise are : ( i ) Can we get smaller and/or more accurate models in real-world networks by assuming multiple instead of a single generating distribution ( i.e . multiple subspaces ) ? ( ii ) Can we efficiently compute the corresponding factorizations and represent them as part of a network ? 2 OUR CONTRIBUTION . We answer the above open questions by suggesting the following contributions . In short , the answers are : ( i ) In all the real-world networks that we tested , it is almost always better to assume k ≥ 2 distributions rather than a single one that generated the data . It is better in the sense that the resulting accuracy of the network is better compared to k = 1 ( SVD ) for the same compression rate . ( ii ) While approximating the global minimum is Max-SNP-Hard , our experiments show that we can efficiently compute many local minima and take the smallest one . We then explain how to encode the result back into the network . This is done by suggesting a new embedding layer architecture that we call MESSI ( Multiple ( parallel ) Estimated SVDs for Smaller Intralayers ) ; see Figure 3 . Extensive experimental results show significant improvement . Computational Geometry meets Deep Learning . Our technique also constructs the matrix A ∈ R^{n×d} from a given embedding layer . However , inspired by the geometric intuition from the previous section , we suggest to approximate the n rows of A by clustering them to k ≥ 2 subspaces instead of one . More precisely , given an integer k ≥ 1 we aim to compute a set of k subspaces in R^d , each of dimension j , that will minimize the sum , over every point ( row in A ) , of the squared distance to its nearest subspace . This can be considered as a combination of j-rank or j-subspace approximation , as defined above , and k-means clustering . In the k-means clustering problem we wish to approximate n points by k center points that minimize the sum of squared distances between every point and its nearest center . In our case , the k center points are replaced by k subspaces , each of dimension j . In computational geometry , this type of problem is called projective clustering ( see Figure 4 ) , and it is used in many tasks in the fields of Machine Learning and Computer Vision ( Feng et al. , 2011 ; Xu et al. , 2005 ; Liu et al. , 2012 ; Trittenbach and Böhm , 2019 ) . From Embedding layer to Embedding layers .
The result of the above technique is a set of k matrices A_j^1 , · · · , A_j^k , each of rank j and of dimension n_i × d , where the ith matrix corresponds to the cluster of n_i points that were projected on the ith j-dimensional subspace . Each of those matrices can be factored into two smaller matrices ( due to its low rank ) , i.e. , for every i ∈ { 1 , · · · , k } , we have A_j^i = U^i V^i , where U^i ∈ R^{n_i×j} and V^i ∈ R^{j×d} . To plug these matrices into the final network instead of the embedding layer , we suggest to encode them via k parallel sub-layers as described in what follows and illustrated in Figure 3 . Our pipeline : MESSI . We construct our new architecture as follows . We use A to refer to the n × d matrix from the embedding layer we seek to compress . The input to our pipeline is the matrix A , positive integers j and k , and ( for the final step ) parameters for the fine-tuning . 1 . Treating the n rows of A as n points in R^d , compute an approximate ( k , j ) -projective clustering . The result is k subspaces in R^d , each of dimension j , that minimize the sum of squared distances from each point ( row in A ) to its closest subspace . For the approximation , we compute a local minimum for this problem using the Expectation-Maximization ( EM ) method ( Dempster et al. , 1977 ) . 2 . Partition the rows of A into k different subsets according to their nearest subspace from the previous step . The result is submatrices A^1 , . . . , A^k , where A^i is an n_i × d matrix and n_1 + . . . + n_k = n. 3 . For each matrix A^i , where 1 ≤ i ≤ k , factor it into two smaller matrices U^i ( of dimensions n_i × j ) and V^i ( of dimensions j × d ) such that U^i V^i is the rank-j approximation of A^i . 4 . In the full network , replace the original fully-connected embedding layer by 2 layers . The first layer is a parallelization of k separate fully-connected layers , where for every i ∈ { 1 , · · · , k } the ith parallel layer consists of the matrix U^i , i.e. , it has n_i input neurons and j output neurons . Here , each row of A is mapped appropriately . The second layer is formed by combining the matrices V^1 , · · · , V^k . The k output vectors from the previous layer u^1 , . . . , u^k are combined as V^1 u^1 + . . . + V^k u^k ; see Figure 3 for an illustration . 5 . Fine-tune the network . The result is a compressed embedding layer . Every matrix U^i has n_i j parameters , and every matrix V^i has jd parameters . Therefore the compressed embedding layer consists of nj + kjd parameters , in comparison to the uncompressed layer of nd parameters . Practical Solution . The projective clustering problem is known to be Max-SNP-hard even for d = 2 and j = 2 , for any approximation factor that is independent of n. Instead , we suggest to use an algorithm that provably converges to a local minimum via the Expectation-Maximization ( EM ) method ( Dempster et al. , 1977 ) , which is a generalization of the well known Lloyd algorithm ( Lloyd , 1982 ) . The resulting clusters and factorizations are used to determine the new architecture and its initial weights ; see Figure 3 for more details . We run on instances of the AWS Amazon EC2 cloud , and detail our results in the next section . Open code and networks . Complete open code to reproduce the resulting networks is provided . We expect it to be useful for future research , and give the following few examples .
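A minimal sketch of steps 1–3 , assuming NumPy ( the random initialization , fixed iteration count , and empty-cluster handling are simplifying assumptions ; the paper 's EM variant may differ in these details ) :

```python
import numpy as np

def projective_clustering(A, k, j, iters=20, seed=0):
    """Approximate (k, j)-projective clustering by EM/Lloyd-style alternation."""
    n, d = A.shape
    rng = np.random.default_rng(seed)
    assign = rng.integers(k, size=n)                 # random initial partition
    for _ in range(iters):
        bases = []
        for i in range(k):
            Ai = A[assign == i]
            if Ai.shape[0] < j:                      # re-seed (near-)empty clusters
                Ai = A[rng.integers(n, size=max(j, 2))]
            _, _, Vt = np.linalg.svd(Ai, full_matrices=False)
            bases.append(Vt[:j])                     # top-j right singular vectors (j x d)
        # reassign every row to the subspace with the smallest residual
        residuals = np.stack(
            [np.linalg.norm(A - (A @ B.T) @ B, axis=1) for B in bases]
        )
        assign = residuals.argmin(axis=0)
    return assign, bases

def messi_factors(A, assign, bases):
    # step 3: A_j^i = U^i V^i with U^i = A_i B_i^T (n_i x j) and V^i = B_i (j x d)
    return [(A[assign == i] @ B.T, B) for i, B in enumerate(bases)]
```

Running this from several random seeds and keeping the clustering with the smallest total residual matches the strategy above of computing many local minima and taking the smallest one .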
This work proposes a new approach, based on projective clustering, for compressing the embedding layers of DNNs for natural language modeling tasks. The authors show that the trade-off between compression and model accuracy can be improved by considering a set of k subspaces rather than just a single subspace. Compressing DNNs is an active area of research, and this paper presents a promising approach to it as well as interesting results.
SP:d8f80f84b089766124693485390dbfce0c94527c
Box-To-Box Transformation for Modeling Joint Hierarchies
1 INTRODUCTION . Representation learning for hierarchical relations is crucial in natural language processing because of the hierarchical nature of common knowledge , for example , < Bird ISA Animal > ( Athiwaratkun & Wilson , 2018 ; Vendrov et al. , 2016 ; Vilnis et al. , 2018 ; Nickel & Kiela , 2017 ) . The ISA relation represents meaningful hierarchical relationships between concepts and plays an essential role in generalization for other relations , such as the generalization of < organ PARTOF person > based on < eye PARTOF person > and < eye ISA organ > . The fundamental nature of the ISA relation means that it is inherently involved in a large amount of compositional human reasoning involving other relations . Modeling hierarchies is essentially the problem of modeling a poset , or partially ordered set . The task of partial order completion , a general term to describe tasks which require learning a transitive relation , was introduced in ( Vendrov et al. , 2016 ) . The authors also introduce a model based on the reverse product order on R^n , which essentially models concepts as infinite cones . Region-based representations have been effective in representing hierarchical data , as containment between regions is naturally transitive . Vilnis et al . ( 2018 ) introduced axis-aligned hyperrectangles ( or boxes ) that are provably more flexible than cones , and demonstrated state-of-the-art performance in multiple tasks . Thus far , not as much effort has been put into modeling joint hierarchies . Patel et al . ( 2020 ) proposed to simultaneously model the ISA and HASPART hierarchies from Wordnet ( Miller , 1995 ) . To do so , however , they effectively augmented the graph by duplicating the nodes to create a single massive hierarchy . Their model assigns two boxes B_ISA and B_HASPART to each node n , which are unrelated , and therefore misses out on a large amount of semantic relatedness between ISA and HASPART . In this paper we propose a box-to-box transformation which translates and dilates box representations between hierarchies . Our proposed model shares information between the ISA and HASPART hierarchies via this transformation as well as cross-hierarchy containment training objectives . We compare the BOX-TRANSFORM MODEL with multiple strong baselines under different settings . We substantially outperform the prior TWO-BOX MODEL while training with only the transitive reduction of both hierarchies and predicting inferred composition edges . As mentioned above , our model ’ s shared learned features should allow for more generalization , and we test this by training on a subset of the transitive reduction , where we find we are able to outperform strong baselines . Finally , we perform a detailed analysis of the model ’ s capacity to predict compositional edges and transitive closure edges , both from an overfitting and a generalization standpoint , identifying subsets where further improvement is needed . 2 RELATED WORK . Recent advances in representing a single hierarchy mainly fall into two categories : 1 ) representing hierarchies in non-Euclidean space ( e.g . hyperbolic space , due to the curvature ’ s inductive bias toward modeling tree-like structures ) ; 2 ) using region-based representations instead of vectors for each node in the hierarchy ( Erk , 2009 ) . Hyperbolic space has been shown to be efficient in representing hierarchical relations , but also encounters difficulties in training ( Nickel & Kiela , 2017 ; Ganea et al. , 2018b ; Chamberlain et al. , 2017 ) .
Categorization models in psychology often represent a concept as a region ( Nosofsky , 1986 ; Smith et al. , 1988 ; Hampton , 1991 ) . Vilnis & McCallum ( 2015 ) and Athiwaratkun & Wilson ( 2018 ) use Gaussian distributions to embed each word in the corpus , the latter of which uses thresholded divergences which amount to region representations . Vendrov et al . ( 2016 ) and Lai & Hockenmaier ( 2017 ) make use of the reverse product order on R^n_+ , which effectively results in cone representations . Vilnis et al . ( 2018 ) further extend this cone representation to axis-aligned hyperrectangles ( or boxes ) , and demonstrate state-of-the-art performance on modeling hierarchies . Various training improvement methods for box embeddings have been proposed ( Li et al. , 2019 ; Dasgupta et al. , 2020 ) , the most recent of which is termed GumbelBox after its use of a latent noise model where box parameters are represented via Gumbel distributions . Region representations are also used for tasks which do not require modeling hierarchy . In Vilnis et al . ( 2018 ) , the authors also model conditional probability distributions using box embeddings . Abboud et al . ( 2020 ) and Ren et al . ( 2020 ) take a different approach , using boxes for their capacity to contain many vectors to provide slack in the loss function when modeling knowledge base triples or representing logical queries , respectively . Ren et al . ( 2020 ) also made use of an action on boxes similar to ours , involving translation and dilation ; however , our work differs in both the task ( i.e . representing logical queries vs. joint hierarchies ) and the approach , as their model represents entities using vectors and a loss function based on a box-to-vector distance . The inductive bias of hyperbolic space is also exploited to model multiple relations : Ganea et al . ( 2018a ) learn hyperbolic transformations for multiple relations using Poincaré embeddings , and show model improvement in low computational resource settings . Patel et al . ( 2020 ) , which our work is most similar to , represent joint hierarchies using box embeddings . However , they represent each concept with two boxes , ignoring the internal semantics of the concepts . Modeling joint hierarchies shares some similarities with knowledge base completion ; however , the goals of the two settings are different . When modeling joint hierarchies you are attempting to learn simultaneous transitive relations , and potentially learn relevant compositional edges involving these relations . For knowledge base completion , on the other hand , you may be learning many different relations , and primarily seek to recover edges which were removed rather than inferring new compositional edges . Still , the models which perform knowledge base completion can be applied to this task , as the data can be viewed as knowledge base triples with only 2 relations . There have been multiple works that aim to build better knowledge representations ( Bordes et al. , 2013 ; Trouillon et al. , 2016 ; Sun et al. , 2019 ; Balazevic et al. , 2019a ) . Most relevant , Chami et al . ( 2020 ) ; Balazevic et al . ( 2019b ) recently proposed KG embedding methods which embed entities in the Poincaré ball model of hyperbolic space . These models are intended to capture relational patterns present in multi-relational graphs , with a particular emphasis on hierarchical relations . 3 BACKGROUND . 3.1 BOX LATTICE MODEL . Introduced in Vilnis et al .
( 2018 ) , a box lattice model ( or box model ) is a geometric embedding which captures partial orders and lattice structure using n-dimensional hyperrectangles . Formally , we define the set of boxes B in R^n as B ( R^n ) = { [ x_1 , x̄_1 ] × · · · × [ x_n , x̄_n ] } , ( 1 ) where x_i , x̄_i ∈ R , and we represent all degenerate boxes where x_i > x̄_i with ∅ . A box model for a set S is a function Box : S → B ( R^n ) which captures some desirable properties of the set S. As the name implies , the box lattice model is particularly suited to representing partial orders and lattice structures . Definition 1 ( Poset ) . A partially ordered set , or poset , is a set P along with a relation ⪯ such that , for each a , b , c ∈ P , we have 1. a ⪯ a ( reflexivity ) ; 2. if a ⪯ b and b ⪯ a then a = b ( antisymmetry ) ; 3. if a ⪯ b and b ⪯ c then a ⪯ c ( transitivity ) . Definition 2 ( Lattice ) . A lattice is a poset where each pair of elements has a unique upper bound called the join , denoted by ∧ , and a unique lower bound called the meet , denoted by ∨ . The authors note that there are natural geometric operations which form a lattice structure on B : Box ( x ) ∧ Box ( y ) : = ∏_i [ max ( x_i , y_i ) , min ( x̄_i , ȳ_i ) ] , ( 2 ) Box ( x ) ∨ Box ( y ) : = ∏_i [ min ( x_i , y_i ) , max ( x̄_i , ȳ_i ) ] , ( 3 ) In other words , the meet of two boxes is the smallest containing box , and the join is the intersection , or ∅ if the boxes are disjoint . These geometric operations map very neatly to hierarchies , where the meet of two nodes is their closest common ancestor and the join is the closest common descendent ( or ∅ if no such node exists ) . The ability of this model to capture lattice structure using geometric operations makes it a natural choice to embed hierarchies . 3.2 PROBABILISTIC BOX MODEL TRAINING . In Vilnis et al . ( 2018 ) , the authors also introduced a probabilistic interpretation of box embeddings and a learning method which was improved upon in Li et al . ( 2019 ) and Dasgupta et al . ( 2020 ) . By using a probability measure µ on R^d ( or by constraining the space to [ 0 , 1 ] ^d ) , one can calculate box volumes as µ ( Box ( X ) ) . The pullback of this measure yields a probability measure on S , and thus the box model can be imbued with valid probabilistic semantics . In particular , since the box space B is closed under intersection , we can calculate joint probabilities by computing P ( X , Y ) = µ ( Box ( X ) ∧ Box ( Y ) ) and similarly compute conditional probabilities as P ( X | Y ) = µ ( Box ( X ) ∧ Box ( Y ) ) / µ ( Box ( Y ) ) . ( 4 ) The conversion from a poset or lattice structure to probabilistic semantics is accomplished by assigning conditional probabilities , namely a ⪯ b if and only if P ( b | a ) = 1 . We note that the properties required of the relation ⪯ follow as a natural consequence of the axioms for conditional probability . Apart from providing rigor and interpretability , the calibrated probabilistic semantics also inform and facilitate the training procedure for box embeddings , which is accomplished via gradient descent using KL-divergence with respect to the aforementioned probability distribution as a loss function . As one might expect , care must be taken to handle the case when boxes are disjoint , as there is no gradient . In ( Vilnis et al. , 2018 ) the authors made use of the lattice structure to derive a lower bound on the probability , and ( Li et al. , 2019 ) introduced an approximation to Gaussian convolution over the boxes which similarly handled the case of disjoint boxes .
( Dasgupta et al. , 2020 ) improves this further by taking a random process perspective , ensembling over an entire family of box models . The endpoints of boxes are represented using Gumbel distributions , that is , GumbelBox ( X ) = ∏_i [ X_i , X̄_i ] , X_i ∼ MaxGumbel ( µ_i , β ) , X̄_i ∼ MinGumbel ( µ̄_i , β ) , ( 5 ) where µ , β are the location and scale parameters of the Gumbel distribution , respectively . The MaxGumbel distribution is given by f ( x ; µ , β ) = ( 1/β ) exp ( − ( x − µ ) /β − e^{ − ( x − µ ) /β } ) , ( 6 ) and the MinGumbel distribution is given by negating x and µ . The Gumbel distribution was chosen due to its min/max stability , making the set of Gumbel boxes closed under intersection , i.e . the intersection of two Gumbel boxes is another Gumbel box . We denote the space of all such boxes as G. The expected volume of a Gumbel box can be efficiently calculated analytically , and in Dasgupta et al . ( 2020 ) the authors use this expected volume to calculate the conditional probabilities mentioned in equation 4 . This training method leads to improved performance on a number of tasks , and is particularly beneficial when embedding trees , thus we will use this Gumbel box approach in our setting .
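A minimal sketch of the box operations in equations 2–4 with hard boxes on [ 0 , 1 ] ^d and the Lebesgue measure , plus a translate-and-dilate box-to-box transformation of the kind proposed in the introduction ( the Gumbel relaxation above is omitted for brevity , and the example boxes and transformation parameters are illustrative assumptions , not the paper 's learned values ) :

```python
import numpy as np

def intersection(x, y):
    # eq. (2): per-dimension [max of lower corners, min of upper corners]
    return np.maximum(x[0], y[0]), np.minimum(x[1], y[1])

def containing_box(x, y):
    # eq. (3): smallest box containing both inputs
    return np.minimum(x[0], y[0]), np.maximum(x[1], y[1])

def volume(box):
    lo, hi = box
    return float(np.prod(np.clip(hi - lo, 0.0, None)))  # 0 when degenerate / disjoint

def cond_prob(x, y):
    # eq. (4): P(X | Y) = mu(Box(X) ^ Box(Y)) / mu(Box(Y))
    return volume(intersection(x, y)) / volume(y)

def box_to_box(box, shift, log_scale):
    # translate-and-dilate transform between hierarchies (e.g., ISA -> HASPART)
    lo, hi = box
    center = (lo + hi) / 2 + shift
    half_width = (hi - lo) / 2 * np.exp(log_scale)
    return center - half_width, center + half_width

animal = (np.array([0.1, 0.1]), np.array([0.9, 0.9]))
bird = (np.array([0.2, 0.2]), np.array([0.5, 0.5]))   # contained in `animal`
print(cond_prob(animal, bird))                         # 1.0, encoding <Bird ISA Animal>
print(box_to_box(bird, np.array([0.05, 0.0]), np.array([-0.5, -0.5])))
```

In the full model the shift and scale would be learned parameters , so a containment loss on either hierarchy updates the same underlying box representation , which is exactly the information sharing the two-box baseline lacks .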
The paper focuses on modeling multiple hierarchical relations on a heterogeneous graph. The task of “modeling joint hierarchies” is essentially trying to infer whether a given pair of entities has a hierarchical connection, especially when there exist multiple hierarchical relations (2 in the paper) and missing links. The paper proposes to embed entities using boxes whose endpoints follow the Gumbel distribution. Given that there exist two hierarchical relations, the paper transforms the box of one entity under relation 1 to the box of the entity under relation 2 with a parameterized linear function. This is in contrast to previous work that parameterized the boxes of the two relations using separate independent parameters.
SP:2f3bb20ca38e10fde160e4961d6b1796cadd465f
Spatio-Temporal Graph Scattering Transform
1 INTRODUCTION . Processing and learning from spatio-temporal data have received increasing attention recently . Examples include : i ) skeleton-based human action recognition based on a sequence of human poses ( Liu et al . ( 2019 ) ) , which is critical to human behavior understanding ( Borges et al . ( 2013 ) ) , and ii ) multi-agent trajectory prediction ( Hu et al . ( 2020 ) ) , which is critical to robotics and autonomous driving ( Shalev-Shwartz et al . ( 2016 ) ) . A common pattern across these applications is that data evolves in both the spatial and temporal domains . This paper aims to analyze this type of data by developing novel spatio-temporal graph-based data modeling and operations . Spatio-temporal graph-based data modeling . Graphs are often used to model data where irregularly spaced samples are observed over time . Good spatio-temporal graphs can provide informative priors that reflect the internal relationships within data . For example , in skeleton-based human action recognition , we can model a sequence of 3D joint locations as data supported on skeleton graphs across time , which reflects both the human physical constraints and temporal consistency ( Yan et al . ( 2018 ) ) . Recent studies on modeling spatio-temporal graphs have followed either joint or separable processing frameworks . Joint processing is based on constructing a single spatio-temporal graph and processing ( e.g. , filtering ) via operations on this graph ( Kao et al . ( 2019 ) ; Liu et al . ( 2020 ) ) . In contrast , a separable processing approach works separately , and possibly with different operators , across the space and time dimensions . In this case , independent graphs are used for space and time ( Yan et al . ( 2018 ) ; Cheng et al . ( 2020 ) ) . However , no previous work thoroughly analyzes and compares these two constructions . In this work , we mathematically study these two types of graphs and justify the benefits of separable processing from both theoretical and empirical aspects . Spatio-temporal graph-based operations . Graph operations can be performed once the graph structure is given . Some commonly used graph operations include the graph Fourier transform ( Shuman et al . ( 2013 ) ) and graph wavelets ( Hammond et al . ( 2011 ) ) . It is possible to extend those operations to the spatio-temporal graph domain . For example , Grassi et al . ( 2017 ) developed the short time-vertex Fourier transform and the spectrum-based time-vertex wavelet transform . However , those mathematically designed , linear operations show some limitations in terms of empirical performance . In comparison , many recent deep neural networks adopt trainable graph convolution operations to analyze spatio-temporal data ( Yan et al . ( 2018 ) ; Liu et al . ( 2020 ) ) . However , most networks are designed through trial and error . It is thus hard to explain the rationale behind their empirical success and further improve the designs ( Monga et al . ( 2019 ) ) . In this work , to fill the gap between mathematically designed linear transforms and trainable spatio-temporal graph neural networks , we propose a novel spatio-temporal graph scattering transform ( ST-GST ) , which is a mathematically designed , nonlinear operation . Specifically , to characterize the spatial and temporal dependencies , we present two types of graphs corresponding to joint and separable designs .
We then construct spatio-temporal graph wavelets based on each of these types of graphs . We next propose the framework of ST-GST , which adopts spatio-temporal graph wavelets followed by a nonlinear activation function as a single scattering layer . All the filter coefficients in ST-GST are mathematically designed beforehand and no training is required . We further show that i ) a design based on separable spatio-temporal graphs is more flexible and computationally efficient than a joint design ; and ii ) ST-GST is stable to small perturbations on both input spatio-temporal graph signals and structures . Finally , our experiments on skeleton-based human action recognition show that the proposed ST-GST outperforms spatio-temporal graph convolutional networks by 35 % in accuracy on the MSR Action3D dataset . We summarize the main contributions of this work as follows : • We propose wavelets for both separable and joint spatio-temporal graphs . We show that it is more flexible and computationally efficient to design wavelets based on separable spatio-temporal graphs ; • We propose a novel spatio-temporal graph scattering transform ( ST-GST ) , which is a non-trainable counterpart of spatio-temporal graph convolutional networks and a nonlinear version of spatio-temporal graph wavelets . We also theoretically show that ST-GST is robust and stable in the presence of small perturbations on both input spatio-temporal graph signals and structures ; • For skeleton-based human action recognition , our experiments show that : i ) ST-GST can achieve similar or better performance than spatio-temporal graph convolutional networks and other non-deep-learning approaches on small-scale datasets ; ii ) separable spatio-temporal scattering works significantly better than joint spatio-temporal scattering ; and iii ) ST-GST significantly outperforms spatio-temporal graph wavelets because of the nonlinear activation function . 2 RELATED WORK . Scattering transforms . Convolutional neural networks ( CNNs ) use nonlinearities coupled with trained filter coefficients and are well known to be hard to analyze theoretically ( Anthony & Bartlett ( 2009 ) ) . As an alternative , Mallat ( 2012 ) ; Bruna & Mallat ( 2013 ) propose scattering transforms , which are non-trainable versions of CNNs . Under admissible conditions , the resulting transform enjoys both great performance in image classification and appealing theoretical properties . These ideas have been extended to the graph domain ( Gama et al . ( 2019a ) ; Zou & Lerman ( 2020 ) ; Gao et al . ( 2019 ) ; Ioannidis et al . ( 2020 ) ) . Specifically , the graph scattering transform ( GST ) proposed in ( Gama et al . ( 2019a ) ) iteratively applies predefined graph filter banks and an element-wise nonlinear activation function . In this work , we extend the classical scattering transform to the spatio-temporal domain and provide a new mathematically designed transform to handle spatio-temporal data . The key difference between GST and our proposed ST-GST lies in the graph filter bank design , where ST-GST needs to handle both spatial and temporal domains . Spatio-temporal neural networks . Deep neural networks have been adapted to operate on the spatio-temporal domain . For example , Liu et al . ( 2019 ) use an LSTM to process time series information , while ST-GCN ( Yan et al . ( 2018 ) ) combines a graph convolution layer and a temporal convolution layer as a unit computational block in the network architecture .
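To make the scattering layer concrete, here is a minimal sketch of what a single separable ST-GST layer could look like. It is our illustration, not the paper's released code: we assume geometric diffusion wavelets of the kind commonly used in graph scattering, the absolute value as the nonlinearity, and all function names are ours.

```python
import numpy as np

def diffusion_wavelets(A, num_scales):
    """Geometric diffusion wavelet filter bank from an adjacency matrix.

    With the lazy diffusion operator T = (I + D^{-1/2} A D^{-1/2}) / 2,
    the wavelets are Psi_0 = I - T and Psi_j = T^(2^{j-1}) - T^(2^j)."""
    n = A.shape[0]
    d = np.maximum(A.sum(axis=1), 1e-12)
    T = 0.5 * (np.eye(n) + A / np.sqrt(np.outer(d, d)))
    filters, T_pow = [np.eye(n) - T], T
    for _ in range(1, num_scales):
        T_next = T_pow @ T_pow            # T^(2^j)
        filters.append(T_pow - T_next)
        T_pow = T_next
    return filters                        # list of (n, n) matrices

def separable_scattering_layer(X, spatial_filters, temporal_filters):
    """One separable ST-GST layer: spatial wavelet, temporal wavelet, |.|.

    X has shape (N, T); the output is one N x T coefficient map per
    (spatial scale, temporal scale) pair."""
    out = []
    for Hs in spatial_filters:
        for Ht in temporal_filters:
            out.append(np.abs(Hs @ X @ Ht.T))  # filter rows, then columns
    return out
```

Each layer multiplies the number of coefficient maps by the product of the two filter-bank sizes; stacking such layers and aggregating their outputs would produce the scattering feature vector fed to a classifier.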
However , those networks all require a huge amount of high-quality labeled data , and training them is computationally expensive , which may make them impractical for many real-world scenarios . Furthermore , many architectures are designed through trial and error , making it hard to justify the design choices and further improve them . In this work , the proposed ST-GST is a nonlinear transform with a forward procedure similar to that of ST-GCN . However , ST-GST does not require any training , which is useful in many applications where only limited training data is available . Furthermore , since all filter coefficients in ST-GST are predefined , it allows us to perform theoretical analysis , and the related conclusions potentially shed some light on the design of spatio-temporal networks . Skeleton-based human action recognition . Conventional skeleton-based action recognition models learn semantics based on hand-crafted features ( Wang et al . ( 2012 ) ) . To handle time series information , some recurrent-neural-network-based models have been proposed to capture the temporal dependencies between consecutive frames ( Kim & Reiter ( 2017 ) ) . Recently , graph-based approaches have gained in popularity while achieving excellent performance ( Yan et al. , 2018 ; Li et al. , 2019 ) . In this work , our experiments focus on this task and show that ST-GST outperforms state-of-the-art spatio-temporal graph neural networks , like MS-G3D ( Liu et al. , 2020 ) , on small-scale datasets . 3 SPATIO-TEMPORAL GRAPH SCATTERING TRANSFORM . In this section , we first define spatio-temporal graph structures and signals . We next design our spatio-temporal graph wavelets . Finally , we present ST-GST . 3.1 SPATIO-TEMPORAL GRAPH STRUCTURES AND SIGNALS . Spatio-temporal data can be represented as a matrix X ∈ R^{N×T} , where N is the number of spatial positions and T is the number of time stamps . In this matrix , each row is a time series for a spatial node , and each column is a spatial signal at a certain time stamp . Note that the index of spatial information can be arbitrary : we will associate each spatial location with a vertex on the spatial graph , and the edges will provide information about the relative position of the nodes . We can reshape the matrix to form a vector x of length NT , where the element x ( s , t ) : = x_{ ( s−1 ) T+t } is the feature value corresponding to the s-th vertex at time t. To construct a spatio-temporal graph , we create connections based on physical constraints . For example , for skeleton-based action recognition , the spatial graph is the human skeleton graph , reflecting bone connections ; see Fig . 1 ( a ) ; and the temporal graph is a line graph connecting consecutive time stamps ; see Fig . 1 ( b ) . As a starting point , we choose a spatial graph Gs = ( Vs , Es , As ) with |Vs| = N , reflecting the graph structure of each column in X , and a temporal graph Gt = ( Vt , Et , At ) with |Vt| = T , reflecting the graph structure of each row in X . The separable spatio-temporal design is achieved by processing the columns and rows of X separately based on their respective graphs . As an alternative , a product graph of Gs and Gt , denoted as G = ( V , E , A ) , can be constructed to unify the relations in both the spatial and temporal domains , allowing us to process data jointly across space and time . The product graph has |V| = NT nodes and an appropriately defined NT × NT adjacency matrix A . The product operation interweaves the two graphs to form a unifying graph structure .
The edge weight A_{ ( s1 , t1 ) , ( s2 , t2 ) } : = A_{ ( s1−1 ) T+t1 , ( s2−1 ) T+t2 } characterizes the relation , such as similarity or dependency , between the s1-th spatial node at the t1-th time stamp and the s2-th spatial node at the t2-th time stamp . There are three commonly used product graphs ( Sandryhaila & Moura , 2014 ) : i ) Kronecker product : G = Gs ⊗ Gt with graph adjacency matrix A = As ⊗ At , where ⊗ represents the Kronecker product of matrices ; see Fig . 1 ( c ) ; ii ) Cartesian product : G = Gs × Gt with A = As ⊗ I_T + I_N ⊗ At ; see Fig . 1 ( d ) ; and iii ) strong product : G = Gs ⊠ Gt with A = As ⊗ At + As ⊗ I_T + I_N ⊗ At , which can be viewed as a combination of the Kronecker and Cartesian products ; see Fig . 1 ( e ) . The joint spatio-temporal design is achieved based on a product graph . In this paper , we consider designs based on both separable graphs and product graphs .
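The three product constructions translate directly into adjacency-matrix code. The sketch below builds all three from the formulas above; the toy three-joint "skeleton" and the four-stamp path graph are illustrative assumptions of ours.

```python
import numpy as np

def product_graphs(As, At):
    """Adjacency matrices of the Kronecker, Cartesian, and strong product
    graphs; As is the N x N spatial adjacency, At the T x T temporal one,
    and each returned matrix is NT x NT (spatial-major vectorization)."""
    N, T = As.shape[0], At.shape[0]
    I_N, I_T = np.eye(N), np.eye(T)
    kron = np.kron(As, At)
    cartesian = np.kron(As, I_T) + np.kron(I_N, At)
    strong = kron + cartesian
    return kron, cartesian, strong

# Toy example: a 3-joint chain "skeleton" and a path graph over 4 time stamps.
As = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
At = np.diag(np.ones(3), 1) + np.diag(np.ones(3), -1)
K, C, S = product_graphs(As, At)
assert K.shape == C.shape == S.shape == (12, 12)
```

Note that np.kron(As, At) matches the vectorization x_{(s-1)T+t} used in the text, since its ((s1-1)T+t1, (s2-1)T+t2) entry equals As[s1, s2] * At[t1, t2].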
The authors propose wavelets for both separable and joint spatio-temporal graphs. They then design a spatio-temporal graph scattering transform (ST-GST), which is a non-trainable counterpart of spatio-temporal graph convolutional networks and a nonlinear version of spatio-temporal graph wavelets. Finally, the proposed ST-GST is evaluated experimentally, and the results show that it is effective. However, the authors do not explain the motivation for why the spatio-temporal graph should be scattered by wavelets. Besides, from the results in Table 1, the joint versions of the proposed method, i.e., Joint Kronecker, Joint Cartesian, and Joint Strong, do not achieve satisfactory performance; only the separable versions perform best.
SP:03895ea221824f6e57ea88ec7332efbbec207c7d
Explicit homography estimation improves contrastive self-supervised learning
1 INTRODUCTION . There is an ever-increasing pool of data , particularly unstructured data such as images , text , video , and audio . The vast majority of this data is unlabelled . The process of labelling is time-consuming , labour-intensive , and expensive . Such an environment makes algorithms that can leverage fully unlabelled data particularly useful and important . Such algorithms fall within the realm of unsupervised learning . A particular subset of unsupervised learning is known as Self-Supervised Learning ( SSL ) . SSL is a paradigm in which the data itself provides a supervision signal to the algorithm . Somewhat related is another core area of research known as transfer learning ( Wang et al. , 2020 ) . In the context of computer vision , this means being able to pre-train an encoder network offline on a large , varietal dataset , followed by domain-specific fine-tuning on the bespoke task at hand . The state-of-the-art for many transfer learning applications remains dominated by supervised learning techniques ( Tan et al. , 2020 ; Martinez et al. , 2019 ; Donahue et al. , 2014 ; Girshick et al. , 2014 ) , in which models are pre-trained on a large labelled dataset . However , self-supervised learning techniques have more recently come to the fore as potential alternatives that perform similarly on downstream tasks , while requiring no labelled data . Most self-supervised techniques create a supervision signal from the data itself in one of two ways . One approach consists of techniques that define a pre-text task beforehand that a neural network is trained to solve , such as inpainting ( Pathak et al. , 2016 ) or a jigsaw puzzle ( Noroozi & Favaro , 2016 ) . In this way , the pre-text task is a kind of proxy that , if solved , should produce reasonable representations for downstream visual tasks such as image or video recognition , object detection , or semantic segmentation . The other approach is a class of techniques known as contrastive methods ( Chen et al. , 2020a ; He et al. , 2019 ; Chen et al. , 2020b ) . These methods minimise the distance ( or maximise the similarity ) between the latent representations of two augmented views of the same input image , while simultaneously maximising the distance between negative pairs . In this way , these methods enforce consistency regularisation ( Sohn et al. , 2020 ) , a well-known approach to semi-supervised learning . These contrastive methods often outperform the pre-text task methods and are the current state-of-the-art in self-supervised learning . However , most of these contrastive methods have several drawbacks , such as requiring prohibitively large batch sizes or memory banks , in order to retrieve the negative pairs of samples ( Chen et al. , 2020a ; He et al. , 2019 ) . The intuition behind our proposed module is that any system tasked with understanding images can benefit from understanding the geometry of the image and the objects within it . An affine transformation is a geometric transformation that preserves parallelism of lines . It can be composed of any sequence of rotation , translation , shearing , and scaling . A homography is a generalisation of this notion to include perspective warping . A homography need not preserve parallelism of lines , however , it ensures lines remain straight . Mathematically , a homography is shown in Equation 1 . It has 8 degrees of freedom and is applied to a vector in homogeneous coordinates .
$$H_\phi = \begin{bmatrix} \phi_{1,1} & \phi_{1,2} & \phi_{1,3} \\ \phi_{2,1} & \phi_{2,2} & \phi_{2,3} \\ \phi_{3,1} & \phi_{3,2} & 1 \end{bmatrix} \qquad (1)$$
An affine transformation has the same form , but with the added constraint that $\phi_{3,1} = \phi_{3,2} = 0$ . The ability to know how a source image was transformed to get to a target image implicitly means that you have learned something about the geometry of that image . An affine transformation or , more generally , a homography is a natural way to encode this idea . Forcing the network to estimate the parameters of a random homography applied to the source images thereby forces it to learn semantics about the geometry . This geometric information can supplement the signal provided by a contrastive loss , or loss in the latent space . In this paper , we propose an additional module that can be used in tandem with contrastive self-supervised learning techniques to augment the contrastive objective ( the additional module is highlighted in Figure 1 ) . The module is simple , model-agnostic , and can be used to supplement a contrastive algorithm to improve performance and help the network converge faster . The module is essentially an additional stream of the network with the objective of regressing the parameters of an affine transformation or homography . In this way , there is a multi-task objective that the network must solve : 1. minimising the original contrastive objective , and 2. learning the parameters of a homography applied to one of the input images from a vector difference of their latent representations . We force the latent space to encode the geometric transformation information by learning to regress the parameters of the transformation in an MLP that takes the vector difference of two latent representations of an input , x , and its transformed analogue , x′ . By including the information in this way , the network is not invariant to the components of the transformation but is still able to use them as a self-supervised signal for learning . Moreover , this approach serves as a novel hybrid of the pre-text tasks and contrastive learning by enforcing consistency regularisation ( Sohn et al. , 2020 ) . Through extensive empirical studies , we show that the additional objective of regressing the transformation parameters serves as a useful supplementary task for self-supervised contrastive learning , and improves performance for all considered datasets in terms of linear evaluation accuracy and convergence speed . The remainder of the paper is structured as follows . In Section 2 , we cover the related work in the area of self-supervised learning , going into detail where necessary . In Section 3 we detail our proposed method . We first introduce a framework and set of notation to make the formalisation of the approach clear . We then delve into the details behind the architecture and choices for the various parts of the system . This is followed by a comprehensive set of experiments in Section 4 , including results on various datasets , as well as an ablative study . Finally , the paper is concluded with some closing remarks in Section 5 . 2 RELATED WORK . SSL is a popular research area within computer vision . Previous approaches can be broadly classed into two main categories . The first is where pre-text tasks are manually defined , and the goal of the algorithms is to solve these hand-crafted tasks ( Lee et al. , 2020 ; Doersch et al. , 2015 ; Gidaris et al. , 2018 ; Zhang et al. , 2016 ; Misra & Maaten , 2020 ) . Examples of such methods include inpainting ( Pathak et al .
, 2016 ) , colourising ( Zhang et al. , 2016 ) , jigsaw puzzles ( Noroozi & Favaro , 2016 ) , patch prediction ( Doersch et al. , 2015 ) , and geometric image transformations ( Dosovitskiy et al. , 2014 ) such as using rotation as the pre-text task ( Gidaris et al. , 2018 ; Feng et al. , 2019 ) . Some of these pre-text approaches that deal with geometric image transformations are similar in spirit to our method . Gidaris et al . ( 2018 ) ; Feng et al . ( 2019 ) are two variants of predicting image rotations as an auxiliary task for unsupervised learning . Perhaps closer to our method is Dosovitskiy et al . ( 2014 ) , in which a set of transformations is applied to image patches , and the network is trained in a fully-unsupervised manner to predict surrogate classes defined by a set of transformed image patches by minimising the log loss . Our method , however , investigates a different , particular set of transformations ( those that define an affine transformation or general homography ) , and shows this can be used to aid self-supervised performance , using the transformation parameters themselves as targets that need to be regressed ( using mean-squared error ) by the contrastive algorithm in a multi-task manner . The discrepancy in the network ’ s ability to predict the actual values of the parameters of the affine transformation/homography serves as our additional supervision signal . A somewhat related approach to our proposed method within the pre-text task domain is proposed by Lee et al . ( 2020 ) . They propose to augment the learning process of a supervised learning algorithm with additional labels constructed using self-supervised labels . These labels are rotation classes and colour permutations . Importantly , they create a loss function which is based on a joint distribution of the original ( supervised ) labels and the self-supervised ( augmented ) labels . In this way , the network is not forced to be invariant to the transformations under consideration , since this has been shown to hurt performance ( Lee et al. , 2020 ) . Our method is different from this in that we propose a module to be integrated specifically with self-supervised algorithms . Additionally , we regress the transformation parameters in real vector space and do not create classes for the parameters . The other broad category of SSL is based on contrastive learning ( Chen et al. , 2020a ; He et al. , 2019 ; Caron et al. , 2020 ) , and this class of techniques represents the current state-of-the-art in self-supervised learning , outperforming the hand-crafted pre-text task methods . These approaches learn representations by contrasting positive pairs of samples against negative pairs of samples in latent space . Such methods typically require that careful attention be paid to the negative samples . Additionally , they have the disadvantage of requiring prohibitively large batch sizes ( 4096-16000 ) , memory banks , or other mechanisms to retrieve the relevant negative samples . One popular such method is known as SimCLR ( Chen et al. , 2020a ) . SimCLR is a general framework for contrastive learning , and in its vanilla formulation consists of an encoder network parameterised by a CNN ( usually a variant of ResNet ( He et al. , 2016 ) ) and an MLP projection head . An input image is sampled , and two distinct views of that same input image are computed using a random augmentation . The augmentation consists of colour jittering , Gaussian blurring , and random cropping .
The two views are sent through the encoder network to produce two latent representations . These latent vectors are then sent through the projection head to produce final latent vectors . It is from these vectors that the loss is computed . In the case of SimCLR , the loss is the normalised temperature-scaled cross-entropy ( NT-Xent ) . A recent approach proposed in Grill et al . ( 2020 ) ( BYOL ) somewhat overcomes the aforementioned disadvantages of requiring negative pairs of samples ( which implicitly requires a large batch size ) . Two separate networks with their own weights are used in tandem to learn the representation . An online network ( consisting of an encoder , MLP projection head , and MLP prediction network ) is trained to predict the representation outputted by a target network . During training , the online network parameters are updated using backpropagation of error derivatives computed using a mean-squared error loss . However , the target network parameters are updated using an exponential moving average . In this way , BYOL overcomes collapsed solutions in which every image produces the same representation . We test our module with both SimCLR and BYOL , since these two methods serve as two popular , recent approaches to contrastive SSL . Some helpful findings for guiding self-supervised research were demonstrated in Kolesnikov et al . ( 2019 ) . Core among these are that 1 ) standard architecture designs that work well in the fully-supervised setting do not necessarily work well in the self-supervised setting , 2 ) in the self-supervised setting larger CNNs often mean higher quality learned representations , and 3 ) the linear evaluation paradigm for assessing performance may take a long time to converge . Moreover , Newell & Deng ( 2020 ) find that the effectiveness of self-supervised pretraining decreases as the amount of labelled data increases , and that performance on one particular downstream task is not necessarily indicative of performance on other downstream tasks .
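To ground the two ingredients discussed above — the contrastive objective and the proposed auxiliary regression stream — here is a hedged sketch of both. The NT-Xent form follows the standard SimCLR recipe; the homography head (class name, hidden width, loss weight) is our own illustration of how the module described in Section 1 could be implemented, not the paper's released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent: each (z1[i], z2[i]) is a positive pair; the other 2B - 2
    embeddings in the concatenated batch act as negatives."""
    B = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)       # (2B, d)
    sim = z @ z.T / temperature                              # scaled cosine sims
    mask = torch.eye(2 * B, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))               # drop self-pairs
    targets = torch.cat([torch.arange(B, 2 * B),
                         torch.arange(0, B)]).to(z.device)
    return F.cross_entropy(sim, targets)

class HomographyHead(nn.Module):
    """Auxiliary stream: regress the 8 homography parameters from the
    difference of the two latent codes z and z_prime."""
    def __init__(self, latent_dim=128, hidden_dim=256, n_params=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, n_params))

    def forward(self, z, z_prime):
        return self.mlp(z - z_prime)

def total_loss(z1, z2, true_params, head, weight=1.0):
    # Multi-task objective: contrastive term + MSE on the transform parameters.
    return nt_xent(z1, z2) + weight * F.mse_loss(head(z1, z2), true_params)
```

In a training loop, z1 and z2 would be the projected embeddings of an image and its homography-warped view, and true_params the 8 sampled parameters used to generate the warp.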
The authors propose a module that regresses the parameters of an affine transformation or homography as an additional objective in the contrastive self-supervised learning framework. They argue that the geometric information encoded by the proposed module can supplement the signal provided by a contrastive loss, improving both performance and convergence speed. The authors validate their claims with two recent contrastive self-supervised learning approaches (i.e., SimCLR and BYOL) on several benchmark datasets, showing consistent improvements.
SP:b6083b2193bf2ab0df08746ef2ec9e51b513525f
Variational inference for diffusion modulated Cox processes
1 INTRODUCTION . Cox processes ( Cox , 1955 ; Cox & Isham , 1980 ) , also known as doubly-stochastic Poisson processes , are a class of stochastic point processes wherein the point intensity is itself stochastic and , conditional on a realization of the intensity process , the number of points in any subset of space is Poisson distributed . These processes are widely used in the natural and physical sciences , engineering and operations research , and form useful models of a wide array of phenomena . We model the intensity by a diffusion process that is the solution of a stochastic differential equation ( SDE ) . This is a standard assumption across a range of applications ( Susemihl et al. , 2011 ; Kutschireiter et al. , 2020 ) . The measure induced by the solution of the SDE serves as a prior measure over sample paths , and our objective is to infer a posterior measure over the paths of the underlying intensity process , given realizations of the Poisson point process observations over a fixed time horizon . This type of inference problem has been studied in the Bayesian filtering literature ( Schuppen , 1977 ; Bain & Crisan , 2008 ; Särkkä , 2013 ) , where it is of particular interest to infer the state of the intensity process at any past time given all count observations till the present time instant ( the resulting posterior is called the smoothing posterior measure ) . In a seminal paper , Snyder ( 1972 ) derived a stochastic partial differential equation ( SPDE ) describing the dynamics of the corresponding posterior density for Cox processes . The solution of this smoothing SPDE requires the computation of an Itô stochastic integral with respect to the counting process . It has long been recognized ( Clark , 1978 ; Davis , 1981 ; 1982 ) that for stochastic smoothing ( and filtering ) theory to be useful in practice , it should be possible to compute smoothing posteriors conditioned on a single observed sample path . However , Itô integrals are not defined pathwise and deriving a pathwise smoothing density is remarkably hard . Thirty years after Snyder ’ s original work , Elliott & Malcolm ( 2005 ) derived a pathwise smoothing SPDE in the form of a coupled system of forward and backward pathwise SPDEs . Nonetheless , solving the system of pathwise SPDEs , or sampling from the corresponding SDE , is still challenging and intractable in general . It is well known , for example , that numerical techniques for solving these SPDEs , such as the finite element method ( FEM ) , suffer from the curse of dimensionality ( Han et al. , 2018 ) . Therefore , it is of considerable interest to find more efficient methods to solve the smoothing SPDE . We take a variational inference approach to computing an approximate smoothing posterior measure . Variational representations of Bayesian posteriors in stochastic filtering and smoothing theory have been developed in considerable generality ; see ( Mitter & Newton , 2003 ) for a rigorous treatment . There are a number of papers that consider the computation of an approximate posterior distribution over the paths of the underlying intensity process that is observed with additive Gaussian noise ( Archambeau et al. , 2007 ; 2008 ; Cseke et al. , 2013 ; Susemihl et al. , 2011 ; Sutter et al. , 2016 ) . Susemihl et al . ( 2011 ) studied Bayesian filtering of Gaussian processes by deriving a differential equation characterizing the evolution of the mean-square error ( MSE ) in estimating the underlying Gaussian process . On the other hand , Sutter et al .
( 2016 ) compute a variational approximation to the smoothing posterior density when the underlying diffusion intensity is observed with additive Brownian noise . They choose their variational family to be a class of SDEs with an analytically computable marginal density . This setting is considerably different from our setting , where the observed process is a point process . Nonetheless , Sutter et al . ( 2016 ) provides methodological motivation for our current study . In the context of the computation of approximate smoothing/filtering posteriors for point process observations , Harel et al . ( 2015 ) developed an analytically tractable approximation to the filtering posterior distribution of a diffusion modulated marked point process under specific modeling assumptions suited for a neural encoding/decoding problem . In general , however , analytical tractability cannot be assured without restrictive assumptions . We present a stochastic variational inference ( SVI ) ( Hoffman et al. , 2013 ) method for computing a variational approximation to the smoothing posterior density . Our approach fixes an approximating family of path measures to those induced by a class of parametrized SPDEs . In particular , we parametrize the drift function of the approximating SPDEs by a neural network with input and output variables matching the theoretical smoothing SPDE . Thereafter , using standard stochastic analysis tools we compute a tractable lower bound to the evidence of observing a sample path of count observations , the so-called evidence lower bound ( ELBO ) . A sample average approximation ( SAA ) to the ELBO is further computed by simulating sample paths from the stochastic differential equation ( SDE ) corresponding to the approximating SPDE . Finally , by maximizing the ELBO , the neural network is trained using stochastic gradient descent ( SGD ) utilizing multiple batches of sample paths of count observations . Note that each sample path of the count observations entails the simulation of a separate SDE . We note that there are many problems in the natural and physical sciences , engineering and operations research where multiple paths of a point process ( over a finite time horizon ) may be obtained . For instance , we present an example in Section 5 modeling the demand for bikes rented during a 24-hour , one-day time period in a bike-sharing platform , where the underlying driving intensity is subject to stochastic variations , and demand information is collected over multiple days . In contrast to the variational algorithm developed in Sutter et al . ( 2016 ) , where the variational lower bound must be re-optimized for new sample paths of the observation process , our variational method is more general and our approximation to the smoothing posterior can be used as a map for another ( unobserved ) sample path of count observations . Our computational approach can also be straightforwardly adapted to solve the problem of interest in Sutter et al . ( 2016 ) . In the subsequent sections , we describe our problem and method in detail and demonstrate the utility of our method with the help of numerical experiments . In particular , we show how the choice of approximating family enables us to use the trained neural network and , in turn , the variational Bayesian smoothing posterior ( VBSP ) , to compute the smoothing SPDE in almost three-quarters of the computational time required to compute the original smoothing SPDE using FEM .
Moreover , we also efficiently generate Monte Carlo samples from the learned VBSP and use them for inference on the bike-sharing dataset , whereas FEM failed to compute either the VBSP or the true smoothing density for the given time-space discretization . 2 PROBLEM DESCRIPTION . Let Nt be a Cox process with unknown stochastic intensity { zt ∈ R_+ , t ∈ [ 0 , T ] } . We use N_{t′,t} to denote a sample path realization of Nt restricted to the interval [ t′ , t ] , and use Nt to denote Nt − N0 ; recall that N0 = 0 by definition . As noted before , a Cox process conditioned on the intensity is a Poisson process . Therefore , given a realized sample path { zt , t ∈ [ 0 , T ] } of the intensity , and for any 0 ≤ t′ < t ≤ T , the marginal likelihood of observing Nt − Nt′ ∈ N counts in ( t′ , t ] is
$$N_t - N_{t'} \sim \mathcal{L}\big(N_t - N_{t'} \,\big|\, \{z_s\}_{t' < s \le t}\big) := \frac{\left(\int_{t'}^{t} z_s\,ds\right)^{N_t - N_{t'}} e^{-\int_{t'}^{t} z_s\,ds}}{(N_t - N_{t'})!}, \qquad (1)$$
where $\mathcal{L}$ denotes the Poisson likelihood . Rather than directly modeling the intensity z , we will bring a little more flexibility to our setting , and assume that zt is a deterministic transformation of another stochastic process xt through a known mapping h : R^d → R_+ , that is , zt = h ( xt ) . Note that the non-negative range of h ensures that the Poisson intensity zt = h ( xt ) is non-negative . Unless xt ∈ R_+ , the mapping h cannot be the identity function . We use the term intensity process to refer to either zt or xt . We model the intensity process { xt ∈ R^d , ∀t ∈ [ 0 , T ] } with the following SDE ,
$$dx_t = b(x_t)\,dt + \sigma(x_t)\,dB_t, \quad \forall t \le T, \quad x_0 = 0, \qquad (2)$$
where b : R^d → R^d is the drift function , σ ( · ) : R^d → R^{d×d} is the diffusion coefficient , and Bt is the d-dimensional Brownian motion ( or Wiener process ) . We assume that there exists a strong solution to the SDE above ( Oksendal , 2013 , Chapter 5 ) . Moreover , we assume that b ( · ) , h ( · ) , and σ ( · ) are fixed by the modeler a priori , and we are interested in inferring the unknown intensity process under their fixed definitions . Incorporating their estimation would obscure our main contribution , and we leave it for future work . The model of the count observations above forms a diffusion modulated Cox process . Diffusion modulated Cox processes are widely used to model the arrival process in various service systems such as call centers , hospitals , airports etc . ( Zhang et al. , 2014 ; Wang et al. , 2020 ) . Zhang & Kou ( 2010 ) use a Gaussian process modulated Cox process to infer proteins ’ conformation ; in particular , they model the arrival rates of the photons collected from a laser excited protein molecule as a Gaussian process . Schnoerr et al . ( 2016 ) model spatio-temporal stochastic systems from systems biology and epidemiology using Cox processes where the intensity is modelled with diffusions . As stated in the introduction , we seek to infer the smoothing posterior measure over the unknown intensity process { xt , t ∈ [ 0 , T ] } using the count observations up to time T . Following terminology from the Bayesian filtering theory ( Särkkä , 2013 ) , we use smoothing to refer to inferring the unobserved intensity process at any past time given the observations up to the current time . Mathematically , the smoothing posterior is defined using the conditional expectation of the form $\mathbb{E}[f(x_t) \mid \sigma(N_u, u \in [0, T])]$ , where $\sigma(N_u, u \in [0, T])$ is the smallest sigma algebra ( or filtration ) generated by the Cox process { Nt } from time 0 to T .
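To make the generative model concrete, here is a small simulation sketch of the diffusion modulated Cox process, under illustrative choices of ours (an Ornstein-Uhlenbeck drift and a softplus link h; neither is prescribed by the paper): Euler-Maruyama for the SDE in equation 2, with time-discretized Poisson counts driven by z_t = h(x_t) as in equation 1.

```python
import numpy as np

def sample_cox_counts(b, sigma, h, T=1.0, n_steps=1000, rng=None):
    """Simulate one path of a diffusion-modulated Cox process.

    Euler-Maruyama for dx = b(x) dt + sigma(x) dB with x_0 = 0, then
    Poisson counts with intensity z_t = h(x_t) on each small interval."""
    rng = np.random.default_rng(rng)
    dt = T / n_steps
    x = 0.0
    xs, counts = [x], [0]
    for _ in range(n_steps):
        x = x + b(x) * dt + sigma(x) * np.sqrt(dt) * rng.standard_normal()
        xs.append(x)
        counts.append(counts[-1] + rng.poisson(h(x) * dt))
    return np.array(xs), np.array(counts)   # latent path and cumulative N_t

# Example: Ornstein-Uhlenbeck prior with softplus link h(x) = log(1 + e^x).
xs, N = sample_cox_counts(b=lambda x: -0.5 * x,
                          sigma=lambda x: 1.0,
                          h=lambda x: np.log1p(np.exp(x)))
```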
For brevity we write $\mathbb{E}[f(x_t) \mid \sigma(N_u, u \in [0, T])]$ as $\mathbb{E}[f(x_t) \mid N_{0,T}]$ . Interested readers may refer to Kutschireiter et al . ( 2020 ) for more details on non-linear filtering theory . We now provide a formal derivation of the smoothing posterior using Bayes ’ theorem ( Bain & Crisan ( 2008 ) ; Elliott & Malcolm ( 2005 ) ) . Observe the conditional expectation satisfies
$$\mathbb{E}[f(x_t) \mid N_{0,T}] = \frac{\mathbb{E}^\dagger[\Lambda_{0,T}\, f(x_t) \mid N_{0,T}]}{\mathbb{E}^\dagger[\Lambda_{0,T} \mid N_{0,T}]} \qquad (3)$$
for any measurable function f ( · ) and $\Lambda_{s,t} := \mathcal{L}(N_{s,t}) / \mathcal{L}^\dagger(N_{s,t})$ for any 0 ≤ s < t ≤ T , where $\mathcal{L}^\dagger$ is the unit intensity Poisson likelihood and $\mathbb{E}^\dagger[\cdot]$ denotes the expectation with respect to $\mathcal{L}^\dagger$ . Note that $\mathcal{L}^\dagger$ does not depend on the stochastic intensity process x and forms a reference measure . The marginal smoothing posterior density is defined as
$$p_t(x \mid N_{0,T}) := \mathbb{P}(x_t \in dx \mid N_{0,T}), \qquad (4)$$
which can be formally obtained from equation 3 by setting $f(x_t) = \mathbb{I}_{\{A\}}(x_t)$ for any $A \subseteq \mathbb{R}^d$ , where $\mathbb{I}_{\{A\}}(y)$ is an indicator function that equals 1 when y ∈ A , otherwise 0 . Now , define the unnormalized filtering density function $\bar{q}_t(x)$ as the function satisfying
$$\mathbb{P}(x_t \in dx \mid N_{0,t}) = \frac{\bar{q}_t(x)\,dx}{\int_{\mathbb{R}^d} \bar{q}_t(\xi)\,d\xi}, \qquad (5)$$
and also define $\bar{v}_t(x) := \mathbb{E}^\dagger[\Lambda_{t,T} \mid N_{0,T}]$ . Then , it can be shown ( Elliott & Malcolm ( 2005 ) ) that for any measurable function f ,
$$\mathbb{E}[f(x_t) \mid N_{0,T}] = \frac{\mathbb{E}^\dagger[\Lambda_{0,T}\, f(x_t) \mid N_{0,T}]}{\mathbb{E}^\dagger[\Lambda_{0,T} \mid N_{0,T}]} = \frac{\int_{\mathbb{R}^d} f(\xi)\,\bar{q}_t(\xi)\,\bar{v}_t(\xi)\,d\xi}{\int_{\mathbb{R}^d} \bar{q}_t(\xi)\,\bar{v}_t(\xi)\,d\xi}. \qquad (6)$$
Next , recalling that h ( · ) is the mapping to ensure the intensity process is positive , define the function $\Psi_t$ for a given sample path of count observations ( i.e. , pathwise ) as
$$\Psi_t := \Psi(h(x), t, N_t) = \exp\big[(1 - h(x))\,t + N_t \log h(x)\big], \quad \forall x \in \mathbb{R}^d.$$
Following Elliott & Malcolm ( 2005 , Theorem 4 ) one may use $\Psi_t$ to derive a coupled system of pathwise SPDEs that characterize $\bar{q}_t(x)$ and $\bar{v}_t(x)$ . In particular , they show that $q_t = \Psi_t^{-1} \bar{q}_t$ is a solution to the following SPDE
$$\partial_t q_t(x) = \Psi_t^{-1}\, \mathcal{L}^*[\Psi_t\, q_t(x)], \quad \forall t \le T, \qquad q_0(x) = \delta_{x_0}(x), \qquad (7)$$
where $\mathcal{L}^*$ is the adjoint of $\mathcal{L}[F(x)] = \frac{1}{2}\sum_{i,j} a_{i,j}(x)\,\partial_{x_i x_j} F(x) + \sum_i b_i(x)\,\partial_{x_i} F(x)$ , which is the infinitesimal generator of the prior process for any twice-differentiable , continuous , and bounded function $F : \mathbb{R}^d \mapsto \mathbb{R}$ and $a(x) = \sigma(x)\sigma(x)^T$ , and $\delta_{x_0}(x)$ is the Dirac delta distribution at $x_0$ . Moreover , they also show that $v_t(x) = \Psi_t\, \bar{v}_t(x)$ satisfies the following backward parabolic equation
$$\partial_t v_t(x) = -\Psi_t\, \mathcal{L}[\Psi_t^{-1}\, v_t(x)], \qquad (8)$$
with terminal condition $v_T(x) = \Psi_T(x)$ . Now it follows from equation 6 that using the solution of these two SPDEs , the marginal smoothing posterior density for any t ∈ [ 0 , T ] satisfies
$$p_t(x \mid N_{0,T}) = \frac{q_t(x)\, v_t(x)}{\int_{\mathbb{R}^d} q_t(\xi)\, v_t(\xi)\,d\xi}. \qquad (9)$$
Using the SPDEs in equations 7 and 8 , together with 9 , it can be shown that the marginal smoothing posterior density $p_t(x \mid N_{0,T})$ satisfies its own SPDE : for any t ∈ [ 0 , T ] ,
$$\partial_t p_t(x \mid N_{0,T}) = -\sum_i \partial_{x_i}\Big[\big\{\big(a(x)\,[\nabla \log(\Psi_t^{-1} v_t(x))]\big)_i + b_i(x)\big\}\, p_t(x \mid N_{0,T})\Big] + \frac{1}{2}\sum_{i,j} \partial_{x_i x_j}\, a_{i,j}(x)\, p_t(x \mid N_{0,T}) \qquad (10)$$
and $p_0(x \mid N_{0,T}) = \delta_{x_0}(x)$ with $x_0 = 0$ . We present a detailed derivation in Appendix A.1 .
Corresponding to this SPDE , there exists a smoothing posterior SDE , defined as
$$d\bar{x}_t = \big\{ a(\bar{x}_t)\,[\nabla \log(\Psi_t^{-1} v_t(\bar{x}_t))] + b(\bar{x}_t) \big\}\,dt + \sigma(\bar{x}_t)\,d\bar{B}_t, \quad \bar{x}_0 = 0, \qquad (11)$$
where $\{\bar{x}_t\}$ is a modification of the process $\{x_t\}$ such that $\bar{B}_t$ is independent of the Cox process $N_t$ ( and thus $B_t$ ) . Observe that the entire sample path of the count observations $N_{0,T}$ is summarized through the pathwise functions $\Psi_t$ and $v_t$ together in the drift term of this SDE . Also note that the diffusion coefficient of the smoothing posterior SDE is precisely the same as that of the prior SDE . The computation of the drift term in the smoothing posterior SDE requires solving equation 8 for $v_t(x)$ , which , in turn , makes the posterior computation challenging and computationally intractable in general . Consequently , the computation of the marginal posterior density ( and hence the path measure ) is intractable as well . Therefore , we propose a variational inference-based method to compute an approximation to the solution of the smoothing posterior SPDE , by computing an approximate solution to the smoothing posterior SDE in equation 11 .
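The following sketch illustrates the shape of the variational procedure described in the introduction, under our own simplifying assumptions (one-dimensional state, scalar diffusion coefficient, Euler-Maruyama discretization): a neural drift that, like the drift of the smoothing posterior SDE in equation 11, takes the state, the observed count path, and time as inputs, and a path simulator whose samples would feed a sample average approximation of the ELBO. The exact ELBO expression is derived in the paper and is not reproduced here.

```python
import torch
import torch.nn as nn

class VariationalDrift(nn.Module):
    """Drift of the approximating SDE; its inputs mirror those of the
    smoothing posterior SDE's drift, which depends on x_t, N_t, and t."""
    def __init__(self, d=1, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d + 2, hidden), nn.Tanh(), nn.Linear(hidden, d))

    def forward(self, x, N_t, t):
        return self.net(torch.cat([x, N_t, t], dim=1))

def simulate_paths(drift, sigma, counts, T=1.0, n_paths=32, d=1):
    """Euler-Maruyama paths of the approximating SDE, conditioned on one
    observed count path; the paths feed a sample-average ELBO estimate."""
    n_steps = len(counts)
    dt = T / n_steps
    x = torch.zeros(n_paths, d)
    paths = [x]
    for k in range(n_steps):
        t = torch.full((n_paths, 1), k * dt)
        N_t = counts[k].reshape(1, 1).expand(n_paths, 1).float()
        noise = torch.randn(n_paths, d) * dt ** 0.5
        x = x + drift(x, N_t, t) * dt + sigma * noise
        paths.append(x)
    return torch.stack(paths)   # (n_steps + 1, n_paths, d)
```

Training would then maximize the SAA of the ELBO over batches of observed count paths with SGD, so that the fitted drift network acts as a map that can be reused on new, unobserved count paths.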
The paper under review proposes a variational inference procedure for a specific class of Cox processes whose intensity is derived from a stochastic differential equation. The methodology relies on restricting candidate solutions to the subset for which the drift depends on $x_t$, $N_t$ and $t$; the drift is then modelled with a neural network. By simulating from the candidate model, a sample average approximation of the ELBO is used to drive a stochastic gradient descent algorithm, optimize the bound, and thus estimate the drift non-parametrically.
SP:0268dac3486fd3de176b7170b12d864092ad856a
On Position Embeddings in BERT
1 INTRODUCTION . Position embeddings ( PEs ) are crucial in Transformer-based architectures for capturing word order ; without them , the representation is bag-of-words . Fully learnable absolute position embeddings ( APEs ) were first proposed by Gehring et al . ( 2017 ) to capture word position in Convolutional Seq2seq architectures . Sinusoidal functions were also used with Transformers to parameterize PEs in a fixed ad hoc way ( Vaswani et al. , 2017 ) . Recently , Shaw et al . ( 2018 ) used relative position embeddings ( RPEs ) with Transformers for machine translation . More recently , in Transformer pretrained language models , BERT ( Devlin et al. , 2018 ; Liu et al. , 2019 ) and GPT ( Radford et al. , 2018 ) used fully learnable PEs . Yang et al . ( 2019 ) modified RPEs and used them in the XLNet pre-trained language model . To our knowledge , the fundamental differences between the various PEs have not been studied in a principled way . We posit that the aim of PEs is to capture the sequential nature of positions in vector space , or technically , to bridge the distances in N ( for positions ) and R^D ( for position vectors ) . We therefore propose three expected properties for PEs : monotonicity , translation invariance , and symmetry1 . Using these properties , we formally reinterpret existing PEs and show the limitations of sinusoidal PEs ( Vaswani et al. , 2017 ) : they cannot adaptively meet the monotonicity property – thus we propose learnable sinusoidal PEs . We benchmark 13 PEs ( including APEs , RPEs , and their combinations ) in GLUE and SQuAD , in a total of 11 individual tasks . Several indicators are devised to quantitatively measure translation invariance , monotonicity , and symmetry , which can be further used to calculate their statistical correlations with empirical performance in downstream tasks . We empirically find that both text classification tasks ( in GLUE ) and span prediction tasks ( SQuAD V1.0 and V2.0 ) can benefit from monotonicity ( in nearby offset ) and translation invariance ( in particular without considering special tokens like [ CLS ] ) , but symmetry decreases performance since it cannot deal with directions between query vectors and key vectors when calculating attentions . Plus , models with unbalanced attention regarding directions ( generally attending more to preceding tokens than to succeeding tokens ) slightly correlate with better performance ( especially for span prediction tasks ) . Experiments also show that the fully-learnable APE performs better in classification , while RPEs perform better in span prediction tasks . This is explained by our proposed properties as follows : RPEs perform better in span prediction tasks since they better satisfy translation invariance , monotonicity , and asymmetry ; the fully-learnable APE , which does not strictly have the translation invariance and monotonicity properties during parameterization ( as it also performed worse in measuring translation invariance and local monotonicity than other APEs and all RPEs ) , still performs well because it can flexibly deal with special tokens ( especially the unshiftable [ CLS ] ) .
1 Informally , as positions are originally positive integers , one may expect position vectors in vector space to have the following properties : 1 ) neighboring positions are embedded closer than faraway ones ; 2 ) distances of two arbitrary m-offset position vectors are identical ; 3 ) the metric ( distance ) itself is symmetric .
Regarding the newly-proposed learnable sinusoidal PEs , the learnable sinusoidal APE satisfies the three properties to a greater extent than other APE variants , and the learnable sinusoidal RPE exhibits better direction awareness than other PE variants . Experiments show that BERT with sinusoidal APEs slightly outperforms the fully-learnable APE in span prediction , but underperforms in classification tasks . Both for APEs and RPEs , learning frequencies in sinusoidal PEs appears to be beneficial . Lastly , sinusoidal PEs can be generalized to treat longer documents because they completely satisfy the translation invariance property , while the fully-learnable APE does not . The contributions of this paper are summarised below : 1 ) We propose three principled properties for PEs that are either formally examined or empirically evaluated by quantitative indicators in a novel Identical Word Probing test ; 2 ) We benchmark 13 PEs ( including APEs , RPEs and their combinations ) in GLUE , SQuAD V1.1 and SQuAD V2.0 , in a total of 11 individual tasks ; 3 ) We experimentally evaluate how the performance in individual tasks benefits from the above properties ; 4 ) We propose two new PEs to extend sinusoidal PEs to learnable versions for APEs/RPEs . 2 PROPERTIES OF POSITION EMBEDDINGS . Gehring et al . ( 2017 ) ; Vaswani et al . ( 2017 ) use absolute word positions as additional features in neural networks . Positions x ∈ N are distributively represented as an embedding of x as an element $\vec{x} \in \mathbb{R}^D$ in some Euclidean space . By standard methods in representation learning , similarity between embedded objects $\vec{x}$ and $\vec{y}$ is typically expressed by an inner product $\langle \vec{x}, \vec{y} \rangle$ , for instance the dot product gives rise to the usual cosine similarity between $\vec{x}$ and $\vec{y}$ . Generally , if words appear close to each other in a text ( i.e. , their positions are nearby ) , they are more likely to determine the ( local ) semantics together , than if they occurred far apart . Hence , positional proximity of words x and y should result in proximity of their embedded representations $\vec{x}$ and $\vec{y}$ . One common way of formalizing this is that an embedding should preserve the order of distances among positions2 . We denote φ ( · , · ) as a function to calculate closeness/proximity between embedded positions , and any inner product can be a special case of φ ( · , · ) with good properties . We can express preservation of the order of distances as : For every x , y , z ∈ N ,
$$|x - y| > |x - z| \implies \phi(\vec{x}, \vec{y}) < \phi(\vec{x}, \vec{z}) \qquad (1)$$
Note that on the underlying space , the property in Eq . ( 1 ) has been studied for almost 60 years ( Shepard , 1962 ) , in both algorithmics ( Bilu & Linial , 2005 ; Badoiu et al. , 2008 ; Maehara , 2013 ) , and machine learning ( Terada & Luxburg , 2014 ; Jain et al. , 2016 ) under the name ordinal embedding . As we are interested in the simple case of positions from N , Eq . ( 1 ) reduces to the following property : Property 1 . Monotonicity : The proximity of embedded positions decreases when positions are further apart :
$$\forall x, m, n \in \mathbb{N}: \quad m > n \iff \phi(\vec{x}, \overrightarrow{x+m}) < \phi(\vec{x}, \overrightarrow{x+n}) \qquad (2)$$
A priori , a position embedding might treat every element of N individually .
2 Theoretical evidence for this is nontrivial unless we assume more about the particular non-linear functions . We empirically find that all learned PEs can preserve the order of distances .
However , considering pairs of positions based on their relative proximity ( rather than the absolute value of the positions ) can lead to simplified and efficient position embeddings ( Wang et al. , 2020 ) . Such embeddings satisfy translation invariance : Property 2 . Translation invariance : The proximity of embedded positions is translation invariant :
$$\forall x_1, \ldots, x_n, m \in \mathbb{N}: \quad \phi(\vec{x}_1, \overrightarrow{x_1+m}) = \phi(\vec{x}_2, \overrightarrow{x_2+m}) = \cdots = \phi(\vec{x}_n, \overrightarrow{x_n+m}) \qquad (3)$$
Finally , since the inner product is symmetric , we also consider whether φ ( · , · ) is symmetric : Property 3 . Symmetry : The proximity of embedded positions is symmetric ,
$$\forall x, y \in \mathbb{N}: \quad \phi(\vec{x}, \vec{y}) = \phi(\vec{y}, \vec{x}) \qquad (4)$$
There is no generally accepted standard set of properties for position embeddings ; based on prior work as described above , we posit that the above properties are important , and now examine several existing PEs in relation to these properties , either formally ( in Sec . 3 ) or empirically ( in Sec . 4 ) . 3 UNDERSTANDING PES VIA THE PROPERTIES . PEs come in two variants : absolute PEs ( APEs ) where single positions are mapped to elements of the representation space , and relative PEs ( RPEs ) where the difference between positions ( i.e. , x − y for x , y ∈ N ) is mapped to elements of the embedding space . For Transformer-based architectures , the difference between APEs and RPEs manifests itself in the attention mechanism , in particular how the matrices of query , key , and value weights $W_Q$ , $W_K$ , and $W_V$ are used to calculate attention in each attention head . Consider two positions x , y ∈ N , let $WE_x$ be the word embedding of the word at position x , and let $P_x$ and $P_{x-y}$ be the embeddings of the position x and relative position x − y , respectively . The query-key-value vectors for the word at position x are typically calculated as below for APEs and RPEs3 respectively :
$$\text{APE:}\ [Q_x, K_x, V_x] = (WE_x + P_x)\,[W_Q, W_K, W_V]; \qquad \text{RPE:}\ [Q_x, K_x, V_x] = WE_x\,[W_Q, W_K, W_V] + [0, P_{x-y}, P_{x-y}] \qquad (5)$$
Observe that while the APE calculation is linear in ( $W_Q$ , $W_K$ , $W_V$ ) with the word and position embeddings merged into the coefficient , the RPE calculation is affine , with the relative position embedding $P_{x-y}$ acting as an offset independent of the word embedding $WE_x$ . In Transformers , the resulting representation is a sum of value vectors with weights depending on $A = QK^T$ , that is , $\mathrm{Attention}(Q, K, V) = \mathrm{softmax}(QK^T / \sqrt{d_k})\,V$ . In the rest of the paper , we examine PEs in the above architecture with respect to the properties introduced in Section 2 . In particular , we study four well-known variants of PEs : ( 1 ) the fully learnable APE ( Gehring et al. , 2017 ) , ( 2 ) the fixed sinusoidal APE ( Vaswani et al. , 2017 ) , ( 3 ) the fully learnable RPE ( Shaw et al. , 2018 ) , and ( 4 ) the fixed sinusoidal RPE ( Wei et al. , 2019 ) . 3.1 UNDERSTANDING SINUSOIDAL PES . With a sinusoidal parameterization in PEs , we may use a specific proximity , i.e. , an efficient inner product like a dot product , to check if the sinusoidal form of PEs meets the above properties . The dot product between any two position vectors is
$$A_{x,y} = \langle \vec{x}, \vec{y} \rangle = \sum_{i=1}^{D/2} \big[\sin(\omega_i x)\sin(\omega_i y) + \cos(\omega_i x)\cos(\omega_i y)\big] = \sum_{i=1}^{D/2} \cos(\omega_i (x - y)) \qquad (6)$$
3 There are many variants of RPEs ( e.g. , ( Dai et al. , 2019 ) ) .
As selecting RPEs is not the main concern in this paper , we give the original ( and typical ) RPEs only . One can easily extend this work to other RPE variants . Note that sinusoidal PEs satisfy both Property 2 ( translation invariance ) , because the inner product is only associated with the position difference x − y , and Property 3 ( symmetry ) , because the dot product itself is symmetric : $\langle \vec{x}, \vec{y} \rangle = \langle \vec{y}, \vec{x} \rangle$ . Note also that checking Property 1 is equivalent to checking monotonicity of the map $\psi(m) = \sum_{i=1}^{D/2} \cos(\omega_i m)$ . $\psi(m)$ is monotone on intervals where its first order derivative $\psi'(m) = -\sum_{i=1}^{D/2} \omega_i \sin(\omega_i m)$ does not change sign , and these intervals depend on the choice of $\omega_i$ . With fixed frequencies $\omega_i = (1/10000)^{2i/D}$ , it is monotonic when m is roughly between 0 and 50 , indicating that it can only strictly perceive a maximum distance of 50 and it is insensitive to faraway distances ( e.g . longer than 50 ) . Although sinusoidal PEs with fixed frequencies ( i.e. , $\omega_i = (1/10000)^{2i/D}$ ) are common in APEs and RPEs , we argue that learning these frequencies is useful because it can adaptively adjust intervals of monotonicity ( they do not have to be 0-50 as in the fixed sinusoidal APE ) 4 . With trainable frequencies , we can adaptively allocate a number of frequencies in a data-driven way . App . A.2 explains the expressive power of sinusoidal PEs with trainable frequencies from the perspective of the Fourier series . Extending existing fixed sinusoidal PEs to a learnable version with learnable frequencies gives two variants : a learnable sinusoidal APE and a learnable sinusoidal RPE .
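A minimal sketch of a learnable sinusoidal APE is given below; the class name, embedding layout, and initialization scheme are our assumptions rather than the paper's implementation. The final assertion checks the dot-product identity of Eq. (6) numerically.

```python
import torch
import torch.nn as nn

class LearnableSinusoidalAPE(nn.Module):
    """Sinusoidal APE whose frequencies are trainable parameters,
    initialized at the fixed values omega_i = (1/10000)^(2i/D)."""
    def __init__(self, dim):
        super().__init__()
        i = torch.arange(dim // 2, dtype=torch.float32)
        self.omega = nn.Parameter((1.0 / 10000.0) ** (2 * i / dim))

    def forward(self, positions):            # positions: (L,) integer tensor
        angles = positions[:, None].float() * self.omega[None, :]
        return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)

# Sanity check of Eq. (6): <p(x), p(y)> = sum_i cos(omega_i * (x - y)).
pe = LearnableSinusoidalAPE(dim=64)
p = pe(torch.arange(10))
lhs = p[7] @ p[3]
rhs = torch.cos(pe.omega * (7 - 3)).sum()
assert torch.allclose(lhs, rhs, atol=1e-5)
```

Because the frequencies are a Parameter, gradient descent can reshape the intervals on which the proximity map is monotonic, which is exactly the flexibility argued for above.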
The paper presents a systematic analysis of approaches used to encode position information in transformers and, in particular, BERT-based models. The paper investigates absolute and relative position embedding strategies that use either fixed/learnable sinusoidal or fully learnable position embeddings. These embeddings are characterized based on different properties that are either inherent in their formulation or observed empirically, such as monotonicity, translation invariance, and symmetry. Interestingly, these properties appear to emerge naturally when APEs and RPEs have learnable parameters.
SP:c653e54cd37cd4f661b12551c59344dbdfbb8329
Improving Calibration through the Relationship with Adversarial Robustness
1 Introduction . The robustness of machine learning algorithms is becoming increasingly important as ML systems are being used in higher-stakes applications . In one line of research , neural networks are shown to lack adversarial robustness – small perturbations to the input can successfully fool classifiers into making incorrect predictions ( Szegedy et al. , 2014 ; Goodfellow et al. , 2014 ; Carlini & Wagner , 2017b ; Madry et al. , 2017 ; Qin et al. , 2020b ) . In largely separate lines of work , researchers have studied uncertainty in models ’ predictions . For example , models are often miscalibrated , where the predicted confidence is not indicative of the true likelihood of the model being correct ( Guo et al. , 2017 ; Thulasidasan et al. , 2019 ; Lakshminarayanan et al. , 2017 ; Wen et al. , 2020 ; Kull et al. , 2019 ) . The calibration issue is exacerbated when models are asked to make predictions on data different from the training distribution ( Snoek et al. , 2019 ) , which becomes an issue in practical settings where it is important that we can trust model predictions under distributional shift . Despite robustness , in all its forms , being a popular area of research , the relationship between these perspectives has not been extensively explored previously . In this paper , we study the correlation between adversarial robustness and calibration . We discover that input data points that are sensitive to small adversarial perturbations ( are easily attacked ) are more likely to have poorly calibrated predictions . This holds true on a number of network architectures for classification and on all the datasets that we consider : CIFAR-10 ( Krizhevsky , 2009 ) , CIFAR-100 ( Krizhevsky , 2009 ) and ImageNet ( Russakovsky et al. , 2015 ) . This suggests that the miscalibrated predictions on those adversarially unrobust data points greatly degrade the performance of model calibration . Based on this insight , we hypothesize and study whether calibration can be improved by giving different supervision to the model depending on the adversarial robustness of each training example . To this end , we propose Adversarial Robustness based Adaptive Label Smoothing ( AR-AdaLS ) to integrate the correlations between adversarial robustness and calibration into training . Specifically , AR-AdaLS adaptively smooths the training labels conditioned on how vulnerable an input is to adversarial attacks . Our method improves label smoothing ( Szegedy et al. , 2014 ) by explicitly teaching the model to differentiate the training data according to their adversarial robustness and then adaptively smooth their labels . By giving different supervision to the training data , our method leads to better calibration of the model without an increase in latency during inference . In addition , since adversarially unrobust data points can be considered as outliers of the underlying data distribution ( Carlini et al. , 2019 ) , our method can even greatly improve model calibration on held-out shifted data . Further , we propose “ AR-AdaLS of Ensemble ” to combine our AR-AdaLS and deep ensembles ( Lakshminarayanan et al. , 2017 ; Snoek et al. , 2019 ) , to further improve the calibration performance under distributional shift . Last , we find an additional benefit of AR-AdaLS is improving model stability ( i.e.
, decreasing variance over multiple independent runs ) , which is valuable in practical applications where changes in predictions across runs ( churn ) are problematic . In summary , our main contributions are as follows : • Relationship among Robustness Metrics : We find a significant correlation between adversarial robustness and calibration : inputs that are unrobust to adversarial attacks are more likely to have poorly calibrated predictions . • Algorithm : We hypothesize that training a model with different supervision based on the adversarial robustness of each input will make the model better calibrated . To this end , we propose AR-AdaLS to automatically learn how much to soften the labels of training data based on their adversarial robustness . Further , we introduce “ AR-AdaLS of Ensemble ” to show how to apply AR-AdaLS to an ensemble model . • Experimental Analysis : On CIFAR-10 , CIFAR-100 and ImageNet , we find that AR-AdaLS is more effective than previous label smoothing methods in improving calibration , particularly for shifted data . Further , we find that while ensembling can be beneficial , applying AR-AdaLS to adaptively calibrate ensembles offers further improvements in calibration . 2 Related Work . Uncertainty estimates How to better estimate a model ’ s predictive uncertainty is an important research topic , since many models with a focus on accuracy may fall short in predictive uncertainty . A popular way to improve a model ’ s predictive uncertainty is to make the model well-calibrated , e.g. , post-hoc calibration by temperature scaling ( Guo et al. , 2017 ) , and multi-class Dirichlet calibration ( Kull et al. , 2019 ) . In addition , Bayesian neural networks , through learning a posterior distribution over network parameters , can also be used to quantify a model ’ s predictive uncertainty , e.g. , Graves ( 2011 ) ; Blundell et al . ( 2015 ) ; Welling & Teh ( 2011 ) . Dropout-based variational inference ( Gal & Ghahramani , 2016 ; Kingma et al. , 2015 ) can help DNN models make less over-confident predictions and be better calibrated . Recently , mixup training ( Zhang et al. , 2018 ) has been shown to improve both models ’ generalization and calibration ( Thulasidasan et al. , 2019 ) , by preventing the model from being over-confident in its predictions . Despite the success of improving uncertainty estimates over in-distribution data , Snoek et al . ( 2019 ) argue that it does not usually translate to a better performance on data that shift from the training distribution . Among all the methods evaluated by Snoek et al . ( 2019 ) under distributional shift , ensembles of deep neural networks ( Lakshminarayanan et al. , 2017 ) are shown to be the most robust to dataset shift , producing the best uncertainty estimates . Adversarial robustness On the other hand , machine learning models are known to be brittle ( Xin et al. , 2017 ) and vulnerable to adversarial examples ( Athalye et al. , 2018 ; Carlini & Wagner , 2017a , b ; He et al. , 2018 ; Qin et al. , 2020a ) . Many defenses have been proposed to improve models ’ adversarial robustness ( Song et al. , 2017 ; Yang et al. , 2019 ; Goodfellow et al. , 2018 ) ; however , they are subsequently broken by more advanced defense-aware attacks ( Carlini & Wagner , 2017b ; Athalye et al. , 2018 ) . Recently , Carlini et al . ( 2019 ) ; Stock & Cissé ( 2018 ) define adversarial robustness as the minimum distance in the input domain required to change the model ’ s output prediction by constructing an adversarial attack .
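As a rough illustration of this notion of robustness, the sketch below estimates the smallest l2 perturbation norm that flips a classifier's prediction by sweeping over perturbation radii. It is a crude stand-in for the CW attack used later in the paper, with hyperparameters (radius grid, step count, learning rate) chosen by us for illustration.

```python
import torch

def robustness_l2(model, x, y, steps=20, lr=0.1):
    """Crude adversarial-robustness estimate: the smallest l2 norm of a
    perturbation delta (found by gradient descent on the classification
    margin) that changes the prediction on a single input x of label y."""
    for eps in torch.linspace(0.05, 2.0, 40):
        delta = torch.zeros_like(x, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            logits = model(x + delta)
            wrong = logits.clone()
            wrong[0, y] = float("-inf")
            loss = logits[0, y] - wrong.max()   # minimize the true-class margin
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():               # project onto the eps-ball
                norm = delta.norm()
                if norm > eps:
                    delta.mul_(eps / norm)
        with torch.no_grad():
            if model(x + delta).argmax(dim=1).item() != y:
                return eps.item()               # first radius where attack works
    return float("inf")                         # robust within the searched range
```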
The most closely related work, Carlini et al. (2019), makes the interesting observation that easily attackable data points are often outliers in the underlying data distribution, and uses adversarial robustness to determine an improved ordering for curriculum learning. Our work, instead, explores the relationship between adversarial robustness and calibration. In addition, we use adversarial robustness as an indicator to adaptively smooth the training labels and thereby improve model calibration.

Label smoothing. Label smoothing was originally proposed in Szegedy et al. (2016) and is shown to be effective in improving the quality of uncertainty estimates in Müller et al. (2019); Thulasidasan et al. (2019). Instead of minimizing the cross-entropy loss between the predicted probability p̂ and the one-hot label p, label smoothing minimizes the cross-entropy between the predicted probability and a softened label p̃ = p(1 − ε) + ε/Z, where Z is the number of classes in the dataset and ε is a hyperparameter that controls the degree of smoothing. Our work makes label smoothing adaptive and incorporates the correlation with adversarial robustness to further improve calibration.

3 Correlations between Adversarial Robustness and Calibration.

To explore the relationship between adversarial robustness and calibration, we first introduce the metrics used to evaluate each of them (arrows indicate which direction is better).

Adversarial robustness ↑. Adversarial robustness measures the minimum distance in the input domain required to change the model's output prediction by constructing an adversarial attack (Carlini et al., 2019; Stock & Cissé, 2018). Specifically, given an input x and a classifier f(·) that predicts the class of the input, the adversarial robustness is defined as the minimum adversarial perturbation δ such that f(x + δ) ≠ f(x). Following Carlini et al. (2019), we construct the ℓ2-based CW attack (Carlini & Wagner, 2017b) and then use the ℓ2 norm of the adversarial perturbation, ‖δ‖₂, to measure the distance to the decision boundary. A more adversarially robust input therefore requires a larger adversarial perturbation to change the model's prediction.

Expected calibration error ↓. Model calibration measures the alignment between the predicted probability and the accuracy. Well-calibrated predictions convey how much we should trust a model's prediction. We follow the widely used expected calibration error (ECE) to measure the calibration performance of a network (Guo et al., 2017; Snoek et al., 2019). To compute the ECE, we first divide all the data into K buckets sorted by the predicted probability (confidence) of the predicted class. Let B_k denote the set of data points in the k-th confidence bucket. The accuracy and confidence of B_k are defined as acc(B_k) = (1/|B_k|) Σ_{i∈B_k} 1(ŷ_i = y_i) and conf(B_k) = (1/|B_k|) Σ_{i∈B_k} p̂_i^{ŷ_i}, where ŷ and y denote the predicted class and the true class respectively, and p̂^{ŷ} is the predicted probability of ŷ. The ECE is then defined as ECE = Σ_{k=1}^{K} (|B_k|/N) |acc(B_k) − conf(B_k)|, where N is the number of data points.
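To make the metric concrete, here is a minimal sketch of the ECE computation just defined; the equal-width confidence binning and variable names are illustrative assumptions rather than the authors' reference implementation.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_buckets=15):
    """ECE as defined above: probs is an (N, Z) array of predicted
    probabilities, labels is an (N,) array of integer class labels."""
    confidences = probs.max(axis=1)      # predicted-class probability
    predictions = probs.argmax(axis=1)
    accuracies = (predictions == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_buckets + 1)
    ece, n = 0.0, len(labels)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bucket = (confidences > lo) & (confidences <= hi)
        if in_bucket.any():
            acc = accuracies[in_bucket].mean()    # acc(B_k)
            conf = confidences[in_bucket].mean()  # conf(B_k)
            ece += (in_bucket.sum() / n) * abs(acc - conf)
    return ece
```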
3.1 Correlations.

From these evaluation metrics, we can see that adversarial robustness and calibration measure quite different properties: adversarial robustness measures a property of the data by computing the adversarial perturbation δ in the input domain, while the calibration metric measures properties of the model's predicted probability in the output space. Although adversarial robustness and calibration are conceptually different, they are both connected to the decision boundary. Specifically, adversarial robustness can be used to measure the distance to the decision boundary: if a data point is adversarially unrobust, i.e., a small input perturbation suffices to fool the classifier into a wrong classification, then this data point is close to the decision boundary. Meanwhile, models should make relatively less confident predictions on data points close to the decision boundary. However, as pointed out by Guo et al. (2017); Snoek et al. (2019), existing deep neural networks are frequently over-confident, i.e., they make high-confidence predictions even when they should be uncertain. Taking these two observations together, we hypothesize that examples that are easily attacked by adversarial perturbations are also poorly calibrated. To test this, we perform experiments on the clean test sets of three datasets: CIFAR-10 (Krizhevsky, 2009), CIFAR-100 (Krizhevsky, 2009) and ImageNet (Russakovsky et al., 2015) with different networks, whose architectures and accuracies are shown in Table 1. We refer to these models as "Vanilla" for each dataset in the following discussion. The details for training each vanilla network are given in Appendix A. To explore the relationship between adversarial robustness and calibration, we start with the relationship between adversarial robustness and confidence together with accuracy. Specifically, we rank the input data according to their adversarial robustness and then divide the dataset into R equally-sized subsets (R = 10 in this paper). For each adversarial robustness subset, we compute the accuracy and the average confidence score of the predicted class. As shown in the first row of Figure 1, both accuracy and confidence increase with the adversarial robustness of the input data, and confidence is consistently higher than accuracy in each adversarial robustness subset across all three datasets. This indicates that although vanilla classification models achieve state-of-the-art accuracy, they tend to give over-confident predictions, especially on adversarially unrobust data points. Taking one step further, we compute the expected calibration error (ECE) within each adversarial robustness subset, shown in the bottom row of Figure 1. In general, we find that data points falling into lower adversarial robustness levels are more likely to be over-confident and less well calibrated (larger ECE). For adversarially robust examples, there is a better alignment between the model's predicted confidence and accuracy, and the ECE over those examples is close to 0. This validates our hypothesis: inputs that are adversarially unrobust are more likely to have poorly calibrated predictions. On the larger-scale ImageNet, while the general trend still holds, the least adversarially robust examples are relatively well calibrated. We hypothesize this may be due to the larger training set and less overfitting.
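The bucketing analysis described above can be sketched as follows, assuming adversarial robustness scores (e.g., ‖δ‖₂ from the CW attack) have already been computed for every example; the function and variable names are hypothetical.

```python
import numpy as np

def per_robustness_subset_stats(robustness, probs, labels, n_subsets=10):
    """Rank examples by adversarial robustness, split into equally-sized
    subsets, and report accuracy vs. confidence within each subset."""
    order = np.argsort(robustness)              # least robust first
    subsets = np.array_split(order, n_subsets)
    stats = []
    for idx in subsets:
        conf = probs[idx].max(axis=1).mean()
        acc = (probs[idx].argmax(axis=1) == labels[idx]).mean()
        stats.append({"confidence": conf, "accuracy": acc,
                      "overconfidence": conf - acc})
    return stats
```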
Furthermore, we also find an interesting correlation between adversarial robustness and model stability, which we measure by the variance of the predicted probability across M independent runs (e.g., M = 5). The variance is computed as σ² = (1/(M−1)) (1/N) Σ_{m=1}^{M} Σ_{i=1}^{N} (p̂_{m,i} − p̄_i)², where p̂_{m,i} is the m-th model's predicted probability on the i-th data point and p̄_i = (1/M) Σ_{m=1}^{M} p̂_{m,i} is the average predicted probability over the M runs. As shown in the bottom row of Figure 1, adversarially unrobust examples tend to have much higher variance across all three datasets. This indicates that inputs that are unrobust to adversarial attacks are more likely to have unstable predictions.

Algorithm 1: Training procedure for AR-AdaLS.
Input: number of classes Z, number of training epochs T, number of adversarial robustness subsets R, learning rate of adaptive label smoothing α. For each adversarial robustness training subset, initialize the soft label as the one-hot label p̃_{r,1} = p_r, so the initial soft label of the correct class is p̃^{z=y}_{r,1} = 1.
for t = 1 to T do
  Minimize the cross-entropy loss between soft labels and predicted probabilities, (1/R) Σ_{r=1}^{R} L(p̃_{r,t}, p̂_{r,t})
  for r = 1 to R do
    Update p̃^{z=y}_{r,t+1} ← p̃^{z=y}_{r,t} − α · {conf(S^{val}_r)_t − acc(S^{val}_r)_t}   ⊲ Eqn. (3)
    Clip p̃^{z=y}_{r,t+1} to lie within (1/Z, 1]
    Update ε_{r,t+1} ← (p̃^{z=y}_{r,t+1} − 1) · Z/(1 − Z)   ⊲ Eqn. (4)
    Update p̃_{r,t+1} ← p_r(1 − ε_{r,t+1}) + ε_{r,t+1}/Z   ⊲ Eqn. (1)
  end for
end for

Taken together, these empirical results build a connection between very different concepts. In particular, adversarial robustness is measured in the input domain, while both calibration and stability are measured in the output space. Given this strong empirical connection, we now ask: can we improve model calibration and stability by targeting adversarially unrobust examples?
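A minimal sketch of the per-subset update in Algorithm 1 is given below; it assumes the per-subset validation confidence and accuracy have already been computed, and the array layout (true class stored in slot 0) is an illustrative assumption.

```python
import numpy as np

def ar_adals_update(soft_correct, val_conf, val_acc, alpha, Z):
    """One adaptive step of Algorithm 1 across the R robustness subsets.
    soft_correct: (R,) current correct-class targets p~^{z=y}_r.
    val_conf, val_acc: (R,) per-subset validation confidence and accuracy."""
    R = len(soft_correct)
    # Eqn. (3): lower the correct-class target where the subset is over-confident.
    soft_correct = soft_correct - alpha * (val_conf - val_acc)
    soft_correct = np.clip(soft_correct, 1.0 / Z + 1e-8, 1.0)
    # Eqn. (4): recover the per-subset smoothing strength epsilon_r.
    eps = (soft_correct - 1.0) * Z / (1.0 - Z)
    # Eqn. (1): rebuild the smoothed label vector for each subset.
    labels = np.ones((R, Z)) * (eps / Z)[:, None]
    labels[np.arange(R), 0] = soft_correct  # slot 0 stands in for the true class
    return soft_correct, eps, labels
```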
This paper proposes a new method (AR-AdaLS) for label smoothing to improve deep network calibration. In particular, the authors draw a connection between lack of calibration (overconfidence) and examples which are prone to adversarial attacks. They show that by generating smoothed targets based on the adversarial robustness of an example, they can further improve model calibration beyond traditional label smoothing.
SP:34177dc9d2e81610d167b996c3f106327c666f94
MIROSTAT: A NEURAL TEXT DECODING ALGORITHM THAT DIRECTLY CONTROLS PERPLEXITY
1 INTRODUCTION. Large-scale generative language models (LMs) have received recent attention due to their high-quality open-ended text generation ability (Brown et al., 2020; Radford et al., 2019). Generating text from these LMs usually relies on some form of random sampling. Pure sampling often leads to incoherent and low-quality texts (Holtzman et al., 2018), whereas greedy decoding leads to excessive repetition, another form of low quality. The right decoding algorithm is needed to generate high-quality texts with controlled attributes (Ippolito et al., 2020; Zhang et al., 2020; Ippolito et al., 2019). We introduce mirostat (the word is derived from mirum, Latin for surprise, and stat, meaning control), a neural text decoding algorithm that actively controls the generative process to maintain the perplexity of generated text at a desired value. Mirostat uses an adaptive top-k sampling algorithm to actively tune the value of k, which helps maintain the overall perplexity of the text; recall that top-k sampling (Holtzman et al., 2018; Fan et al., 2018) is where the next word is sampled from the top k most probable choices. Top-k sampling and several other recent sampling methods are motivated by suppressing an unreliable tail in the probability distribution of trained LMs. Another sampling method is top-p, also known as nucleus sampling, where the next word is chosen from the top x probable choices, where x is the smallest integer such that their cumulative probability mass is at least p (Holtzman et al., 2020). While top-k sampling involves a fixed number of most probable choices, top-p sampling involves a dynamic number of choices based on a fixed p value and shows better statistical and human-evaluated performance. For small values of k and p, these sampling methods unfortunately repeat phrases in the generated text. This can be handled by penalizing repetitions and using appropriate temperature values (Keskar et al., 2019) or by adding diversity to the generated text (Zhang et al., 2020; Vijayakumar et al., 2018). On the other hand, large values of k and p can lead to incoherent texts similar to pure sampling. Although choosing appropriate values of p or k can avoid repetition and incoherence, this involves ad hoc tuning of parameters. Even for a fixed value of p or k, the generated text can have varying statistical properties. Intriguingly, as we demonstrate via Example 1 in Appendix A, small values of a certain perplexity statistic of generated texts called surprise (Def. 1) are closely linked to repetition, and large values of surprise are linked to incoherence. Perplexity is a statistical metric used to evaluate the quality of neural text generation, and is closely related to average surprise, as shown in Fig. 7 in Appendix A and formalized in Sec. 2. A large-scale human subject experiment by Zhang et al. (2020) showed that human-evaluated quality is closely related to the likelihood of the generated text for a fixed number of tokens: reducing perplexity increases quality up to a point, after which quality starts dropping. (This work was funded in part by the IBM-Illinois Center for Cognitive Computing Systems Research (C3SR), a research collaboration as part of the IBM AI Horizons Network, and by National Science Foundation Grant CCF-1717530.)
This implies that good control over the perplexity of the generated text would give direct control over its quality (as evaluated by humans). Generating text at an appropriately chosen target perplexity value may therefore maximize the quality of generated text. Ergo mirostat. We now summarize our key contributions. Sec. 3 shows theoretically how cross-entropy, and hence perplexity, grows in top-k and top-p sampling as a function of k and p respectively, which was previously unknown. Sec. 4 introduces mirostat sampling, which outputs text with a predetermined target perplexity value. Although perplexity may not fully capture the quality of text (Hashimoto et al., 2019), much of the literature discusses its correlation with quality (Zhang et al., 2020); hence, our algorithm for controlling perplexity helps generate high-quality text. Sec. 5.1 experimentally shows large fluctuations in the cross-entropy rates of top-k and top-p sampling as a function of their input parameters, which makes them unable to control the perplexity of the output text. Sec. 5.2 shows that repetition is closely related to the perplexity of the generated text, mostly independent of the sampling method but slightly dependent on the LM used. Sec. 5.3 experimentally shows that mirostat sampling avoids both the boredom and confusion traps for a wide range of target perplexity values. Sec. 5.4 provides our own experiments with human raters that demonstrate mirostat's efficacy for fluency, coherence, and overall quality.

1.1 RELATED WORK.

Sampling from distorted probability distributions. Pure sampling from LMs often leads to incoherent text, whereas greedy decoding leads to repetition. Distorting probability distributions, as in top-k, top-p, or temperature sampling, helps improve the quality of generated texts if parameters are properly tuned (Holtzman et al., 2018; Fan et al., 2018; Holtzman et al., 2020). Tuning these methods, however, is ad hoc and does not provide good control over the statistics of the output. Our method uses statistics of previously generated tokens as input to generate the next token, distorting the probability distribution so as to control the overall statistics of the generated text. This ability to control the perplexity of the output is a key advantage of our method over previous work. Combined with the relation between perplexity and human-evaluated quality observed by Zhang et al. (2020), this yields text with better quality control.

Controllable text generation. Controllable text generation has often focused on the semantics of the output text, as in LMs like CTRL (Keskar et al., 2019) and sampling algorithms like plug-and-play LM (Dathathri et al., 2020) and constrained sentence generation by Metropolis-Hastings (Miao et al., 2019). In contrast, our approach is purely statistical, guiding the decoder along a desired statistical path that addresses issues with pure sampling and greedy decoding.

Quality-diversity tradeoff. Top-k, top-p, and low-temperature sampling improve the quality of the text, but at the cost of reduced diversity. Applications like question answering only demand high-quality generation, but open-ended tasks such as story generation demand diversity too. Li et al. (2016); Vijayakumar et al. (2018); Kulikov et al. (2019) propose variants of beam search to induce diversity in generated text. However, Zhang et al.
(2020) observe a tradeoff between quality and diversity; they further observe that diversity is closely related to entropy, whereas quality is maximized in a certain range of observed likelihood values for fixed-length sentences. Our algorithm controls the observed cross-entropy, the observed likelihood per token of generated text. Hence, by maintaining the observed cross-entropy in a certain range, we can ensure high-quality text generation.

Repetitions. Greedy decoding from LMs often leads to texts with excessive repetition at both the token and sentence level. Several techniques have been proposed to address this. Token loss dynamic reweighting (TLDR) hypothesizes that some tokens are more difficult to learn than others, so reweighting tokens during learning can balance things to reduce repetition (Jiang et al., 2020). Keskar et al. (2019) use a repetition penalty in decoding to reduce repetition of tokens. Welleck et al. (2020) suggest that the cause of repetition is a flaw in the training objective itself, and use a new objective that assigns lower probability to unlikely sequences, including texts with many repetitions. Variants of top-k sampling and the repetition penalty of Keskar et al. (2019) were used earlier by Foster & White (2007) to reduce repetition. Here, we demonstrate a near-linear relation between repetition and observed cross-entropy, and so we directly control repetition by controlling observed cross-entropy.

2 SURPRISE, CROSS-ENTROPY, AND PERPLEXITY.

Here we formally define surprise, cross-entropy, and perplexity. For a random variable X ∈ 𝒳 distributed as P, the surprisal associated with an instance x of X is defined as −log P(x) (Han & Kobayashi, 2007). Hence, less probable instances are more surprising than more probable instances. Extending the definition to conditional random variables, we next define the surprise associated with tokens and sentences with respect to generated text for a fixed model distribution P_M.

Definition 1. The surprise value of a token X with respect to generated text X_{<i} and model distribution P_M for some fixed model M is S_M(X | X_{<i}) = −log P_M(X | X_{<i}).

We will soon see that this quantity is directly related to perplexity. Next we define the average surprise of a sentence X with n tokens.

Definition 2. For a sentence X^n = (X_1, …, X_n) with n tokens, the surprise rate with respect to a probability distribution P_M for some model M is S_M(X^n) = −(1/n) Σ_{i=1}^{n} log P_M(X_i | X_{<i}).

The cross-entropy of a discrete random variable X ∈ 𝒳 distributed as P_M with respect to a discrete random variable Y ∈ 𝒴 distributed as P_N, with 𝒴 ⊆ 𝒳, is H(P_N, P_M) = −Σ_{y∈𝒴} P_N(y) log P_M(y) = E_{P_N}[S_M(Y)]. The cross-entropy rate of a stochastic process X = {X_i}, X_i ∈ 𝒳, distributed as P_M with respect to a stochastic process Y = {Y_i}, Y_i ∈ 𝒴, distributed as P_N with 𝒴 ⊆ 𝒳, is defined as H(P_N, P_M) = lim_{n→∞} E_{P_N}[S_M(Y^n)], when the limit exists. Further, if Y^n is sampled from P_N and P_N is a stationary ergodic source, then by the Shannon-McMillan-Breiman theorem (Cover & Thomas, 2006, Thm. 16.8.1) we have lim_{n→∞} S_M(Y^n) = H(P_N, P_M), when the limit exists. The perplexity corresponding to H(P_N, P_M) is simply PPL(P_N, P_M) = 2^{H(P_N, P_M)}, following Brown et al. (1992); Varshney et al. (2020). In experiments, when the text is generated using P_N, we approximate H(P_N, P_M) by S_M(Y^n) for a sentence of length n. This is because natural language exhibits the stationary ergodic property (Manning & Schutze, 1999). Perplexity denotes how close P_N is to P_M: the lower the perplexity, the closer the distributions P_N and P_M.
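As a concrete illustration of Definitions 1 and 2, the following sketch computes per-token surprise, the surprise rate, and the corresponding perplexity from a model's base-2 token log-probabilities; the interface is an assumption made for illustration.

```python
import math

def surprise(logprob_base2):
    """Def. 1: surprise of a token is -log2 P_M(token | context)."""
    return -logprob_base2

def surprise_rate(logprobs_base2):
    """Def. 2: average surprise of a sentence, S_M(X^n)."""
    return sum(-lp for lp in logprobs_base2) / len(logprobs_base2)

def perplexity(logprobs_base2):
    """PPL = 2^H, approximating H(P_N, P_M) by the surprise rate."""
    return 2.0 ** surprise_rate(logprobs_base2)

# Example: a 4-token sentence whose tokens had probabilities 1/2, 1/4, 1/2, 1/8.
lps = [math.log2(p) for p in (0.5, 0.25, 0.5, 0.125)]
print(surprise_rate(lps))  # 1.75 bits/token
print(perplexity(lps))     # ~3.36
```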
3 THEORETICAL ANALYSIS OF SAMPLING METHODS.

Here we summarize theoretical results for the different sampling methods; details and proofs are in App. B. Zipf's law states that the frequency of occurrence of any word in the vocabulary is inversely proportional to its rank in the frequency table (Zipf, 1965; Powers, 1998). More precisely, for a vocabulary of size N = |V|, the frequency of the i-th most probable word is p(i; s, N) = 1/(i^s H_{N,s}), (1) where s is an exponent characterizing the distribution and H_{N,s} = Σ_{n=1}^{N} 1/n^s is the N-th generalized harmonic number. Further, for human languages the exponent s is very close to 1; hence, when required, we write s = 1 + ε for some small ε > 0. For all of our theoretical analysis, we assume the sampled words follow Zipf's law. First we summarize results for top-k sampling. Thm. 1 shows that S(k) grows steeply for small values of k, but grows very slowly for large values of k. Thm. 2 computes an approximation for H(P_{M_k}, P_M); Fig. 1a shows this approximation is very good. Since H(P_{M_k}, P_M) does not grow much beyond k = 2000, it makes sense to tune k between 1 and 2000 to obtain a desired cross-entropy. Next we summarize the results for top-p sampling. Thm. 3 proves that S(p) behaves near-linearly in p. Further, Thm. 4 provides approximate expressions for H(P_{M_p}, P_M) showing that H(P_{M_p}, P_M) grows approximately linearly with p; this approximate linearity is also shown in Fig. 1b. This is in contrast to top-k sampling, where H(P_{M_k}, P_M) is highly nonlinear. Temperature is used to suitably distort the original distribution so as to generate samples that avoid the problems associated with pure sampling. In particular, lowering the temperature makes the sampling greedier. For a given temperature T > 0, the frequency of the k-th most probable word in (1) is given by p(k; s, N, T) = 1/(k^{s/T} H_{N,s/T}) = p(k; s/T, N). Hence the effect of temperature in our analysis is captured simply by replacing s with s/T.
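To make this setup concrete, the sketch below constructs the Zipfian distribution of Eq. (1) and runs a mirostat-style feedback loop that adapts k so the observed surprise tracks a target value; the proportional update rule and constants are illustrative assumptions, not the exact algorithm of Sec. 4.

```python
import numpy as np

def zipf_probs(N, s=1.1):
    """Eq. (1): p(i; s, N) = 1 / (i^s * H_{N,s})."""
    ranks = np.arange(1, N + 1)
    weights = ranks ** (-s)
    return weights / weights.sum()

def adaptive_topk_decode(probs, target_surprise, steps=50, lr=0.5, rng=None):
    """Feedback loop: sample from the top-k renormalized distribution and
    nudge k so the average observed surprise (in bits) approaches the target."""
    rng = rng or np.random.default_rng(0)
    order = np.argsort(-probs)
    k, surprises = 100, []
    for _ in range(steps):
        topk = probs[order[:k]] / probs[order[:k]].sum()
        tok = rng.choice(k, p=topk)
        s_obs = -np.log2(topk[tok])
        surprises.append(s_obs)
        # Proportional control (illustrative): grow k if text is too predictable.
        k = int(np.clip(k * 2 ** (lr * (target_surprise - s_obs)), 1, len(probs)))
    return np.mean(surprises), k

probs = zipf_probs(50_000, s=1.1)
print(adaptive_topk_decode(probs, target_surprise=6.0))
```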
Neural text generation models typically rely on sampling schemes for autoregressive decoding, ranging from pure sampling and top-k/top-p sampling to temperature-modulated sampling. These methods are mostly heuristic schemes and lack theoretical analysis. This paper tries to fill that gap by analyzing these schemes theoretically under a Zipfian distribution assumption (a distribution that underlies natural language corpora and generally holds for open-ended language generation models). While filling these theoretical gaps, this work proposes an adaptive top-k decoding mechanism, Mirostat, based on the understanding that cross-entropy is a useful measure of the quality of the generated text.
SP:e1a78b637ef015d15ae3283f6bd3299e5244d457
Learning Aggregation Functions
1 INTRODUCTION. The need to aggregate representations is ubiquitous in deep learning. Some recent examples include max-over-time pooling used in convolutional networks for sequence classification (Kim, 2014), average pooling of neighbors in graph convolutional networks (Kipf & Welling, 2017), and max-pooling in Deep Sets (Zaheer et al., 2017), in (generalized) multi-instance learning (Tibo et al., 2017) and in GraphSAGE (Hamilton et al., 2017). In all the above cases (with the exception of LSTM-pooling in GraphSAGE) the aggregation function is predefined, i.e., not tunable, which may in general be a disadvantage (Ilse et al., 2018). Sum-based aggregation has been advocated based on theoretical findings showing that permutation-invariant functions can be sum-decomposed (Zaheer et al., 2017; Xu et al., 2019). However, recent results (Wagstaff et al., 2019) showed that this universal function representation guarantee requires either highly discontinuous (and thus poorly learnable) mappings, or a latent dimension equal to the maximum number of elements in the set. This suggests that learning set functions that are accurate on sets of large cardinality is difficult. Inspired by previous work on learning uninorms (Melnikov & Hüllermeier, 2016), we propose a new parametric family of aggregation functions that we call LAF, for learning aggregation functions. A single LAF unit can approximate standard aggregators like sum, max or mean, as well as model intermediate behaviours (possibly different in different areas of the space). In addition, LAF layers with multiple aggregation units can approximate higher-order moments of distributions, like variance, skewness or kurtosis. In contrast, other authors (Corso et al., 2020) suggest employing a predefined library of elementary aggregators to be combined. Since LAF can represent sums, it can be seen as a smooth version of the class of functions that are shown in Zaheer et al. (2017) to enjoy universality results in representing set functions. The hope is that, being smoother, LAF is more easily learnable. Our empirical findings show that this can indeed be the case, especially when asking the model to generalize over large sets. In particular, in this paper we offer an extensive experimental analysis showing that: • LAF layers can learn a wide range of aggregators (including higher-order moments) on sets of scalars without background knowledge on the nature of the aggregation task; • LAF layers on top of traditional layers can learn the same wide range of aggregators on sets of high-dimensional vectors (MNIST images); • LAF outperforms state-of-the-art set learning methods such as DeepSets and PNA on real-world problems involving point clouds and text concept set retrieval; • LAF performs comparably to PNA on random graph generation tasks, outperforming several graph neural network architectures including GAT (Veličković et al., 2018) and GIN (Xu et al., 2019). The rest of this work is structured as follows. In Section 2 we define the LAF framework and show how appropriate parametrizations of LAF allow it to represent a wide range of popular aggregation functions. In Section 3 we discuss relevant related work. Section 4 reports synthetic and real-world experiments showing the advantages of LAF over (sets of) predefined aggregators. Finally, conclusions and pointers to future work are discussed in Section 5.

2 THE LEARNING AGGREGATION FUNCTION FRAMEWORK.
We use x = {x_1, …, x_N} to denote finite multisets of real numbers x_i ∈ ℝ. Note that directly taking x to be a multiset, not a vector, means that there is no need to define properties like exchangeability or permutation equivariance for operations on x. An aggregation function agg is any function that returns, for any multiset x of arbitrary cardinality N ∈ ℕ, a value agg(x) ∈ ℝ. Standard aggregation functions like mean and max can be understood as (normalized) L_p-norms. We therefore build our parametric LAF aggregator around generalized L_p-norms of the form

L_{a,b}(x) := (Σ_i x_i^b)^a   (a, b ≥ 0). (1)

L_{a,b} is invariant under the addition of zeros: L_{a,b}(x) = L_{a,b}(x ∪ 0), where 0 is a multiset of zeros of arbitrary cardinality. In order to also enable aggregations that can represent conjunctive behavior such as min, we make symmetric use of aggregators of the multisets 1 − x := {1 − x_i | x_i ∈ x}. For L_{a,b}(1 − x) to be a well-behaved dual version of L_{a,b}(x), the values in x need to lie in the range [0, 1]. We therefore restrict the following definition of our learnable aggregation function to sets x whose elements are in [0, 1]:

LAF(x) := (α L_{a,b}(x) + β L_{c,d}(1 − x)) / (γ L_{e,f}(x) + δ L_{g,h}(1 − x)) (2)

defined by tunable parameters a, …, h ≥ 0 and α, …, δ ∈ ℝ. In cases where sets whose elements are not already bounded by [0, 1] need to be aggregated, we apply a sigmoid function to the set elements prior to aggregation. Table 1 shows how a number of important aggregation functions arise as special cases of LAF (for values in [0, 1]). We make repeated use of the fact that L_{0,1} returns the constant 1. For max and min, LAF only provides an asymptotic approximation in the limit of specific function parameters (as indicated in the limits column of Table 1). In most cases, the parameterization of LAF for the functions in Table 1 will not be unique. Being able to encode the powers of moments implies that, e.g., the variance of x can be expressed as the difference (1/N) Σ_i x_i² − ((1/N) Σ_i x_i)² of two LAF aggregators. Since LAF includes sum-aggregation, we can adapt the results of Zaheer et al. (2017) and Wagstaff et al. (2019) on the theoretical universality of sum-aggregation as follows.

Proposition 1. Let X ⊂ ℝ be countable, and f a function defined on finite multisets with elements from X. Then there exist functions φ : X → [0, 1], ρ : ℝ → ℝ, and a parameterization of LAF, such that f(x) = ρ(LAF(φx; α, β, γ, δ, a, b, c, d)), where φx is the multiset {φ(x) | x ∈ x}.

A proof in Wagstaff et al. (2019) of a very similar proposition used a mapping from X into the reals. Our requirement that LAF inputs lie in [0, 1] requires a modification of the proof (contained in the supplementary material), which for the definition of φ relies on a randomized construction. Proposition 1 shows that we retain the theoretical universality guarantees of Zaheer et al. (2017), while enabling a wider range of solutions based on continuous encoding and decoding functions. It should be emphasized at this point that the primary purpose of LAF is not to provide a uniform representation of different standard aggregators as displayed in Table 1, but to enable a continuum of intermediate and hybrid aggregators. Figure 1 shows the graphs of 4 different randomly generated LAF functions over the unit square [0, 1] × [0, 1], i.e., evaluated over sets of size 2. Parameters α, …, δ were randomly sampled in the interval [0, 1]; parameters b, d, f, h were randomly sampled from the integers 0, …, 5, and a, c, e, g were obtained as 1/i with i a random integer from 1, …, 5. The figure illustrates the rich repertoire of aggregation functions with different qualitative behaviors, already for non-extreme parameter values.
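Below is a minimal sketch of a single LAF unit as defined in Eq. (2); the small constant added to the denominator for numerical stability is our own assumption, not part of the definition.

```python
import numpy as np

def L(x, a, b):
    """Generalized norm of Eq. (1): L_{a,b}(x) = (sum_i x_i^b)^a."""
    return np.sum(x ** b) ** a

def laf(x, alpha, beta, gamma, delta, a, b, c, d, e, f, g, h, stab=1e-8):
    """Eq. (2); x is a 1-D array with entries in [0, 1]."""
    num = alpha * L(x, a, b) + beta * L(1.0 - x, c, d)
    den = gamma * L(x, e, f) + delta * L(1.0 - x, g, h)
    return num / (den + stab)

x = np.array([0.2, 0.7, 0.5])
# Mean as a LAF special case: L_{1,1}(x) / L_{1,0}(x) = sum(x) / |x|.
print(laf(x, 1, 0, 1, 0, a=1, b=1, c=1, d=1, e=1, f=0, g=1, h=1))  # ~0.4667
```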
2.1 LAF ARCHITECTURE.

LAF can easily be used as a module of a larger architecture suitable for learning on sets. Several LAF units can be combined, as shown in Figure 2, to capture different aspects of the input set, which can in general be a set of vectors x = {x_1, …, x_N} where x_i ∈ ℝ^d. Note that multiple aggregators are also used in related frameworks such as DeepSets (Zaheer et al., 2017) or graph neural networks (Veličković et al., 2018; Corso et al., 2020). A module with r LAF units takes as input d-dimensional vectors and produces a vector of size r × d as output. Each LAF unit performs an element-wise aggregation of the vectors in the set, such that L_{k,j} = LAF({x_{1,j}, …, x_{N,j}}; α_k, β_k, γ_k, δ_k, a_k, b_k, c_k, d_k) for k = 1, …, r and j = 1, …, d. The output vector can then be fed into the next layer.

3 RELATED WORK.

Several studies address the problem of aggregating data over sets. Sum-decomposition strategies have been used in (Zaheer et al., 2017) for point-cloud classification and set expansion tasks, and in (Santoro et al., 2017) for question answering and dynamic physical systems computation. Max, sum and average are standard aggregation functions for node neighborhoods in graph neural networks (Hamilton et al., 2017; Kipf & Welling, 2017; Xu et al., 2019; Veličković et al., 2018). Zaheer et al. (2017) first proved universal representation results for these standard aggregators when combined with learned mappings over the inputs and over the results of the aggregation. However, Wagstaff et al. (2019) showed that these universality results are of little practical use, as they either require highly discontinuous mappings that would be extremely difficult to learn, or a latent dimension that is at least the size of the maximum number of input elements. Uninorms (Yager & Rybalov, 1996) are a class of aggregation functions in fuzzy logic that can behave in a conjunctive, disjunctive or averaging manner depending on a parameter called the neutral element. Melnikov & Hüllermeier (2016) proposed to learn fuzzy aggregators by adjusting these learnable parameters, showing promising results on combining reviewer scores on papers into an overall accept/reject decision. Despite the advantage of incorporating different behaviours in one single function, uninorms present discontinuities in the regions between aggregators, making them unsuitable for fully differentiable frameworks. Furthermore, the range of possible behaviours is restricted to those commonly used in the context of fuzzy logic. The need for considering multiple candidate aggregators is advocated in a very recent work developed in parallel with our framework (Corso et al., 2020). The resulting architecture, termed Principal Neighborhood Aggregation (PNA), combines multiple standard aggregators, including most of the ones we consider in the LAF framework, adjusting their outputs with degree scalers. However, the underlying philosophy is rather different.
PNA aims at learning to select the appropriate aggregator(s) from a pool of candidates, while LAF explores a continuous space of aggregators that includes the standard ones as extreme cases. Our experimental evaluation shows that PNA has trouble learning aggregators that generalize over set sizes, despite having them in its pool of candidates, likely because of the quasi-combinatorial structure of its search space. On the other hand, LAF can successfully learn even the higher-moment aggregators and consistently outperforms PNA. Closely connected, but somewhat complementary to aggregation operators, are attention mechanisms (Bahdanau et al., 2015; Vaswani et al., 2017). They have been explored for manipulating set data in Lee et al. (2019) and in the context of multi-instance learning (Ilse et al., 2018). Attention operates at the level of set elements and aims at a transformation (weighting) of their representations so as to optimize a subsequent weighted sum-aggregation. While the objectives of attention-based frameworks and LAF partially overlap, they are functionally quite different. Exploring combinations of LAF with attention mechanisms is a possible subject of future work.
The universal function representation guarantee requires either highly discontinuous mappings or a high-dimensional latent space. For this reason the authors propose a new parametric family of aggregation functions, called LAF (for learning aggregation functions). It can be seen as a smooth version of the class of functions studied in DeepSets. The LAF aggregator can learn all the standard aggregation functions. Moreover, in experiments the authors show that LAF surpasses other aggregation methods.
SP:a3f2c5b8bc8bfa03ad589b322c82ac84bca605b2
Precondition Layer and Its Use for GANs
1 INTRODUCTION. Generative Adversarial Nets (GANs) (Goodfellow et al., 2014) successfully transform samples from one distribution to another. Nevertheless, training GANs is known to be challenging, and their performance is often sensitive to hyper-parameters and datasets. Understanding the training difficulties of GANs is thus an important problem. Recent studies in neural network theory (Pennington et al., 2017; Xiao et al., 2018; 2020) suggest that the spectrum of the input-output Jacobian or neural tangent kernel (NTK) is an important metric for understanding training performance. While directly manipulating the spectrum of the Jacobian or NTK is not easy, a practical approach is to manipulate the spectrum of weight matrices, as in orthogonal initialization (Xiao et al., 2018). For a special neural net, Hu et al. (2020) showed that orthogonal initialization leads to a better convergence result than Gaussian initialization, which provides early theoretical evidence for the importance of manipulating the weight matrix spectrum. Motivated by these studies, we suspect that an 'adequate' weight matrix spectrum is also important for GAN training. Indeed, one of the most popular techniques for GAN training, spectral normalization (SN) (Miyato et al., 2018), manipulates the spectrum by scaling all singular values by a constant. This ensures the spectral norm is upper bounded. However, we find that for some hyperparameters and for high-resolution datasets, SN-GAN fails to generate good images. In a study we find that the condition numbers of weight matrices become very large and that the majority of the singular values are close to 0 during training; see Fig. 1(a) and Fig. 2(a). This can happen because SN does not promote a small condition number. This finding motivates reducing the condition number of weights during GAN training. Recall that controlling the condition number is also a central problem in numerical linear algebra, known as preconditioning (see Chen (2005)). We hence seek to develop a "plug-in" preconditioner for weights. This requires the preconditioner to be differentiable. Out of various preconditioners, we find the polynomial preconditioner to be a suitable choice due to its simple differentiation and strong theoretical support from approximation theory. Further, we suggest adaptively adjusting the strength of the preconditioner during training so as to not overly restrict expressivity. We show the efficacy of preconditioning on CIFAR-10 (32 × 32), STL (48 × 48) and LSUN bedroom, tower and living room (256 × 256).

Summary of contributions. For a deep linear network studied in (Hu et al., 2020), we prove that if all weight matrices have bounded spectrum, then gradient descent converges to the global minimum at a geometric rate. We then introduce a PC-layer (preconditioning layer) that consists of a low-degree polynomial preconditioner. We further study adaptive preconditioning (APC), which adaptively controls the strength of PC on different layers in different iterations. Applying PC and APC to unconditional GAN training on LSUN data (256 × 256) permits generating high-quality images where SN-GAN fails. We also show that APC achieves better FID scores on CIFAR-10, STL, and LSUN than the recently proposed method of Jiang et al. (2019).

1.1 RELATED WORK.

Related to the proposed method is the work of Jiang et al. (2019), which also controls the spectrum in GAN training.
They re-parameterize a weight matrix W via W = USV^T, adding orthogonal regularization of U, V and a certain regularizer on the entries of the diagonal matrix S. This approach differs from ours in a few respects. First, Jiang et al. (2019) essentially solve a constrained optimization problem with constraints U^T U = I, V^T V = I using a penalty method (Bertsekas, 1997). In contrast, our approach solves an unconstrained problem, since we add one layer into the neural net, similar to batch normalization (BN) (Ioffe & Szegedy, 2015) and SN (Miyato et al., 2018). Second, our PC-layer is a direct generalization of SN, as it includes the SN-layer as a special case. In contrast, the method of Jiang et al. (2019) differs from the SN-layer in every case. Our proposed method thus offers a smoother transition for existing users of SN. In a broader context, a number of approaches have been proposed to stabilize and improve GAN training, such as modifying the loss function (Arjovsky et al., 2017; Arjovsky & Bottou, 2017; Mao et al., 2017; Li et al., 2017b; Deshpande et al., 2018), normalization and regularization (Gulrajani et al., 2017; Miyato et al., 2018), progressive growing techniques (Karras et al., 2018; Huang et al., 2017), changing the architecture (Zhang et al., 2019; Karnewar & Wang, 2019), and utilizing side information like class labels (Mirza & Osindero, 2014; Odena et al., 2017; Miyato & Koyama, 2018). Using this taxonomy, our approach fits the "normalization and regularization" category (even though our method is not exactly normalization, the essence of "embedded control" is similar). Note that these directions are relatively orthogonal, and our approach can potentially be combined with other techniques such as progressive growing. However, due to limited computational resources, we focus on unconditional GANs using classical architectures, the setting studied by Miyato et al. (2018).

1.2 NOTATION AND DEFINITION.

We use eig(A) to denote the multiset (i.e., allowing repetition) of all eigenvalues of A. If all eigenvalues of A are non-negative real numbers, we say A is a positive semidefinite (PSD) matrix. The singular values of a matrix A ∈ ℝ^{n×m} are the square roots of the eigenvalues of A^T A ∈ ℝ^{m×m}. Let σ_max(A) and σ_min(A) denote the maximum and minimum singular values of A. Let ‖A‖₂ denote the spectral norm of A, i.e., the largest singular value. The condition number of a square matrix A is traditionally defined as κ(A) = ‖A‖₂‖A^{−1}‖₂ = σ_max(A)/σ_min(A). We extend this definition to a rectangular matrix A ∈ ℝ^{n×m} with n ≥ m via κ(A) = σ_max(A)/σ_min(A). Let deg(p) denote the degree of a polynomial p and let P_k = {p | deg(p) ≤ k} be the set of polynomials of degree at most k.

2 WHY CONTROLLING THE SPECTRUM?

To understand why controlling the spectrum is helpful, we leverage recent tools in neural network theory to prove the following result: if weight matrices have small condition numbers, then gradient descent for deep pyramid linear networks converges to the global minimum fast. This is inspired by Hu et al. (2020), who analyze a deep linear network to justify orthogonal initialization. Similar to Hu et al. (2020), we consider a linear network that takes an input x ∈ ℝ^{d_x×1} and outputs F(θ; x) = W_L W_{L−1} ⋯ W_1 x ∈ ℝ^{d_y×1}, (1) where θ = (W_1, …, W_L) represents the collection of all parameters and W_j is a matrix of dimension d_j × d_{j−1}, j = 1, …, L.
Here we define d_0 = d_x and d_L = d_y. Assume there exists r ∈ {1, …, L} such that d_y = d_L ≤ d_{L−1} ≤ ⋯ ≤ d_r, and n ≥ d_0 ≥ d_1 ≥ ⋯ ≥ d_r. This means the network is a pyramid network, which generalizes the equal-width network of Hu et al. (2020). Suppose y = (y_1; …; y_n) ∈ ℝ^{nd_y×1} are the labels, and the predictions are F(θ; X) = (F(θ; x_1); …; F(θ; x_n)) ∈ ℝ^{nd_y×1}. We consider a quadratic loss L(θ) = ½‖y − F(θ; X)‖². Starting from θ(0), we generate θ(k) = (W_1(k), …, W_L(k)), k = 1, 2, … via gradient descent: θ(k+1) = θ(k) − η∇L(θ(k)). Denote the residual e(k) = F(θ(k); X) − y. For given τ_l ≥ 1, μ_l ≥ 0, l = 1, …, L, define

R ≜ {θ = (W_1, …, W_L) | τ_l ≥ σ_max(W_l) ≥ σ_min(W_l) ≥ μ_l, ∀l},
ρ ≜ L‖X‖₂ τ_L ⋯ τ_1 (‖e(0)‖ + ‖X‖_F τ_L ⋯ τ_1),
μ ≜ (μ_1 ⋯ μ_L)² σ_min(X)².

The following result states that if θ(k) stays within the region R (i.e., the weight matrices have bounded spectrum) for k = 0, 1, …, K, then the loss decreases at a geometric rate until iteration K. The rate (1 − μ/ρ) depends on (τ_L ⋯ τ_1)²/(μ_L ⋯ μ_1)², which is related to the condition numbers of all the weights.

Theorem 1. Suppose η = 1/ρ. Assume θ(k) ∈ R, k = 0, 1, …, K. Then we have ‖e(k+1)‖² ≤ (1 − μ/ρ)‖e(k)‖², k = 0, 1, …, K. (2)

See Appendix D.3.1 for the proof and detailed discussion. For a proper initial point θ(0) where the W_l(0) are full-rank, we can always pick τ_l, μ_l so that θ(0) ∈ R. The trajectory {θ(k)} either stays in R forever (in which case K = ∞), or leaves R at some finite iteration K. In the former case, the loss converges to zero at a geometric rate; in the latter case, the loss decreases to below (1 − μ/ρ)^K ‖e(0)‖². However, our theorem does not specify how large K is in a given situation. Previous works on convergence (e.g., Hu et al., 2020; Du et al., 2018; Allen-Zhu et al., 2019; Zou et al., 2018) bound the movement of the weights with extra assumptions, so that the trajectory stays in a certain nice regime (related to R). We do not attempt to prove that the trajectory stays in R. Instead, we use this as a motivation for algorithm design: if we can improve the condition numbers of the weights during training, then the trajectory may stay in R for a longer time and thus lead to smaller loss values. Next, we present the preconditioning layer as such a method.
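As a rough illustration of the PC-layer idea introduced next, the sketch below applies a low-degree polynomial of W W^T to W so as to compress the singular-value spectrum in a differentiable way; the particular polynomial coefficients and the spectral-norm pre-scaling are illustrative assumptions, not the paper's tuned design.

```python
import torch

def polynomial_precondition(W, coeffs=(1.5, -0.5)):
    """Differentiable polynomial preconditioning sketch: rescale W so its
    spectral norm is <= 1, then form p(W W^T) W with p(t) = 1.5 - 0.5 t.
    Each singular value s maps to p(s^2) * s, so small singular values are
    boosted relative to large ones, shrinking the condition number."""
    s = torch.linalg.matrix_norm(W, ord=2)   # spectral norm, as in SN
    W = W / s.clamp_min(1e-12)
    out = coeffs[0] * W
    term = W
    A = W @ W.T
    for c in coeffs[1:]:
        term = A @ term                      # (W W^T)^i W
        out = out + c * term
    return out

W = torch.randn(64, 64)
print(torch.linalg.cond(W), torch.linalg.cond(polynomial_precondition(W)))
```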
This paper addresses the instability of spectral normalization for generative adversarial networks (SN-GANs) when training on high-dimensional data. To address this, the authors present a preconditioning layer (PC-layer) with two variants (FPC and APC) that perform low-degree polynomial preconditioning. Experiments on LSUN 256x256 training data demonstrate that FPC and APC are able to control the strength of the preconditioning. My detailed comments are as follows.
SP:2434dec4e18251ecfe3d6a7838881e799aad8b4f
Differentiable Learning of Graph-like Logical Rules from Knowledge Graphs
Logical rules inside a knowledge graph (KG) are essential for reasoning, logical inference, and rule mining. However, existing works can only handle simple, i.e., chain-like and tree-like, rules and cannot capture a KG's complex semantics, which are better captured by graph-like rules. Moreover, learning graph-like rules is very difficult because the graph structure induces a huge discrete search space. To address these issues, observing that the plausibility of a logical rule can be explained by how frequently it appears in a KG, we propose a score function that represents graph-like rules with learnable parameters. The score also helps relax the discrete space into a continuous one and can be uniformly transformed into matrix form by the Einstein summation convention. Thus, it allows us to learn graph-like rules in an efficient, differentiable, end-to-end training manner by optimizing the normalized score. We conduct extensive experiments on real-world datasets showing that our method outperforms previous works thanks to the better expressive ability of its logical rules. Furthermore, we demonstrate that our method can learn high-quality and interpretable graph-like logical rules.

1 INTRODUCTION.

A knowledge graph (KG) is a special type of directed graph that includes various entities as nodes and relations as directed edges, representing a large number of facts (Auer et al., 2007; Bollacker et al., 2008). In a KG, logical rules are sets of compositional logical relations within a specific structure, which are important for reasoning (Cohen et al., 2019; Zhang et al., 2019a; Qu & Tang, 2019), logical inference (Dhingra et al., 2020; Das et al., 2018; Xiong et al., 2017), rule mining (Sadeghian et al., 2019; Yang et al., 2017; Yang & Song, 2020), theorem proving (Rocktäschel & Riedel, 2017; Minervini et al., 2018; 2020), etc. Learning logical rules (Galárraga et al., 2015; Chen et al., 2016) is an important task that aims to infer a structural logical rule for a logical query or relation; it can support logical queries or link prediction while providing interpretable logical rules. The structure of logical queries can vary widely, with very different semantics, as shown in Figure 1, including chain-like, tree-like and graph-like rules. Learning logical rules, especially graph-like rules, is very difficult because both the logical structure and the relations assigned to each edge are unknown and must be inferred from input-output pairs, which together form a huge discrete search space. In this paper, we dive into the problem of learning graph-like logical rules, including both the logical structure representing how the logic connects and the relations assigned to the different edges. Recently, a series of works on learning logical rules (Yang et al., 2017; Sadeghian et al., 2019; Yang & Song, 2020) has been proposed; these works not only support tasks including logical query and link prediction, but, as a side effect, also provide mined logical rules with high interpretability. As shown in Figure 1, all these works are limited to learning chain-like rules (the left case) (Yang et al., 2017; Sadeghian et al., 2019) or tree-like rules (the middle case) (Hamilton et al., 2018; Ren et al., 2020; Yang & Song, 2020). However, there are widely occurring graph-like logical rules which existing works cannot handle due to their limited expressive ability.
Learning graph-like logical rules is important in many scenarios, such as recommendation systems, question-answering systems and KG completion, while learning such complex rules is still an open and challenging problem.

[Figure 1: example semantic questions. Chain-like rule: "Who is X's friend's supervisor?" Tree-like rule: "What is the address of the university that both the students X1 and X2 study at?" Graph-like rule: "Which book has two common readers with the book X, while the two readers are friends?"]

We propose a novel method that can explicitly learn structural logical rules, including a logical structure and the relations assigned to each edge, and we can use the inferred logical rules to answer inductive logical queries with unseen entities and graphs. The structural logical rules form a discrete search space to explore, and searching it is an NP-hard problem. To tackle this problem, our method constructs a continuous space that includes both the structural information and the relational information to be learned, which allows us to train our model in an end-to-end differentiable manner. Specifically, as shown in Figure 1, we take the frequency of a logical rule in the KG as its score, estimating how likely the logical rule is to hold. After optimizing with respect to the normalized score, our model yields interpretable logical rules of high quality and supports inductive logical query and link prediction, as demonstrated by our extensive experiments on real-world datasets. Our contributions can be summarized in the following three aspects:
• We first propose the problem of learning graph-like rules and design an end-to-end differentiable model that can learn graph-like logical rules, instead of only chain-like or tree-like rules, modeling both the logical structure describing how the logic connects and the relations assigned to edges.
• We provide a uniform expression via Einsum to represent the score of all graph-like logical rules, including those that cannot be represented by a combination of matrix/element-wise additions and products, which is elegant to express and convenient to implement.
• We conduct extensive experiments to demonstrate that our model has better expressive ability for graph-like logical rules and show that our model can mine high-quality logical rules with high interpretability.

2 PROBLEM FORMULATION.

Here, we formally introduce the definition of the logical score and, based on that, further introduce our model's main focus, relation inference (Yang et al., 2017; Sadeghian et al., 2019) and structural rule learning, as well as our evaluation task, logical query (Hamilton et al., 2018; Ren et al., 2020).

Definition 1 (Logical Score). A logical rule is formulated as ∧_{i=1}^{n} R_i → R_cpx : s_r, where s_r is the score of ∧_{i=1}^{n} R_i, each R_i is a relation R_i = R_i(V_i, V'_i) with V_i, V'_i ∈ {{X_j}, Y, {Z_k}} for i = 1, …, n, and R_cpx is a relation R_cpx({X_j}, Y); {X_j} are input nodes, {Z_k} are free-variable nodes, and Y is the output node.

For a strict logical query, if for any R_cpx({X_j}, Y) there exists (Z_1, …, Z_K) making ∧_{i=1}^{n} R_i true, we can draw the conclusion ∧_{i=1}^{n} R_i → R_cpx. However, because KGs are usually noisy and incomplete, for learning logical rules our key insight is to define the score as the number of free-variable tuples (Z_1, …, Z_K) that make ∧_{i=1}^{n} R_i true, which captures the correlation between logical rules and the input-output pairs of a logical query. For example, for the case in the middle of Figure 1, R_study_at(X1, Z) ∧ R_study_at(X2, Z) ∧ R_address_of(Z, Y) → R_cpx(X1, X2, Y); for the case on the right of Figure 1, we have R_read(X, Z1) ∧ R_read(X, Z2) ∧ R_friend(Z1, Z2) ∧ R_read(inv)(Z1, Y) ∧ R_read(inv)(Z2, Y) → R_cpx(X, Y).
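To illustrate how such a count-based score reduces to a single Einstein summation, the sketch below scores the graph-like rule of Figure 1 (right) with boolean adjacency matrices on a toy KG; the randomly generated relation matrices are illustrative assumptions.

```python
import numpy as np

n = 6  # number of entities in a toy KG
rng = np.random.default_rng(0)
R_read = (rng.random((n, n)) < 0.3).astype(float)    # R_read[x, z]: x reads z
R_friend = (rng.random((n, n)) < 0.3).astype(float)  # R_friend[z1, z2]

# Score of the rule  R_read(X,Z1) ∧ R_read(X,Z2) ∧ R_friend(Z1,Z2)
#                    ∧ R_read(inv)(Z1,Y) ∧ R_read(inv)(Z2,Y) → R_cpx(X,Y):
# s_r[x, y] counts the free-variable pairs (Z1, Z2) satisfying the body,
# which is exactly one einsum over the relation matrices.
score = np.einsum('xa,xb,ab,ya,yb->xy',
                  R_read, R_read, R_friend, R_read, R_read)
print(score.shape)  # (n, n): unnormalized rule score for every (X, Y) pair
```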
Note that R_cpx can be either a relation that exists in the KG or a human-defined logical rule for a query, which tends to be more complex. The score s_r serves two roles: (i) when input-output pairs are given, it measures how likely a logical rule is, which corresponds to the scenarios of Task 1 and Task 2; (ii) when the logical rule for a query and the inputs are given, it measures how well an output node fits the query, which corresponds to Task 3.

Task 1 (Relation Inference). Given that R_cpx({X_j}, Y) is satisfied and a logical structure composed of G = {e_1(V_1, V'_1), e_2(V_2, V'_2), …}, we need to infer how to assign a relation R_i to each edge e_i to form a logical rule ∧_{i=1}^{n} R_i(V_i, V'_i) that makes the score s_r of R_cpx({X_j}, Y) high.

For this task, previous relation inference works (Yang et al., 2017; Sadeghian et al., 2019) can also be applied, but they restrict G to be chain-like. We model the relation between the input-output pairs behind the query as R_cpx and infer its graph-like logical rule.

Task 2 (Structural Rule Learning). Given that R_cpx({X_j}, Y) is satisfied and a maximum number of nodes n̂ ≥ n_e, where n_e is the size of {{X_j}, Y}, we need to infer which structure G = {e_1(V_1, V'_1), e_2(V_2, V'_2), …, e_n(V_n, V'_n)} with n_e ≤ n ≤ n̂ and which relations ∧_{i=1}^{n} R_i assigned to its edges make the score s_r high.

For this task, the logical structures in previous works (Yang et al., 2017; Sadeghian et al., 2019; Yang & Song, 2020) are limited to chains or trees, and the number of input entities is limited to 1. In contrast, we can infer both the logical structure and the relations assigned to the edges of graph-like rules.

Task 3 (Logical Query). Given input nodes {X_j} and the query relation, the target nodes of the query can be represented by q = {Y | R_cpx({X_j}, Y)}.

Note that in previous works (Hamilton et al., 2018; Ren et al., 2020) the logical rule R_cpx = ∧_{i=1}^{n} R_i is given; different from those works, we need to infer ∧_{i=1}^{n} R_i for the logical query. Our model targets the inference of complex logical rules and uses the inferred rules to conduct logical queries as the evaluation task. For evaluation, we regard Task 3 as the main task and the other two tasks as side products.

3 RELATED WORKS.

3.1 LOGICAL QUERY FROM KNOWLEDGE GRAPHS.

Logical rule learning (Teru et al., 2020; Evans & Grefenstette, 2018; Manhaeve et al., 2018; Wang et al., 2019; Ho et al., 2018) aims to learn logical rules (Task 1) for logical query (Task 3) in an inductive setting. Neural-LP (Yang et al., 2017) designs an end-to-end differentiable framework to learn the probability of different logical rules. Furthermore, DRUM (Sadeghian et al., 2019) improves Neural-LP (Yang et al., 2017) by introducing low-rank matrix decomposition. However, these two works can only tackle chain-like logical rules. Different from our model, they mainly focus on relatively simple logical rules such as chain-like or tree-like rules.
To the best of our knowledge, our model is the first that can learn to infer graph-like complex logical rules, including both the structure and the relations assigned to the different edges. Logical query answering (Serge et al., 1995) aims to learn how to accurately retrieve entities (Task 3) given input entities and relations representing the logical rules, in a transductive setting. In the terms of Task 3, the logical rules representing the semantics of the query are explicitly given at both training and testing stages in this branch of work, whereas in our paper the logical rules must be inferred during training. Most of these works project entities into an embedding space (Bordes et al., 2013; Trouillon et al., 2016; Sun et al., 2018; Balažević et al., 2019) and transform relations into a type of manipulation in the embedding space, such as a linear projection. Hamilton et al. (2018) first propose an embedding-based method for queries with tree-like logical rules. Ren et al. (2020) further improve on Hamilton et al. (2018) by modeling entities as box embeddings rather than vector embeddings, which is more natural for manipulating conjunctions of sets. Different from our model, these methods require explicitly given logical structures with given relations on the edges.
This paper proposes techniques that generate logical rules out of knowledge graphs; the idea is to produce more complex rules than usual by exploiting a differentiable formulation of the associated learning process. This is a relevant theme as rule learning from knowledge graphs is important in practice due to its potential interpretability (as compared to black-box schemes based on embeddings). The solution is relatively simple to describe, with a score that leads to differentiable learning, and some needed insights to obtain useful results. The empirical testing seems fine and does indicate that the method is useful in practice.
SP:bc280e927e60317d6c2382d5507f522ba58ebe42
Meta-learning with negative learning rates
1 INTRODUCTION . Deep Learning models represent the state-of-the-art on several machine learning benchmarks ( LeCun et al . ( 2015 ) ) , and their performance does not seem to stop improving when adding more data and computing resources ( Rosenfeld et al . ( 2020 ) , Kaplan et al . ( 2020 ) ) . However , they require a large amount of data and compute to start with , which are often not available to practitioners . The approach of fine-tuning has proved very effective at addressing this limitation : pre-train a model on a source task , for which a large dataset is available , and use this model as the starting point for a quick additional training ( fine-tuning ) on the small dataset of the target task ( Pan & Yang ( 2010 ) , Donahue et al . ( 2014 ) , Yosinski et al . ( 2014 ) ) . This approach is popular because pre-trained models are often made available by institutions that have the resources to train them . In some circumstances , multiple source tasks are available , all of which have scarce data , as opposed to a single source task with abundant data . This case is addressed by meta-learning , in which a model gains experience over multiple source tasks and uses it to improve its learning of future target tasks . The idea of meta-learning is inspired by the ability of humans to generalize across tasks , without having to train on any single task for a long time . A meta-learning problem is solved by a bi-level optimization procedure : an outer loop optimizes meta-parameters across tasks , while an inner loop optimizes parameters within each task ( Hospedales et al . ( 2020 ) ) . The idea of meta-learning has gained some popularity , but a few recent papers argue that a simple alternative , in which the inner loop is removed entirely , is just good enough ( Chen et al . ( 2020a ) , Tian et al . ( 2020 ) , Dhillon et al . ( 2020 ) , Chen et al . ( 2020b ) , Raghu et al . ( 2020 ) ) . Other studies find the opposite ( Goldblum et al . ( 2020 ) , Collins et al . ( 2020 ) , Gao & Sener ( 2020 ) ) . It is hard to resolve the debate because there is little theory available to explain these findings . In this work , using random matrix theory and exact solutions of linear models , we derive an algebraic expression for the average test loss of MAML , a simple and successful meta-learning algorithm ( Finn et al . ( 2017 ) ) , as a function of its hyperparameters . In particular , we study its performance as a function of the inner loop learning rate during meta-training . Setting this learning rate to zero is equivalent to removing the inner loop , as advocated by recent work ( Chen et al . ( 2020a ) , Tian et al . ( 2020 ) , Dhillon et al . ( 2020 ) , Chen et al . ( 2020b ) , Raghu et al . ( 2020 ) ) . Surprisingly , we find that the optimal learning rate is negative ; thus , performance can be increased by reducing the learning rate below zero . In particular , we find the following : • In the problem of mixed linear regression , we prove that the optimal learning rate is always negative in overparameterized models . The same result holds in underparameterized models provided that the optimal learning rate is small in absolute value . We validate the theory by running extensive experiments . • We extend these results to the case of nonlinear regression and wide neural networks , in which the output can be approximated by a linear function of the parameters ( Jacot et al . ( 2018 ) , Lee et al . ( 2019 ) ) .
While we cannot prove that the optimal learning rate is always negative in this case , preliminary experiments suggest that the result holds here as well . 2 RELATED WORK . The field of meta-learning includes a broad range of problems and solutions ; see Hospedales et al . ( 2020 ) for a recent review focusing on neural networks and deep learning . In this context , meta-learning has received increased attention in the past few years : several new benchmarks have been introduced , and a large number of algorithms and models have been proposed to solve them ( Vinyals et al . ( 2017 ) , Bertinetto et al . ( 2019 ) , Triantafillou et al . ( 2020 ) ) . Despite the surge in empirical work , theoretical work is still lagging behind . Similar to our work , a few other studies used random matrix theory and exact solutions to calculate the average test loss for the problem of linear regression ( Advani & Saxe ( 2017 ) , Hastie et al . ( 2019 ) , Nakkiran ( 2019 ) ) . To our knowledge , our study is the first to apply this technique to the problem of meta-learning with multiple tasks . Our results reduce to those of linear regression in the case of a single task . Furthermore , we are among the first to apply the framework of the Neural Tangent Kernel ( Jacot et al . ( 2018 ) , Lee et al . ( 2019 ) ) to the problem of meta-learning ( a few papers appeared after our submission : Yang & Hu ( 2020 ) , Wang et al . ( 2020a ) , Zhou et al . ( 2021 ) ) . Similar to us , a few theoretical studies looked at the problem of mixed linear regression in the context of meta-learning . In Denevi et al . ( 2018 ) and Bai et al . ( 2021 ) , a meta-parameter is used to bias the task-specific parameters through a regularization term . Kong et al . ( 2020 ) look at whether many tasks with small data can compensate for a lack of tasks with big data . Tripuraneni et al . ( 2020 ) and Du et al . ( 2020 ) study the sample complexity of representation learning . However , none of these studies look into the effect of the learning rate on performance , which is our main focus . In this work , we focus on MAML , a simple and successful meta-learning algorithm ( Finn et al . ( 2017 ) ) . A few theoretical studies have investigated MAML , looking at : universality of the optimization algorithm ( Finn & Levine ( 2018 ) ) , Bayesian inference interpretation ( Grant et al . ( 2018 ) ) , proof of convergence ( Ji et al . ( 2020 ) ) , difference between convex and non-convex losses ( Saunshi et al . ( 2020 ) ) , global optimality ( Wang et al . ( 2020b ) ) , and the effect of the inner loop ( Collins et al . ( 2020 ) , Gao & Sener ( 2020 ) ) . Again , none of these studies look at the effect of the learning rate , the main subject of our work . The theoretical work of Khodak et al . ( 2019 ) connects the learning rate to task similarity , while the work of Li et al . ( 2017 ) meta-learns the learning rate . 3 META-LEARNING AND MAML . In this work , we follow the notation of Hospedales et al . ( 2020 ) and we use MAML ( Finn et al . ( 2017 ) ) as the meta-learning algorithm . We assume the existence of a distribution of tasks τ and , for each task , a loss function Lτ and a distribution of data points Dτ = { xτ , yτ } with input xτ and label yτ . We assume that the loss function is the same for all tasks , Lτ = L , but each task is characterized by a different distribution of the data .
The empirical meta-learning loss is evaluated on a sample of m tasks , and a sample of nv validation data points for each task :

$$L_{meta}(\omega; \mathcal{D}_t, \mathcal{D}_v) = \frac{1}{m\,n_v} \sum_{i=1}^{m} \sum_{j=1}^{n_v} L\!\left(\theta\big(\omega; \mathcal{D}_t^{(i)}\big);\; x_j^{v(i)}, y_j^{v(i)}\right) \quad (1)$$

The training set $\mathcal{D}_t^{(i)} = \{x_j^{t(i)}, y_j^{t(i)}\}_{j=1:n_t}$ and validation set $\mathcal{D}_v^{(i)} = \{x_j^{v(i)}, y_j^{v(i)}\}_{j=1:n_v}$ are drawn independently from the same distribution in each task i . The function θ represents the adaptation of the meta-parameter ω , which is evaluated on the training set . Different meta-learning algorithms correspond to a different choice of θ ; we describe below the choice of MAML ( Eq . 3 ) , the subject of this study . During meta-training , the loss of Eq . 1 is optimized with respect to the meta-parameter ω , usually by stochastic gradient descent , starting from an initial point ω0 . The optimum is denoted as $\omega^*(\mathcal{D}_t, \mathcal{D}_v)$ . This optimization is referred to as the outer loop , while the computation of θ is referred to as the inner loop of meta-learning . During meta-testing , a new ( target ) task is given and θ adapts on a set $\mathcal{D}_r$ of nr target data points . The final performance of the model is computed on test data $\mathcal{D}_s$ of the target task . Therefore , the test loss is equal to

$$L_{test} = L_{meta}\big(\omega^*(\mathcal{D}_t, \mathcal{D}_v); \mathcal{D}_r, \mathcal{D}_s\big) \quad (2)$$

In MAML , the inner loop corresponds to a few steps of gradient descent , with a given learning rate αt . In this work we consider the simple case of a single gradient step :

$$\theta\big(\omega; \mathcal{D}_t^{(i)}\big) = \omega - \frac{\alpha_t}{n_t} \sum_{j=1}^{n_t} \left.\frac{\partial L}{\partial \theta}\right|_{\omega;\, x_j^{t(i)}, y_j^{t(i)}} \quad (3)$$

If the learning rate αt is zero , then parameters are not adapted during meta-training and θ ( ω ) = ω . In that case , a single set of parameters is learned across all data and there is no inner loop . However , it is important to note that a distinct learning rate αr is used during meta-testing . A setting similar to this has been advocated in a few recent studies ( Chen et al . ( 2020a ) , Tian et al . ( 2020 ) , Dhillon et al . ( 2020 ) , Chen et al . ( 2020b ) , Raghu et al . ( 2020 ) ) . We show that , intuitively , the optimal learning rate at meta-testing ( adaptation ) time αr is always positive . Surprisingly , in the family of problems considered in this study , we find that the optimal learning rate during meta-training αt is instead negative . We note that the setting αt = 0 effectively does not use the nt training data points ; therefore , we could in principle add these data to the validation set , but we do not consider this option here since we are interested in a wide range of possible values of αt , as opposed to the specific case αt = 0 .
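Since Eq . 3 is a single gradient step on each task's training set , the whole meta-objective is easy to state in code . Below is a minimal sketch for the mixed linear regression setting with squared loss ; the function and variable names , and the choice of squared loss , are illustrative assumptions rather than the authors' implementation . Note that nothing prevents passing a negative alpha_t , which is exactly the regime the paper analyzes .

```python
import numpy as np

def inner_step(omega, X_t, y_t, alpha_t):
    # One inner-loop gradient step (Eq. 3) for squared loss L = ||X w - y||^2 / (2 n_t).
    n_t = len(y_t)
    grad = X_t.T @ (X_t @ omega - y_t) / n_t
    return omega - alpha_t * grad            # alpha_t may be negative

def meta_loss(omega, tasks, alpha_t):
    # Empirical meta-learning loss (Eq. 1): validation loss of the adapted
    # parameters, averaged over the m sampled tasks.
    losses = []
    for X_t, y_t, X_v, y_v in tasks:         # (train, validation) split per task
        theta = inner_step(omega, X_t, y_t, alpha_t)
        losses.append(np.mean((X_v @ theta - y_v) ** 2) / 2)
    return np.mean(losses)
```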
This paper studies meta-learning in the mixed linear regression setting, focusing on the effect of the within-task step-size on performance. For over-parameterized, under-parameterized, and NTK regimes they derive expressions for test-time loss that suggest that negative or close-to-zero learning rates are optimal, and provide experiments that closely match these results. However, some aspects of the mathematical approach are unclear, and the work's impact is limited without an investigation of the consequences of the analysis.
SP:ad96575881588cd2566d2c9c589882a6db9b3874
On the Consistency Loss for Leveraging Augmented Data to Learn Robust and Invariant Representations
1 INTRODUCTION . Recent advances in deep learning have delivered remarkable empirical performance over i.i.d . test data , and the community continues to investigate the more challenging and realistic scenario where models are tested for robustness over non-i.i.d . data ( e.g. , Ben-David et al. , 2010 ; Szegedy et al. , 2013 ) . Recent studies suggest that one cause of this fragility is the model ' s tendency to capture undesired signals ( Wang et al. , 2020 ) , so combating this tendency may be a key to robust models . To help models ignore the undesired signals , data augmentation ( i.e. , diluting the undesired signals of training samples by applying transformations to existing examples ) is often used . Given its wide usage , we seek to answer the question : how should we train with augmented samples so that the assistance of augmentation can be exploited to the fullest extent to learn robust and invariant models ? In this paper , we analyze the generalization behaviors of models trained with augmented data and associated regularization techniques . We investigate a set of assumptions and compare the worst-case expected risk over unseen data when i.i.d . samples are allowed to be transformed according to a function belonging to a family . We bound the expected risk with terms that can be computed during training , so that our analysis can inform how to regularize the training procedure . While all the derived methods come with an upper bound on the expected risk , progressively stronger assumptions yield progressively simpler regularizations , allowing practical choices to be made according to one ' s understanding of the application . Our contributions are as follows : • We offer analyses of the generalization behaviors of augmented models trained with different regularizations : these regularizations require progressively stronger assumptions on the data and the augmentation functions , but progressively less computational effort . For example , with assumptions pertaining to the augmentation transformation functions , the Wasserstein distance between the original and augmented empirical distributions can be calculated through a simple ℓ1 norm distance . • We test and compare these methods and offer practical guidance on how to choose regularizations in practice . In short , regularizing the squared ℓ2 distance between the logits of the augmented samples and the original samples is a favorable method , as suggested by both theoretical and empirical evidence . • With an invariance test , we argue that vanilla augmentation does not utilize the augmented samples to the fullest extent , especially in learning invariant representations , and thus may not be ideal unless the only goal of augmentation is to improve the accuracy over a specific setting . 2 RELATED WORK & KEY DIFFERENCES . Data augmentation has been used effectively for years . Tracing back to the earliest convolutional neural networks , we notice that even the LeNet applied to the MNIST dataset was boosted by mixing distorted images with the original ones ( LeCun et al. , 1998 ) . Later , the rapidly growing machine learning community has seen a proliferation of data augmentation techniques ( e.g. , flipping , rotation , blurring , etc . ) that have helped models climb the ladder of the state-of-the-art ( one may refer to the relevant survey ( Shorten & Khoshgoftaar , 2019 ) for details ) .
Recent advances have expanded the conventional concept of data augmentation with several new approaches , such as leveraging the information in unlabelled data ( Xie et al. , 2019 ) , automatically learning augmentation functions ( Ho et al. , 2019 ; Hu et al. , 2019 ; Wang et al. , 2019c ; Zhang et al. , 2020 ; Zoph et al. , 2019 ) , and generating the samples ( with constraints ) that maximize the training loss during training ( Fawzi et al. , 2016 ) , which was later widely adopted as adversarial training ( Madry et al. , 2018 ) . While the above works mainly discuss how to generate the augmented samples , in this paper we mainly answer the question of how to train models with augmented samples . For example , instead of directly mixing augmented samples with the original samples , one can consider regularizing the representations ( or outputs ) of original samples and augmented samples to be close under a distance metric ( also known as a consistency loss ) . Many concrete ideas have been explored in different contexts : for example , ℓ2 distance and cosine similarity between internal representations in speech recognition ( Liang et al. , 2018 ) , squared ℓ2 distance between logits ( Kannan et al. , 2018 ) or KL divergence between softmax outputs ( Zhang et al. , 2019a ) in adversarially robust vision models , and Jensen–Shannon divergence ( of three distributions ) between embeddings for texture-invariant image classification ( Hendrycks et al. , 2020 ) . These are but a few highlights of the concrete and successful implementations for different applications out of a huge collection ( e.g. , Wu et al. , 2019 ; Guo et al. , 2019 ; Zhang et al. , 2019b ; Shah et al. , 2019 ; Asai & Hajishirzi , 2020 ; Sajjadi et al. , 2016 ; Zheng et al. , 2016 ; Xie et al. , 2015 ) , and one can easily imagine new methods being invented by permuting these three elements ( distance metrics , representations or outputs , and applications ) . Even further , although we are not aware of the following methods in the context of data augmentation , given the popularity of GANs ( Goodfellow , 2016 ) and domain adversarial neural networks ( Ganin et al. , 2016 ) , one can also expect the distance metric to generalize to a specialized discriminator ( i.e. , a classifier ) , which can be intuitively understood as a calculated ( usually maximized ) distance measure , the Wasserstein-1 metric being an example ( Arjovsky et al. , 2017 ; Gulrajani et al. , 2017 ) . Key Differences : With this rich collection of regularization choices , which method should we consider in general ? More importantly , do we actually need the regularization at all ? These questions are important for multiple reasons , especially considering that there are papers suggesting that these regularizations may lead to worse results ( Jeong et al. , 2019 ) . In this paper , we answer the first question with a proved upper bound on the worst-case generalization error , and our upper bound explicitly describes what regularizations are needed . For the second question , we will show that regularizations can help the model learn the concept of invariance . There are also several previous discussions regarding detailed understandings of data augmentation ( Yang et al. , 2019 ; Chen et al. , 2019 ; Hernández-Garcı́a & König , 2018 ; Rajput et al. , 2019 ; Dao et al. , 2019 ) , among which Yang et al . ( 2019 ) is probably the most relevant , as it also defends the usage of regularizations .
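As a concrete illustration of the consistency loss discussed above , here is a minimal sketch of the squared ℓ2 penalty on logits , the variant the contributions single out as favorable ; the function names and the weighting coefficient lam are illustrative assumptions rather than the authors' code .

```python
import numpy as np

def consistency_loss(logits_orig, logits_aug):
    # Squared l2 distance between the logits of original and augmented
    # samples, averaged over the batch.
    return np.mean(np.sum((logits_orig - logits_aug) ** 2, axis=1))

def total_loss(sup_loss_orig, sup_loss_aug, logits_orig, logits_aug, lam=1.0):
    # Supervised loss on both views plus the consistency regularizer,
    # weighted by an illustrative coefficient lam.
    return sup_loss_orig + sup_loss_aug + lam * consistency_loss(logits_orig, logits_aug)
```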
However , we believe our discussions are more comprehensive and better supported theoretically , since our analysis directly suggests the ideal regularization . Also , empirically , we design an invariance test in addition to the worst-case accuracy used in the preceding work . 3 TRAINING STRATEGIES WITH AUGMENTED DATA . Notations : ( X , Y ) denotes the data , where X ∈ R^{n×p} and Y ∈ { 0 , 1 } ^{n×k} ( one-hot vectors for k classes ) , and f ( · , θ ) denotes the model , which takes in the data and outputs the softmax ( probabilities of the prediction ) , where θ denotes the corresponding parameters . g ( · ) completes the prediction ( i.e. , mapping the softmax to a one-hot prediction ) . l ( · , · ) denotes a generic loss function . a ( · ) denotes a transformation that alters the undesired signals of a sample , i.e. , the data augmentation method , with a ∈ A , the set of transformation functions . P denotes the distribution of ( x , y ) . For any sampled ( x , y ) , we can form ( a ( x ) , y ) , and we use P_a to denote the distribution of these transformed samples . r ( · ; θ ) denotes the risk of model θ , and ·̂ denotes the estimate of the term · . 3.1 WELL-BEHAVED DATA TRANSFORMATION FUNCTIONS . Despite the strong empirical performance data augmentation has demonstrated , it should be intuitively expected that the performance can only be improved when the augmentation is chosen wisely . Therefore , before we proceed to analyze the behaviors of training with data augmentation , we first need to regulate some basic properties of the data transformation functions used . Intuitively , we will consider the following three properties . • “ Dependence-preservation ” , with two perspectives : label-wise , the transformation can not alter the label of the data , which is a central requirement of almost all data augmentation practice ; feature-wise , the transformation will not introduce new dependencies between the samples . • “ Efficiency ” : the augmentation should only generate new samples of the same label as minor perturbations of the original one . If a transformation violates this property , there should exist other simpler transformations that can generate the same target sample . • “ Vertices ” : there are extreme cases of the transformations . For example , if one needs the model to be invariant to rotations from 0° to 60° , we consider the vertices to be the 0° rotation function ( thus the identity map ) and the 60° rotation function . In practice , one usually selects the transformation vertices with intuition and domain knowledge . We now formally define these three properties . The definitions will depend on the model ; thus these properties regulate not only the transformation functions , but also the model . We introduce Assumptions A1-A3 corresponding to the properties . A1 : Dependence-preservation : the transformation function will not alter the dependency regarding the label ( i.e. , for any a ( · ) ∈ A , a ( x ) will have the same label as x ) or the features ( i.e. , for any a1 ( · ) , a2 ( · ) ∈ A , a1 ( x1 ) ⊥⊥ a2 ( x2 ) for any x1 , x2 ∈ X with x1 ≠ x2 ) . A2 : Efficiency : for θ̂ and any a ( · ) ∈ A , f ( a ( x ) ; θ̂ ) is closer to x than to any other sample under a distance metric d_e ( · , · ) , i.e. , d_e ( f ( a ( x ) ; θ̂ ) , f ( x ; θ̂ ) ) ≤ min_{x′ ∈ X∖x} d_e ( f ( a ( x ) ; θ̂ ) , f ( x′ ; θ̂ ) ) . A3 : Vertices : for a model θ̂ and a transformation a ( · ) , we use P_{a , θ̂} to denote the distribution of f ( a ( x ) ; θ̂ ) for ( x , y ) ∼ P .
“ Vertices ” asserts that there exist two extreme elements in A , namely a+ and a− , such that for a certain metric d_x ( · , · ) we have

$$d_x\big(P_{a_+,\hat\theta},\, P_{a_-,\hat\theta}\big) = \sup_{a_1, a_2 \in A} d_x\big(P_{a_1,\hat\theta},\, P_{a_2,\hat\theta}\big) \quad (1)$$

Note that d_x ( · , · ) is a metric over two distributions , while d_e ( · , · ) is a metric over two samples . Also , slightly different from the intuitive understanding of “ vertices ” above , A3 regulates the behavior of the embedding instead of the raw data . All of our follow-up analysis will require A1 to hold , but with more assumptions held , we can get computationally lighter methods with bounded error .
In order to improve the robustness of learned models, prior work has proposed various data augmentation techniques and different ways of incorporating them into training. This work seeks to provide a general understanding of how we should train with augmented samples in order to learn robust and invariant models, from both theoretical and empirical perspectives. More importantly, the authors show that the regularization of the augmented samples in the training procedure can be derived from the theoretical analysis, since the analysis directly suggests the ideal regularization.
SP:638a6687e5846937cea0e0be3a6e68ad743a787d
Achieving Linear Speedup with Partial Worker Participation in Non-IID Federated Learning
O ( 1/√ ( mKT ) + 1/T ) for full worker participation and a convergence rate O ( √K/√ ( nT ) + 1/T ) for partial worker participation , where K is the number of local steps , T is the number of total communication rounds , m is the total number of workers , and n is the number of workers in one communication round under partial worker participation . Our results also reveal that the local steps in FL can help the convergence , and show that the maximum number of local steps can be improved to T/m under full worker participation . We conduct extensive experiments on MNIST and CIFAR-10 to verify our theoretical results . 1 INTRODUCTION . Federated Learning ( FL ) is a distributed machine learning paradigm that leverages a large number of workers to collaboratively learn a model with decentralized data under the coordination of a centralized server . Formally , the goal of FL is to solve an optimization problem , which can be decomposed as :

$$\min_{x \in \mathbb{R}^d} f(x) := \frac{1}{m} \sum_{i=1}^{m} F_i(x), \qquad F_i(x) \triangleq \mathbb{E}_{\xi_i \sim D_i}\big[F_i(x, \xi_i)\big],$$

where Fi ( x ) is the local ( non-convex ) loss function associated with a local data distribution Di and m is the number of workers . FL allows a large number of workers ( such as edge devices ) to participate flexibly without sharing data , which helps protect data privacy . However , it also introduces two unique challenges unseen in traditional distributed learning algorithms that are typically used for large data centers : • Non-independent-identically-distributed ( non-i.i.d . ) datasets across workers ( data heterogeneity ) : In conventional distributed learning in data centers , the distribution of each worker ' s local dataset can usually be assumed to be i.i.d. , i.e. , Di = D , ∀i ∈ { 1 , ... , m } . Unfortunately , this assumption rarely holds for FL , since data are generated locally at the workers based on their circumstances , i.e. , Di ≠ Dj for i ≠ j . It will be seen later that the non-i.i.d . assumption imposes significant challenges on algorithm design for FL and the associated performance analysis . • Time-varying partial worker participation ( systems non-stationarity ) : With the flexibility of workers ' participation in many scenarios ( particularly in mobile edge computing ) , workers may randomly join or leave the FL system at will , thus rendering the active worker set stochastic and time-varying across communication rounds . Hence , it is often infeasible to wait for all workers ' responses as in traditional distributed learning , since inactive workers or stragglers will significantly slow down the whole training process . As a result , only a subset of the workers may be chosen by the server in each communication round , i.e. , partial worker participation . In recent years , the Federated Averaging method ( FedAvg ) and its variants ( McMahan et al. , 2016 ; Li et al. , 2018 ; Hsu et al. , 2019 ; Karimireddy et al. , 2019 ; Wang et al. , 2019a ) have emerged as a prevailing approach for FL . Similar to traditional distributed learning , FedAvg leverages local computation at each worker and employs a centralized parameter server to aggregate and update the model parameters . The unique feature of FedAvg is that each worker runs multiple local stochastic gradient descent ( SGD ) steps , rather than just one step as in traditional distributed learning , between two consecutive communication rounds . For i.i.d . datasets and the full worker participation setting , Stich ( 2018 ) and Yu et al .
( 2019b ) proposed two variants of FedAvg that achieve a convergence rate of O ( mK/T + 1/√ ( mKT ) ) with a bounded gradient assumption for both strongly convex and nonconvex problems , where m is the number of workers , K is the number of local update steps , and T is the total number of communication rounds . Wang & Joshi ( 2018 ) and Stich & Karimireddy ( 2019 ) further proposed improved FedAvg algorithms to achieve an O ( m/T + 1/√ ( mKT ) ) convergence rate without the bounded gradient assumption . Notably , for a sufficiently large T , the above rates become O ( 1/√ ( mKT ) ) 1 , which implies a linear speedup with respect to the number of workers . 2 This linear speedup is highly desirable for an FL algorithm because the algorithm is able to effectively leverage the massive parallelism in a large FL system . However , with non-i.i.d . datasets and partial worker participation in FL , a fundamental open question arises : Can we still achieve the same linear speedup for convergence , i.e. , O ( 1/√ ( mKT ) ) , with non-i.i.d . datasets and under either full or partial worker participation ? In this paper , we show the answer to the above question is affirmative . Specifically , we show that a generalized FedAvg with two-sided learning rates achieves linear convergence speedup with non-i.i.d . datasets and under full/partial worker participation . We highlight our contributions as follows : • For non-convex problems , we show that the convergence rates of the FedAvg algorithm on non-i.i.d . datasets are O ( 1/√ ( mKT ) + 1/T ) and O ( √K/√ ( nT ) + 1/T ) for full and partial worker participation , respectively , where n is the size of the partially participating worker set . This indicates that our proposed algorithm achieves a linear speedup in convergence rate for a sufficiently large T . When reduced to the i.i.d . case , our convergence rate is O ( 1/ ( KT ) + 1/√ ( mKT ) ) , which is also better than previous works . We summarize the convergence rate comparisons for both i.i.d . and non-i.i.d . cases in Table 1 . It is worth noting that our proof does not require the bounded gradient assumption . We note that the SCAFFOLD algorithm ( Karimireddy et al. , 2019 ) also achieves the linear speedup , but extra variance reduction operations are required , which lead to higher communication costs and implementation complexity . By contrast , we do not have such extra requirements in this paper . • In order to achieve a linear speedup , i.e. , a convergence rate O ( 1/√ ( mKT ) ) , we show that the number of local updates K can be as large as T/m , which improves the T^{1/3}/m result previously shown in Yu et al . ( 2019a ) and Karimireddy et al . ( 2019 ) . As shown later in the communication complexity comparison in Table 1 , a larger number of local steps implies relatively fewer communication rounds , and thus less communication overhead . Interestingly , our results also indicate that the number of local updates K does not hurt but rather helps the convergence , with a proper choice of learning rates , under full worker participation . This overcomes the limitation suggested in Li et al . ( 2019b ) that local SGD steps might slow down the convergence ( O ( K/T ) for the strongly convex case ) . This result also reveals new insights on the relationship between the number of local steps and the learning rate . 1 This rate also matches the convergence rate order of parallel SGD in conventional distributed learning .
2 To attain ε accuracy , an algorithm needs to take O ( 1/ε² ) steps with a convergence rate O ( 1/√T ) , while needing only O ( 1/ ( mε² ) ) steps if the convergence rate is O ( 1/√ ( mT ) ) ( the hidden constant in Big-O is the same ) . In this sense , one achieves a linear speedup with respect to the number of workers . Notation . In this paper , we let m be the total number of workers and St be the set of active workers for the t-th communication round , with size |St| = n for some n ∈ ( 0 , m ] . 3 We use K to denote the number of local steps per communication round at each worker . We let T be the number of total communication rounds . In addition , we use boldface to denote matrices/vectors . We let [ · ] ^i_{t , k} represent the parameter at the k-th local step of the i-th worker after the t-th communication . We use ‖·‖2 to denote the ℓ2-norm . For a natural number m , we use [ m ] to represent the set { 1 , · · · , m } . The rest of the paper is organized as follows . In Section 2 , we review the literature to put our work in comparative perspective . Section 3 presents the convergence analysis for our proposed algorithm . Section 4 discusses the implications of the convergence rate analysis . Section 5 presents numerical results , and Section 6 concludes this paper . Due to space limitation , the details of all proofs and some experiments are provided in the supplementary material . 2 RELATED WORK . The federated averaging ( FedAvg ) algorithm was first proposed by McMahan et al . ( 2016 ) for FL as a heuristic to improve communication efficiency and data privacy . Since then , this work has sparked many follow-ups that focus on FL with i.i.d . datasets and full worker participation ( also known as LocalSGD ( Stich , 2018 ; Yu et al. , 2019b ; Wang & Joshi , 2018 ; Stich & Karimireddy , 2019 ; Lin et al. , 2018 ; Khaled et al. , 2019a ; Zhou & Cong , 2017 ) ) . Under these two assumptions , most of the theoretical works can achieve a linear speedup for convergence , i.e. , O ( 1/√ ( mKT ) ) for a sufficiently large T , matching the rate of parallel SGD . In addition , LocalSGD is empirically shown to be communication-efficient and to enjoy better generalization performance ( Lin et al. , 2018 ) . For a comprehensive introduction to FL , we refer readers to Li et al . ( 2019a ) and Kairouz et al . ( 2019 ) . 3 For simplicity and ease of presentation in this paper , we let |St| = n. We note that this is not a restrictive condition , and our proofs and results still hold for |St| ≥ n , which can be easily satisfied in practice .

Algorithm 1 A Generalized FedAvg Algorithm with Two-Sided Learning Rates
Initialize x0
for t = 0 , · · · , T − 1 do
  The server samples a subset St of workers with |St| = n .
  for each worker i ∈ St in parallel do
    x^i_{t,0} = xt
    for k = 0 , · · · , K − 1 do
      Compute an unbiased estimate g^i_{t,k} = ∇Fi ( x^i_{t,k} , ξ^i_{t,k} ) of ∇Fi ( x^i_{t,k} ) .
      Local worker update : x^i_{t,k+1} = x^i_{t,k} − ηL g^i_{t,k} .
    end for
    Let ∆^i_t = x^i_{t,K} − x^i_{t,0} = −ηL Σ_{k=0}^{K−1} g^i_{t,k} . Send ∆^i_t to the server .
  end for
  At Server : Receive ∆^i_t , i ∈ St . Let ∆t = ( 1/|St| ) Σ_{i∈St} ∆^i_t .
  Server Update : xt+1 = xt + η ∆t . Broadcast xt+1 to the workers .
end for

For non-i.i.d . datasets , many works ( Sattler et al. , 2019 ; Zhao et al. , 2018 ; Li et al. , 2018 ; Wang et al. , 2019a ; Karimireddy et al. , 2019 ; Huang et al. , 2018 ; Jeong et al. , 2018 ) heuristically demonstrated the performance of FedAvg and its variants . On convergence rates with full worker participation , many works ( Stich et al. , 2018 ; Yu et al.
, 2019a ; Wang & Joshi , 2018 ; Karimireddy et al. , 2019 ; Reddi et al. , 2020 ) can achieve linear speedup , but their convergence rate bounds could be improved as shown in this paper . On convergence rate with partial worker participation , Li et al . ( 2019b ) showed that the original FedAvg can achieve O ( K/T ) for strongly convex functions , which suggests that local SGD steps slow down the convergence in the original FedAvg . Karimireddy et al . ( 2019 ) analyzed a generalized FedAvg with two-sided learning rates under strongly convex , convex and non-convex cases . However , as shown in Table 1 , none of them indicates that linear speedup is achievable with non-i.i.d . datasets under partial worker participation . Note that the SCAFFOLD algorithm ( Karimireddy et al. , 2019 ) can achieve linear speedup but extra variance reduction operations are required , which lead to higher communication costs and implementation complexity . In this paper , we show that this linear speedup can be achieved without any extra requirements . For more detailed comparisons and other algorithmic variants in FL and decentralized settings , we refer readers to Kairouz et al . ( 2019 ) .
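To make Algorithm 1 concrete , below is a minimal simulation sketch of the generalized FedAvg loop with two-sided learning rates ( local rate eta_l , server rate eta ) . The worker data representation and the grad_fn interface , which is assumed to return an unbiased stochastic gradient , are illustrative assumptions rather than the authors' implementation .

```python
import numpy as np

def generalized_fedavg(x0, workers, grad_fn, T, K, n, eta_l, eta):
    # workers: list of per-worker local datasets; grad_fn(x, data) returns a
    # stochastic gradient of F_i at x (assumed interface).
    rng = np.random.default_rng(0)
    x = x0.copy()
    for t in range(T):
        chosen = rng.choice(len(workers), size=n, replace=False)  # partial participation
        deltas = []
        for i in chosen:
            xi = x.copy()
            for _ in range(K):                    # K local SGD steps at rate eta_l
                xi = xi - eta_l * grad_fn(xi, workers[i])
            deltas.append(xi - x)                 # Delta_t^i = x_{t,K}^i - x_t
        x = x + eta * np.mean(deltas, axis=0)     # server step at rate eta
    return x
```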
This paper provides a new analysis for the FedAvg algorithm, which assumes the data on different workers are non-IID and the objective functions are non-convex. The new analysis improved the existing bounds of FedAvg. Besides, the analysis is also extended to the non-stationary network, where the number of workers participating in the optimization may vary.
SP:33cd383e425b23699614bcff904cc4e52720c29c
Byzantine-Resilient Non-Convex Stochastic Gradient Descent
1 INTRODUCTION . Motivated by the pervasiveness of large-scale distributed machine learning , there has recently been significant interest in providing distributed optimization algorithms with strong fault-tolerance guarantees . In this context , the strongest , most stringent fault model is that of Byzantine faults ( Lamport et al. , 1982 ) : given m machines , each having access to private data , at most an α fraction of the machines can behave in arbitrary , possibly adversarial ways , with the goal of breaking or slowing down the algorithm . Although extremely harsh , this fault model is the “ gold standard ” in distributed computing ( Lynch , 1996 ; Lamport et al. , 1982 ; Castro et al. , 1999 ) , as algorithms proven to be correct in this setting are guaranteed to converge under arbitrary system behaviour . A setting of particular interest in this context has been that of distributed stochastic optimization . Here , the task is to minimize some stochastic function f ( x ) = E_{s∼D} [ fs ( x ) ] over a distribution D , where fs ( · ) can be viewed as the loss function for sample s ∼ D. We assume there are m machines ( workers ) and an honest master , and an α < 1/2 fraction of the workers may be Byzantine . In each iteration t , each worker has access to a version of the global iterate xt , which is maintained by the master . The worker can independently sample s ∼ D , compute ∇fs ( xt ) , and then synchronously send this stochastic gradient to the master . The master aggregates the workers ' messages , and sends an updated iterate xt+1 to all the workers . Eventually , the master has to output an approximate minimizer of f . Clearly , the above description only applies to honest workers ; Byzantine workers may deviate arbitrarily and return adversarial “ gradient ” vectors to the master in every iteration . This distributed framework is quite general and well studied . One of the first references in this setting studied distributed PCA and regression ( Feng et al. , 2014 ) . Other early approaches ( Blanchard et al. , 2017 ; Chen et al. , 2017 ; Su & Vaidya , 2016a ; b ; Xie et al. , 2018a ) relied on defining generalizations of the geometric median . These approaches can withstand up to half of the nodes being malicious , but can have a relatively high local computational cost Ω ( m²d ) ( Blanchard et al. , 2017 ; Chen et al. , 2017 ) , where m is the number of nodes and d is the problem dimension , and usually have suboptimal sample and iteration complexities . Follow-up work resolved this last issue when the objective f ( · ) is convex , leading to tight sample complexity bounds . Specifically , Yin et al . ( 2018 ) provided bounds for gradient descent-type algorithms , and showed that the bounds are tight when the dimension is constant . Alistarh et al . ( 2018 ) provided a stochastic gradient descent ( SGD ) type algorithm and showed that its sample and time complexities are asymptotically optimal even when the dimension is large . Non-convex Byzantine-resilient stochastic optimization . In this paper , we focus on the more challenging non-convex setting , and shoot for the strong goal of finding approximate local minima ( a.k.a . second-order critical points ) .
In a nutshell , our main result is the following . Fix d to denote the dimension , and let the objective f : R^d → R be Lipschitz smooth and second-order smooth . We have m worker machines , each having access to unbiased , bounded estimators of the gradient of f . Given an initial point x0 , the SafeguardSGD algorithm ensures that , even if at most an α < 1/2 fraction of the machines are Byzantine , after

$$T = \widetilde{O}\!\left( \Big(\alpha^2 + \frac{1}{m}\Big) \frac{d\,\big(f(x_0) - \min_x f(x)\big)}{\varepsilon^4} \right)$$

parallel iterations , for at least a constant fraction of the indices t ∈ [ T ] , the following hold : ‖∇f ( xt ) ‖ ≤ ε and ∇²f ( xt ) ⪰ −√ε · I . If the goal is simply ‖∇f ( xt ) ‖ ≤ ε , then $T = \widetilde{O}\big( (\alpha^2 + \frac{1}{m}) \frac{f(x_0) - \min_x f(x)}{\varepsilon^4} \big)$ iterations suffice . Here , the Õ notation serves to hide logarithmic factors for readability . We spell out these factors in the detailed analysis . • When α < 1/√m , our sample complexity ( = mT ) matches the best known result in the non-Byzantine case ( Jin et al. , 2019 ) without additional assumptions , and enjoys linear parallel speedup : with m workers of which < √m are Byzantine , the parallel speedup is Ω̃ ( m ) . 1 • For α ∈ [ 1/√m , 1/2 ) , our parallel time complexity is Õ ( α² ) times that needed when no parallelism is used . This still gives a parallel speedup . This α² factor appears in convex Byzantine distributed optimization , where it is tight ( Yin et al. , 2018 ; Alistarh et al. , 2018 ) . • The Lipschitz and second-order smoothness assumptions are the minimal assumptions needed to derive convergence rates for finding second-order critical points ( Jin et al. , 2019 ) . Comparison with prior bounds . The closest known bounds are by Yin et al . ( 2019 ) , who derived three gradient descent-type algorithms ( based on median , mean , and iterative filtering ) to find a weaker type of approximate local minima . Since it relies on full gradients , their algorithm is arguably less practical , and their time complexities are generally higher than ours ( see Section 2.1 ) . Other prior works consider a weaker goal : to find approximate stationary points ‖∇f ( x ) ‖ ≤ ε only . Bulusu et al . ( 2020 ) additionally assumed there is a guaranteed good ( i.e . non-Byzantine ) worker known to the master , Xie et al . ( 2018b ) gave a practical algorithm for when the Byzantine attackers have no information about the loss function or its gradient , Yang et al . ( 2019 ) ; Xie et al . ( 2018a ) ; Blanchard et al . ( 2017 ) derived eventual convergence without an explicit complexity bound , and the non-convex result obtained in Yin et al . ( 2018 ) is subsumed by Yin et al . ( 2019 ) , discussed above . Our algorithm and techniques . The structure of our algorithm is deceptively simple . The master node keeps track of the sum of gradients produced by each worker across time . It labels ( allegedly ) good workers as those whose sums of gradients “ concentrate ” well with respect to a surrogate of the median vector , and labels workers bad otherwise . Once a worker is labelled bad , it is removed from consideration forever . The master then performs vanilla SGD , moving in the negative direction of the average of the gradients produced by the workers currently labelled good . We call our algorithm SafeguardSGD , since it behaves as if it has a safeguard to filter away bad workers . Its processing overhead at the master is O ( md ) , negligible compared to standard SGD . 1 By parallel speedup we mean the reduction in wall-clock time due to sampling gradients in parallel among the m nodes . In each time step , the algorithm generates m new gradients , although some may be corrupted .
As the astute reader may have guessed , the key non-trivial technical ingredient is to identify the right quantity to check for concentration , and to make it compatible with the task of non-convex optimization . In particular , we manage to construct such quantities so that ( 1 ) good non-Byzantine workers never get mislabelled as bad ones ; ( 2 ) Byzantine workers may be labelled as good ones ( which is inevitable ) , but when they are , the convergence rates are not impacted significantly ; and ( 3 ) the notion does not require additional assumptions or running time overhead . The idea of using concentration ( for each worker across time ) to filter out Byzantine machines traces back to the convex setting ( Alistarh et al. , 2018 ) . However , the quantities used in ( Alistarh et al. , 2018 ) to check for concentration are necessarily different from those in this paper , and our analysis is completely new , as deriving non-convex rates is known to be much more delicate and challenging . Recently , Bulusu et al . ( 2020 ) used concentration filters similar to Alistarh et al . ( 2018 ) in the non-convex setting , but under stronger assumptions , and for the simpler task of finding stationary points . Many other algorithms do not rely on concentration filters . In each iteration , they ask each worker to compute a batch of stochastic gradients , and then use a coordinate-wise median or mean over the batch average ( e.g . Yin et al . ( 2018 ; 2019 ) ; Yang et al . ( 2019 ) ) or iterative filtering ( e.g . Su & Xu ( 2018 ) ; Yin et al . ( 2019 ) ) at the master to derive a “ robust mean . ” These works fundamentally rely on each iteration calculating an almost precise full gradient , so that they can apply a surrogate of full gradient descent . Such algorithms can incur higher sample and time complexities ( see Section 2 ) , are less practical than stochastic gradient schemes , require additional restrictions on the resilience factor α , e.g . α < 1/4 ( Su & Xu , 2018 ) , and , critically , have been shown to be vulnerable to recent attacks ( Baruch et al. , 2019 ; Xie et al. , 2020 ) . Attack resilience and experimental validation . There is a growing literature on customized attacks against Byzantine-resilient algorithms , showing that many defenses can be entirely circumvented in real-world scenarios ( Baruch et al. , 2019 ; Xie et al. , 2020 ) . Our algorithm is provably correct against these attacks , a fact we also validate experimentally . We implemented SafeguardSGD to examine its practical performance against a range of prior works ( Xie et al. , 2018b ; Blanchard et al. , 2017 ; Chen et al. , 2017 ; Yin et al. , 2018 ; 2019 ) , and against recent attacks , on the distributed task of training deep neural networks . Our experiments show that SafeguardSGD generally outperforms previous methods in convergence speed and final accuracy , sometimes by a wide accuracy margin . This is true not only against known Byzantine attacks , but also against attack variants we fine-crafted to specifically slow down our algorithm , and against transient node failures . 2 STATEMENT OF OUR THEORETICAL RESULT . We denote by ‖ · ‖ the Euclidean norm and [ n ] := { 1 , 2 , . . . , n } . Given symmetric matrices A , B , we let ‖A‖2 denote the spectral norm of A . We use ⪰ to denote the Loewner ordering , i.e .
A ⪰ B if A − B is positive semi-definite . We denote by λmin ( A ) the minimum eigenvalue of a matrix A . We consider arbitrary d-dimensional non-convex functions f : R^d → R satisfying the following : • f ( x ) is L-Lipschitz smooth : meaning ‖∇f ( x ) − ∇f ( y ) ‖ ≤ L‖x − y‖ for any x , y ∈ R^d ; • f ( x ) is L2-second-order smooth : ‖∇²f ( x ) − ∇²f ( y ) ‖2 ≤ L2 · ‖x − y‖ for any x , y ∈ R^d . For notational simplicity of the proofs , we assume L = L2 = V = 1 . 2 Note that we have also assumed the domain of f is the entire space R^d . If instead there is a compact domain X ⊂ R^d , then one can use projected SGD and re-derive results similar to those of this paper . We choose to present our result in the simplest setting to convey our main ideas . Byzantine non-convex stochastic distributed optimization . We let m be the number of worker machines and assume at most an α fraction of them are Byzantine , for α ∈ [ 0 , 1/2 ) . We denote by good ⊆ [ m ] the set of good ( i.e . non-Byzantine ) machines ; the algorithm does not know good . Assumption 2.1 . In each iteration t , the algorithm ( on the master ) is allowed to specify a point xt and query m machines . Each machine i ∈ [ m ] gives back a vector ∇_{t,i} ∈ R^d satisfying : • If i ∈ good , the stochastic gradient ∇_{t,i} satisfies E [ ∇_{t,i} ] = ∇f ( xt ) and ‖∇f ( xt ) − ∇_{t,i}‖ ≤ V . 3 • If i ∈ [ m ] \ good , then ∇_{t,i} can be arbitrary ( w.l.o.g . we assume ‖∇f ( xt ) − ∇_{t,i}‖ ≤ V ) . 4 Remark 2.2 . For each t and i ∉ good , the vector ∇_{t,i} can be adversarially chosen and may depend on { ∇_{t′,i} } _{t′≤t , i ∈ [ m ] } . In particular , the Byzantine machines can even collude during an iteration . 2 In the literature on convergence analysis for non-convex optimization , the final complexity bounds naturally and polynomially depend on the parameters L , L2 , V , and the way the dependence goes is typically unique ( Allen-Zhu , 2018a ; b ; Fang et al. , 2018 ; Jin et al. , 2019 ) . This is why it suffices to ignore their appearance and only compare the polynomial dependence on ε and d . 3 One can instead assume Pr [ ‖∇f ( xt ) − ∇_{t,i}‖ > t ] ≤ 2 exp ( −t²/2V² ) , and the results of this paper continue to hold up to logarithmic factors . To present the simplest theory , we do not include that version in this paper . We refer interested readers to Jin et al . ( 2019 ) for how to deal with such a probabilistic assumption ( when there is no Byzantine worker ) . 4 This requirement ‖∇f ( xt ) − ∇_{t,i}‖ ≤ V is “ without loss of generality ” because it is trivial for the algorithm to catch bad machines if they output ∇_{t,i} more than 2V away from the majority .

Algorithm 1 SafeguardSGD : perturbed SGD with double safeguard
Input : point x0 ∈ R^d , rate η > 0 , window lengths T ≥ T1 ≥ T0 ≥ 1 , thresholds 𝔗1 > 𝔗0 > 0 ;
1 : good_0 ← [ m ] ;
2 : for t ← 0 to T − 1 do
3 :   last1 ← max { t1 ∈ [ t ] : t1 is a multiple of T1 } ;
4 :   last0 ← max { t0 ∈ [ t ] : t0 is a multiple of T0 } ;
5 :   for each i ∈ good_t do
6 :     receive ∇_{t,i} ∈ R^d from machine i ;
7 :     Ai ← Σ_{k=last1}^{t} ∇_{k,i} / |good_k| and Bi ← Σ_{k=last0}^{t} ∇_{k,i} / |good_k| ;
8 :   Amed ← Ai where i ∈ good_t is any machine s.t . | { j ∈ good_t : ‖Aj − Ai‖ ≤ 𝔗1 } | > m/2 ;
9 :   Bmed ← Bi where i ∈ good_t is any machine s.t . | { j ∈ good_t : ‖Bj − Bi‖ ≤ 𝔗0 } | > m/2 ;
10 :  good_{t+1} ← { i ∈ good_t : ‖Ai − Amed‖ ≤ 2𝔗1 ∧ ‖Bi − Bmed‖ ≤ 2𝔗0 } ;
11 :  x_{t+1} = x_t − η ( ξt + ( 1/|good_t| ) Σ_{i∈good_t} ∇_{t,i} ) , with Gaussian noise ξt ∼ N ( 0 , ν²I ) ;
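Lines 8-10 of Algorithm 1 are the heart of the method : a majority-based concentration check on each worker's accumulated gradient sums . Below is a minimal sketch of that filter in isolation ; the data layout ( a dict from worker id to accumulated sum Ai or Bi ) and the function name are illustrative assumptions , not the authors' code .

```python
import numpy as np

def safeguard_filter(sums, good, m, thresh):
    # sums: dict mapping worker id -> accumulated gradient sum (A_i or B_i).
    # Pick any worker whose sum is within `thresh` of a strict majority of the
    # currently-good workers (lines 8-9), then keep only workers within
    # 2 * thresh of that reference (line 10).
    for i in good:
        close = sum(1 for j in good if np.linalg.norm(sums[j] - sums[i]) <= thresh)
        if close > m / 2:
            med = sums[i]
            return {j for j in good if np.linalg.norm(sums[j] - med) <= 2 * thresh}
    return set(good)  # no majority reference found; leave the good set unchanged
```

Running this filter once per iteration for each of the two window lengths gives the "double safeguard" ; honest workers' sums concentrate around the true gradient sums , so any worker that drifts outside the 2-threshold ball is permanently removed .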
The paper considers stochastic gradient descent convergence in a distributed setting with m workers, where up to an α fraction of the workers can be Byzantine, i.e. perform in an arbitrarily adversarial way. In this setting, the authors develop a variant of SGD which finds a second-order stationary point, prevents Byzantine workers from significantly affecting convergence, and achieves an (α² + 1/m) parallel-time factor compared with the sequential case. The main idea of the algorithm is to measure deviations of gradient updates over a certain number of rounds and detect Byzantine machines, which must have a significant deviation to noticeably affect the algorithm's behavior.
SP:86adaa9dd2414906f708b26e60c86b6e854bb222
Improving VAEs' Robustness to Adversarial Attack
1 INTRODUCTION . Variational autoencoders ( VAEs ) are a powerful approach to learning deep generative models and probabilistic autoencoders ( Kingma & Welling , 2014 ; Rezende et al. , 2014 ) . However , previous work has shown that they are vulnerable to adversarial attacks ( Tabacof et al. , 2016 ; Gondim-Ribeiro et al. , 2018 ; Kos et al. , 2018 ) : an adversary attempts to fool the VAE into producing reconstructions similar to a chosen target by adding distortions to the original input , as shown in Fig 1 . This kind of attack can be harmful when the encoder ' s output is used downstream , as in Xu et al . ( 2017 ) ; Kusner et al . ( 2017 ) ; Theis et al . ( 2017 ) ; Townsend et al . ( 2019 ) ; Ha & Schmidhuber ( 2018 ) ; Higgins et al . ( 2017b ) . As VAEs are often themselves used to protect classifiers from adversarial attack ( Schott et al. , 2019 ; Ghosh et al. , 2019 ) , ensuring VAEs are robust to adversarial attack is an important endeavour . Despite these vulnerabilities , little progress has been made in the literature on how to defend VAEs from such attacks . The aim of this paper is to investigate and introduce possible strategies for defence . We seek to defend VAEs in a manner that maintains reconstruction performance . Further , we are also interested in whether methods for defence increase the robustness of downstream tasks using VAEs . Our first contribution is to show that regularising the variational objective during training can lead to more robust VAEs . Specifically , we leverage ideas from the disentanglement literature ( Mathieu et al. , 2019 ) to improve VAEs ' robustness by learning smoother , more stochastic representations that are less vulnerable to attack . In particular , we show that the total correlation ( TC ) term used to encourage independence between latents of the learned representations ( Kim & Mnih , 2018 ; Chen et al. , 2018 ; Esmaeili et al. , 2019 ) also serves as an effective regulariser for learning robust VAEs . Though a clear improvement over the standard VAE , a severe drawback of this approach is that the gains in robustness are coupled with drops in reconstruction performance , due to the increased regularisation . Furthermore , we find that the achievable robustness with this approach can be limited ( see Fig 1 ) and thus potentially insufficient for particularly sensitive tasks . To address this , we apply TC-regularisation to hierarchical VAEs . By using a richer latent space representation than a standard VAE , the resulting models are not only more robust still to adversarial attacks than single-layer models with TC regularisation , but can also provide reconstructions which are comparable to , and often even better than , those of the standard ( unregularised , single-layer ) VAE . One can thus view VAEs trained with a total correlation penalty as being regularised VAEs . Leveraging the insight that regularised VAEs are robust to adversarial attacks , we develop a class of hierarchical VAEs that are more resilient still .
Our model , Seatbelt-VAE , provides both robustness to adversarial attack and higher-quality reconstructions , relative to single-layer regularised VAEs . We show that regularised hierarchical VAEs , without our proposed extensions , are not robust to adversarial attack . See Figure 1 for a demonstration of how adversarial attacks are highly effective on vanilla VAEs , less effective on regularised VAEs , and close to ineffective on our proposed Seatbelt-VAE . Thus our key contributions are : • A demonstration that regularised VAEs , trained with an up-weighted total correlation , are significantly more robust to adversarial attacks than vanilla VAEs . • We introduce a hierarchical VAE , the Seatbelt-VAE , that provides further robustness to adversarial attack . • New connections between robustness , disentangling and adversarial attack , linked through regularisation . 2 . Background 2.1 . Variational Autoencoders Variational autoencoders ( VAEs ) are a deep extension of factor analysis suitable for high-dimensional data like images ( Kingma & Welling , 2013 ; Rezende et al. , 2014 ) . They have a joint distribution over data x and latent variables z : p_θ ( x , z ) = p_θ ( x|z ) p ( z ) , where p ( z ) = N ( 0 , I ) and p_θ ( x|z ) is an appropriate distribution given the form of the data , the parameters of which are represented by deep nets with parameters θ . As exact inference is intractable for this model , in a VAE we perform amortised stochastic variational inference . By introducing a variational posterior distribution over the latent variables q_φ ( z|x ) = N ( µ_φ ( x ) , Σ_φ ( x ) ) , we can perform gradient ascent on the evidence lower bound ( ELBO )

$$\mathcal{L}(x) = -D_{KL}\big(q_\phi(z|x)\,\|\,p_\theta(x, z)\big) = \mathbb{E}_{q_\phi(z|x)}\big[\log p_\theta(x|z)\big] - D_{KL}\big(q_\phi(z|x)\,\|\,p(z)\big) \le \log p(x)$$

w.r.t . both θ and φ jointly , using the reparameterisation trick to take gradients through Monte Carlo samples from q_φ ( z|x ) . 2.2 . Attacks on VAEs In an adversarial attack , an agent is trying to manipulate the behaviour of some machine learning model towards a goal of their choosing . Commonly in deep learning this would be fooling a classifier into misclassifying an image through adding a small perturbation ( Akhtar & Mian , 2018 ; Gilmer et al. , 2018 ) . Very small changes in input , of little importance to the human eye , can produce large changes in the model ' s output . [ Figure 1 . Latent-space adversarial attacks on CelebA for different models : a ) Vanilla VAE , b ) β-TCVAE , c ) our proposed Seatbelt-VAE . Clockwise within each plot we show the initial input , its reconstruction , the best adversarial input the adversary could produce , the adversarial distortion that was added to make the adversarial input , the adversarial input ' s reconstruction , and the target image . We are trying to make the initial input ( Hugh Jackman ) look like the target ( Anna Wintour ) . You can see that the adversarial reconstruction for the vanilla VAE looks substantially like Wintour , indicating a successful attack . The β-TCVAE adversarial reconstruction does not look like Wintour , so the attack has not been successful , but it is not Jackman either . Our proposed model , Seatbelt-VAE , is sufficiently hard to attack that the output under attack still looks like Jackman , not Wintour . ] Attacks on VAEs have been proposed in Tabacof et al . ( 2016 ) ; Gondim-Ribeiro et al . ( 2018 ) ; Kos et al . ( 2018 ) . The adversary wants draws from the model to be close to a target image when given a distorted image as input .
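As a concrete sketch of what such an attack optimises , below is one common formulation of the latent-space attack in the spirit of Tabacof et al . ( 2016 ) : push the posterior of the distorted input towards the posterior of the target while penalising the distortion size . The encoder interface ( returning a diagonal-Gaussian mean and variance ) and the weight lam are illustrative assumptions ; in practice the distortion delta is optimised by gradient descent on this objective via automatic differentiation .

```python
import numpy as np

def gaussian_kl(mu_q, var_q, mu_p, var_p):
    # KL divergence between diagonal Gaussians N(mu_q, var_q) and N(mu_p, var_p).
    return 0.5 * np.sum(np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

def latent_attack_objective(encoder, x, delta, x_target, lam=1.0):
    # encoder(x) is assumed to return (mean, variance) of q(z|x).
    mu_a, var_a = encoder(x + delta)     # posterior of the distorted input
    mu_t, var_t = encoder(x_target)      # posterior of the target image
    # Match the target's latent code while keeping the distortion small.
    return gaussian_kl(mu_a, var_a, mu_t, var_t) + lam * np.sum(delta ** 2)
```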
See Figure 1.a ) for an example of a successful attack on a vanilla VAE . Here we are trying to turn Hugh Jackman ( Original , top left ) into Anna Wintour ( Target , bottom left ) . We can see that , by adding a well-chosen distortion ( Distortion , bottom right ) , the reconstruction of Jackman goes from looking like a somewhat blurry version of the input ( Original rec. , top middle ) to a somewhat blurry version of Wintour ( Adversarial rec. , bottom middle ) . The adversary has achieved their goal . The current most effective mode of attack on VAEs , the latent space attack ( Tabacof et al. , 2016 ; Gondim-Ribeiro 055 056 057 058 059 060 061 062 063 064 065 066 067 068 069 070 071 072 073 074 075 076 077 078 079 080 081 082 083 084 085 086 087 088 089 090 091 092 093 094 095 096 097 098 099 100 101 102 103 104 105 106 107 108 109 Improving VAEs ’ Robustness to Adversarial Attack correlation penalty as being regularised VAEs . Leveraging the insight that regularised VAEs are robust to adversarial attacks , we develop a class of hierarchical VAEs that are more resilient still . Our model , Seatbelt-VAE , provides both robustness to adversarial attack , and higher quality reconstructions , relative to single layer regularised VAEs . We show that regularised hierarchical VAEs , without our proposed extensions , are not robust to adversarial attack . See Figure 1 for a demonstration of how adversarial attacks are highly effective on vanilla VAEs , less effective on regularised VAEs and close to ineffective on our proposed Seatbelt-VAE . Thus our key contributions are : • A demonstration that regularised VAEs , trained with an up-weighted total correlation , are significantly more robust to adversarial attacks than vanilla VAEs . • We introduce a hierarchical VAE , the Seatbelt-VAE , that provides further robustness to adversarial attack . • New connections between ro stness , disentangling and adversarial attack , linked through regularisation . 2 . Background 2.1 . Variational Autoencoders Variational autoencoders ( VAEs ) are a deep extension of factor analysis suitable for high-dimensional data like images ( Kingma & Welling , 2013 ; Rezende et al. , 2014 ) . They have a joint distribution over data x and latent variables z : p✓ ( x , z ) = p✓ ( x|z ) p ( z ) where p ( z ) = N ( 0 , I ) and p✓ ( x|z ) is an appropriate distribution given the form of the data , the parameters of which are represented by deep nets with parameters ✓ . As exact inference is intractable for this model , in a VAE we perform amortised stochastic variational inference . By introducing a variational posterior distribution over the latent variables q ( z|x ) = N ( µ ( x ) , ⌃ ( x ) ) , we can perform gradient ascent on the evidence lower bound ( ELBO ) L ( x ) = DKL ( q ( z|x ) ||p✓ ( x , z ) ) = Eq ( z|x ) log p✓ ( x|z ) DKL ( q ( z|x ) ||p ( z ) ) log p ( x ) w.r.t.both ✓ and jointly , using the reparameterisation trick to take gradients through Monte Carlo samples from q ( z|x ) . 2.2 . Attacks on VAEs In an adversarial attack an agent is trying to manipulate the behaviour of some machine learning model towards a goal of their choosing . Commonly in deep learning this would be fooling a classifier to misclassify an image through adding a small perturbation ( Akhtar & Mian , 2018 ; Gilmer et al. , 2018 ) . Very small changes in input , of little importance to the human eye , can produce large changes in the model ’ s ( a ) ( b ) ( c ) Figure 1 . 
Figure 1: Adversarial attacks on CelebA for different models. Here we start with the image of Hugh Jackman and introduce an adversary that tries to produce reconstructions that look like Anna Wintour. This is done by applying a distortion (third column) to the original image to produce an adversarial input (second column). We can see that the adversarial reconstruction for the Vanilla VAE looks substantially like Wintour, indicating a successful attack. Adding a regularisation term using the β-TCVAE produces an adversarial reconstruction that does not look like Wintour, but it is also far from a successful reconstruction. The hierarchical version of a β-TCVAE (which we call Seatbelt-VAE) is sufficiently hard to attack that the output under attack still looks like Jackman, not Wintour.

To summarise: We provide insights into what makes VAEs vulnerable to attack and how we might go about defending them. We unearth novel connections between disentanglement and adversarial robustness. We demonstrate that regularised VAEs, trained with an up-weighted total correlation, are much more robust to attacks than vanilla VAEs. Building on this we develop regularised hierarchical VAEs that are more robust still and offer improved reconstructions. Finally, we show that robustness to adversarial attack also confers increased robustness to downstream tasks.
This work builds on the vulnerability of VAEs to adversarial attacks and investigates how training with alternative losses may alleviate this problem, with a specific focus on disentanglement. In particular, it is found that disentanglement constraints may improve robustness to adversarial attacks, to the detriment of performance. In order to get the best of both, the author(s) propose a more flexible (hierarchical) model, trained with the beta-TC penalization on the ELBO. The algorithm, named Seatbelt-VAE, shows improvement over the beta-TC VAE in terms of reconstruction, as well as in terms of adversarial robustness, on several datasets (Chairs, 3D Faces, dSprites).
SP:14e55fd6a62febf4c0884964989ac6eb4ae70f63
Molecule Optimization by Explainable Evolution
1 INTRODUCTION. The space of organic molecules is vast, the size of which exceeds 10^60 (Reymond et al., 2010). Searching over this vast space for molecules of interest is a challenging task in chemistry, material science, and drug discovery, especially given that molecules are desired to meet multiple criteria, e.g., high potency and low toxicity in drug discovery. When human experts optimize molecules for better molecular properties, they will first come up with rationales within desirable molecules. Typically, the rationales are subgraphs in a molecule deemed to contribute primarily to certain desired molecular properties. Once rationales are identified, chemists will design new molecules on top of the rationales, hoping that the desired properties of the new molecules will be further enhanced due to the existence of the rationales and changes to the non-rationale parts. The cycle of identifying molecular rationales and redesigning new hypothetical molecules is carried on until molecules that meet certain property criteria are discovered.

In this paper, we develop a novel algorithm that mimics the process of molecule optimization by human experts. Our algorithm finds new molecules with better properties via an EM-like explainable evolutionary process (Figure 1). The algorithm alternates between two stages. During the first stage, we use an explainable local search method to identify rationales within high-quality molecules that account for their high property scores. During the second stage, we use a conditional generative model to explore the larger space of molecules containing useful rationales. Our method is novel in that we use explainable models to help us exploit useful patterns in the molecules, yet leverage generative models to help us explore the molecule landscape. Compared to existing methods that directly learn a generative model using Reinforcement Learning or perform continuous optimization in the latent space of molecules (Olivecrona et al., 2017; You et al., 2018a; Dai et al., 2018b), our method is more sample-efficient and can generate more novel and unique molecules that meet the criteria.

We evaluate our algorithm against several state-of-the-art methods on a molecule optimization task involving multiple properties. Compared with baselines, our algorithm is able to increase the success rate by 50% and novelty by 14%, while having a competitive diversity. We further propose a new metric, the QNU score, to jointly consider all three aspects, and show that we achieve a score of 52.7% compared with 29.5% by the best baseline. We also ask experienced chemists to evaluate the top-50 generated molecules and find that 30 of them are as good as existing ones.

(∗Correspondence to: Binghong Chen <binghong@gatech.edu>. ∗ indicates equal contribution. Source code at https://github.com/binghong-ml/MolEvol.)

The main contributions of this paper are summarized below:
• We propose a novel EM-like evolution-by-explanation algorithm for molecule optimization;
• We present a novel, principled, explainable graph model based on an information-theoretic approach to extract subgraphs essential for maintaining certain desired properties;
• Our approach outperforms existing state-of-the-art methods by a large margin in terms of success rate (50% better), novelty (14% better), and an overall metric (79% better) on a real-world multi-property optimization task.
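A high-level sketch of the alternating procedure described above, in Python pseudocode; `extract_rationale` and `complete_molecule` are hypothetical stand-ins for the explainable local search and the conditional generative model, and the population sizes are arbitrary choices, not the paper's settings.

```python
def molecule_evolution(seed_molecules, f, n_rounds=10, n_keep=100, n_samples=1000):
    """EM-like loop: explain high-scoring molecules, then regenerate from rationales."""
    population = list(seed_molecules)
    for _ in range(n_rounds):
        # Keep the current highest-scoring molecules under the property scorer f
        population.sort(key=f, reverse=True)
        elites = population[:n_keep]
        # Explanation stage: extract rationale subgraphs behind the high scores
        rationales = [extract_rationale(g, f) for g in elites]  # hypothetical helper
        # Generation stage: complete each rationale into full candidate molecules
        candidates = [complete_molecule(s)                      # hypothetical helper
                      for s in rationales
                      for _ in range(n_samples // n_keep)]
        population = elites + candidates
    return max(population, key=f)
```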
2 RELATED WORK. There has been a surge of interest in using machine learning to discover novel molecules with certain properties in recent years. Most of the existing work defines a generative model for either SMILES strings (Weininger, 1988) or molecular graphs, and uses Reinforcement Learning algorithms to optimize the properties of the generated molecules (Segler et al., 2018; Olivecrona et al., 2017; Guimaraes et al., 2017; You et al., 2018a; Popova et al., 2018; 2019; Samanta et al., 2019; Zhou et al., 2019; De Cao & Kipf, 2018; Kearnes et al., 2019; Shi et al., 2020; Jin et al., 2020). Others optimize the continuous representation of molecules in a latent space learned by variants of variational autoencoders (Kusner et al., 2017; Dai et al., 2018b; Jin et al., 2018; Gómez-Bombarelli et al., 2018; Kang & Cho, 2018; Liu et al., 2018; Kajino, 2019). More recent work explores evolutionary algorithms (Nigam et al., 2020; Leguy et al., 2020; Winter et al., 2019), or focuses on finding high-quality molecules with synthesis paths (Bradshaw et al., 2019; Korovina et al., 2020; Gottipati et al., 2020). Most similar to our approach is RationaleRL (Jin et al., 2020), which extracts subgraphs from seed molecules using Monte Carlo Tree Search (MCTS) and generates full molecules by completing the subgraphs. Compared with previous work, our approach is the first to incorporate an explainable model in the iterative search process.

Existing work on explainable models approaches the problem from three directions. The first line of work uses gradients of the outputs with respect to the inputs to identify the salient features in the inputs (Simonyan et al., 2013; Springenberg et al., 2014; Baehrens et al., 2010); the second line of work approximates the model with simple interpretable models, such as locally additive models (Bach et al., 2015; Kindermans et al., 2016; Ribeiro et al., 2016; Lundberg & Lee, 2017; Shrikumar et al., 2017); the third line of work defines input pattern selection operators, such that the outputs of the model based on the selected input patterns have high mutual information with the original model outputs (Chen et al., 2018; Ying et al., 2019). Our explainable model is different from GNNExplainer (Ying et al., 2019) in that we optimize the discrete subgraph structure with a learned variational predictor, instead of directly feeding continuous edge masking into the target model.

3 PROBLEM SETTING. In this paper, we study the problem of discovering molecules g from the molecular space G with a high property score, measured by a scoring function f. Usually, there is a set of seed molecules G0 ⊂ G from experts with high scores to start with. More formally, the problem can be stated as:

Molecule Optimization. Given a scoring function f : G → [0, 1] and a set of seed molecules G0 ⊂ G, the goal is to learn a molecule generative model p(g) such that the expected score of the generated molecules is maximized, i.e.,

max_{p(·)} E_{g∼p(·)}[f(g)] = ∫_{g∈G} p(g) f(g) dg    (1)

To prevent the model p(g) from generating a small set of fixed molecules with high scores, we additionally require the learned distribution to be both novel and diverse, i.e., to generate molecules that are dissimilar to the set of reference molecules (a subset of G0) and to each other.
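Since the integral in Eq (1) is intractable for any nontrivial generative model, in practice one works with a Monte Carlo estimate; a minimal sketch, where `sample_molecule` is a placeholder for drawing g ∼ p(·) and each call to f is one query to the scoring function:

```python
def estimate_expected_score(sample_molecule, f, n=256):
    """Monte Carlo estimate of E_{g ~ p}[f(g)] from Eq (1)."""
    mols = [sample_molecule() for _ in range(n)]   # g ~ p(.)
    return sum(f(g) for g in mols) / n             # each f(g) is one oracle query
```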
The molecule optimization problem in Eq (1) is combinatorial in nature, which poses a significant challenge. To mimic the scientific discovery process, we allow the algorithm to query f on new molecules under a querying budget. Examples of some well-known scoring functions include the QED score measuring drug-likeness (Bickerton et al., 2012), the SA score measuring synthetic accessibility (Ertl & Schuffenhauer, 2009), the TPSA score measuring the ability to permeate cells (Prasanna & Doerksen, 2009), etc. The scoring function is general and could also encode multi-property objectives (Olivecrona et al., 2017; Brown et al., 2019). Optimizing multiple properties together suffers from the sparsity of high scores, a scenario which is shown to be more challenging than single-property optimization (Jin et al., 2020).

When experts are optimizing a molecular property, they will first look for substructures that result in the formation of that property, and use them as the foundation for building novel molecules. These subgraphs are called rationales (examples in Figure 1). The set of rationales is formally defined as

S = { s | ∃ g ∈ G s.t. s is a subgraph of g }.    (2)

4 OUR FRAMEWORK. Our novel framework for optimizing molecular properties with generative models consists of a modeling component and an algorithm component. In our modeling component, we propose a rationale-based hierarchical generative model for p(g), which first generates rationales and then completes molecules. In our algorithm component, we design an alternating optimization procedure that interleaves between rationale distribution optimization and molecule generative model optimization. Furthermore, we develop a novel explainable graph model to effectively carry out the rationale model optimization. Next, we start by describing our hierarchical generative model.

4.1 RATIONALE-BASED HIERARCHICAL GENERATIVE MODEL. To tackle the challenging search problem, we develop a hierarchical generative model that mimics the process of molecule optimization by human experts. In our model, we first sample rationales s from a distribution p(s), and then molecules g are generated according to the conditional distribution p_θ(g|s). More specifically, our overall molecular generative model p_θ(g) can be defined as

p_θ(g) = ∫_{s∈S} p(s) p_θ(g|s) ds,    (3)

where θ denotes the parameters of the conditional generative model and p(s) is the latent rationale distribution. Here p_θ(g|s) is a graph completion model from rationale s. The architecture of p_θ(g|s) can be arbitrary. In this work, we use a latent variable model with a Gaussian prior p(z),

p_θ(g|s) = ∫_z p(z) p_θ(g|s, z) dz,    (4)

where p_θ(g|s, z) is a variant of GraphRNN (You et al., 2018b; Liu et al., 2018) that conditions the graph generation on subgraphs. As part of the initialization, p_θ(g|s) is first pretrained on ChEMBL (Gaulton et al., 2017), a drug-like molecule dataset, in the same fashion as the variational autoencoder (Kingma & Welling, 2013), where the encoder is a standard GCN with atoms as vertices and bonds as edges. Note that, unlike p(z), which is a fixed prior, p(s) is updated in each round. Since representing a distribution on S is difficult, we use particles to represent p(s) in the algorithm.
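Sampling from the hierarchical model in Eqs (3)-(4) is ancestral: draw a rationale from the particle-based p(s), draw z from the Gaussian prior, then decode. A minimal sketch, with `decoder` standing in for the conditional graph generator p_θ(g|s, z); the representation of p(s) as a weighted list of particles is an assumption of this sketch.

```python
import random
import torch

def sample_from_hierarchical_model(rationale_particles, weights, decoder, z_dim):
    """Ancestral sample from p_theta(g) = E_{s~p(s)} E_{z~N(0,I)} [p_theta(g|s,z)]."""
    # p(s) is represented by weighted particles {(s_i, w_i)}
    s = random.choices(rationale_particles, weights=weights, k=1)[0]
    z = torch.randn(z_dim)   # z ~ p(z) = N(0, I)
    return decoder(s, z)     # g ~ p_theta(g|s,z): graph completion from rationale s
```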
In order to improve the diversity of the generated molecules, we will also regularize the entropy of the rationale distribution p(s), leading to the following diversity-promoting objective function

J(θ, p(s)) = E_{g∼p_θ(·)}[f(g)] + λ · H[p(s)],    (5)

with a hyperparameter λ > 0 controlling the strength of the regularization.
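When p(s) is represented by weighted particles, the objective in Eq (5) can be estimated by combining a Monte Carlo estimate of the expected score with the entropy of the particle weights; treating H[p(s)] as the entropy of the particle distribution is an assumption of this sketch rather than the paper's exact estimator.

```python
import math

def objective_J(sample_scores, particle_weights, lam=0.1):
    """Estimate J(theta, p(s)) = E_{g~p_theta}[f(g)] + lam * H[p(s)] from Eq (5)."""
    expected_score = sum(sample_scores) / len(sample_scores)   # scores f(g) of samples
    entropy = -sum(w * math.log(w) for w in particle_weights if w > 0)
    return expected_score + lam * entropy
```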
The paper tackles the problem of molecule property optimisation. To this end, the authors propose an alternating approach consisting of an explainer model and a molecule completion model. The explainer model takes a complete molecule as input and outputs a subgraph that represents the part that contributes most to property prediction. Then, the molecule completion model uses the subgraphs to sample a complete graph that can maximise the property scores. The loss function of the molecule completion model directly maximises the properties; since this objective is non-differentiable, the authors use a REINFORCE algorithm for optimisation.
SP:cf9319c2a107d0d34ff04da0f53201f3cdff4c24