Dataset fields: source (string, 273 to 149k characters), source_labels (sequence), paper_id (string, 9 to 11 characters), target (string, 18 to 668 characters).
Generative Adversarial Networks (GANs) have proven to be a powerful framework for learning to draw samples from complex distributions. However, GANs are also notoriously difficult to train, with mode collapse and oscillations a common problem. We hypothesize that this is at least in part due to the evolution of the generator distribution and the catastrophic forgetting tendency of neural networks, which leads to the discriminator losing the ability to remember synthesized samples from previous instantiations of the generator. Recognizing this, our contributions are twofold. First, we show that GAN training makes for a more interesting and realistic benchmark for evaluating continual learning methods than some of the more canonical datasets. Second, we propose leveraging continual learning techniques to augment the discriminator, preserving its ability to recognize previous generator samples. We show that the resulting methods add only a light amount of computation, involve minimal changes to the model, and result in better overall performance on the examined image and text generation tasks. Generative Adversarial Networks BID6 (GANs) are a popular framework for modeling draws from complex distributions, demonstrating success in a wide variety of settings, for example image synthesis BID14 and language modeling. In the GAN setup, two agents, the discriminator and the generator (each usually a neural network), are pitted against each other. The generator learns a mapping from an easy-to-sample latent space to a distribution in the data space, which ideally matches the real data's distribution. At the same time, the discriminator aims to distinguish the generator's synthesized samples from the real data samples. When trained successfully, GANs yield impressive results; in the image domain, for example, synthesized images from GAN models are significantly sharper and more realistic than those of other classes of models BID16. On the other hand, GAN training can be notoriously finicky. One particularly well-known and common failure mode is mode collapse BID0 BID35: instead of producing samples sufficiently representing the true data distribution, the generator maps the entire latent space to a limited subset of the real data space. When mode collapse occurs, the generator does not "converge," in the conventional sense, to a stationary distribution. Rather, because the discriminator can easily learn to recognize a mode-collapsed set of samples and the generator is optimized to avoid the discriminator's detection, the two end up playing a never-ending game of cat and mouse: the generator meanders towards regions in the data space the discriminator thinks are real (likely near where the real data lie) while the discriminator chases after it. Interestingly though, if generated samples are plotted through time (as in FIG0), it appears that the generator can revisit previously collapsed modes. At first, this may seem odd. The discriminator was ostensibly trained to recognize that mode in a previous iteration and did so well enough to push the generator away from generating those samples. Why has the discriminator seemingly lost this ability? We conjecture that this oscillation phenomenon is enabled by catastrophic forgetting BID20 BID30: neural networks have a well-known tendency to forget how to complete old tasks while learning new ones. In most GAN models, the discriminator is a binary classifier, with the two classes being the real data and the generator's outputs.
Implicit to the training of a standard classifier is the assumption that the data are drawn independently and identically distributed (i.i.d.). Importantly, this assumption does not hold true in GANs: the distribution of the generator class (and thus the discriminator's training data) evolves over time. Moreover, these changes in the generator's distribution are adversarial, designed specifically to deteriorate discriminator performance on the fake class as much as possible. Thus, the alternating training procedure of GANs in actuality corresponds to the discriminator learning tasks sequentially, where each task corresponds to recognizing samples from the generator at that particular point in time. Without any measures to prevent catastrophic forgetting, the discriminator's ability to recognize fake samples from previous iterations will be clobbered by subsequent gradient updates, allowing a mode-collapsed generator to revisit old modes if training runs long enough. Given this tendency, a collapsed generator can wander indefinitely without ever learning the true distribution. With this perspective in mind, we cast training the GAN discriminator as a continual learning problem, leading to two main contributions. (i) While developing systems that learn tasks in a sequential manner without suffering from catastrophic forgetting has become a popular direction of research, current benchmarks have recently come under scrutiny as being unrepresentative of the fundamental challenges of continual learning BID3. We argue that GAN training is a more realistic setting, and one that current methods tend to fail on. (ii) Such a reframing of the GAN problem allows us to leverage relevant methods to better match the dynamics of training the min-max objective. In particular, we build upon the recently proposed elastic weight consolidation BID15 and intelligent synapses BID39. By preserving the discriminator's ability to identify previous generator samples, this memory prevents the generator from simply revisiting past distributions. Adapting the GAN training procedure to account for catastrophic forgetting provides an improvement in GAN performance for little computational cost and without the need to train additional networks. Experiments on CelebA and CIFAR10 image generation and COCO Captions text generation show that discriminator continual learning leads to better generations. Consider a distribution p_real(x), from which we have data samples D_real. Seeking a mechanism to draw samples from this distribution, we learn a mapping from an easy-to-sample latent distribution p(z) to a data distribution p_gen(x), which we want to match p_real(x). This mapping is parameterized as a neural network G_φ(z) with parameters φ, termed the generator. The synthesized data are drawn x = G_φ(z), with z ∼ p(z). The form of p_gen(x) is not explicitly assumed or learned; rather, we learn to draw samples from p_gen(x). To provide feedback to G_φ(z), we simultaneously learn a binary classifier that aims to distinguish synthesized samples D_gen drawn from p_gen(x) from the true samples D_real. We also parameterize this classifier as a neural network D_θ(x) ∈ [0, 1] with parameters θ, with D_θ(x) termed the discriminator.
By incentivizing the generator to fool the discriminator into thinking its generations are actually from the true data, we hope to learn G_φ(z) such that p_gen(x) approaches p_real(x). These two opposing goals for the generator and discriminator are usually formulated as the following min-max objective: min_φ max_θ L_GAN(θ, φ) = E_{x∼p_real(x)}[log D_θ(x)] + E_{z∼p(z)}[log(1 − D_θ(G_φ(z)))]. At each iteration t, we sample from p_gen(x), yielding generated data D_gen_t. These generated samples, along with samples from D_real, are then passed to the discriminator. A gradient descent optimizer nudges θ so that the discriminator takes a step towards maximizing L_GAN(θ, φ). Parameters φ are updated similarly, but to minimize L_GAN(θ, φ). These updates to θ and φ take place in an alternating fashion. The expectations are approximated using samples from the respective distributions, and therefore learning only requires observed samples D_real and samples from p_gen(x). The updates to G_φ(z) mean that p_gen(x) changes as a function of t, perhaps substantially; the discriminator at iteration t is thus trained against the current p_gen(x) rather than against earlier versions of p_gen(x). Because of the catastrophic forgetting effect of neural networks, the ability of D_θ(x) to recognize these previous distributions is eventually lost in the pursuit of maximizing L_GAN(θ, φ) with respect to only D_gen_t. This opens the possibility that the generator goes back to generating samples the discriminator had previously learned (and then forgot) to recognize, leading to unstable mode-collapsed oscillations that hamper GAN training (as in FIG0). Recognizing this problem, we propose that the discriminator should be trained with the temporal component of p_gen(x) in mind. 3.1 CLASSIC CONTINUAL LEARNING Catastrophic forgetting has long been known to be a problem with neural networks trained on a series of tasks BID20 BID30. While there are many approaches to addressing catastrophic forgetting, here we primarily focus on elastic weight consolidation (EWC) and intelligent synapses (IS). These are meant to illustrate the potential of catastrophic forgetting mitigation to improve GAN learning, with the expectation that this opens up the possibility of other such methods to significantly improve GAN training, at low additional computational cost. To derive the EWC loss, BID15 frames training a model as finding the most probable values of the parameters θ given the data D. For two tasks, the data are assumed partitioned into independent sets according to the task, and the posterior for Task 1 is approximated as a Gaussian with mean centered on the optimal parameters for Task 1, θ*_1, and diagonal precision given by the diagonal of the Fisher information matrix F_1 at θ*_1. This gives the EWC loss the following form: L(θ) = L_2(θ) + L_EWC(θ), with L_EWC(θ) = λ Σ_i (F_{1,i}/2)(θ_i − θ*_{1,i})², where L_2(θ) = log p(D_2|θ) is the loss for Task 2 individually, λ is a hyperparameter representing the importance of Task 1 relative to Task 2, F_{1,i} is the i-th diagonal element of F_1, i is the parameter index, and L(θ) is the new loss to optimize while learning Task 2. Intuitively, the EWC loss prevents the model from straying too far away from the parameters important for Task 1 while leaving less crucial parameters free to model Task 2. Subsequent tasks result in additional L_EWC(θ) terms added to the loss for each previous task. By protecting the parameters deemed important for prior tasks, EWC as a regularization term allows a single neural network (assuming sufficient parameters and capacity) to learn new tasks in a sequential fashion, without forgetting how to perform previous tasks.
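For concreteness, the EWC penalty above can be implemented in a few lines. The following is a minimal PyTorch sketch using the common squared-gradient approximation of the Fisher diagonal; the function names are illustrative and this is not the authors' code.

```python
import torch

def fisher_diagonal(model, task_loss):
    """Diagonal Fisher approximation: squared gradients of the task loss at the current parameters."""
    grads = torch.autograd.grad(task_loss, model.parameters(), retain_graph=True)
    return [g.detach() ** 2 for g in grads]

def ewc_penalty(model, theta_star, fisher, lam):
    """lam * sum_i (F_i / 2) * (theta_i - theta*_i)^2 for one stored task."""
    penalty = 0.0
    for p, t, f in zip(model.parameters(), theta_star, fisher):
        penalty = penalty + (f * (p - t) ** 2).sum()
    return 0.5 * lam * penalty

# Usage sketch: at the end of a task, snapshot the parameters and their Fisher diagonal,
#   theta_star = [p.detach().clone() for p in model.parameters()]
#   fisher = fisher_diagonal(model, task_loss)
# and add ewc_penalty(model, theta_star, fisher, lam) to the loss of the next task.
```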
While EWC makes a point estimate of how essential each parameter is at the end of a task, IS BID39 protects the parameters according to their importance along the task's entire training trajectory. Termed synapses, each parameter θ_i of the neural network is awarded an importance measure ω_{1,i} based on how much it reduced the loss while learning Task 1. Given a loss gradient g(t) = ∇_θ L(θ)|_{θ=θ_t} at time t, the total change in loss during the training of Task 1 is the sum of differential changes in loss over the training trajectory. With the assumption that the parameters θ are independent, we have ∫_{t_0}^{t_1} g(t) · θ'(t) dt = Σ_i ∫_{t_0}^{t_1} g_i(t) θ'_i(t) dt ≜ −Σ_i ω_{1,i}, where θ'(t) = dθ/dt and (t_0, t_1) are the start and finish of Task 1, respectively. Note the added negative sign, as importance is associated with parameters that decrease the loss. The importance measure ω_{1,i} can now be used to introduce a regularization term that protects parameters important for Task 1 from large parameter updates, just as the Fisher information matrix diagonal terms F_{1,i} were used in EWC. This results in an IS loss very reminiscent in form¹: L(θ) = L_2(θ) + L_IS(θ), with L_IS(θ) = λ Σ_i ω_{1,i}(θ_i − θ*_{1,i})²; a brief sketch of accumulating ω online is given below. 3.2 GAN CONTINUAL LEARNING The traditional continual learning methods are designed for certain canonical benchmarks, commonly consisting of a small number of clearly defined tasks (e.g., classification datasets in sequence). In GANs, the discriminator at iteration t is trained on the dataset D_t = {D_real, D_gen_t}. However, because of the evolution of the generator, the distribution p_gen(x) from which D_gen_t comes changes over time. This violates the assumption that the data presented to the discriminator are i.i.d. across iterations. As such, we argue that different instances in time of the generator should be viewed as separate tasks. Specifically, in the parlance of continual learning, the training data are to be regarded as the sequence of tasks D_1, D_2, ..., with D_t = {D_real, D_gen_t}. Thus motivated, we would like to apply continual learning methods to the discriminator, but doing so is not straightforward for the following reasons:
• Definition of a task: EWC and IS were originally proposed for discrete, well-defined tasks. For example, BID15 applied EWC to a DQN BID25 learning to play ten Atari games sequentially, with each game being a clear, independent task. For GAN, there is no such precise definition as to what constitutes a "task," and as discriminators are not typically trained to convergence at every iteration, it is also unclear how long a task should be.
• Computational memory: While Equations 2 and 4 are for two tasks, they can be extended to K tasks by adding a term L_EWC or L_IS for each of the K − 1 prior tasks. As each term L_EWC or L_IS requires saving both a historical reference term θ*_k and either F_k or ω_k (all of which are the same size as the model parameters θ) for each task k, employing these techniques naively quickly becomes impractical for bigger models when K gets large, especially if K is set to the number of training iterations T.
• Continual not learning: Early iterations of the discriminator are likely to be non-optimal, and without a forgetting mechanism, EWC and IS may forever lock the discriminator to a poor initialization. Additionally, the unconstrained addition of a large number of terms L_EWC or L_IS will cause the continual learning regularization term to grow unbounded, which can disincentivize any further changes in θ.
To address these issues, we build upon the aforementioned continual learning techniques, and propose several changes.
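The IS importance can be accumulated online during training. Below is a rough PyTorch sketch that follows the description above with the (∆)² normalization of BID39 omitted, as noted in footnote 1; class and method names are illustrative, not the authors' code.

```python
import torch

class ImportanceTracker:
    """Accumulates the IS importance omega_i ~= -sum_t g_i(t) * (theta_i(t+1) - theta_i(t))."""
    def __init__(self, params):
        self.omega = [torch.zeros_like(p) for p in params]
        self.prev = [p.detach().clone() for p in params]

    def update(self, params):
        # Call immediately after optimizer.step(); p.grad still holds the last gradient g(t).
        for w, prev, p in zip(self.omega, self.prev, params):
            if p.grad is not None:
                # Negative sign: parameters that moved to decrease the loss earn positive importance.
                w -= p.grad.detach() * (p.detach() - prev)
            prev.copy_(p.detach())
```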
Number of tasks as a rate: We choose the total number of tasks K as a function of a constant rate α, which denotes the number of iterations before the end of a task, as opposed to arbitrarily dividing the GAN training iterations into some set number of segments. Given T training iterations, a rate α yields K = T/α tasks. Online memory: Seeking a way to avoid storing extra θ*_k, F_k, or ω_k, we observe that the sum of two or more quadratic forms is another quadratic, which gives the classifier loss with continual learning the following form for the (k + 1)-th task: L(θ) = L_{k+1}(θ) + L_CL(θ), where, up to an additive constant, L_CL(θ) = (λ/2) Σ_i [S_{k,i} θ_i² − 2 P_{k,i} θ_i], with S_{k,i} = Σ_{κ=1}^{k} Q_{κ,i}, P_{k,i} = Σ_{κ=1}^{k} Q_{κ,i} θ*_{κ,i}, and Q_{κ,i} is either F_{κ,i} or ω_{κ,i}, depending on the method. We name models with EWC and IS augmentations EWC-GAN and IS-GAN, respectively.¹ Controlled forgetting: To provide a mechanism for forgetting earlier non-optimal versions of the discriminator and to keep L_CL bounded, we add a discount factor γ, so that S_{k,i} = Σ_{κ=1}^{k} γ^{k−κ} Q_{κ,i} and P_{k,i} = Σ_{κ=1}^{k} γ^{k−κ} Q_{κ,i} θ*_{κ,i}. Together, α and γ determine how far into the past the discriminator remembers previous generator distributions, and λ controls how important memory is relative to the discriminator loss. Note that the terms S_k and P_k can be updated every α steps in an online fashion: S_{k+1,i} = γ S_{k,i} + Q_{k+1,i} and P_{k+1,i} = γ P_{k,i} + Q_{k+1,i} θ*_{k+1,i}. This allows the EWC or IS loss to be applied without necessitating storing either Q_k or θ*_k for every task k, which would quickly become too costly to be practical. Only a single variable to store a running average is required for each of S_k and P_k, making this method space efficient. Augmenting the discriminator with the continual learning loss, the GAN objective becomes min_φ max_θ L_GAN(θ, φ) − L_CL(θ). Note that the training of the generator remains the same; full algorithms are in Appendix A. Here we have shown two methods to mitigate catastrophic forgetting for the original GAN; however, the proposed framework is applicable to almost all of the wide range of GAN setups.

¹ BID39 normalizes the importance as ω_{1,i}/((∆_{1,i})² + ξ), where ∆_{1,i} = θ_{1,i} − θ_{0,i} and ξ is a small number for numerical stability. We however found that the inclusion of (∆_{1,i})² can lead to the loss exploding and then collapsing as the number of tasks increases, and so omit it. We also change the hyperparameter c into

Continual learning in GANs: There has been previous work investigating continual learning within the context of GANs. Improved GAN BID32 introduced historical averaging, which regularizes the model with a running average of parameters of the most recent iterations. Simulated+Unsupervised training BID34 proposed replacing half of each minibatch with previous generator samples during training of the discriminator, as a generated sample at any point in time should always be considered fake. However, such an approach necessitates a historical buffer of samples and halves the number of current samples that can be considered. Continual Learning GAN BID33 applied EWC to GAN, as we have, but used it in the context of a class-conditioned generator that learns classes sequentially, as opposed to all at once, as we propose. BID36 independently reached a similar conclusion on catastrophic forgetting in GANs, but focused on gradient penalties and momentum on toy problems. The heart of continual learning is distilling a network's knowledge through time into a single network, a temporal version of the ensemble described in BID10.
There have been several proposed models utilizing multiple generators BID4 or multiple discriminators BID2 BID27, while Bayesian GAN BID31 considered distributions on the parameters of both networks, but none of these consider time as the source of the ensemble. Unrolled GAN BID22 considered multiple discriminators "unrolled" through time, which is similar to our method, as the continual learning losses also utilize historical instances of discriminators. However, both EWC-GAN and IS-GAN preserve the parameters important for prior discriminator performance, as opposed to requiring backpropagation of generator samples through multiple networks, making them easier to implement and train. GAN convergence: While GAN convergence is not the focus of this paper, convergence does similarly avoid mode collapse, and there are a number of works on the topic BID9 BID37 BID26 BID21. From the perspective of BID9, EWC or IS regularization in GAN can be viewed as achieving convergence by slowing the discriminator, but per parameter, as opposed to a slower global learning rate. 5.1 DISCRIMINATOR CATASTROPHIC FORGETTING While FIG0 implies catastrophic forgetting in a GAN discriminator, we can show this concretely. To do so, we first train a DCGAN on the MNIST dataset. Since the generator is capable of generating an arbitrary number of samples at any point, we can randomly draw 70000 samples to comprise a new, "fake MNIST" dataset at any time. By doing this at regular intervals, we create a series of datasets {D_gen_t} during the training of the DCGAN (a sketch of this snapshot procedure is given below). We then reinitialize the discriminator and train it to convergence on each D_gen_t in sequence; classification accuracy on earlier datasets drops after fine-tuning on the current D_gen_t. This is unsurprising, as p_gen(x) has evolved specifically to deteriorate discriminator performance. While there is still a dropoff with EWC, forgetting is less severe. While the training outlined above is not what is typical for GAN, we choose this set-up as it closely mirrors the continual learning literature. With recent criticisms of some common continual learning benchmarks as either being too easy or missing the point of continual learning BID3, we propose GAN as a new benchmark providing a more realistic setting. From FIG3, it is clear that while EWC certainly helps, there is still much room to improve with new continual learning methods. However, the merits of GAN as a continual learning benchmark go beyond difficulty. While it is unclear why one would ever use a single model to classify successive random permutations of MNIST, many real-world settings exist where the data distribution is slowly evolving. For such models, we would like to be able to update the deployed model without forgetting previously learned performance, especially when data collection is expensive and thus done in bulk sometime before deployment. For example, autonomous vehicles BID12 will eventually encounter unseen car models or obstacles, and automated screening systems at airport checkpoints BID18 will have to deal with evolving bags, passenger belongings, and threats. In both cases, sustained effectiveness requires a way to appropriately and efficiently update the models for new data, or risk obsolescence leading to dangerous blindspots. Many machine learning datasets represent single-time snapshots of the data distribution, and current continual learning benchmarks fail to capture the slow drift of real-world data.
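Producing the "fake MNIST" benchmark only requires periodically freezing the generator and sampling a full dataset from it. A rough sketch, assuming a trained DCGAN generator G with a 100-dimensional latent; names and defaults are illustrative, not the authors' code.

```python
import torch

@torch.no_grad()
def snapshot_fake_dataset(G, n=70000, z_dim=100, batch=1000, device="cuda"):
    """Draw a fixed 'fake MNIST' dataset from the generator at its current state."""
    G.eval()
    chunks = []
    for _ in range(n // batch):
        z = torch.randn(batch, z_dim, device=device)
        chunks.append(G(z).cpu())
    return torch.cat(chunks)

# e.g., at regular intervals during DCGAN training:
#   fake_datasets.append(snapshot_fake_dataset(G))
# A freshly initialized discriminator is then trained on each dataset in sequence, and its
# real-vs-fake accuracy on earlier datasets measures how much it has forgotten.
```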
The evolution of GAN-synthesized samples represents an opportunity to generate an unlimited number of smoothly evolving datasets for such experiments. We note that while the setup used here is for binary real/fake classification, one could also conceivably use a conditional GAN BID23 to generate an evolving multi-class classification dataset. We leave this exploration for future work. We show results on a toy dataset consisting of a mixture of eight Gaussians, as in the example in FIG0. Following the setup of BID22, the real data are evenly distributed among eight 2-dimensional Gaussian distributions arranged in a circle of radius 2, each with covariance 0.02I (see Figure 4; a sketch of this sampler is given below). We evaluate our model with Inception Score (ICP) BID32, which gives a rough measure of diversity and quality of samples; higher scores imply better performance, with the true data resulting in a score of around 7.870. For this simple dataset, since we know the true data distribution, we also calculate the symmetric Kullback-Leibler divergence (Sym-KL); lower scores mean the generated samples are closer to the true data. We show computation time, measured in number of training iterations per second (Iter/s), averaged over the full training of a model on a single Nvidia Titan X (Pascal) GPU. Each model was run 10 times, with the mean and standard deviation of each performance metric at the end of 25K iterations reported in TAB1. The performance of EWC-GAN and IS-GAN was evaluated for a number of hyperparameter settings. We compare our results against a vanilla GAN, as well as a state-of-the-art GAN with spectral normalization (SN) BID24 applied to the discriminator. As spectral normalization augments the discriminator loss in a way different from continual learning, we can combine the two methods; this variant is also shown. Note that a discounted version of discriminator historical averaging BID32 can be recovered from the EWC and IS losses if the task rate α = 1 and Q_{k,i} = 1 for all i and k, a poor approximation to both the Fisher information matrix diagonal and the importance measure. If we also set the historical reference term θ*_k and the discount factor γ to zero, then the EWC and IS losses become ℓ2 weight regularization. These two special cases are also included for comparison. We observe that augmenting GAN models with EWC and IS consistently results in generators that better match the true distribution, both qualitatively and quantitatively, for a wide range of hyperparameter settings. EWC-GAN and IS-GAN result in a better ICP and FID than ℓ2 weight regularization and discounted historical averaging, showing the value of prioritizing the protection of important parameters rather than treating all parameters equally. EWC-GAN and IS-GAN also outperform a state-of-the-art method, SN-GAN. In terms of training time, updating the EWC loss requires forward propagating a new minibatch through the discriminator and updating S and P, but even if this is done at every step (α = 1), the resulting algorithm is only slightly slower than SN-GAN. Moreover, doing so is unnecessary, as higher values of α also provide strong performance for a much smaller time penalty. Combining EWC with SN-GAN leads to even better results, showing that the two methods can complement each other. IS-GAN can also be successfully combined with SN-GAN, but it is slower than EWC-GAN as it requires tracking the trajectory of parameters at each step. Sample generation evolution over time is shown in Figure 4 of Appendix C.
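The toy distribution above is easy to reproduce. A short NumPy sketch following the stated setup (eight 2-dimensional Gaussians on a circle of radius 2, each with covariance 0.02·I); a sketch under those assumptions rather than the authors' exact data pipeline.

```python
import numpy as np

def sample_eight_gaussians(n, radius=2.0, var=0.02):
    """Sample n points from the mixture of eight Gaussians arranged on a circle."""
    angles = 2.0 * np.pi * np.arange(8) / 8
    centers = radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)  # (8, 2)
    idx = np.random.randint(0, 8, size=n)                                  # uniform mixture weights
    return centers[idx] + np.sqrt(var) * np.random.randn(n, 2)
```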
Since EWC-GAN achieves similar performance to IS-GAN but at less computational expense, we focus on the former for experiments on two image datasets, CelebA and CIFAR-10. Our EWC-GAN implementation is straightforward to add to any GAN model, so we augment various popular implementations. Comparisons are made with the TTUR BID9 variants² of DCGAN and WGAN-GP BID7, as well as an implementation³ of a spectral normalized BID24 DCGAN (SN-DCGAN). Without modifying the learning rate or model architecture, we show results with and without the EWC loss term added to the discriminator for each. Performance is quantified with the Fréchet Inception Distance (FID) BID9 for both datasets. Since labels are available for CIFAR-10, we also report ICP for that dataset. Best values are reported in TAB2, with samples in Appendix C. In each model, we see improvement in both FID and ICP from the addition of EWC to the discriminator. We also consider text generation on the MS COCO Captions dataset BID1, with the pre-processing in BID8. Quality of generated sentences is evaluated by BLEU score BID28. Since BLEU-b measures the overlap of b consecutive words between the generated sentences and ground-truth references, higher BLEU scores indicate better fluency. Self-BLEU uses the generated sentences themselves as references; lower values indicate higher diversity (a sketch of both computations is given below). We apply EWC and IS to textGAN, a recently proposed model for text generation in which the discriminator uses feature matching to stabilize training. This model's results (labeled "EWC" and "IS") are compared to a Maximum Likelihood Estimation (MLE) baseline, as well as several state-of-the-art methods: SeqGAN BID38, RankGAN BID19, GSGAN BID13 and LeakGAN. Our variants of textGAN outperform the vanilla textGAN for all BLEU scores (see TAB3), indicating the effectiveness of addressing the forgetting issue for GAN training in text generation. EWC/IS + textGAN also demonstrate a significant improvement compared with other methods, especially on BLEU-2 and 3. Though our variants lag slightly behind LeakGAN on BLEU-4 and 5, their self-BLEU scores (TAB4) indicate they generate more diverse sentences. Sample sentence generations can be found in Appendix C. We observe that the alternating training procedure of GAN models results in a continual learning problem for the discriminator, and training on only the most recent generations leads to consequences unaccounted for by most models. As such, we propose augmenting the GAN training objective with a continual learning regularization term for the discriminator to prevent its parameters from moving too far away from values that were important for recognizing synthesized samples from previous training iterations. Since the original EWC and IS losses were proposed for discrete tasks, we adapt them to the GAN setting. Our implementation is simple to add to almost any variation of GAN learning, and we do so for a number of popular models, showing a gain in ICP and FID for CelebA and CIFAR-10, as well as BLEU scores for COCO Captions. More importantly, we demonstrate that GAN and continual learning, two popular fields studied independently of each other, have the potential to benefit each other, as new continual learning methods stand to benefit GAN training, and GAN-generated datasets provide new testing grounds for continual learning.
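As a side note on evaluation, BLEU-b and self-BLEU can be computed with standard tooling. The sketch below uses NLTK; this is an assumption for illustration, as the paper does not state which implementation was used.

```python
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

def bleu_b(references, hypotheses, b):
    """BLEU-b: up-to-b-gram overlap of hypotheses against a shared set of reference sentences."""
    weights = tuple(1.0 / b for _ in range(b))
    refs = [[r.split() for r in references]] * len(hypotheses)  # same references for every hypothesis
    hyps = [h.split() for h in hypotheses]
    return corpus_bleu(refs, hyps, weights=weights,
                       smoothing_function=SmoothingFunction().method1)

def self_bleu(hypotheses, b):
    """Self-BLEU: each generated sentence scored against all the others; lower means more diverse."""
    scores = [bleu_b(hypotheses[:i] + hypotheses[i + 1:], [h], b)
              for i, h in enumerate(hypotheses)]
    return sum(scores) / len(scores)
```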
To produce a smoothly evolving series of datasets for continual learning, we train a DCGAN on MNIST and generate an entire "fake" dataset of 70K samples every 50 training iterations of the DCGAN generator. We propose learning each of these generated datasets as individual tasks for continual learning. Selected samples are shown in Figure 3 from the datasets D_gen_t for t ∈ {5, 10, 15, 20}, each generated from the same 100 samples of z for all t. Note that we actually trained a conditional DCGAN, meaning we also have the labels for each generated image. For the experiments in FIG3, we focused on the real versus fake task to demonstrate catastrophic forgetting in a GAN discriminator and thus ignored the labels, but future experiments can incorporate such information. Figure 3: Image samples from generated "fake MNIST" datasets. C EXAMPLES OF GENERATED SAMPLES Sample generations are plotted during training at 5000 step intervals in Figure 4. While vanilla GAN occasionally recovers the true distribution, more often than not, the generator collapses and then bounces around. Spectral Normalized GAN converges to the true distribution quickly in most runs, but it mode collapses and exhibits the same behavior as GAN in others. EWC-GAN consistently diffuses to all modes, tending to find the true distribution sooner with lower α. We omit IS-GAN, as it performs similarly to EWC-GAN. Figure 4: Each row shows the evolution of generator samples at 5000 training step intervals for GAN, SN-GAN, and EWC-GAN for two α values. The proposed EWC-GAN models have hyperparameters matching the corresponding α in TAB1. Each frame shows 10000 samples drawn from the true eight-Gaussians mixture (red) and 10000 generator samples (blue). We also show the generated image samples for CIFAR-10 and CelebA in Figure 5, and generated text samples for MS COCO Captions in Table 5. Figure 5: Generated image samples from random draws of EWC+GANs: (a) CIFAR-10, (b) CelebA. Table 5: Sample sentence generations from EWC + textGAN: a couple of people are standing by some zebras in the the view of some benches near a gas station a brown motorcycle standing next to a red fence a bath room with a broken tank on the floor red passenger train parked under a bridge near a river some snow on the beach that is surrounded by a truck a cake that has been perform in the for takeoff a view of a city street surrounded by trees two giraffes walking around a field during the day crowd of people lined up on motorcycles two yellow sheep with a baby dog in front of other sheep an intersection sits in front of a crowd of people a red double decker bus driving down the street corner an automobile driver stands in the middle of a snowy park five people at a kitchen setting with a woman there are some planes at the takeoff station a passenger airplane flying in the sky over a cloudy sky three aircraft loaded into an airport with a stop light there is an animal walking in the water an older boy with wine glasses in an office two old jets are in the middle of london three motorcycles parked in the shade of a crowd group of yellow school buses parked on an intersection a person laying on a sidewalk next to a sidewalk talking on a cell phone a chef is preparing food with a sink and stainless steel appliances
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SJzuHiA9tQ
Generative Adversarial Network Training is a Continual Learning Problem.
Many problems for which large-scale labeled training data are available have been impressively solved by deep learning. However, Unseen Class Categorization (UCC), in which only minimal information is provided about the target classes, is the setting most commonly encountered in industry, and it remains a challenging research problem in machine learning. Previous approaches to UCC either fail to generate a powerful discriminative feature extractor or fail to learn a flexible classifier that can be easily adapted to unseen classes. In this paper, we propose to address these issues through network reparameterization, i.e., reparametrizing the learnable weights of a network as a function of other variables, by which we decouple the feature extraction part and the classification part of a deep classification model to suit the special setting of UCC, securing both strong discriminability and excellent adaptability. Extensive experiments for UCC on several widely-used benchmark datasets, in the settings of zero-shot and few-shot learning, demonstrate that our method with network reparameterization achieves state-of-the-art performance. Rich and accessible labeled data have fueled the revolutionary successes of deep learning in various tasks, e.g., visual recognition BID7 ), object detection BID20 ), machine translation BID1 ), etc. However, the requirement of numerous annotated data severely limits the applicability of deep learning algorithms to Unseen Class Categorization (UCC), for which we only have access to a limited amount of information and which is frequently encountered in industrial applications. Recently, an increasing number of approaches have been proposed to solve UCC with the help of either attribute descriptions (zero-shot learning (ZSL)) BID9; BID30 ) or one/a few labeled samples for each class (few-shot learning (FSL)) BID22; BID29 ). Previous approaches to UCC mainly have the following characteristics and limitations: (i) To obtain a powerful discriminative feature representation, they often train a deep classification model employing state-of-the-art multi-class classification techniques. However, such models are hard to adapt to new classes with limited supervision information, due to the high volume of model parameters and the gradual updating scheme. (ii) To ensure the consistency of training and test settings and adaptability to new classes, previous methods often train a deep model in an episode fashion BID26 ), sometimes along with some specially designed meta-learning updating rules BID4 ). With episode-based training, the model acquires adaptability to new tasks after many training episodes, using the knowledge it grasps during training. However, the episode-based training strategy severely limits the model's capability of extracting discriminative features, because it does not fully exploit the diversity and variance of all classes within the training dataset. The trained model treats the classes in each episode as new classes and attempts to separate them. Therefore, it retains no memory of how these classes compete against all the other classes in the whole dataset beyond the current episode. Due to the neglect of this global (dataset-wise rather than episode-wise) discriminative information, the feature extraction capability of the model is suppressed, thus limiting the UCC performance.
To address these issues, we propose to secure both powerful discriminability of feature extraction and strong adaptability of model classification through network reparameterization, i.e., reparametrizing the learnable weights of a network as a function of other variables. We decouple the feature extraction module and the classification module of a deep classification model, learn the former as a standard multi-class classification task to obtain a discriminative feature extractor, and learn the latter employing a light deep neural network that generates generic classification weights for unseen classes given limited exemplar information. We train the classification weight generator by following the episode-based training scheme to secure adaptability. Our method can be flexibly applied to both ZSL and FSL, where the exemplar information about unseen classes is provided in the form of either semantic attributes or one/a few labeled samples. Extensive experiments show that our proposed method achieves state-of-the-art performance on widely-used benchmark datasets for both tasks. With regard to the form of the exemplar information provided about unseen classes, UCC can be classified into zero-shot learning and few-shot learning. ZSL requires recognizing unseen classes based on their semantic descriptions. It is approached by finding an embedding space in which visual samples and semantic descriptions of a class are associated, so that the semantic description of an unseen class can be queried by its visual samples. Since the embedding space is often of high dimension, finding the best match of a given vector among many candidates shall inevitably encounter the hubness problem BID17 ), i.e., some candidates will be biased to be the best matches for many of the queries. Depending on the chosen embedding space, the severity of this problem varies. Some approaches select the semantic space as the embedding space and project visual features to the semantic space BID10; BID5. Projecting the visual features into an often much lower-dimensional semantic space shrinks the variance of the projected data points and thus aggravates the hubness problem. Alternatively, some methods project both visual and semantic features into a common intermediate space BID0; BID23; BID31 ). However, due to lacking training samples from unseen classes, these methods are prone to classify test samples into seen classes BID21 ) (for the generalized ZSL setting, seen classes are included when testing). Recently, BID30 proposed to choose the visual space as the embedding space and learned a mapping from the semantic space to the visual space. Benefiting from the abundant data diversity in the visual space, this method can mitigate the hubness problem to some extent. However, the limitation of this method is that it strives only to learn a mapping from the semantic space to the visual space such that the visual samples of a class coincide with the associated semantic description; it neglects the separation information among visual features of different classes. Our method avoids this problem. We formulate bridging the semantic space and the visual space as a visual feature classification problem conditioned on the semantic features. We learn a deep neural network that generates classification weights for the visual features when fed with the corresponding semantic features.
By the nature of a classification problem, both intra-class compactness (visual features of the same class are assigned the same label) and inter-class separability (visual features of different classes are assigned different labels) are exploited, hence resulting in a better mapping. FSL aims to recognize unseen classes when provided with one/a few labeled samples of these classes. A number of methods address it from the perspective of deep metric learning by learning deep embedding models that output discriminative features for any given image BID19; BID26 BID22; BID24; BID23 ). The difference lies in the loss functions used. More common approaches are based on meta-learning, also called learning to learn, which is to learn an algorithm (meta-learner) that outputs a model (the learner) that can be applied to a new task when given some information (meta-data) about the new task. Following this line, approaches such as META-LSTM BID18 ), MAML BID4 ), Meta-SGD ), DEML+Meta-SGD ), Meta-Learn LSTM BID18 ), Meta-Networks BID13 ), and REPTILE BID14 ) aim to optimize the meta-learned classifiers to be easily fine-tuned on new few-shot tasks using the small-scale support set provided. The common limitation of the above methods is that they adopt the episode-based training scheme to secure adaptability to new classes, which however compromises the capability of discriminative feature extraction due to the forgetting of global (dataset-wise) competing information among classes beyond individual episodes. Perhaps closest to our approach, BID6 proposed the DFSVL algorithm, which approaches FSL also by virtue of classification weight generation. The major limitation of DFSVL is that it obtains classification weights for unseen classes simply as a mixture of feature embeddings of support images of novel classes and attended pretrained weights of base (seen) classes, which is too weak to bridge feature embeddings and classification weights. Besides, it cannot bridge information across different domains (due to dimension inconsistency), so it is not applicable to ZSL. We instead learn a network to generate classification weights directly from feature embeddings of support images; it is more powerful and flexible, solving both ZSL and FSL within the same framework. We focus on Unseen Class Categorization (UCC), which is to recognize objects of unseen classes given only minimal information (a few labeled samples or attributes) about the classes. Formally, suppose we have three sets of data: a training set D_t = {X_t, Y_t} of seen classes, exemplar information about the unseen classes (semantic attributes for ZSL or a small support set D_s = {X_s, Y_s} for FSL), and a test set of images from the unseen classes. Our main contribution in this paper is the proposed framework that can address both ZSL and FSL with minimal changes. FIG0 diagrams our framework. Instead of jointly learning the feature extraction network weights and classification weights, which results in a heavy model that is hard to adjust for novel classes with limited supervision information, we reparametrize the learnable weights of a classification model as the combination of learnable parameters of a feature extraction model and a weight generation model. In other words, we decouple the feature extraction network f_θ and the classification weight W of a standard classification network. We train f_θ as a standard multi-class classification task and learn another network g_φ to generate the classification weight W.
Since f_θ is trained as a standard multi-class classification task to distinguish all classes within the training set, it is supposed to generate more discriminative feature representations for images of unseen classes than those generated by a model trained in an episode-based fashion, where the model is trained to distinguish only the several classes within each mini-batch. Meanwhile, we train g_φ in an episode-based fashion by constantly sampling new classes and minimizing the classification loss (cross-entropy loss on top of Softmax outputs) using the weights generated by g_φ. After training, whenever some new classes come, along with supporting information in the form of either attribute vectors (ZSL) or a few labeled samples (FSL), g_φ is supposed to be able to generate generic classification weights that can effectively classify query images that belong to these new classes. Thanks to this network reparameterization strategy, we are able to get a powerful and flexible UCC model. We adopt the cosine-similarity-based cross-entropy loss to train the weight generator g_φ. Traditional multi-layer neural networks use the dot product between the output vector of the previous layer and the incoming weight vector as the input to the activation function. BID12 recently showed that replacing the dot product with cosine similarity can bound and reduce the variance of the neurons and thus results in models of better generalization. BID6 further showed that using the cosine similarity instead of the dot product for calculating the classification score in the last fully-connected layer of a deep neural network brings benefits for classification, with some minor revisions. We adopt this technique to train our weight generator g_φ. The classification score of a sample (x, y) for class j is computed as the scaled cosine similarity s · cos(f_θ(x), w_j), converted to a probability by the softmax operator, where s is a learnable scalar controlling the peakiness of the probability distribution generated by the softmax operator BID6 ), and w_j is the classification weight for class j generated by the neural network g_φ taking supporting information of the class as input: for ZSL, w_j = g_φ(a_j), where a_j is the attribute vector of class j; for FSL, w_j is generated from the support images x_{i,j}, i = 1, ..., N_f, of class j, where N_f is the number of shots. In a typical UCC task T, the loss function is the cross-entropy over these scores plus an ℓ2-norm regularization of the learnable parameters of the neural network g_φ weighted by a hyper-parameter λ (Eq. 5). For ZSL, we are provided with semantic class attributes S = A_t ∪ A_u as the assistance for UCC. The basic assumption of existing ZSL algorithms is that the visual-attribute relationship learned from seen classes in a certain embedding space is class-invariant and can be applied to unseen classes. With this assumption, existing methods either project visual features to the semantic space, or reversely project semantic features to the visual space, or alternatively project both visual and semantic features to an intermediate space. In any case, the coincidence of visual and semantic features of a class is utilized to learn the visual-attribute relationship. BID30 recently showed that it is advantageous to select the visual space as the embedding space, because the abundance of data diversity in the visual space can significantly mitigate the so-called "hubness" problem. Their objective function is the least square embedding loss min Σ_i ||f_θ(x_i) − h_ψ(a_{y_i})||², where f_θ is a feature extraction model which outputs a representation vector f_θ(x_i) using image x_i as input.
h_ψ is a mapping function which projects the attribute vector a_{y_i} of class y_i to the embedding space where f_θ(x_i) lies. Through minimizing the least square embedding loss, the visual-attribute relationship can be established. With this relationship, in the testing stage, the attributes A_u of unseen classes are mapped to the visual feature embedding space, in which the visual feature of an image of any unseen class can find the best class attribute through nearest neighbor searching. One can observe that this method learns the visual-attribute relationship by only utilizing the coincidence of the visual samples of a class with the associated semantic description. It however neglects to explore the inter-class separation of different classes, which shall be crucial to further avoid the hubness problem. To remedy this, we reformulate the learning of the visual-attribute relationship from a regression problem to a visual feature classification problem. We directly learn a network g_φ that outputs the classification weights for classifying visual features and use the cross-entropy loss on top of Softmax outputs to guide learning g_φ. Through this reformulation, both intra-class compactness and inter-class separability are elegantly exploited for learning the visual-attribute relationship: visual features of the same class should be assigned the same label (compactness), while visual features of different classes are assigned different labels (separability). We follow the network reparameterization scheme by decoupling the feature extraction module f_θ and the classification weight module, which is generated by g_φ. The feature extraction module f_θ is trained as a standard multi-class classification task, enabling us to obtain a discriminative feature representation for any given image. To learn g_φ, we adopt the episode-based training scheme by continuously exposing g_φ to new (randomly sampled) ZSL tasks so as to secure good performance when new real tasks arrive in the testing stage. More specifically, we keep randomly sampling ZSL tasks from D_t = {X_t, Y_t} and A_t and feeding them to the network. Each task consists of M_z classes and the associated M_z attribute vectors. For each class, we randomly sample N_z images. With a batch of M_z N_z images B_v and M_z attribute vectors B_a, we train g_φ by minimizing the loss function defined in Eq. 5. In the testing stage, given attributes of unseen classes A_u, or S = A_t ∪ A_u for all (seen and unseen) classes as in the generalized ZSL setting, we generate the corresponding classification weights using g_φ. The generated classification weights, integrated with the feature extraction network f_θ, serve to classify images of unseen classes. Algorithm 1 outlines the main steps of our method for ZSL. For FSL, one/a few labeled samples D_s = {X_s, Y_s} for each unseen class are provided to help recognize objects of these classes. Our novel categorization framework can be easily extended from ZSL to FSL, simply by replacing the semantic attribute vectors with feature embedding vectors as the input to the classification weight generation network g_φ. To train g_φ, we keep randomly sampling FSL tasks from D_t = {X_t, Y_t}, each of which consists of a support set and a query set. Images in both sets are from the same classes. The support set consists of M_f classes and N_f images for each class.
With the feature embeddings B_e of the M_f N_f images as input, g_φ generates the classification weights for the M_f classes, which are then used to classify the feature embeddings of images from the query set. Note that if N_f > 1, i.e., each class has multiple support samples, we average the embeddings of all images belonging to the same class and feed the averaged embedding to g_φ. Similar to ZSL, we learn the resulting model by optimizing the loss function defined in Eq. 5. Algorithm 2 outlines the main steps for FSL. One of the most distinct aspects of our method from the existing ones is that we decouple the feature extraction module and the classifier module of the deep classification model, and train each module on the most beneficial tasks. We train the feature extraction module as a standard multi-class classification task. This is motivated by the observation that a simple classifier (e.g., nearest neighbor), when taking as input features obtained by a powerful extractor, can outperform some sophisticated FSL models that use weaker feature extraction models. For example, as shown in Figure 2, using nearest neighbor (NN) as the classifier, we can achieve better one-shot classification accuracy than a recent FSL algorithm, PROTO NET BID22 ), when using features extracted by ResNet18 BID7 ). The reason for this surprising result is that the episode-based training scheme of existing FSL methods inherently suppresses obtaining a powerful feature extractor: in each episode, the model is fed with a new FSL task that is assumed to have no relationship with the previous ones. The model is trained to separate well the several classes within the task. However, since all training tasks are sampled from the training dataset, one class shall appear in many tasks. The inter-class separation across the whole dataset is neglected by existing FSL methods. Therefore, there is a dilemma for existing FSL algorithms: they need to be trained in an episode-based fashion to ensure flexibility, which in return compromises feature discriminability. To avoid this awkward situation, our proposed method, which decouples the network and trains different components in different ways, ensures powerful discriminability and strong adaptability. We evaluate our framework for both zero-shot learning and few-shot learning tasks. Datasets and evaluation settings. We employ the most widely-used zero-shot classification datasets for performance evaluation, namely, AwA1 (Lampert et al.), AwA2 BID29 ), CUB BID27 ), SUN BID16 ) and aPY BID3 ). The statistics of the datasets are shown in Table 1. We follow the GBU setting proposed in BID29 ) and evaluate both the conventional ZSL setting and the generalized ZSL (GZSL) setting. In the conventional ZSL, test samples are restricted to the unseen classes, while in the GZSL, they may come from either seen classes or unseen classes. Implementation details. Following BID29 ), we adopt ResNet101 as our feature extraction model f_θ, which results in a 2048-dimension vector for each input image. For the weight generation model g_φ, we utilize two FC+ReLU layers to map semantic vectors to visual classification weights. The dimension of the intermediate hidden layer is 1600 for all five datasets. We train g_φ with the Adam optimizer and a learning rate of 10^−5 for all datasets, over 1,000,000 randomly sampled ZSL tasks. Each task consists of 32 randomly sampled classes and 4 samples for each class, i.e., M_z = 32 and N_z = 4.
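A rough PyTorch sketch of the pieces just described: the two-layer FC+ReLU weight generator g_φ, the scaled cosine-similarity scores, and a single ZSL training episode over M_z sampled classes. All names are illustrative and this is a sketch under the stated description rather than the authors' code; for FSL, the attribute batch is simply replaced by per-class averaged support embeddings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightGenerator(nn.Module):
    """g_phi: two FC+ReLU layers mapping class attributes (or averaged embeddings) to classification weights."""
    def __init__(self, in_dim, hidden_dim, feat_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU(),
                                 nn.Linear(hidden_dim, feat_dim), nn.ReLU())

    def forward(self, support):                  # (num_classes, in_dim)
        return self.net(support)                 # (num_classes, feat_dim)

def cosine_scores(features, weights, s):
    """Scaled cosine similarity between image features and generated class weights."""
    return s * F.normalize(features, dim=-1) @ F.normalize(weights, dim=-1).t()

def zsl_episode(f_theta, g_phi, s, optimizer, images, labels, attributes, lam):
    """One sampled ZSL task: labels are indices into the M_z sampled classes / attribute rows."""
    with torch.no_grad():
        feats = f_theta(images)                  # f_theta is pretrained and kept fixed
    scores = cosine_scores(feats, g_phi(attributes), s)
    reg = sum((p ** 2).sum() for p in g_phi.parameters())
    loss = F.cross_entropy(scores, labels) + lam * reg
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```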
The hyper-parameter λ is chosen as 10^−4, 10^−3, 10^−3, 10^−5 and 10^−4 for AwA1, AwA2, CUB, SUN and aPY, respectively. Our model is implemented with PyTorch. Experimental results. TAB3 shows the experimental results. For the conventional ZSL setting, our method reaches the best results for three out of the five datasets, while being very close to the best for one of the remaining two. Remarkably, our method consistently outperforms DEM BID30 ) for all five datasets, which substantiates the benefit of our method of taking into consideration inter-class separability when learning the mapping from semantic space to visual space. For the GZSL setting, where seen classes are also included as candidates, our method significantly outperforms all competing methods, reaching performance gains over the second best of up to about 30% on the AwA1 dataset. We analyze that the reason for our dramatic advantage is that our method considers inter-class separation during the training stage, so that the resultant classification weights for the seen classes possess good separation properties after training. When they are concatenated with the classification weights generated from semantic descriptions of unseen classes in the testing stage, they shall be quite discriminative in discerning that incoming images do not belong to their classes. From the perspective of the hubness problem, since the classification weights for seen classes have good separation properties, the weight vectors are less likely to be clustered in the embedding space, so that the risk is reduced that some candidates are selected as the nearest neighbors for many query images.

TAB3 (recoverable portion; per dataset, columns are ZSL / GZSL accuracy):
Method | AwA1 | AwA2 | CUB | aPY | SUN
DAP (Lampert et al.) | 44.1 / 0.0 | 46.1 / 0.0 | 40.0 / 1.7 | 33.8 / 4.8 | 39.9 / 4.2
CONSE BID15 | 45.6 / 0.4 | 44.5 / 0.5 | 34.3 / 1.6 | 26.9 / 0.0 | 38.8 / 6.8
SSE BID31 | 60.1 / 7.0 | 61.0 / 8.1 | 43.9 / 8.5 | 34.0 / 0.2 | 51.5 / 2.1
DEVISE BID5 | 54.2 / 13.4 | 59.7 / 17.1 | 52.0 / 23.8 | 39.8 / 4.9 | 56.5 / 16.9
SJE BID0 | 65.6 / 11.3 | 61.9 / 8.0 | 53.9 / 23.5 | 32.9 / 3.7 | 53.7 / 14.7
LATEM BID28 | 55.1 / 7.3 | 55.8 / 11.5 | 49.3 / 15.2 | 35.2 / 0.1 | 55.3 / 14.7
ESZSL BID21 | 58.2 / 6.6 | 58.6 / 5.9 | 53.9 / 12.6 | 38.3 / 2.4 | 54.5 / 11.0
ALE BID0 | 59.9 / 16.8 | 62.5 / 14.0 | 54.9 / 23.7 | 39.7 / 4.6 | 58.1 / 21.8
SYNC BID2 | 54.0 / 8.9 | 46.6 / 10.0 | 55.6 / 11.5 | 23.9 / 7.4 | 56.3 / 7.9
SAE BID9 | 53.0 / 1.8 | 54.1 / 1.1 | 33.3 / 7.8 | 8.3 / 0.4 | 40.3 / 8.8
DEM BID30 | 68.4 / 32.8 | 67.1 / 30.5 | 51.7 / 19.6 | 35.0 / 11.1 | 61.9 / 20.5
RELATION NET BID23 | — | — | — | — | —

Datasets and evaluation settings. We evaluate few-shot classification on two widely-used datasets, Mini-ImageNet BID26 ) and CUB BID27 ). The Mini-ImageNet dataset has 60,000 images from 100 classes, 600 images for each class. We follow previous methods and use the splits in BID18 for evaluation, i.e., 64, 16, and 20 classes as training, validation, and testing sets, respectively. The CUB dataset is a fine-grained dataset of 11,788 images from 200 categories of birds. As in the split of BID18, we use 100, 50, and 50 classes for training, validation, and testing, respectively. For both datasets, we resize images to 224×224 to meet the requirement of our adopted feature extraction network. Following previous methods, we evaluate both 5-way 1-shot and 5-way 5-shot classification tasks, where each task instance involves classifying test images from 5 sampled classes with 1 (1-shot) or 5 (5-shot) randomly sampled images for each class as the support set. In order to reduce variance we repeat the evaluation task 600 times and report the mean of the accuracy with a 95% confidence interval. Implementation details.
We use ResNet18 as our feature extraction model f_θ, which results in a 512-dimension vector for each input image after average pooling. We train f_θ on the two experimental datasets by following the standard classification learning pipeline: we use the Adam optimizer with an initial learning rate of 10^−3, which decays to half every 10 epochs. The model is trained for 100 epochs. As for g_φ, we use two FC+ReLU layers, the same as in ZSL. The dimension of the intermediate hidden layer is 512 for both datasets. We train g_φ using the Adam optimizer with a learning rate of 10^−5 and set the hyper-parameter λ = 10^−5 for both datasets. The model is trained with 60000 randomly sampled FSL tasks, each of which consists of 5 classes, with 1 or 5 samples as the support samples and another 15 as the query samples. Experimental results. TAB4 shows the results of the proposed method and the most recent ones. From the table, we can get some interesting observations. First, the baseline method "ResNet18 + NN" beats most competing FSL algorithms in which various sophisticated strategies are used. Meanwhile, the accuracy of feeding the classifier of PROTO NET with features obtained by ResNet18 ("ResNet18 feat. + PROTO NET classifier") is much higher than that obtained by training PROTO NET end to end with ResNet18 as the base model ("ResNet18 + PROTO NET"). These results support our analysis that the episode-based training scheme adopted by existing FSL approaches suppresses the discriminability of the feature extraction model. Second, compared with the baseline methods "ResNet18 feat. + NN" and "ResNet18 feat. + PROTO NET classifier", which use the same feature representations as our method, we get obvious improvements. This substantiates the benefit of the proposed weight generation strategy for FSL. Third, compared with the existing methods, our method reaches the best results on both datasets for both 1-shot and 5-shot evaluation settings, often by large margins. This shows the great advantage of our method for handling the FSL problem. As we can see above, our method dramatically outperforms existing methods for the GZSL setting. The advantage is much more significant than that for the ZSL setting. We have analyzed that the reason is that the classification weights generated from the attributes of seen classes show good separation properties, so that the hubness problem is not as severe as it is for other methods. The hubness problem refers to the phenomenon that, in ZSL, some candidate points are prone to be the nearest neighbors of many query points when the dimension is high. So, the more evenly the candidate points are distributed in the space, the less severe the hubness problem should be. To validate this, we use t-SNE BID25 ) to visualize the classification weight vectors generated from all 200 class semantic vectors in the CUB dataset. As a comparison, we do the same for DEM BID30 ), which also learns a mapping from semantic space to visual space. The result is shown in FIG4. We can observe that the points are more evenly distributed for our method than for DEM. This further validates the benefit of our method in avoiding the hubness problem. In this paper, we propose a flexible framework for unseen class categorization with limited information provided about these classes. We secure two key factors, a powerful feature extractor and a flexible classifier, through network reparameterization. We decouple the feature extraction module and the classification module of a deep model for UCC.
The feature extraction module is learned in a standard multi-class classification framework, and the classification weight vectors are generated by a network from exemplar information of the unseen classes. We train the classification weight generator in an episode-by-episode fashion to give it flexibility for new tasks. Applying our framework to zero-shot learning (ZSL), we achieve much better results than the state-of-the-art, especially for the generalized ZSL setting, owing to our incorporation of inter-class separation information when learning the mapping from semantic space to visual space. For few-shot learning (FSL), we also achieve remarkable performance gains relative to existing methods, due to the flexible scheme that makes it possible to combine a powerful feature extraction model with a flexible weight generation model.
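As an illustration of the decoupled design summarized in this conclusion, the following is a minimal PyTorch-style sketch of a classification-weight generator with two FC+ReLU layers; the module names, layer sizes and the cosine-similarity scoring are our own assumptions for illustration, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightGenerator(nn.Module):
    """Maps per-class exemplar information (an attribute vector for ZSL,
    or averaged support features for FSL) to a classification weight vector."""
    def __init__(self, in_dim, hidden_dim=512, feat_dim=512):
        super().__init__()
        self.net = nn.Sequential(            # two FC+ReLU layers
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, feat_dim), nn.ReLU(),
        )

    def forward(self, class_exemplars):      # (num_classes, in_dim)
        return self.net(class_exemplars)     # (num_classes, feat_dim)

def classify(features, class_weights):
    """Score frozen feature-extractor outputs against generated class weights."""
    # cosine similarity is one reasonable choice; a plain dot product also works
    f = F.normalize(features, dim=-1)
    w = F.normalize(class_weights, dim=-1)
    return f @ w.t()                         # (batch, num_classes) logits

# toy 5-way episode with 312-dimensional class semantic vectors
gen = WeightGenerator(in_dim=312)
weights = gen(torch.randn(5, 312))           # one weight vector per class
logits = classify(torch.randn(32, 512), weights)
loss = F.cross_entropy(logits, torch.randint(0, 5, (32,)))
```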
rJeyV2AcKX
A unified framework for both few-shot learning and zero-shot learning based on network reparameterization
Proteins are ubiquitous molecules whose function in biological processes is determined by their 3D structure. Experimental identification of a protein's structure can be time-consuming, prohibitively expensive, and not always possible. Alternatively, protein folding can be modeled using computational methods, which however are not guaranteed to always produce optimal results. GraphQA is a graph-based method for estimating the quality of protein models that possesses favorable properties such as representation learning, explicit modeling of both sequential and 3D structure, geometric invariance and computational efficiency. In this work, we demonstrate significant improvements over the state-of-the-art for both hand-engineered and representation-learning approaches, and carefully evaluate the individual contributions of GraphQA. Protein molecules are present throughout biological systems, where they are responsible for cellular functions. Therefore, understanding, predicting and modifying proteins in biological processes is essential for medical, pharmaceutical and genetic research. Such studies strongly depend on discovering mechanical and chemical properties of proteins through the determination of their structure. At a high level, a protein molecule is a chain of hundreds of smaller molecules called amino acids. Identifying a protein's amino-acid sequence is nowadays straightforward. However, the function of a protein is primarily determined by its 3D structure. Spatial folding can be determined experimentally, but the existing procedures are time-consuming, prohibitively expensive and not always possible. Thus, several computational techniques have been developed for protein structure prediction (; ;). So far, no single method is always best: some protein families are best modeled by certain methods, and computational methods often produce multiple candidate outputs. Therefore, candidate generation is generally followed by an evaluation step. This work focuses on Quality Assessment (QA) of computationally-derived models of a protein. QA, also referred to as model accuracy estimation (MAE), estimates the quality of computational protein models in terms of divergence from their native structure. The downstream goal of QA is two-fold: to find the best model in a pool of models and to refine a model based on its local quality. Computational protein folding and design have recently received attention from the machine learning community (; ; ; b; ; ;), while QA has yet to follow. This is despite the importance of QA for structural biology and the availability of standard datasets to benchmark machine learning techniques, such as the biannual CASP event. The field of bioinformatics, on the other hand, has witnessed noticeable progress in QA for more than a decade: from earlier works using artificial neural networks or support vector machines to more recent deep learning methods based on 1D-CNNs, 3D-CNNs and LSTMs (; ; Pagès et al., 2018;). In this work, we tackle Quality Assessment with Graph Convolutional Networks, which offer several desirable properties over previous methods. Through extensive experiments, we show significant improvements over the state-of-the-art, and offer informative qualitative and quantitative analyses. GRAPHQA predicts local and global scores from a protein's graph using message passing among residues connected by chemical bonds or spatial proximity. CASP QA algorithms score protein models by comparison with experimentally-determined conformations.
Protein Quality Assessment (QA) methods have been evaluated in CASP since CASP7. Current techniques can be divided into two categories: single-model methods, which operate on a single protein model to estimate its quality, and consensus methods, which use consistency between several candidates to estimate their quality. Single-model methods are applicable to a single protein in isolation, and in the recent CASP13 they performed comparably to or better than consensus methods for the first time. Recent single-model QA works are based on deep learning, except VoroMQA, which takes a statistical approach based on atom-level contact areas. 3DCNN adopts a volumetric representation of proteins. Ornate improves on 3DCNN by defining a canonical orientation (Pagès et al., 2018). ProQ3D uses a multi-layer perceptron on fixed-length protein descriptors. ProQ4 adopts a pre-trained 1D-CNN that is fine-tuned in a siamese configuration with a rank loss. VoroMQA and ProQ3D are among the top performers of CASP13. Graph Convolutional Networks (GCNs) bring the representation learning power of CNNs to graph data, and have recently been applied with success to multiple domains, e.g. physics, visual scene understanding and natural language understanding. Molecules can be naturally represented as graphs, and GCNs have proven effective in several related tasks, including molecular representation learning, protein interface prediction, chemical property prediction (; ; a), drug-drug interaction, drug-target interaction, molecular optimization, and generation of proteins, molecules and drugs (a; ; ; b;). However, to the best of our knowledge, GCNs have never been applied to the problem of protein quality assessment.
• This work is the first to tackle QA with GCNs, which bring several desirable properties over previous methods, including representation learning (3DCNN, Ornate), geometric invariance (VoroMQA, Ornate), sequence learning (ProQ4, AngularQA), explicit modeling of 3D structure (3DCNN, Ornate, VoroMQA) and computational efficiency.
• Thanks to these desirable properties, a simple GCN setup achieves improved results compared to more sophisticated state-of-the-art methods such as ProQ4. This is demonstrated through extensive experiments on multiple datasets and scoring regimes.
• Novel representation techniques are employed to explicitly reflect the sequential (residue separation) and 3D structure (angles, spatial distance and secondary structure) of proteins.
• Enabled by the use of GCNs, we combine the optimization of local and global scores for QA, improving over the performance of a global-only scoring method.
• Through an extensive set of ablation studies, the significance of the different components of the method, including architecture, loss, and features, is carefully analyzed.
We start describing our method by arguing for the representation of protein molecules as graphs in learning tasks, then we define the problem of protein quality assessment (QA), and finally we present the proposed GRAPHQA architecture. Proteins are large molecular structures that perform vital functions in all living organisms. At the chemical level, a protein consists of one or more chains of smaller molecules, which we interchangeably refer to as residues for their role in the chain, or as amino acids for their chemical composition. The sequence of residues S = {a_i} that composes a protein represents its primary structure, where a_i is one of the 22 amino acid types.
The interactions between neighboring residues and with the environment dictate how the chain will fold into complex spatial structures that represent the protein's secondary structure and tertiary structure. Therefore, for learning tasks involving proteins, a suitable representation should reflect both the identity and sequence of the residues, i.e. the primary structure, and geometric information about the protein's arrangement in space, i.e. its tertiary structure. Some works use RNNs or 1D-CNNs to model proteins as sequences, with the spatial structure potentially embedded in handcrafted residue features. Other recent works explicitly model the spatial structure of proteins using 3D-CNNs but ignore their sequential nature (; Pagès et al., 2018). We argue that a graph-based representation can explicitly model both the sequential and geometric structures of proteins. Moreover, it accommodates proteins of different lengths and spatial extent, and is invariant to rotations and translations. In the simplest form, a protein can be represented as a linear graph, where nodes represent amino acids and edges connect consecutive residues according to the primary structure. This set of edges, which represents the covalent bonds that form the protein backbone, can be extended to include the interactions between non-consecutive residues, e.g. through Van der Waals forces or hydrogen bonds, commonly denoted as contacts. By forming an edge between all pairs of residues that are within a chemically reasonable distance of each other, the graph becomes a rich representation of both the sequential and geometric structure of the protein (figure 2). We refer to this representation, composed of residues, bonds and contacts, as the protein graph:
P = ( {a_i}, { (i, j) : |i - j| = 1 or ||C_i - C_j|| < d_max } ),
where i, j = 1, ..., |S| are residue indices, C = {(x, y, z)_i} represents the spatial arrangement of the residues, i.e. the protein's conformation, and d_max is a cutoff distance for contacts. With the protein's structure encoded in the graph, additional residue and relationship features can be encoded as node and edge attributes, v_i and e_{i,j} respectively. Section 3.2 describes, in detail, a feature attribution that preserves the sequence information and 3D geometry while remaining invariant to rotation. Figure 2: Protein representations for learning. Sequential representations for LSTMs or 1D-CNNs fail to represent the spatial proximity of non-consecutive residues. Volumetric representations for 3D-CNNs fail instead to capture sequence information and are not rotation invariant. Protein graphs explicitly represent both sequential and spatial structure, and are geometrically invariant by design. Experimental identification of a protein's native structure can be time-consuming and prohibitively expensive. Alternatively, computational folding methods are used to generate decoy conformations for a specific target protein. Since no single method is consistently best, a Quality Assessment (QA) step is used to identify the decoys that most correctly represent the native structure. If the native structure C_native is experimentally determined, the quality of a decoy can be measured by comparison, e.g., in the CASP challenge, decoys submitted for a target are scored against the unreleased native structure. Some comparative algorithms compute global (per-decoy) scores, which can be used for ranking and represent the principal factor for CASP, while others produce local (per-residue) scores which help identify incorrect parts of a decoy.
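To make the protein-graph definition above concrete, the following is a small illustrative sketch of the edge construction (backbone bonds plus distance-based contacts). The use of NumPy, per-residue coordinates such as C-beta atoms, and the 8Å default cutoff are our own assumptions for illustration, not the paper's exact settings.

```python
import numpy as np

def protein_graph_edges(coords, d_max=8.0):
    """Build the edge list of a protein graph.

    coords: (N, 3) array of per-residue coordinates.
    Returns (i, j) pairs: backbone bonds (|i - j| == 1) plus contacts
    between residues closer than d_max Angstrom, and the distance matrix.
    """
    n = len(coords)
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if j == i + 1 or dist[i, j] < d_max:
                edges.append((i, j))
    return edges, dist

# toy example: a random 50-residue "conformation"
coords = np.random.randn(50, 3) * 10.0
edges, dist = protein_graph_edges(coords, d_max=8.0)
```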
In most scenarios, however, the native structure is not available and quality must be estimated based on physical and chemical properties of the decoy; e.g. in drug development, it would be impractical to synthesize samples of novel proteins, and researchers rely on computational folding and quality assessment instead. Here we introduce GRAPHQA, a graph-based QA neural network that learns to predict local and global scores, with minimal feature and model engineering, using existing datasets of scored proteins. In this paper, we train GRAPHQA on two widely-used scoring algorithms: the Global Distance Test Total Score, which is the official CASP score for decoy-level quality assessment, and the Local Distance Difference Test, a residue-level score. We denote them as q_g (global) and {q_i} (local), respectively. With GRAPHQA_i(P) and GRAPHQA_g(P) denoting the network's local and global predictions for an input P, the learning objective is to minimize the following Mean Squared Error (MSE) losses:
L_l = (1/|S|) Σ_i (GRAPHQA_i(P) - q_i)^2 and L_g = (GRAPHQA_g(P) - q_g)^2.
Note that, for the sole purpose of sorting decoys according to ground-truth quality, training with a ranking loss would be sufficient. Instead, MSE forces the output to match the quality score, which is a harder objective, but results in a network that can be more easily inspected and possibly used to improve existing folding methods in an end-to-end fashion (section 4.2). GRAPHQA is a graph convolutional network that operates on protein graphs using a message-passing algorithm. The building block of GRAPHQA, a graph layer, takes a protein graph as input (with an additional global feature u), and performs the following propagation steps to output a graph with updated node/edge/global features and unchanged structure: (i) update edges: e'_{i,j} = φ^e(e_{i,j}, v_i, v_j, u); (ii) aggregate incident edges per node: ē_i = ρ^{e→v}({e'_{j,i}}); (iii) update nodes: v'_i = φ^v(ē_i, v_i, u); (iv) aggregate all edges: ē = ρ^{e→u}({e'_{i,j}}); (v) aggregate all nodes: v̄ = ρ^{v→u}({v'_i}); (vi) update the global features: u' = φ^u(ē, v̄, u). Here φ^e, φ^v, φ^u are three update functions that transform edge/node/global features (e.g. an MLP), and ρ^{e→v}, ρ^{e→u}, ρ^{v→u} are three pooling functions that aggregate features at various levels (e.g. sum or mean). Similarly to CNNs, multiple graph layers are stacked together to allow local information to propagate to increasingly larger neighborhoods (i.e. the receptive field). This enables the network to learn quality-related features at multiple scales: secondary structures in the first layers, e.g. α-helices and β-sheets, and larger structures in deeper layers, e.g. domain structures and arrangements. The GRAPHQA architecture is conceptually divided into three stages (figure 1). At the input, the encoder increases the node and edge feature dimensions through a 2x(Linear-Dropout-ReLU) transformation and adds a global bias. Then, at its core, L message-passing layers operate on the encoded graph, leveraging its structure to propagate and aggregate information. The update functions φ consist of Linear-Dropout-ReLU transformations, with the size of the linear layers progressively decreasing. We use average pooling for the aggregation functions ρ, since preliminary experiments with max/sum pooling performed poorly. Finally, the readout layer outputs local and global quality scores by applying a Linear-Sigmoid operation to the latest node and global features, respectively. Following common practice in Quality Assessment, we use the data from past years' editions of CASP, encompassing several targets with multiple scored decoys each.
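As a side illustration of the objective above (and of the λ-weighted total loss used later in the training details), here is a minimal PyTorch-style sketch; tensor shapes and the default weights are our own assumptions.

```python
import torch
import torch.nn.functional as F

def graphqa_losses(pred_local, pred_global, q_local, q_global,
                   lambda_local=1.0, lambda_global=1.0):
    """MSE on per-residue LDDT predictions and on the per-decoy GDT_TS prediction.

    pred_local, q_local:   (num_residues,) tensors with values in [0, 1]
    pred_global, q_global: scalar tensors with values in [0, 1]
    """
    loss_local = F.mse_loss(pred_local, q_local)      # L_l
    loss_global = F.mse_loss(pred_global, q_global)   # L_g
    return lambda_local * loss_local + lambda_global * loss_global

# toy usage with random scores for a 120-residue decoy
loss = graphqa_losses(torch.rand(120), torch.rand(()), torch.rand(120), torch.rand(()))
```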
Removing all sequences with |S| < 50 from CASP 7-10 results in a dataset of ~100k scored decoys (P, {q_i}, q_g)_{t,d}, which we randomly split into a training set (402 targets) and a validation set for hyper-parameter optimization (35 targets). CASP 11 and 12 are set aside for testing against top-scoring methods (table 3). We evaluate the performance of GRAPHQA on the following standard metrics. At the global level, we compare the predicted and ground-truth GDT TS scores and compute: Root Mean Squared Error (RMSE), the Pearson correlation coefficient computed across all decoys of all targets (R), and the Pearson correlation coefficient computed on a per-target basis and then averaged over all targets (R_target). At the local level, we compare the predicted and ground-truth LDDT scores and compute: RMSE, the Pearson correlation coefficient computed across all residues of all decoys of all targets (R), and the Pearson correlation coefficient computed on a per-decoy basis and then averaged over all decoys of all targets (R_model). Of these, we focus on R_target and R_model, which respectively measure the ability to rank decoys by quality and to distinguish the correctly-predicted parts of a model from those that need improvement. A detailed description of these and other metrics can be found in appendix E. Node features. The node attributes v_i of a protein graph P represent the identity, statistical, and structural features of the i-th residue. We encode the residue identity using a one-of-22 encoding of the corresponding amino acid. Following previous work, we also add two residue-level statistics computed using Multiple Sequence Alignment (MSA), namely self-information and partial entropy, each described by a 23-dimensional vector. Finally, we add a 14-dimensional vector of 3D spatial information including the dihedral angles, surface accessibility and secondary structure type as determined by DSSP. Edge features. An edge represents either a contact or a bond between two residues i and j w.r.t. the conformation C = {(x, y, z)_i}. An edge always exists between two consecutive residues, while non-consecutive residues are only connected if ||C_i - C_j|| < d_max, with d_max optimized on the validation set. We further enrich this connectivity structure by encoding spatial and sequential distances as an 8D feature vector e_{i,j}. Spatial distance is encoded using a radial basis function exp(-d_{i,j}^2 / σ), with σ determined on the validation set. Sequential distance is defined as the number of amino acids between the two residues in the sequence and is expressed using a separation encoding, i.e. a one-hot encoding of the separation |i - j| according to the classes {0, 1, 2, 3, 4, 5:10, >10}. The MSE losses in equation 2 are weighted as L_tot = λ_l L_l + λ_g L_g and minimized using the Adam optimizer with L2 regularization. GRAPHQA is significantly faster to train than LSTM or 3D-CNN methods, e.g. 35 epochs take ~2 hours on one NVIDIA 2080Ti GPU with batches of 200 graphs. This allows us to perform an extensive hyper-parameter search. Table 4 reports the search space, as well as the parameters of the model with the highest R_target on the validation set. We compare GRAPHQA with the following methods, chosen either for their state-of-the-art performance or because they represent a class of approaches for Quality Assessment. ProQ3D computes fixed-size statistical descriptions of the decoys in CASP 9-10, including Rosetta energy terms, which are then used to train a Multi Layer Perceptron on quality scores.
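Stepping back briefly to the edge featurization described above: the 8D encoding (RBF-encoded spatial distance plus one-hot separation class) can be sketched as follows. The σ value and the exact bin boundaries below are illustrative assumptions, not the tuned values from the paper.

```python
import numpy as np

def edge_features(i, j, dist_ij, sigma=15.0):
    """8D edge feature: RBF-encoded spatial distance + one-hot separation class.

    Separation classes follow {0, 1, 2, 3, 4, 5-10, >10}, i.e. 7 bins,
    which together with the RBF scalar gives an 8-dimensional vector.
    """
    rbf = np.exp(-dist_ij ** 2 / sigma)          # scalar in (0, 1]
    sep = abs(i - j)
    if sep <= 4:
        k = sep
    elif sep <= 10:
        k = 5
    else:
        k = 6
    one_hot = np.zeros(7)
    one_hot[k] = 1.0
    return np.concatenate(([rbf], one_hot))       # shape (8,)

feat = edge_features(3, 12, dist_ij=6.2)
```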
In ProQ4, a 1D-CNN is trained to predict LDDT scores from a vectorized representation of protein sequences; a global score is then obtained by averaging over all residues. ProQ4 is pretrained on a large dataset of protein secondary structures and then fine-tuned on CASP 9-10 using a siamese configuration to improve ranking performance. Their results are reported on both CASP 11, which is used as a validation set, and CASP 12. 3DCNN trains a CNN on a three-dimensional representation of atomic densities to rank the decoys in CASP 7-10 according to their GDT TS scores. Notably, no additional feature is used other than atomic structure and type; however, the fixed-size volumetric representation of this method is sensitive to rotations and does not scale well with protein size. Ornate (Pagès et al., 2018) applies a similar 3D approach to predict local CAD-scores and achieves rotation invariance by specifying a canonical residue-centered orientation. Although optimized for local scoring, the average of the predicted scores is shown to correlate well with GDT TS. AngularQA feeds a sequence-like representation of the protein structure to an LSTM to predict GDT TS scores. The LSTM network is trained on decoys from 3DRobot and CASP 9-11, while CASP 12 is used for model selection and testing. VoroMQA and RWplus (Olechnovič &;) are two statistical potential methods that represent an alternative to the other machine-learning-based methods. Table 1 compares the performance of GRAPHQA and other state-of-the-art methods on CASP 11 and 12, while figure 3 contains a graphical representation of true vs. predicted scores for all targets in CASP 12, and an example funnel plot for the decoys of a single target. A more in-depth evaluation on the stage 1 and stage 2 splits of CASP 11, 12, 13 and CAMEO can be found in appendix F. Of all methods, only GRAPHQA and ProQ4 co-optimize for local and global predictions, the former thanks to the graph-based architecture, the latter thanks to its siamese training configuration (the results reported for ProQ3D refer to two separate models trained for either local or global scores). At the local level, our method proves to be on par with or better than ProQ3D and ProQ4, demonstrating the ability to evaluate quality at the residue level and to distinguish correctly predicted parts of the protein chain. At the global level, significantly higher R and R_target metrics indicate that GRAPHQA is more capable than other state-of-the-art methods at ranking decoys based on their overall quality. As shown in our ablation studies (section 5), hand-engineered features like MSA and DSSP contribute to the performance of GRAPHQA, yet we wish to show that our method can learn directly from raw data. GRAPHQA_RAW is a variant that relies uniquely on the one-hot encoding of amino acid identity, similarly to how 3DCNN and Ornate employ atomic features only. The results for GRAPHQA_RAW show that, even without additional features, our method outperforms purely representation-learning methods. In this section we analyse how various components of our method contribute to the final performance, ranging from optimization and architectural choices to protein feature selection. Unless stated otherwise, all ablation studies follow the training procedure described in section 3.3 for a lower number of epochs. We report results on CASP 11 as the mean and standard deviation of 10 runs.
Local and global co-optimization. We investigate the interplay between local and global predictions, specifically whether co-optimizing for both is beneficial or detrimental. At the global level, models trained to predict only global scores achieve a global RMSE of 0.129±.007, whereas models trained to predict both local and global scores obtain 0.117±.006, suggesting that local scores can provide additional information and help the assessment of global quality. At the local level instead, co-optimization does not seem to improve performance: models trained uniquely on local scores achieve a local RMSE of 0.121±.002, while models trained to predict both obtain 0.123±.004. In this study, we test the combined effects of the depth of the network L and the cutoff value d_max. On the one hand, every additional message-passing layer allows the network to aggregate information from a neighborhood that is one hop larger than the previous layer's, effectively extending the receptive field at the readout. On the other hand, the number of contacts included in the graph affects its connectivity and the propagation of messages, e.g. a low d_max corresponds to a low average degree and long shortest paths between any two residues, and vice versa (section B.2). Thus, an architecture that operates on sparsely-connected graphs will require more message-passing layers to achieve the same holistic view as a shallower network operating on denser representations. However, this trade-off is only properly exposed if u, φ^u and ρ^u are removed from the architecture. In fact, this global pathway represents a propagation shortcut that connects all nodes in the graph and sidesteps the limitations of shallow networks. With the global pathway disabled, global predictions are computed in the readout layer by aggregating node features from the last MP layer. Figure 4 reports the RMSE obtained by networks of different depths with no global path, operating on protein graphs constructed with different cutoff values. As expected, the shallow 3-layer architecture requires more densely-connected inputs to achieve the same performance as the 9-layer network. Surprisingly, local predictions seem to be more affected by these factors than global predictions, suggesting that a large receptive field is important even for local scores. We evaluate the impact of node and edge features on the overall prediction performance (figure 5). For the nodes, we use the amino acid identity as a minimal representation and combine it with: a) DSSP features, b) partial entropy, c) self-information, d) both DSSP and MSA features. All features improve both local and global scoring, with DSSP features being marginally more relevant for LDDT. For the edges, we evaluate the effect of having either: a) a binary indicator of bond/contact, b) geometric features, i.e. the Euclidean distance between residues, c) sequential features, i.e. the categorical encoding of the separation between residues, d) both distance and separation encoding. Progressively richer edge features seem to benefit LDDT predictions, while little improvement can be seen at the global level. The design of GRAPHQA makes it suitable not only for scoring, but also for identifying refinement opportunities for computationally-created decoys. Figure 6 shows a decoy that correctly models the native structure of its target for most of the sequence, but has one extremity to which both GRAPHQA and LDDT assign low local scores.
Unlike LDDT, however, GRAPHQA is fully differentiable, and the trained model can be used to explain the factors that influenced a low score and provide useful feedback for computational structure prediction. A simple approach for explaining the predictions of a differentiable function f(x) is Sensitivity Analysis, which uses ∇_x f to measure how variations in the input affect the output. In figure 6 we consider the scores predicted for two different residues and compute the magnitude of the gradients w.r.t. the edges of the graph. Interestingly, GRAPHQA is able to capture quality-related dependencies not only in the neighborhood of the selected residues, but also further apart in the sequence.
Figure 5: Ablation study of node and edge features. All node features improve both local and global scoring, with DSSP features being marginally more relevant for LDDT (left). Richer edge features benefit LDDT predictions, while little improvement can be seen at the global level (right).
Finally, we measure whether the global predictions of GRAPHQA could be used to improve the contact maps used by computational methods to build protein models. If the network has learned a meaningful scoring function for a decoy, then the gradient of the score w.r.t. the contact distances should point in the direction of the native structure. Considering all decoys of all targets in CASP 11, we obtain an average cosine similarity cos(∂GRAPHQA_g/∂d, d_decoy - d_native) of 0.14±.08, which suggests that gradients can be used as coarse feedback for end-to-end protein model prediction. For the first time we applied graph convolutional networks to the important problem of protein quality assessment (QA). Since proteins are naturally represented as graphs, GCNs allowed us to combine the individual benefits of previous QA methods, including representation learning, geometric invariance, explicit modeling of sequential and 3D structure, simultaneous local and global scoring, and computational efficiency. Thanks to these benefits, and through an extensive set of experiments, we demonstrated significant improvements upon the state-of-the-art on various metrics and datasets, and further analyzed the results via thorough ablation and qualitative studies. Finally, we hope that Quality Assessment will gain popularity in the machine learning community, which could benefit from several curated datasets and regular ongoing challenges. We believe that richer geometric representations, e.g. including relative rotations, and raw atomic representations could represent an interesting future direction for learning-based Quality Assessment. Global Distance Test Total Score (GDT TS). GDT TS is a global-level score obtained by first superimposing the structure of a decoy onto the experimental structure using an alignment heuristic, and then computing the fraction of residues whose position is within a certain distance from the corresponding residue in the native structure (figure 7). This percentage is computed at different thresholds and then averaged to produce a score in the range [0, 100], which we rescale between 0 and 1 (table 2). Table 2: worked example of the per-threshold GDT TS computation (residue distances such as 2.5Å and 6.3Å checked against the thresholds, yielding per-threshold fractions of 20%, 60%, 80%, 100%). Local Distance Difference Test (LDDT). LDDT is a residue-level score that does not require alignment of the structures and instead compares the local neighborhood of every residue in the decoy and in the native structure. If we define the neighborhood of a residue as the set of its contacts, i.e.
the set of other residues that lie within a certain distance from it, we can express the quality of that residue as the percentage of contacts that it shares with the corresponding residue in the native structure. Figure 8: Example of LDDT scoring for residue 7: the residues within a radius R_1 are {6, 8, 10} for the native structure (left) and {6, 8} for the decoy (right); at a radius R_2 we have {3, 6, 8, 9, 10, 11} for the native structure (left) and {3, 6, 8, 9, 10} for the decoy (right). We consider all decoys of all targets included in CASP 7-13, excluding proteins whose sequence is shorter than 50 residues and targets that have been canceled by the organizers. In addition to the CASP datasets, we test our method on all targets published in CAMEO between July and December 2017. The cutoff value d_max determines which edges are included in the graph and, consequently, its connectivity. A low cutoff implies a sparsely connected graph, with few edges and long paths between nodes. A higher cutoff yields a denser graph with more edges and shorter paths. In figure 9 we report some statistics about the number of edges, average degree and average shortest paths, evaluated at different cutoff values on 1700 decoys from CASP 11. C GRAPHQA ARCHITECTURE In this section, we illustrate in more detail the structure of the GRAPHQA architecture, as well as the hyperparameter space that was explored to optimize performance on the validation set. Within GRAPHQA, a protein structure is represented as a graph whose nodes correspond to residues and whose edges connect interacting pairs of amino acids. At the input, the features of the i-th residue are encoded in a node feature vector v_i. Similarly, the features of the pairwise interaction between residues i and j are encoded in an edge feature vector e_{i,j}. A global bias term u is also added to represent information that is not localized to any specific node/edge of the graph. With this graph representation, one layer of message passing performs the following updates.
1. For every edge i → j, the edge feature vector is updated using a function φ^e of the adjacent nodes v_i and v_j, of the edge itself e_{i,j}, and of the global attribute u: e'_{i,j} = φ^e(e_{i,j}, v_i, v_j, u).
2. For every node i, features from incident edges {e'_{j,i}} are aggregated using a pooling function ρ^{e→v}: ē_i = ρ^{e→v}({e'_{j,i}}).
3. For every node i, the node feature vector is updated using a function φ^v of the aggregated incident edges ē_i, of the node itself v_i, and of the global attribute u: v'_i = φ^v(ē_i, v_i, u).
4. All edges are aggregated using a pooling function ρ^{e→u}: ē = ρ^{e→u}({e'_{i,j}}).
5. All nodes are aggregated using a pooling function ρ^{v→u}: v̄ = ρ^{v→u}({v'_i}).
6. The global feature vector is updated using a function φ^u of the aggregated edges ē, of the aggregated nodes v̄, and of the global attribute u: u' = φ^u(ē, v̄, u).
In GRAPHQA, all intermediate updates are implemented as Linear-Dropout-ReLU functions, and all aggregation functions use average pooling. The encoder and readout layers do not make use of message passing, effectively processing every node/edge in isolation. Message passing is instead enabled for the core layers of the network and allows GRAPHQA to process information within progressively expanding neighborhoods. The number of neurons in the core message-passing layers decreases from the input to the output. Specifically, it follows a linear interpolation between the input and output numbers reported below, rounded to the closest power of two.
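The propagation steps 1-6 above can be condensed into a single layer. The sketch below is our own plain-PyTorch illustration (single non-batched graph, mean pooling, dimension-preserving MLPs), not the released GraphQA code.

```python
import torch
import torch.nn as nn

class GraphLayer(nn.Module):
    """One message-passing step: edge, node and global updates with mean pooling."""
    def __init__(self, dv, de, du):
        super().__init__()
        self.phi_e = nn.Sequential(nn.Linear(de + 2 * dv + du, de), nn.ReLU())
        self.phi_v = nn.Sequential(nn.Linear(de + dv + du, dv), nn.ReLU())
        self.phi_u = nn.Sequential(nn.Linear(de + dv + du, du), nn.ReLU())

    def forward(self, v, e, u, edge_index):
        src, dst = edge_index                                       # both (E,)
        uE = u.expand(e.size(0), -1)                                # broadcast global attr
        e = self.phi_e(torch.cat([e, v[src], v[dst], uE], dim=-1))  # step 1: update edges
        agg = torch.zeros(v.size(0), e.size(1))                     # step 2: mean over
        cnt = torch.zeros(v.size(0), 1)                             # incoming edges
        agg.index_add_(0, dst, e)
        cnt.index_add_(0, dst, torch.ones(e.size(0), 1))
        agg = agg / cnt.clamp(min=1)
        uV = u.expand(v.size(0), -1)
        v = self.phi_v(torch.cat([agg, v, uV], dim=-1))             # step 3: update nodes
        u = self.phi_u(torch.cat([e.mean(0), v.mean(0), u], dim=-1))  # steps 4-6: global
        return v, e, u

# toy graph: 10 residues, 14 directed edges; feature sizes are arbitrary
v, e, u = torch.randn(10, 16), torch.randn(14, 8), torch.randn(32)
edge_index = torch.randint(0, 10, (2, 14))
layer = GraphLayer(dv=16, de=8, du=32)
v, e, u = layer(v, e, u, edge_index)
```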
In preliminary experiments, we noticed that a progressive increase of the number of layers resulted in convergence issues, which is in contrast to the practice of increasing the number of channels in Convolutional Neural Networks. We perform a guided grid search over the following hyper-parameter space. The final model is chosen to be the one with the highest R_target on the validation set. The following considerations were made:
• The values for d_max are chosen on the basis that the typical bond length is ~5Å and residue-residue interactions are negligible after ~10Å.
• The values for σ are chosen so that the RBF encoding of the edge length is approximately linear around ~5Å.
• The values for L are chosen to approximately match the average length of the shortest paths in the protein graphs at different cutoffs.
• In addition to what is described in section 2.3, we also tested an architecture with BatchNorm layers between the Dropout and ReLU operations, but apart from a significant slowdown we did not notice any improvement.
To complement the analysis reported in the main text, we perform additional studies on the effect of feature representation and on the generalization ability of the trained model. The feature vectors associated with the edges of the graph represent two types of distances between residues, namely spatial distance and separation in the sequence. In this study we evaluate the effect of different representations on validation performance. Spatial distance is the physical distance between amino acids, measured as the Euclidean distance between their β carbon atoms. We consider three possible encodings for this distance:
• Absent: spatial distance is not provided as input;
• Scalar: spatial distance is provided as a raw scalar value (in Ångström);
• RBF: spatial distance is encoded using 32 RBF kernels, with unit variance, equally spaced between 0 and 20.
Figure 10 reports the aggregated performance on CASP 11 of ten runs for each of the above. The rich representation of the RBF kernels seems to improve both LDDT and GDT TS scoring performance, even though the effect is rather limited. Figure 10: Spatial distance: absent, encoded as a scalar, encoded using RBF kernels. Separation is the number of residues between two amino acids in the sequence; we consider three possible encodings:
• Absent: sequential separation is not provided as input;
• Scalar: sequential separation is provided as a raw scalar value (positive integer);
• Categorical: sequential separation is encoded as a one-hot categorical variable, according to the classes {0, 1, 2, 3, 4, 5:10, >10}, which are based on typical interaction patterns within a peptidic chain.
Figure 11 reports the aggregated performance on CASP 11 of ten runs for each of the above. For local scoring, the choice of encoding makes little difference as long as separation is present in the input. On the other hand, the choice of categorical encoding over scalar encoding results in higher global scoring performance. In this study, we evaluate how the natural environment of a protein affects the predictive performance of our method. Targets from CASP 11 and 12 are classified as transmembrane or soluble and scored separately using GRAPHQA. Transmembrane proteins behave differently from soluble proteins as a consequence of the environment they are placed in. The former expose non-polar residues to the cellular membrane that surrounds their structure.
On the contrary, the latter tend to present polar amino acids to the surrounding water-based solvent. Since this information is not explicitly provided to the model, we can compare the predictive performance between the two sets and check that it has actually learned a flexible protein representation. The outcome of this evaluation is shown in table 5. While it is evident that GRAPHQA performs better on soluble proteins, which are more numerous in the training set, it also scores transmembrane proteins to an acceptable degree. In the following, we use notation for: the target proteins, the decoys of a target, the residue indexes of a target, the true and predicted global quality scores (GDT TS), and the true and predicted local quality scores (LDDT). Root Mean Squared Error (RMSE). We compute RMSE between all true and predicted scores, for both LDDT and GDT TS. For LDDT, it is the square root of the mean squared difference between true and predicted residue-level scores, taken over all residues of all decoys of all targets. For GDT TS, it is the square root of the mean squared difference between true and predicted decoy-level scores, taken over all decoys of all targets. Correlation coefficients. We compute the Pearson (R), Spearman (ρ) and Kendall (τ) correlation coefficients between all true and predicted scores. Since all scores are treated equally, with no distinction between different decoys or different targets, a high value for these scores can be misleading. Thus, their per-model and per-target versions should also be checked. These global correlations are computed separately for LDDT and for GDT TS. Correlation coefficients per-model. For every decoy of every target, we compute the Pearson (R_model), Spearman (ρ_model) and Kendall (τ_model) correlation coefficients between true and predicted residue-level scores (LDDT). We then report the average correlation coefficients across all decoys of all targets. The per-model correlation coefficients estimate the ability of the network to rank individual residues by their quality and distinguish correctly vs. incorrectly folded segments. Per-model correlation coefficients are computed only for LDDT. Correlation coefficients per-target. For every target, we compute the Pearson (R_target), Spearman (ρ_target) and Kendall (τ_target) correlation coefficients between true and predicted decoy-level scores (GDT TS). We then report the average correlation coefficients across all targets. With reference to the funnel plots, this would be the correlation between the markers in every plot, averaged across all plots. The per-target correlation coefficients estimate the ability of the network to rank the decoys of a target by their quality and select the ones with the highest global quality. Per-target correlation coefficients are computed only for GDT TS. First Rank Loss (FRL). For every target, we compute the difference in GDT TS between the best decoy according to ground-truth scores and the best decoy according to the predicted scores. We then report the average FRL across all targets. This represents the loss in (true) quality we would suffer if we were to choose a decoy according to our rankings, and is represented in the funnel plots by the gap between the two vertical lines indicating the true best (green) and predicted best (red). FRL measures the ability to select a single best decoy for a given target. In our experiments, however, we noticed that FRL is extremely subject to noise, as it only considers top-1 decoys. Therefore, we consider NDCG to be a superior metric for this purpose, though we have not seen it used in the QA literature. FRL is only computed for GDT TS. Recall at k (REC@k). We can describe Quality Assessment as an information retrieval task, where every target represents a query and its decoys are the documents available for retrieval.
If we consider the best decoy to have a score of 1 and all others to have zero score, we can compute the average REC@k as the percentage of queries for which the best decoy is retrieved among the top-k. This metric, however, is subject to the same pitfalls as FRL, since it only considers the best decoy of every target and ignores the relevance of the others. As described below, NDCG offers a better perspective on the decoys retrieved by a QA method. Normalized Discounted Cumulative Gain at k (NDCG@k). For a given query we consider the top-k decoys ranked according to their predicted global scores. Discounted Cumulative Gain at k (DCG@k) is computed as the cumulative sum of their ground-truth GDT TS scores (gain), discounted according to the position in the list. A high DCG@k is therefore obtained by a) selecting the k best decoys to be part of the top-k predictions, and b) sorting them in order of decreasing quality (the higher in the list, the lower the discount). Dividing DCG@k by DCG_ideal@k (obtained by ranking according to the ground-truth scores) yields the Normalized Discounted Cumulative Gain NDCG@k ∈ [0, 1], which can be compared and averaged across targets. In this section we present additional results for datasets, dataset splits, methods, and metrics that are excluded from the main text for the sake of brevity. The datasets considered in the following pages are: CASP 11, CASP 12, CASP 13, CAMEO. The CASP 11 and 12 datasets are conventionally divided into: stage 1, containing 20 randomly-selected decoys per target, and stage 2, containing the top-150 decoys of each target. In the QA literature, some papers report results either on the dataset as a whole, or on the stage 1 and stage 2 splits. Furthermore, some papers report performances of other methods that differ from the original papers for reasons that are left unspecified. In the main text, we adhere to the following rules to summarize the metrics we collected:
• Metrics computed on stage 1 are considered noisy and ignored, since stage 1 splits contain only 20 randomly-selected decoys per target
• Metrics computed on stage 2 and on the whole dataset are considered equally valid, allowing us to "merge" results from papers with different scoring strategies
• If multiple values are reported from multiple sources for the same (method, dataset) pair, only the best one is reported
Per-target results for CASP 11 (targets T0759 to T0941).
On closer analysis, it appears that the model completely fails to score decoys of target T060 and defaults to predicting a small constant value. To help pinpoint the problem, we compare the predictions made by GRAPHQA and GRAPHQA_RAW (available in the repository). It turns out that the model trained on amino acid identity only does not output the same degenerate predictions as its fully-featured counterpart (the predictions are not perfect, but definitely better than a constant).
We suspect that an error in the preprocessing pipeline might have produced misleading features for T060, e.g. in the multiple sequence alignment program that extracts self-information and partial entropy, or in the DSSP program that computes secondary structure features. CASP 13 is the most recent edition of CASP and, at the moment, only a few targets are available for public evaluation, i.e. their decoy structures are fully characterized, and submitted predictions and ground-truth GDT TS scores can be downloaded from the website. Here we present an evaluation on the publicly available targets, while waiting to update these results as more data is released. Also, it is important to note that GRAPHQA is only trained on CASP 7-10, while other participants have likely (re)trained their models on all previous CASP datasets as well as other datasets. However, even without retraining, we achieve performances that are in line with the results presented for CASP 11 and 12.
Per-target GDT TS results for CASP 13 (targets T0950 to T1009) and CAMEO (targets 2ML1_A to 3WJ2_D).
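For reference, the retrieval-style NDCG@k metric discussed in this appendix can be computed as in the sketch below. The logarithmic position discount is the standard convention and is assumed here, since the exact discount used by the authors is not specified; this is not their evaluation code.

```python
import numpy as np

def dcg_at_k(gains, k):
    """Discounted cumulative gain of the first k items, log2 position discount."""
    gains = np.asarray(gains, dtype=float)[:k]
    discounts = np.log2(np.arange(2, gains.size + 2))
    return float(np.sum(gains / discounts))

def ndcg_at_k(true_scores, pred_scores, k=10):
    """NDCG@k for one target: rank decoys by predicted GDT_TS, gain = true GDT_TS."""
    true_scores = np.asarray(true_scores, dtype=float)
    order = np.argsort(-np.asarray(pred_scores))      # predicted ranking
    ideal = np.sort(true_scores)[::-1]                # ground-truth ranking
    return dcg_at_k(true_scores[order], k) / dcg_at_k(ideal, k)

# toy example: 150 decoys of one target with random true/predicted scores
rng = np.random.default_rng(0)
print(ndcg_at_k(rng.random(150), rng.random(150), k=10))
```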
HyxgBerKwB
GraphQA is a graph-based method for protein Quality Assessment that improves the state-of-the-art for both hand-engineered and representation-learning approaches
We study the problem of training machine learning models incrementally using active learning with access to imperfect or noisy oracles. We specifically consider the setting of batch active learning, in which multiple samples are selected, as opposed to a single sample as in classical settings, so as to reduce the training overhead. Our approach bridges between uniform randomness and score-based importance sampling of clusters when selecting a batch of new samples. Experiments on benchmark image classification datasets (MNIST, SVHN, and CIFAR10) show improvement over existing active learning strategies. We introduce an extra denoising layer to deep networks to make active learning robust to label noise and show significant improvements. Supervised learning is the most widely used machine learning method, but it requires labelled data for training. It is time-consuming and labor-intensive to annotate a large dataset for complex supervised machine learning models. For example, ImageNet reported the time taken to annotate one object to be roughly 55 seconds. Hence an active learning approach, which selects the most relevant samples for annotation to incrementally train machine learning models, is a very attractive avenue, especially for training deep networks for newer problems that have little annotated data. Classical active learning appends the training dataset with a single sample-label pair at a time. Given the increasing complexity of machine learning models, it is natural to expand active learning procedures to append a batch of samples at each iteration instead of just one. Keeping such training overhead in mind, a few batch active learning procedures have been developed in the literature (; ;). When initializing the model with a very small seed dataset, active learning suffers from the cold-start problem: at the very beginning of active learning procedures, the model is far from being accurate and hence the inferred output of the model is incorrect/uncertain. Since active learning relies on the output of the current model to select the next samples, a poor initial model leads to uncertain estimation of the selection criteria and selection of the wrong samples. Prior art on batch active learning suffers performance degradation due to this cold-start problem. Most active learning procedures assume the oracle to be perfect, i.e., it can always annotate samples correctly. However, in real-world scenarios, and given the increasing usage of crowd sourcing, for example Amazon Mechanical Turk (AMT), for labelling data, most oracles are noisy. The noise induced by the oracle is, in many scenarios, persistent: having multiple annotations of the same sample cannot guarantee noise-free labels, because systematic bias in the setup leads to consistent mistakes. To validate this point, we ran a crowd annotation experiment on the ESC50 dataset: each sample is annotated by 5 crowdworkers on AMT and the majority vote of the 5 annotations is considered the label. It turned out that for some classes, 10% of the samples were annotated incorrectly, even with 5 annotators. Details of the experiment can be found in Appendix A. Under such noisy oracle scenarios, classical active learning algorithms such as (a) under-perform, as shown in Figure 1. Motivated by these observations, we fashion a batch active learning strategy to be robust to noisy oracles.
In Figure 1, the noise channel is assumed to be a 10-symmetric channel, where ε is the probability of label error. The main contributions of this work are as follows: we propose a batch sample selection method based on importance sampling and clustering that caters to drawing a batch that is simultaneously diverse and important to the model; we incorporate model uncertainty into the sampling probability to compensate for poor estimation of the importance scores when the training data is too small to build a meaningful model; we introduce a denoising layer to deep networks to robustify active learning to noisy oracles. The main results, as shown in Fig. 3, demonstrate that in the noise-free scenario our method performs best over the whole active learning procedure, and in the noisy scenario our method significantly outperforms state-of-the-art methods. Active Learning: Active learning is a well-studied problem and has gained interest in deep learning as well. A survey summarizes various existing approaches. In a nutshell, two key and distinct ways to tackle this problem in the literature are discrimination and representation. The representation line of work focuses on selecting samples that can represent the whole unlabelled training set, while the discrimination line of work aims at selecting 'tough' examples from the pool set, for example using information-theoretic scores or entropy as uncertainty. Along the lines of ensemble methods we have several works (; ;). A recent line of discrimination-based active learning uses mutual information, Bayesian Active Learning by Disagreement (BALD), as the discriminating criterion. The authors used a dropout approximation to compute the BALD scores for modern Convolutional Neural Networks (CNNs). However, these approaches do not consider batch acquisition, and hence the lack of diversity in the selected batch samples causes a performance lag. Batch Active Learning: Active learning in the batch acquisition manner has been studied from the perspective of set selection, using submodularity or its variants, in a variety of works. Some authors utilize submodularity for naive Bayes and nearest neighbor. The concept of adaptive submodularity is related to active learning as well; this framework addresses adaptive greedy optimization with sequential decision making. Using this concept, pool-based Bayesian active learning with a finite set of candidate hypotheses has been considered. Pool-based active learning has also been discussed from the perspective of risk minimization under a given hypothesis space. Other work uses both discriminative and representative samples to select a batch, or uses a coreset approach to select representative points of the pool set. Recently, adversarial learning of variational auto-encoders has been used for batch active learning; that work builds a representation of the training and pool sets, and adversarially selects the pool representatives. Model Uncertainty: Uncertainty for deep learning models, especially CNNs, was first addressed using dropout as a Bayesian approximation. Model uncertainty approximation using Batch Normalization (BN) has also been shown. Both of these approaches in some sense exploit the stochastic layers (Dropout, BN) to extract model uncertainty. The importance of model uncertainty has also been emphasized in work that considers model as well as label uncertainty, termed epistemic and aleatoric uncertainty, respectively. We also address both of these uncertainties in this work.
Noisy Oracle: The importance of noisy labels from the oracle has been recognized in works that utilized the concept of adaptive submodularity to provide theoretical guarantees, as well as in work studying the same problem with correlated noisy tests. Active learning with noisy oracles has also been studied elsewhere; however, these works do not consider the deep learning setup. A binary classification task with a noisy oracle has also been considered, using a variation of the Expectation Maximization algorithm to estimate the correct labels as well as the annotating workers' quality. The closest works to ours in the noisy oracle setting for deep learning models also propose to augment the model with an extra fully-connected dense layer. However, their denoising layer does not follow any probability simplex constraint, and they use a modified loss function to account for the noise, along with dropout regularization. In this section, we introduce the notation used throughout the paper. We then formally define the problem of batch active learning with noisy oracles. The i-th (j-th) row (column) of a matrix X is denoted X_{i,·} (X_{·,j}). ∆^{K-1} is the probability simplex of dimension K, where ∆^{K-1} = {p ∈ R^K : p_k ≥ 0, Σ_{k=1}^K p_k = 1}. For a probability vector p ∈ ∆^{K-1}, the Shannon entropy is defined as H(p) = -Σ_{k=1}^K p_k log p_k. The KL-divergence between p, q ∈ ∆^{K-1} is D_KL(p ∥ q) = Σ_{k=1}^K p_k log(p_k / q_k); it is always non-negative and is 0 if and only if p = q. The expectation operator is taken as E. We are concerned with a K-class classification problem with a sample space X and label space Y = {1, 2, ..., K}. The classification model M is taken to be g_θ: X → Y, parameterized by θ. The softmax output of the model is given by p = softmax(g_θ(x)) ∈ ∆^{K-1}. The batch active learning setup starts with a set of labeled samples D_tr = {(x_i, y_i)} and unlabeled samples P = {x_j}. With a query budget of b, we select a batch of unlabeled samples B as B = ALG(D_tr, M, b, P), |B| ≤ b, where ALG is the selection procedure conditioned on the current state of active learning (D_tr, M, b, P). ALG is designed with the aim of maximizing the prediction accuracy E_{p_{X×Y}}[1(h_θ(x) = y)]. Henceforth, the samples which can potentially maximize the prediction accuracy are termed important samples. After each acquisition iteration, the training dataset is updated as D_tr = D_tr ∪ {(B, y_B)}, where y_B are the labels of B obtained from an oracle routine. The oracle takes an input x ∈ X and outputs the ground-truth label y ∈ Y. This is referred to as the 'Ideal Oracle', and the mapping from x to y is deterministic. A 'Noisy Oracle' flips the true output y to ỹ, which is what we receive upon querying x. Similar to (a), we assume that the label flipping is independent of the input x and can thus be characterized by the conditional probability p(ỹ = i | y = j), where i, j ∈ Y. We also refer to this conditional distribution as the noisy channel; the ideal oracle has noisy-channel value 1 for i = j and 0 otherwise. For the rest of the paper, we take the noise channel to be a K-symmetric channel (K-SC), see Figure 2b, which is a generalization of the binary symmetric channel. The K-SC is defined as p(ỹ = i | y = j) = 1 - ε for i = j and ε/(K - 1) for i ≠ j, where ε is the probability of a label flip, i.e., p(ỹ ≠ y) = ε. We resort to the K-SC because of its simplicity; in addition, it abstracts the oracle noise strength with a single parameter ε. Therefore, in noisy active learning, after the selection of the required subset B, the training dataset (and then the model) is updated as D_tr = D_tr ∪ {(B, ỹ_B)}. Next, in Section 4, we discuss the proposed solution to noisy batch active learning.
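To illustrate the K-symmetric channel defined above, here is a small sketch that simulates a noisy oracle; the function names and NumPy-based sampling are our own choices for illustration.

```python
import numpy as np

def ksc_matrix(num_classes, eps):
    """K-symmetric channel: keep the true label w.p. 1 - eps, otherwise flip
    uniformly to one of the K - 1 other classes."""
    W = np.full((num_classes, num_classes), eps / (num_classes - 1))
    np.fill_diagonal(W, 1.0 - eps)
    return W                                    # W[i, j] = p(noisy = i | true = j)

def noisy_oracle(true_labels, eps, num_classes, rng=None):
    """Simulate querying a noisy oracle for a batch of samples."""
    rng = rng or np.random.default_rng()
    W = ksc_matrix(num_classes, eps)
    return np.array([rng.choice(num_classes, p=W[:, y]) for y in true_labels])

labels = noisy_oracle(true_labels=np.array([3, 1, 7, 0]), eps=0.3, num_classes=10)
```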
An ideal batch selection procedure to be employed in an active learning setup must address the following issues: (i) select important samples from the available pool for the current model, and (ii) select a diverse batch to avoid repetitive samples. We note that, at each step, when active learning acquires new samples, both of these issues are addressed by using the currently trained model. However, in the event of an uncertain model, the quantification of diversity and importance of a batch of samples will also be inaccurate, resulting in a loss of performance. This is often the case with active learning because we start with little data in hand and consequently an uncertain model. Therefore, we identify the next problem in active learning as (iii) incorporation of the model uncertainty across active learning iterations. Batch selection: The construction of a batch active learning algorithm that solves the first two problems above begins with the assignment of an importance score (ρ) to each sample in the pool. Several score functions exist which perform sample-wise active learning; to list a few: max-entropy, variation ratios, BALD, and the entropy of the predicted class probabilities. We use BALD as the importance score, which quantifies the reduction of uncertainty obtained by incorporating a particular sample for the given model. In principle, we wish a sample to have a high BALD score in order to be selected. For the sake of completeness, it is defined as ρ_x = I(θ; y | x, D_tr) = H[y | x, D_tr] - E_{p(θ|D_tr)}[H[y | x, θ]], where θ are the model parameters. We refer the reader to prior work for details regarding the computation of the BALD score. To address diversity, we first perform clustering of the pooled samples and then use importance sampling to select cluster centroids. For clustering, the distance metric used is the square root of the Jensen-Shannon (JS) divergence between the softmax outputs of the samples. Formally, for our case, it is defined as d(p_i, p_j) = sqrt(JS(p_i, p_j)), where JS(p_i, p_j) = (1/2) D_KL(p_i ∥ m) + (1/2) D_KL(p_j ∥ m) and m = (p_i + p_j)/2. With a slight abuse of notation, we interchangeably write d(p_i, p_j) as d_{i,j}, where i, j are the sample indices and p_i, p_j are the corresponding softmax outputs. The advantage of using the JS-divergence is twofold: first, it captures similarity between probability distributions well; second, unlike the KL-divergence, it is always bounded between 0 and 1. The boundedness helps in incorporating uncertainty, which we will discuss shortly. Using d as the distance metric, we perform Agglomerative hierarchical clustering for a given number of clusters N. A cluster centroid is taken as the median-score sample of the cluster members. Finally, with all similar samples clustered together, we perform importance sampling of the cluster centroids using their importance scores, and a random centroid c is selected as p(c = k) ∝ ρ_k. The clustering and importance sampling together not only take care of selecting important samples but also ensure diversity among the selected samples. Uncertainty Incorporation: The discussion so far depends crucially on the output of the model in hand, i.e., the importance score as well as the similarity distance. As noted in our third identified issue with active learning, that of model uncertainty, these estimates suffer from inaccuracy in situations involving little training data or an uncertain model. The uncertainty of a model, in very general terms, represents the model's confidence in its output. The uncertainty of deep learning models has been approximated in Bayesian settings using dropout, and using batch normalization (BN). Both use stochastic layers (dropout, BN) to undergo multiple forward passes and compute the model's confidence in the outputs. For example, confidence could be measured in terms of the statistical dispersion of the softmax outputs. In particular, the variance of the softmax outputs, the variation ratio of the model output decision, etc., are good candidates. We denote the model uncertainty as σ ∈ [0, 1], such that σ is normalized between 0 and 1, with 0 being complete certainty and 1 a fully uncertain model. For the rest of this work, we compute the uncertainty measure σ as the variation ratio of the outputs of the model's multiple stochastic forward passes.
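The variation-ratio uncertainty just mentioned can be estimated with stochastic forward passes as sketched below; this is a generic MC-dropout-style illustration, and the exact aggregation over the validation set used in the paper may differ.

```python
import torch

def variation_ratio(model, inputs, passes=100):
    """Estimate model uncertainty sigma in [0, 1] via the variation ratio:
    sigma = 1 - (frequency of the modal predicted class), averaged over a
    validation batch. 0 = fully certain, 1 = fully uncertain."""
    model.train()                                    # keep stochastic layers active
    with torch.no_grad():
        preds = torch.stack([model(inputs).argmax(dim=-1) for _ in range(passes)])
    # preds: (passes, batch); count how often each sample gets its modal class
    modal = preds.mode(dim=0).values                 # (batch,)
    freq = (preds == modal).float().mean(dim=0)      # (batch,) in [1/passes, 1]
    return float((1.0 - freq).mean())
```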
Both approaches use stochastic layers (dropout, BN) to perform multiple forward passes and compute the model's confidence in its outputs. For example, confidence can be measured in terms of the statistical dispersion of the softmax outputs; in particular, the variance of the softmax outputs or the variation ratio of the model's output decision are good candidates. We denote the model uncertainty as σ ∈ [0, 1], normalized such that 0 corresponds to complete certainty and 1 to a fully uncertain model. For the rest of this work, we compute the uncertainty measure σ as the variation ratio of the outputs of the model's multiple stochastic forward passes.

When the model is uncertain (σ → 1), we initially select samples from the pool at random. However, as the model becomes more accurate (low σ) by acquiring more labeled samples through active learning, the selection of samples should be increasingly biased towards importance sampling and clustering. To model this mathematically, we use the statistical-mechanics approach of deterministic annealing with the Boltzmann-Gibbs distribution. In the Gibbs distribution, p(i) ∝ e^{-E_i / (k_B T)}, i.e., the probability of the system being in state i is high for low-energy states and is influenced by the temperature T. For example, if T → ∞, then the state energy is irrelevant and all states are equally probable, while if T → 0, then the system is in the lowest-energy state almost surely. We translate this into active learning as follows: for a given cluster centroid c, if the model uncertainty is very high (σ → 1), then all points in the pool (including c) should be equally likely to be selected (uniform random sampling); if the model is very certain (σ → 0), then the centroid c itself should be selected. This is achieved by using, as the analogue of the state energy, the distance d between the cluster centroid c and a sample x in the pool, and, as the analogue of the temperature, the uncertainty estimate σ of the model. The overall procedure is summarized as Algorithm 1.

Algorithm 1 (uncertainty-based batch selection, acquisition round t):
1: Assign an importance score to each x ∈ P as ρ_x = I(θ; y | x, D_tr).
2: Perform agglomerative clustering of the pool samples with N(b) clusters, using the square root of the JS-divergence as the distance metric, to obtain the distance matrix D.
3: Compute the uncertainty estimate σ^(t-1) of the model M^(t-1) and set β = f(σ^(t-1)).
4: Repeat b times: sample a cluster centroid c from the categorical distribution p(c = k) ∝ ρ_k, then sample ζ from the Gibbs distribution p(ζ = s | B^(t), c, β, D) and add it to B^(t).
5: Query the oracle for the labels of B^(t) and update D_tr ← D_tr ∪ {(B^(t), y_{B^(t)})}.
6: Update the model to M^(t) using D_tr and set P ← P \ B^(t).

The distance metric d we use is always bounded between 0 and 1, which gives a convenient interpretation of the state energy. Since, in the event of low uncertainty, we wish to perform importance sampling of cluster centroids, and since d_{c,c} = 0 (the lowest possible value), the Gibbs distribution then selects the cluster centroid c almost surely. To construct a batch, samples have to be drawn from the pool using the Gibbs distribution without replacement. Given that samples s_1, ..., s_n have already been drawn, the probability of drawing a sample ζ, given the cluster centroid c, the distance matrix D = [d_{i,j}] and the inverse temperature (or inverse uncertainty) β, is written as

p(ζ = s | s_{1:n}, c, β, D) = exp(-β d_{c,s}) / Σ_{s' ∈ P'} exp(-β d_{c,s'}),   for s ∈ P',

where P' = P \ s_{1:n}. In theory, the inverse uncertainty β can be any function f: [0, 1] → R_+ ∪ {0} such that f(σ) → ∞ as σ → 0 and f(σ) = 0 for σ = 1. A few possible choices for β (= f(σ)) are -log(σ) and e^{1/σ} - 1. Different inverse functions have different growth rates, and the choice of function depends on both the model and the data.
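A possible implementation of the remaining pieces (the variation-ratio uncertainty σ, the inverse-uncertainty map β = f(σ), and the Gibbs draw without replacement) is sketched below, reusing dist, scores and centroids from the previous sketch. All function names and default values are ours.

import numpy as np

def variation_ratio(mc_preds: np.ndarray) -> float:
    # mc_preds: [T, N] class decisions from T stochastic forward passes on a fixed
    # validation set. Returns 1 - (average frequency of the modal prediction):
    # 0 for a completely certain model, larger for more uncertain models.
    T, N = mc_preds.shape
    modal = [np.unique(mc_preds[:, n], return_counts=True)[1].max() for n in range(N)]
    return float(1.0 - np.mean(modal) / T)

def inverse_uncertainty(sigma: float, scale: float = 1.0, kind: str = "log") -> float:
    # beta = f(sigma): 0 when the model is fully uncertain, growing without bound when certain.
    sigma = float(np.clip(sigma, 1e-6, 1.0))
    return -scale * np.log(sigma) if kind == "log" else scale * (np.exp(1.0 / sigma) - 1.0)

def select_batch(dist, scores, centroids, sigma, budget, seed=0):
    # Draw a batch without replacement: importance-sample a centroid by its BALD
    # score, then make a Gibbs draw p(zeta = s) proportional to exp(-beta * d_{c,s})
    # over the remaining pool, with the distance to the centroid acting as the energy.
    rng = np.random.default_rng(seed)
    beta = inverse_uncertainty(sigma)
    remaining, batch = list(range(len(scores))), []
    for _ in range(budget):
        p_c = scores[centroids] / scores[centroids].sum()
        c = centroids[rng.choice(len(centroids), p=p_c)]
        logits = -beta * dist[c, remaining]
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        pick = remaining[rng.choice(len(remaining), p=probs)]
        batch.append(pick)
        remaining.remove(pick)
    return batch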
Next, since the cluster centroid c is drawn according to p(c = k) ∝ ρ_k, the overall probability of drawing a sample s from the pool is obtained by marginalizing over the centroids:

p(ζ = s | s_{1:n}, β, D) = Σ_k [ρ_k / Σ_{k'} ρ_{k'}] exp(-β d_{c_k, s}) / Σ_{s' ∈ P'} exp(-β d_{c_k, s'}).

We can readily see that upon setting β → 0, p(ζ = s | s_{1:n}, β, D) reduces to 1/|P'|, which is nothing but the uniform distribution over the leftover pool. On setting β → ∞, we have ζ = c with probability ρ_c / Σ_k ρ_k and ζ ≠ c with probability 0, i.e., we select cluster centroids from the pool with importance sampling. For all other 0 < β < ∞ we have a soft bridge between these two asymptotic cases. The approach of uncertainty-based batch active learning is summarized as Algorithm 1. Next, we discuss the solution for noisy oracles in the context of active learning.

The noisy oracle, as defined in Section 3, has a non-zero probability of outputting a wrong label when queried with an input sample. To make the model aware of the possible noise in the dataset originating from the noisy oracle, we append a denoising layer to the model. The inputs to this denoising layer are the softmax outputs p of the original model. Figure 2a illustrates the proposed solution for deep learning classification models. The denoising layer is a fully-connected K×K dense layer with weights W = [w_{i,j}] such that its output is p̃ = Wp. The weights w_{i,j} represent the noisy-channel transition probabilities, w_{i,j} = p(ỹ = i | y = j). Therefore, to be a valid noisy channel, W is constrained as W ∈ {W | W_{.,j} ∈ ∆^{K-1}, ∀ 1 ≤ j ≤ K}. During training we use the model up to and including the denoising layer and train using p̃ (or the label prediction ỹ), while for validation/testing we use the model output p (or the label prediction y). The active learning procedure in the presence of a noisy oracle is summarized as Algorithm 2.

Algorithm 2 (batch active learning with a noisy oracle):
1: for t = 1, 2, ..., T do
2:   Select a batch B^(t) from P (using Algorithm 1) and query the noisy oracle for the labels of B^(t); update D_tr ← D_tr ∪ {(B^(t), ỹ_{B^(t)})}.
3:   Get M*^(t) ← M^(t) appended with the noisy-channel layer at the end.
4:   Update the noisy model M*^(t) using D_tr.
5:   Detach the required model M^(t) from M*^(t) by removing the final noisy-channel layer.
6:   Set P ← P \ B^(t).
7: end for

We now proceed to Section 5 to demonstrate the efficacy of the proposed methods across different datasets. We evaluate the algorithms for training CNNs on three image classification datasets: (i) MNIST, (ii) CIFAR10, and (iii) SVHN. We use the CNN architectures from (fchollet, 2015). For all architectures we use Adam with a learning rate of 1e-3. The implementation is done in PyTorch, and we use the Scikit-learn package for agglomerative clustering. For training the denoising layer, we initialize it with the identity matrix I_K, i.e., assuming the oracle to be noiseless. The number of clusters N(b) is taken to be 5b. The uncertainty measure σ is computed as the variation ratio of the output prediction across 100 stochastic forward passes through the model, using a validation set which is fixed a priori. The inverse uncertainty function β = f(σ) in Algorithm 1 is chosen from l(e^{1/σ} - 1) and -l log(σ), where l is a scaling constant fixed using cross-validation. The cross-validation is performed only for the noise-free setting, and all other results with different noise magnitudes ε follow this choice. This is done to verify the robustness of the parameter choice against noise magnitudes which might not be known a priori. We compare our approach with: (i) Random: a batch is selected by drawing samples from the pool uniformly at random without replacement.
(ii) BALD: Using model uncertainty and the BALD score, the authors in ) do active learning with single sample acquisition. We use the highest b scoring samples to select a batch. (iii) Coreset: The authors in proposed a coreset based approach to select the representative core centroids of the pool set. We use the 2 − OP T approximation greedy algorithm of the paper with similarity measure as l 2 norm between the activations of the penultimate layer. (iv) Entropy: The approach of is implemented via selecting b samples with the highest Shannon entropy H(p) of the softmax outputs. (v) VAAL: The variational adversarial active learning of . In all our experiments, we start with a small number of images 40 − 50 and retrain the model from scratch after every batch acquisition. In order to make a fair comparison, we provide the same initial point for all active learning algorithms in an experiment. We perform a total of 20 random initializations and plot the average performance along with the standard deviation vs number of acquired samples by the algorithms. Figure 3 shows that our proposed algorithm outperform all the existing algorithms. As an important observation, we note that random selection always works better in the initial stages of all experiments. This observation is explained by the fact that all models suffer from inaccurate predictions at the initial stages. The proposed uncertainty based randomization makes a soft bridge between uniform random sampling and score based importance sampling of the cluster centroids. The proposed approach uses randomness at the initial stages and then learns to switch to weigh the model based inference scores as the model becomes increasingly certain of its output. Therefore, the proposed algorithm always envelops the performance of all the other approaches across all three datasets of MNIST, CIFAR10, and SVHN. Figure 3 also shows the negative impact of noisy oracle on the active learning performance across all three datasets. The degradation in the performance worsens with increasing oracle noise strength ε. We see that doing denoisification by appending noisy-channel layer helps combating the noisy oracle in Figure 3. The performance of the proposed noisy oracle active learning is significantly better in all the cases. The prediction accuracy gap between algorithm with/without denoising layer elevates with increase in the noise strength ε. The most recent baselines like (VAAL ), (Coreset ) which make representation of the Training + Pool may not always perform well. While coreset assigns distance between points based on the model output which suffers in the beginning, VAAL uses training data only to make representations together with the remaining pool in GAN like setting. The representative of pool points may not always help, especially if there are difficult points to label and the model can be used to identify them. In addition to the importance score, the model uncertainty is needed to assign a confidence to its judgement which is poor in the beginning and gets strengthened later. The proposed approach works along this direction. Lastly, while robustness against oracle noise is discussed in , however, we see that incorporating the denoising later implicitly in the model helps better. The intuitive reason being, having noise in the training data changes the discriminative distribution from p(y|x) to p(y |x). Hence, learning p(y |x) from the training data and then recovering p(y|x) makes more sense as discussed in Section 4.2. 
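The noisy-channel (denoising) layer discussed above could be realized in PyTorch roughly as follows. The sketch keeps every column of W on the probability simplex by parameterizing it with a per-column softmax and initializing near the identity; this softmax parameterization is our own simplification of the paper's explicit simplex constraint and exact identity initialization, and the class names are ours.

import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyChannelLayer(nn.Module):
    # Maps clean class probabilities p to noisy ones p_tilde = W p,
    # with every column of W constrained to the probability simplex.
    def __init__(self, num_classes: int, init_confidence: float = 5.0):
        super().__init__()
        # Logits chosen so that softmax(logits, dim=0) is close to the identity
        # (a noise-free prior on the channel).
        self.logits = nn.Parameter(init_confidence * torch.eye(num_classes))

    def transition_matrix(self) -> torch.Tensor:
        return F.softmax(self.logits, dim=0)       # column-stochastic W

    def forward(self, p: torch.Tensor) -> torch.Tensor:
        return p @ self.transition_matrix().t()    # batch of p_tilde

class NoisyModel(nn.Module):
    # Base classifier followed by the denoising layer (used only during training).
    def __init__(self, base: nn.Module, num_classes: int):
        super().__init__()
        self.base, self.channel = base, NoisyChannelLayer(num_classes)

    def forward(self, x):
        p = F.softmax(self.base(x), dim=-1)
        return self.channel(p)                     # p_tilde, compared against the noisy labels

# Training: loss = F.nll_loss(torch.log(noisy_model(x) + 1e-8), noisy_labels)
# Evaluation: use noisy_model.base(x) alone, i.e. detach the channel layer.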
The uncertainty measure σ plays a key role for the proposed algorithm. We have observed that under strong noise influence from the oracle, the model's performance is compromised due to spurious training data as we see in Figure3. This affects the estimation of the uncertainty measure (variation ratio) as well. We see in Figure 4 that the model uncertainty does not drop as expected due to the label noise. However, the aid provided by the denoising layer to combat the oracle noise solves this issue. We observe in Figure 4 that uncertainty drops at a faster rate as the model along with the denoising layer gets access to more training data. Hence, the proposed algorithm along with the denoising layer make better judgment of soft switch between uniform randomness and importance sampling using. The availability of better uncertainty estimates for modern deep learning architectures is a promising future research, and the current work will also benefit from it. In this paper we have proposed a batch sample selection mechanism for active learning with access to noisy oracles. We use mutual information between model parameters and the predicted class probabilities as importance score for each sample, and cluster the pool sample space with JensonShannon distance. We incorporate model uncertainty/confidence into Gibbs distribution over the clusters and select samples from each cluster with importance sampling. We introduce an additional layer at the output of deep networks to estimate label noise. Experiments on MNIST, SVHN, and CIFAR10 show that the proposed method is more robust against noisy labels compared with the state of the art. Even in noise-free scenarios, our method still performs the best for all three datasets. Our contributions open avenues for exploring applicability of batch active learning in setups involving imperfect data acquisition schemes either by construction or because of resource constraints. Under review as a conference paper at ICLR 2020 A ESC50 CROWD LABELING EXPERIMENT We selected 10 categories of ESC50 and use Amazon Mechanical Turk for annotation. In each annotation task, the crowd worker is asked to listen to the sound track and pick the class that the sound belongs to, with confidence level. The crowd worker can also pick "Unsure" if he/she does not think the sound track clearly belongs to one of the 10 categories. For quality control, we embed sound tracks that clearly belong to one class (these are called gold standards) into the set of tasks an annotator will do. If the annotator labels the gold standard sound tracks wrong, then labels from this annotator will be discarded. The confusion table of this crowd labeling experiment is shown in Figure 5: each row corresponds to sound tracks with one ground truth class, and the columns are majority-voted crowd-sourced labels of the sound tracks. We can see that for some classes, such as frog and helicopter, even with 5 crowd workers, the majority vote of their annotation still cannot fully agree with the ground truth class. We present rest of the experimental supplementary to the ones presented in the main body of Section 5. The active learning algorithm performance for oracle noise strength of ε = 0.2 and ε = 0.4 are presented in Figure 6. Similarly to what discussed in Section 5, we observe that the performance of proposed algorithm dominates all other existing works for ε = 0.2. 
We witnessed that the proposed algorithm performance (without denoising layer) is not able to match other algorithms (BALD and Entropy) when ε = 0.4, even with more training data. The reason for this behavior can be explained using the uncertainty measure σ output in the Figure 7. We see that under strong noise influence from the oracle, the model uncertainty doesn't reduce along the active learning acquisition iterations. Because of this behavior, the proposed uncertainty based algorithm sticks to put more weightage on uniform random sampling, even with more training data. However, we see that using denoising layer, we have better model uncertainty estimates under the influence of noisy oracle. Since the uncertainty estimates improve, as we see in Figure 7, for ε = 0.4, the proposed algorithm along with the denoising layer performs very well and has significant improvement in performance as compared to other approaches. The for CIFAR10 dataset with oracle noise strength of ε = 0.2 and 0.4 are provided in the Figure 8. We see that the proposed algorithm without/with using the denoising layer outperforms other benchmarks. We provide the active learning accuracy for SVHN dataset with oracle noise strength of ε = 0.2 and 0.4 in the Figure 8. Similar to other , we see that the proposed algorithm without/with using the denoising layer outperforms other benchmarks for ε = 0.2. For oracle noise strength of ε = 0.4, we see a similar trend as MNIST regarding performance compromise to the proposed uncertainty based batch selection. The reason is again found in the uncertainty estimates plot in Figure 10 for ε = 0.4. With more mislabeled training examples, the model uncertainty estimate doesn't improve with active learning samples acquisition. Hence, the proposed algorithm makes the judgment of staying close to uniform random sampling. However, unlike MNIST in Figure 7, the uncertainty estimate is not that poor for SVHN, i.e., it still decays. Therefore, the performance loss in proposed algorithm is not that significant. While, upon using the denoising layer, the uncertainty estimates improve significantly, and therefore, the proposed algorithm along with the denoising layer outperforms other approaches by big margin. Using the same setup as explained in Section 5, we evaluate the performance on CIFAR100 dataset for various active learning algorithms listed in Section 5.2. We observe in Figure 11 that the proposed uncertainty based algorithm perform similar or better than the baselines. The incorporation of denoising layer helps in countering the affects of noisy oracle as we demonstrate by varying the noise strength ε = 0.1, 0.3. For a quantitative look at the active learning , mean and standard deviation of the performance vs. acquisition, in the Figure 3, we present the in the tabular format in Table 1 for MNIST, Table 2 for CIFAR10, Table 3 for SVHN, and Table 4 for CIFAR100, respectively. Table 3 Active learning for SVHN dataset.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SJxIkkSKwB
We address active learning in the batch setting with noisy oracles and use model uncertainty to encode the decision quality of the active learning algorithm during acquisition.
Artistic style transfer is the problem of synthesizing an image with content similar to a given image and style similar to another. Although recent feed-forward neural networks can generate stylized images in real-time, these models produce a single stylization given a pair of style/content images, and the user doesn't have control over the synthesized output. Moreover, the style transfer depends on the hyper-parameters of the model with varying ``optimum" for different input images. Therefore, if the stylized output is not appealing to the user, she/he has to try multiple models or retrain one with different hyper-parameters to get a favorite stylization. In this paper, we address these issues by proposing a novel method which allows adjustment of crucial hyper-parameters, after the training and in real-time, through a set of manually adjustable parameters. These parameters enable the user to modify the synthesized outputs from the same pair of style/content images, in search of a favorite stylized image. Our quantitative and qualitative experiments indicate how adjusting these parameters is comparable to retraining the model with different hyper-parameters. We also demonstrate how these parameters can be randomized to generate which are diverse but still very similar in style and content. Style transfer is a long-standing problem in computer vision with the goal of synthesizing new images by combining the content of one image with the style of another BID8 BID12 BID0. Recently, neural style transfer techniques BID9 BID15 BID11 BID20 BID19 showed that the correlation between the features extracted from the trained deep neural networks is quite effective on capturing the visual styles and content that can be used for generating images similar in style and content. However, since the definition of similarity is inherently vague, the objective of style transfer is not well defined and one can imagine multiple stylized images from the same pair of content/style images. Existing real-time style transfer methods generate only one stylization for a given content/style pair and while the stylizations of different methods usually look distinct BID27 BID13, it is not possible to say that one stylization is better in all contexts since people react differently to images based on their and situation. Hence, to get favored stylizations users must try different methods that is not satisfactory. It is more desirable to have a single model which can generate diverse , but still similar in style and content, in real-time, by adjusting some input parameters. One other issue with the current methods is their high sensitivity to the hyper-parameters. More specifically, current real-time style transfer methods minimize a weighted sum of losses from different layers of a pre-trained image classification model BID15 BID13 (check Sec 3 for details) and different weight sets can into very different styles (Figure 6). However, one can only observe the effect of these weights in the final stylization by fully retraining the model with the new set of weights. Considering the fact that the "optimal" set of weights can be different for any pair of style/content (Figure 3) and also the fact that this "optimal" truly doesn't exist (since the goodness of the output is a personal choice) retraining the models over and over until the desired is generated is not practical. Content (Fixed) Figure 1: Adjusting the output of the synthesized stylized images in real-time. 
Each column shows a different stylized image for the same content and style image. Note how each row still resembles the same content and style while being widely different in details. The primary goal of this paper is to address these issues by providing a novel mechanism which allows for adjustment of the stylized image, in real-time and after training. To achieve this, we use an auxiliary network which accepts additional parameters as inputs and changes the style transfer process by adjusting the weights between multiple losses. We show that changing these parameters at inference time to stylizations similar to the ones achievable by retraining the model with different hyperparameters. We also show that a random selection of these parameters at run-time can generate a random stylization. These solutions, enable the end user to be in full control of how the stylized image is being formed as well as having the capability of generating multiple stochastic stylized images from a fixed pair of style/content. The stochastic nature of our proposed method is most apparent when viewing the transition between random generations. Therefore, we highly encourage the reader to check the project website https://goo.gl/PVWQ9K to view the generated stylizations. The strength of deep networks in style transfer was first demonstrated by BID10. While this method generates impressive , it is too slow for real-time applications due to its optimization loop. Follow up works speed up this process by training feed-forward networks that can transfer style of a single style image BID15 or multiple styles. Other works introduced real-time methods to transfer style of arbitrary style image to an arbitrary content image BID11 BID13. These methods can generate different stylizations from different style images; however, they only produce one stylization for a single pair of content/style image which is different from our proposed method. Generating diverse have been studied in multiple domains such as colorizations BID6 BID2, image synthesis BID3, video prediction BID1 BID17, and domain transfer BID14 BID32. Domain transfer is the most similar problem to the style transfer. Although we can generate multiple outputs from a given input image BID14, we need a collection of target or style images for training. Therefore we can not use it when we do not have a collection of similar styles. Style loss function is a crucial part of style transfer which affects the output stylization significantly. The most common style loss is Gram matrix which computes the second-order statistics of the feature activations BID10, however many alternative losses have been introduced to measure distances between feature statistics of the style and stylized images such as correlation alignment loss BID24, histogram loss BID25, and MMD loss BID18. More recent work BID22 has used depth similarity of style and stylized images as a part of the loss. We demonstrate the success of our method using only Gram matrix; however, our approach can be expanded to utilize other losses as well. To the best of our knowledge, the closest work to this paper is BID31 in which the authors utilized Julesz ensemble to encourage diversity in stylizations explicitly. Although this Figure 2: Architecture of the proposed model. The loss adjustment parameters α α α c and α α α s is passed to the network Λ which will predict activation normalizers γ α α α and β α α α that normalize activation of main stylizing network T. 
The stylized image is passed to a trained image classifier where its intermediate representation is used to calculate the style loss L s and content loss L c. Then the loss from each layer is multiplied by the corresponding input adjustment parameter. Models Λ and T are trained jointly by minimizing this weighted sum. At generation time, values for α α α c and α α α s can be adjusted manually or randomly sampled to generate varied stylizations. method generates different stylizations, they are very similar in style, and they only differ in minor details. A qualitative comparison in FIG6 shows that our proposed method is more effective in diverse stylization. 1e-2 1e-3 1e-4 Style Figure 3: Effect of adjusting the style weight in style transfer network from BID15. Each column demonstrates the of a separate training with all w l s set to the printed value. As can be seen, the "optimal" weight is different from one style image to another and there can be multiple "good" stylizations depending on ones' personal choice. Check supplementary materials for more examples. Style transfer can be formulated as generating a stylized image p which its content is similar to a given content image c and its style is close to another given style image s. DISPLAYFORM0 The similarity in style can be vaguely defined as sharing the same spatial statistics in low-level features, while similarity in content is roughly having a close Euclidean distance in high-level features BID11. These features are typically extracted from a pre-trained image classification network, commonly VGG-19 . The main idea here is that the features obtained by the image classifier contain information about the content of the input image while the correlation between these features represents its style. In order to increase the similarity between two images, Gatys et al. BID10 minimize the following distances between their extracted features: DISPLAYFORM1 where φ l (x) is activation of a pre-trained classification network at layer l given the input image x, while L l c (p) and L l s (p) are content and style loss at layer l respectively. G(φ l (p)) denotes the Gram matrix associated with φ l (p). The total loss is calculated as a weighted sum of losses across a set of content layers C and style layers S: DISPLAYFORM2 where w l c, w l s are hyper-parameters to adjust the contribution of each layer to the loss. Layers can be shared between C and S. These hyper-parameters have to be manually fine tuned through try and error and usually vary for different style images (Figure 3). Finally, the objective of style transfer can be defined as: DISPLAYFORM3 This objective can be minimized by iterative gradient-based optimization methods starting from an initial p which usually is random noise or the content image itself. Solving the objective in Equation 3 using an iterative method can be very slow and has to be repeated for any given pair of style/content image. A much faster method is to directly train a deep network T which maps a given content image c to a stylized image p BID15. T is usually a feed-forward convolutional network (parameterized by θ) with residual connections between downsampling and up-sampling layers BID26 and is trained on many content images using Equation 3 as the loss function: DISPLAYFORM0 The style image is assumed to be fixed and therefore a different network should be trained per style image. However, for a fixed style image, this method can generate stylized images in realtime BID15. 
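The content and style losses above (squared distances between activations and between Gram matrices of a pre-trained classifier's features, combined as a weighted sum over layers) can be sketched as follows; phi_l would be intermediate VGG activations, and the normalization constants and function names are our own choices rather than the paper's exact formulation.

import torch

def gram_matrix(feats: torch.Tensor) -> torch.Tensor:
    # feats: [B, C, H, W] activations phi_l(x); returns [B, C, C] Gram matrices.
    b, c, h, w = feats.shape
    f = feats.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def content_loss(phi_p: torch.Tensor, phi_c: torch.Tensor) -> torch.Tensor:
    # Squared distance between activations of the stylized and content images.
    return torch.mean((phi_p - phi_c) ** 2)

def style_loss(phi_p: torch.Tensor, phi_s: torch.Tensor) -> torch.Tensor:
    # Squared distance between Gram matrices of the stylized and style images.
    return torch.mean((gram_matrix(phi_p) - gram_matrix(phi_s)) ** 2)

def total_loss(content_feats, style_feats, w_c, w_s):
    # Weighted sum over content layers C and style layers S.
    # content_feats: list of (phi_l(p), phi_l(c)); style_feats: list of (phi_l(p), phi_l(s)).
    loss = sum(w * content_loss(fp, fc) for w, (fp, fc) in zip(w_c, content_feats))
    loss += sum(w * style_loss(fp, fs) for w, (fp, fs) in zip(w_s, style_feats))
    return loss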
Recent methods BID11 BID13 introduced real-time style transfer methods for multiple styles. But, these methods still generate only one stylization for a pair of style and content images. In this paper we address the following issues in real-time feed-forward style transfer methods: 1. The output of these models is sensitive to the hyper-parameters w l c and w l s and different weights significantly affect the generated stylized image as demonstrated in FIG3. Moreover, the "optimal" weights vary from one style image to another (Figure 3) and therefore finding a good set of weights should be repeated for each style image. Please note that for each set of w l c and w l s the model has to be fully retrained that limits the practicality of style transfer models. Top row demonstrates that randomizing α α α to different stylizations however the style features appear in the same spatial position (e.g., look at the swirl effect on the left eye). Middle row visualizes the effect of adding random noise to the content image in moving these features with fixed α α α. Combination of these two randomization techniques can generate highly versatile outputs which can be seen in the bottom row. Notice how each image in this row differs in both style and the spatial position of style elements. Look at FIG8 for more randomized .2. Current methods generate a single stylized image given a content/style pair. While the stylizations of different methods usually look very distinct BID27, it is not possible to say which stylization is better for every context since it is a matter of personal taste. To get a favored stylization, users may need to try different methods or train a network with different hyper-parameters which is not satisfactory and, ideally, the user should have the capability of getting different stylizations in real-time. We address these issues by conditioning the generated stylized image on additional input parameters where each parameter controls the share of the loss from a corresponding layer. This solves the problem since one can adjust the contribution of each layer to adjust the final stylized after the training and in real-time. Secondly, we address the problem by randomizing these parameters which in different stylizations. We enable the users to adjust w DISPLAYFORM0 To learn the effect of α α α c and α α α s on the objective, we use a technique called conditional instance normalization (Ulyanov et al.). This method transforms the activations of a layer x in the feedforward network T to a normalized activation z which is conditioned on additional inputs α α α = [α α α c, α α α s]: DISPLAYFORM1 where µ and σ are mean and standard deviation of activations at layer x across spatial axes BID11 and γ α α α, β α α α are the learned mean and standard deviation of this transformation. These parameters can be approximated using a second neural network which will be trained end-to-end with T: Since L l can be very different in scale, one loss term may dominate the others which will fail the training. To balance the losses, we normalize them using their exponential moving average as a normalizing factor, i.e. each L l will be normalized to: DISPLAYFORM2 DISPLAYFORM3 where L l (p) is the exponential moving average of L l (p). In this section, first we study the effect of adjusting the input parameters in our method. Then we demonstrate that we can use our method to generate random stylizations and finally, we compare our method with a few baselines in terms of generating random stylizations. 
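Before the experiments, a sketch of the conditional instance normalization of Equation 6, with gamma_alpha and beta_alpha predicted from the adjustment parameters alpha by a small network (the Lambda sub-network), is given below; layer sizes and names are assumptions on our part.

import torch
import torch.nn as nn

class AlphaConditionalInstanceNorm(nn.Module):
    # z = gamma_alpha * (x - mu) / sigma + beta_alpha, where (gamma, beta) are
    # predicted from the loss-adjustment parameters alpha by a small MLP.
    def __init__(self, num_channels: int, alpha_dim: int, hidden: int = 64):
        super().__init__()
        self.predict = nn.Sequential(
            nn.Linear(alpha_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * num_channels),
        )
        self.num_channels = num_channels

    def forward(self, x: torch.Tensor, alpha: torch.Tensor) -> torch.Tensor:
        # x: [B, C, H, W]; alpha: [B, alpha_dim], sampled from U(0, 1) during training.
        mu = x.mean(dim=(2, 3), keepdim=True)
        sigma = x.std(dim=(2, 3), keepdim=True) + 1e-5
        gamma, beta = self.predict(alpha).chunk(2, dim=-1)
        gamma = gamma.view(-1, self.num_channels, 1, 1)
        beta = beta.view(-1, self.num_channels, 1, 1)
        return gamma * (x - mu) / sigma + beta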
We implemented Λ as a multilayer fully connected neural network. We used the same architecture as BID15 BID11 for T and only increased number of residual blocks by 3 (look at supplementary materials for details) which improved stylization . We trained T and Λ jointly by sampling random values for α α α from U. We trained our model on ImageNet BID5 ) as content images while using paintings from Kaggle Painter by Numbers (Kaggle) and textures from Descibable Texture Dataset BID4 as style images. We selected random images form ImageNet test set, MS-COCO BID21 and faces from CelebA dataset as our content test images. Similar to BID11, we used the last feature set of conv3 as content layer C. We used last feature set of conv2, conv3 and conv4 layers from VGG-19 network as style layers S. Since there is only one content layer, we fix α α α c = 1. Our implementation can process 47.5 fps on a NVIDIA GeForce 1080, compared to 52.0 for the base model without Λ sub-network. The primary goal of introducing the adjustable parameters α α α was to modify the loss of each separate layer manually. Qualitatively, this is demonstrable by increasing one of the input parameters from zero to one while fixing the rest of them to zero. FIG0 shows one example of such transition. Each row in this figure is corresponding to a different style layer, and therefore the stylizations at each row would be different. Notice how deeper layers stylize the image with bigger stylization elements from the style image but all of them still apply the coloring. We also visualize the effect of increasing two of the input parameters at the same time in FIG7. However, these transitions are best demonstrated interactively which is accessible at the project website https://goo.gl/PVWQ9K. To quantitatively demonstrate the change in losses with adjustment of the input parameters, we rerun the same experiment of assigning a fixed value to all of the input parameters while gradually increasing one of them from zero to one, this time across 100 different content images. Then we calculate the median loss at each style loss layer S. As can be seen in, increasing α l s decreases the measured loss corresponding to that parameter. To show the generalization of our method across style images, we trained 25 models with different style images and then measured median of the loss at any of the S layers for 100 different content images FIG4 -(bottom). We exhibit the same drop trends as before which means the model can generate stylizations conditioned on the input parameters. Finally, we verify that modifying the input parameters α α α s generates visually similar stylizations to the retrained base model with different loss weights w l s. To do so, we train the base model BID15 One application of our proposed method is to generate multiple stylizations given a fixed pair of content/style image. To do so, we randomize α α α to generate randomized stylization (top row of FIG1). Changing values of α α α usually do not randomize the position of the "elements" of the style. We can enforce this kind of randomness by adding some noise with the small magnitude to the content image. For this purpose, we multiply the content image with a mask which is computed by applying an inverse Gaussian filter on a white image with a handful (< 10) random zeros. This masking can shadow sensitive parts of the image which will change the spatial locations of the "elements" of style. Middle row in FIG1 demonstrates the effect of this randomization. 
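One plausible reading of the content-masking step (our interpretation, not the authors' code) is sketched below: a white image with a few randomly zeroed spots is smoothed with a Gaussian filter, renormalized, and used to multiplicatively shadow parts of the content image. The small radius around each zero is our own addition to make the shadowing visible.

import numpy as np
from scipy.ndimage import gaussian_filter

def random_content_mask(height, width, num_zeros=8, sigma=10.0, radius=3, seed=0):
    # White (all-ones) image with a handful of zeroed spots, blurred and renormalized.
    rng = np.random.default_rng(seed)
    mask = np.ones((height, width))
    for _ in range(num_zeros):
        y, x = rng.integers(0, height), rng.integers(0, width)
        mask[max(0, y - radius):y + radius, max(0, x - radius):x + radius] = 0.0
    mask = gaussian_filter(mask, sigma)
    return mask / mask.max()

# Usage sketch: stylized = T(content * random_content_mask(H, W)[None, None, ...], alpha)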
Finally, we combine these two randomizations to maximizes the diversity of the output which is shown in the bottom row of FIG1. More randomized stylizations can be seen in FIG8 and at https://goo.gl/PVWQ9K. BID31. Our method generates diverse stylizations while StyleNet mostly differ in minor details. To the best of our knowledge, generating diverse stylizations at real-time is only have been studied at BID31 before. In this section, we qualitatively compare our method with this baseline. Also, we compare our method with a simple baseline where we add noise to the style parameters. The simplest baseline for getting diverse stylizations is to add noises to some parameters or the inputs of the style-transfer network. In the last section, we demonstrate that we can move the locations of elements of style by adding noise to the content input image. To answer the question that if we can get different stylizations by adding noise to the style input of the network, we utilize the model of which uses conditional instance normalization for transferring style. We train this model with only one style image and to get different stylizations, we add random noise to the style parameters (γ α α α and β α α α parameters of equation 6) at run-time. The stylization for this baseline are shown on the top row of FIG6. While we get different stylizations by adding random noises, the stylizations are no longer similar to the input style image. To enforce similar stylizations, we trained the same baseline while we add random noises at the training phase as well. The stylization are shown in the second row of FIG6. As it can be seen, adding noise at the training time makes the model robust to the noise and the stylization are similar. This indicates that a loss term that encourages diversity is necessary. We also compare the of our model with StyleNet BID31. As visible in FIG6, although StyleNet's stylizations are different, they vary in minor details and all carry the same level of stylization elements. In contrast, our model synthesizes stylized images with varying levels of stylization and more randomization. Our main contribution in this paper is a novel method which allows adjustment of each loss layer's contribution in feed-forward style transfer networks, in real-time and after training. This capability allows the users to adjust the stylized output to find the favorite stylization by changing input parameters and without retraining the stylization model. We also show how randomizing these parameters plus some noise added to the content image can in very different stylizations from the same pair of style/content image. Our method can be expanded in numerous ways e.g. applying it to multi-style transfer methods such as BID11, applying the same parametrization technique to randomize the correlation loss between the features of each layer and finally using different loss functions and pre-trained networks for computing the loss to randomize the outputs even further. One other interesting future direction is to apply the same "loss adjustment after training" technique for other classic computer vision and deep learning tasks. Style transfer is not the only task in which modifying the hyper-parameters can greatly affect the predicted and it would be rather interesting to try this method for adjusting the hyper-parameters in similar problems. 
Residual block (C feature maps): Convolution 3x3, stride 1, C channels, SAME padding, ReLU; Convolution 3x3, stride 1, C channels, SAME padding, linear; add the input and the output.
Upsampling block (C feature maps): nearest-neighbor interpolation, factor 2; Convolution 3x3, stride 1, C channels, SAME padding, ReLU.
Normalization: conditional instance normalization after every convolution.
Optimizer: Adam (α = 0.001, β1 = 0.9, β2 = 0.999). Batch size: 8. Weight initialization: isotropic Gaussian (µ = 0, σ = 0.01).
Each column demonstrates the results of a separate training of the style transfer network from BID15. As can be seen, the "optimal" weight is different from one style image to another and there can be more than one "good" stylization depending on one's personal choice. Figure 13: Results of combining losses from different layers at generation time by adjusting their corresponding parameters. The first column is the style image, which is fixed for each row. The content image is the same for all of the outputs. The corresponding parameter for each of the losses is zero except for the one(s) mentioned in the title of each column. Notice how each layer enforces a different type of stylization and how the combinations vary as well. Also note how a single combination of layers cannot be the "optimal" stylization for every style image, and one may prefer the result from another column.
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HJg4E8IFdE
Stochastic style transfer with adjustable features.
Recent work has shown that deep reinforcement-learning agents can learn to follow language-like instructions from infrequent environment rewards. However, this places on environment designers the onus of designing language-conditional reward functions which may not be easily or tractably implemented as the complexity of the environment and the language scales. To overcome this limitation, we present a framework within which instruction-conditional RL agents are trained using rewards obtained not from the environment, but from reward models which are jointly trained from expert examples. As reward models improve, they learn to accurately reward agents for completing tasks for environment configurations---and for instructions---not present amongst the expert data. This framework effectively separates the representation of what instructions require from how they can be executed. In a simple grid world, it enables an agent to learn a range of commands requiring interaction with blocks and understanding of spatial relations and underspecified abstract arrangements. We further show the method allows our agent to adapt to changes in the environment without requiring new expert examples. Figure 1: Different valid goal states for the instruction "build an L-like shape from red blocks".Developing agents that can learn to follow user instructions pertaining to an environment is a longstanding goal of AI research BID33. Recent work has shown deep reinforcement learning (RL) to be a promising paradigm for learning to follow language-like instructions in both 2D and 3D worlds (e.g. BID11 ; BID5, see Section 4 for a review). In each of these cases, being able to reward an agent for successfully completing a task specified by an instruction requires the implementation of a full interpreter of the instruction language. This interpreter must be able to evaluate the instruction against environment states to determine when reward must be granted to the agent, and in doing so requires full knowledge (on the part of the designer) of the semantics of the instruction language relative to the environment. Consider, for example, 4 arrangements of blocks presented in Figure 1. Each of them can be interpreted as a of successfully executing the instruction "build an L-like shape from red blocks", despite the fact that these arrangements differ in the location and the orientation of the target shape, as well as in the positioning of the irrelevant blue blocks. At best (e.g. for instructions such as the * Work done during an internship at DeepMind.† Now at Facebook AI Research.aforementioned one), implementing such an interpreter is feasible, although typically onerous in terms of engineering efforts to ensure reward can be given-for any admissible instruction in the language-in potentially complex or large environments. At worst, if we wish to scale to the full complexity of natural language, with all its ambiguity and underspecification, this requires solving fundamental problems of natural language understanding. If instruction-conditional reward functions cannot conveniently or tractably be implemented, can we somehow learn them in order to then train instruction-conditional policies? When there is a single implicit task, Inverse Reinforcement Learning (IRL; BID20 BID36 methods in general, and Generative Adversarial Imitation Learning BID12 in particular, have yielded some success in jointly learning reward functions from expert data and training policies from learned reward models. 
In this paper, we wish to investigate whether such mechanisms can be adapted to the more general case of jointly learning to understand language which specifies task objectives (e.g. instructions, goal specifications, directives), and use such understanding to reward language-conditional policies which are trained to complete such tasks. For simplicity, we explore a facet of this general problem in this paper by focussing on the case of declarative commands that specify sets of possible goal-states (e.g. "arrange the red blocks in a circle. "), and where expert examples need only be goal states rather than full trajectories or demonstrations, leaving such extensions for further work. We introduce a framework-Adversarial Goal-Induced Learning from Examples (AGILE)-for jointly training an instruction-conditional reward model using expert examples of completed instructions alongside a policy which will learn to complete instructions by maximising the thus-modelled reward. In this respect, AGILE relies on familiar RL objectives, with free choice of model architecture or training mechanisms, the only difference being that the reward comes from a learned reward model rather than from the environment. We first verify that our method works in settings where a comparison between AGILE-trained policies with policies trained from environment reward is possible, to which end we implement instructionconditional reward functions. In this setting, we show that the learning speed and performance of A3C agents trained with AGILE reward models is superior to A3C agents trained against environment reward, and comparable to that of true-reward A3C agents supplemented by auxiliary unsupervised reward prediction objectives. To simulate an instruction-learning setting in which implementing a reward function would be problematic, we construct a dataset of instructions and goal-states for the task of building colored orientation-invariant arrangements of blocks. On this task, without us ever having to implement the reward function, the agent trained within AGILE learns to construct arrangements as instructed. Finally, we study how well AGILE's reward model generalises beyond the examples on which it was trained. Our experiments show it can be reused to allow the policy to adapt to changes in the environment. Here, we introduce AGILE ("Adversarial Goal-Induced Learning from Examples", in homage to the adversarial learning mechanisms that inspire it), a framework for jointly learning to model reward for instructions, and learn a policy from such a reward model. Specifically, we learn an instructionconditional policy π θ with parameters θ, from a data stream G π θ obtained from interaction with the environment, by adjusting θ to maximise the expected total reward R π (θ) based on stepwise reward r t given to the policy, exactly as done in any normal Reinforcement Learning setup. The difference lies in the source of the reward: we introduce an additional discriminator network D φ, the reward model, whose purpose is to define a meaningful reward function for training π θ. We jointly learn this reward model alongside the policy by training it to predict whether a given state s is a goal state for a given instruction c or not. Rather than obtain positive and negative examples of instruction, state pairs from a purely static dataset, we sample them from a policy-dependent data stream. 
This stream is defined as follows: positive examples are drawn from a fixed dataset D of instructions c i paired with goal states s i; negative examples are drawn from a constantly-changing buffer of states obtained from the policy acting on the environment, paired with the instruction given to the policy. Formally, the policy is trained to maximize a return R π (θ) and the reward model is trained to minimize a cross-entropy loss L D (φ), the equations for which are: DISPLAYFORM0 DISPLAYFORM1 wherer t = [D φ (c, s t) > 0.5] In the equations above, the Iverson Bracket [. . .] maps truth to 1 and falsehood to 0, e.g. [x > 0] = 1 iff x > 0 and 0 otherwise. γ is the discount factor. With (c, s 1:∞) ∼ G π θ, we denote a state trajectory that was obtained by sampling (c, s 0) ∼ G and running π θ conditioned on c starting from s 0. B denotes a replay buffer to which (c, s) pairs from T -step episodes are added; i.e. it is the undiscounted occupancy measure over the first T steps. D φ (c, s) is the probability of (c, s) having a positive label according to the reward model, and thus [D φ (c, s t) > 0.5] indicates that a given state s t is more likely to be a goal state for instruction c than not, according to D. H(π θ) is the policy's entropy, and α is a hyperparameter. The approach is illustrated in Fig 2. Pseudocode is available in Appendix A. We note that Equation 1 differs from a traditional RL objective only in that the modelled rewardr t is used instead of the ground-truth reward r t. Indeed, in Section 3, we will compare policies trained with AGILE to policies trained with traditional RL, simply by varying the reward source from the reward model to the environment. Figure 2: Information flow during AGILE training. The policy acts conditioned on the instruction and is trained using the reward from the reward model (Figure 2a). The reward model is trained, as a discriminator, to distinguish between "A", the instruction, goal-state pairs from the dataset (Figure 2b), and "B", the instruction, state pairs from the agent's experience. Dealing with False Negatives Let us call Γ(c) the objective set of goal states which satisfy instruction c (which is typically unknown to us). Compared to the ideal case where all (c, s) would be deemed positive if-and-only-if s ∈ Γ(c), the labelling of examples implied by Equation 2 has a fundamental limitation when the policy performs well. As the policy improves, by definition, a increasing share of (c, s) ∈ B are objective goal-states from Γ(c). However, as they are treated as negative examples in Equation 2, the discriminator accuracy drops, causing the policy to get worse. We therefore propose the following simple heuristic to rectify this fundamental limitation by approximately identifying the false negatives. We rank (c, s) examples in B according to the reward model's output D φ (c, s) and discard the top 1 − ρ percent as potential false negatives. Only the other ρ percent are used as negative examples of the reward model. Formally speaking, the first term in Equation 2 becomes E(c,s)∼B D φ,ρ − log(1 − D φ (c, s)), where B D φ,ρ stands for the ρ percent of B selected, using D φ, as described above. We will henceforth refer to ρ as the anticipated negative rate. Setting ρ to 100% means using B D φ,100 = B like in Equation 2, but our preliminary experiments have shown clearly that this inhibits the reward model's capability to correctly learn a reward function. 
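The reward-model update of Equation 2, together with this anticipated-negative-rate heuristic (keep only the ρ fraction of replay-buffer examples that the reward model scores least goal-like), could be sketched as follows; d_phi is assumed to map an (instruction, state) batch to unnormalized goal scores, and all names and shapes are ours.

import torch
import torch.nn.functional as F

def reward_model_loss(d_phi, expert_batch, replay_batch, rho=0.25):
    # expert_batch: (instructions, goal_states) from the dataset D (positives).
    # replay_batch: (instructions, states) from the agent's replay buffer B (negatives),
    # of which only the rho fraction scored least goal-like by d_phi is kept,
    # discarding the top 1 - rho as anticipated false negatives.
    pos_logits = d_phi(*expert_batch)                      # [Np] unnormalized scores
    neg_logits = d_phi(*replay_batch)                      # [Nn]

    keep = int(rho * neg_logits.numel())
    neg_logits = torch.sort(neg_logits).values[:keep]      # lowest-scoring rho fraction

    loss_pos = F.binary_cross_entropy_with_logits(pos_logits, torch.ones_like(pos_logits))
    loss_neg = F.binary_cross_entropy_with_logits(neg_logits, torch.zeros_like(neg_logits))
    return loss_pos + loss_neg

def modelled_reward(d_phi, instruction, state):
    # Stepwise reward for the policy: 1 if the reward model judges the state a goal state.
    with torch.no_grad():
        return (torch.sigmoid(d_phi(instruction, state)) > 0.5).float()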
Using too small a value for ρ on the other hand may deprive the reward model of the most informative negative examples. We thus recommend to tune ρ as a hyperparameter on a task-specific basis. Reusability of the Reward Model An appealing advantage of AGILE is the fact that the reward model D φ and the policy π θ learn two related but distinct aspects of an instruction: the reward model focuses on recognizing the goal-states (what should be done), whereas the policy learns what to do in order to get to a goal-state (how it should be done). The intuition motivating this design is that the knowledge about how instructions define goals should generalize more strongly than the knowledge about which behavior is needed to execute instructions. Following this intuition, we propose to reuse a reward model trained in AGILE as a reward function for training or fine-tuning policies. Relation to GAIL AGILE is strongly inspired by-and retains close relations to-Generative Adversarial Imitation Learning (GAIL; BID12, which likewise trains both a reward function and a policy. The former is trained to distinguish between the expert's and the policy's trajectories, while the latter is trained to maximize the modelled reward. GAIL differs from AGILE in a number of important respects. First, AGILE is conditioned on instructions c so a single AGILE agent can learn combinatorially many skills rather than just one. Second, in AGILE the reward model observes only states s i (either goal states from an expert, or states from the agent acting on the environment) rather than state-action traces (s 1, a 1), (s 2, a 2),..., learning to reward the agent based on "what" needs to be done rather than according to "how" it must be done. Finally, in AGILE the policy's reward is the thresholded probability [D φ (c, s t)] as opposed to the log-probability log D φ (s t, a t) used in GAIL. Our reasoning for this change is that, when adapted to the setting with goal-specifications, a GAIL-style reward log D φ (c, s t) could take arbitrarily low values for intermediate states visited by the agent, as the reward model D φ becomes confident that those are not goal states. Empirically, we found that dropping the logarithm from GAIL-style rewards is indeed crucial for AGILE's performance, and that using the probability D φ (c, s t) as the rewardr t in a performance level similar to that of the discretized AGILE rewardr t = [D φ (c, s t)]. We experiment with AGILE in a grid world environment that we call GridLU, short for Grid Language Understanding and after the famous SHRDLU world BID33. GridLU is a fully observable grid world in which the agent can walk around the grid (moving up, down left or right), pick blocks up and drop them at new locations (see FIG2 for an illustration and Appendix C for a detailed description of the environment). All our models receive the world state as a 56x56 RGB image. With regard to processing the instruction, we will experiment with two kinds of models: Neural Module Networks (NMN) that treat the instruction as a structured expression, and a generic model that takes an unstructured instruction representation and encodes it with an LSTM.Because the language of our instructions is generated from a simple grammar, we perform most of our experiments using policy and reward model networks that are constructed using the NMN BID2 paradigm. NMN is an elegant architecture for grounded language processing in which a tree of neural modules is constructed based on the language input. 
The visual input is then fed to the leaf modules, which send their outputs to their parent modules, which process is repeated until the root of the tree. We mimick the structure of the instructions when constructing the tree of modules; for example, the NMN corresponding to the instruction DISPLAYFORM0, where m x denotes the module corresponding to the token x, and h s is a representation of state s. Each module m x performs a convolution (weights shared by all modules) followed by a token-specific Feature-Wise Linear Modulation (FiLM): DISPLAYFORM1 where h l and h r are module inputs, γ x is a vector of FiLM multipliers, β x are FiLM biases, and ⊕ are element-wise multiplication and addition with broadcasting, * denotes convolution. The representation h s is produced by a convnet. The NMN's output h N M N undergoes max-pooling and is fed through a 1-layer MLP to produce action probabilities or the reward model's output. Note, that while structure-wise our policy and reward model are mostly similar, they do not share parameters. NMN is an excellent model when the language structure is known, but this may not be the case for natural language. To showcase AGILE's generality we also experiment with a very basic structure-agnostic architecture. We use FiLM to condition a standard convnet on an instruction representation h LST M produced by an LSTM. The k-th layer of the convnet performs a computation DISPLAYFORM2 The same procedure as described above for h N M N is used to produce the network outputs using the output h 5 of the 5 th layer of the convnet. In the rest of the paper we will refer to the architectures described above as FiLM-NMN and FiLM-LSTM respectively. FiLM-NMN will be the default model in all experiments unless explicitly specified otherwise. Detailed information about network architectures can be found in Appendix G. For the purpose of training the policy networks both within AGILE, and for our baseline trained from ground-truth reward r t instead of the modelled rewardr t, we used the Asynchronous Advantage Actor-Critic (A3C; . Any alternative training mechanism which uses reward could be used-since the only difference in AGILE is the source of the reward signal, and for any such alternative the appropriate baseline for fair comparison would be that same algorithm applied to train a policy from ground-truth reward. We will refer to the policy trained within AGILE as AGILE-A3C. The A3C's hyperparameters γ and λ were set to 0.99 and 0 respectively, i.e. we did not use without temporal difference learning for the baseline network. The length of an episode was 30, but we trained the agent on advantage estimation rollouts of length 15. Every experiment was repeated 5 times. We considered an episode to be a success if the final state was a goal state as judged by a task-specific success criterion, which we describe for the individual tasks below. We use the success rate (i.e. the percentage of successful episodes) as our main performance metric for the agents. Unless otherwise specified we use the NMN-based policy and reward model in our experiments. Full experimental details can be found in Appendix D. Green triangle west of a red circle Our first task, GridLU-Relations, is an adaptation of the SHAPES visual question answering dataset BID2 in which the blocks can be moved around freely. GridLU-Relations requires the agent to induce the meaning of spatial relations such as above or right of, and to manipulate the world in order to instantiate these relationships. 
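A sketch of one such module (a convolution with weights shared across tokens, followed by token-specific FiLM) is given below. The way the two child inputs are combined (channel-wise concatenation) and the (1 + gamma) form of the multiplier follow common FiLM conventions and are assumptions on our part.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FiLMModule(nn.Module):
    # m_x(h_l, h_r): one convolution reused for every token, followed by a
    # token-specific feature-wise affine modulation (FiLM) stored in embeddings.
    def __init__(self, vocab_size: int, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)  # shared weights
        self.gamma = nn.Embedding(vocab_size, channels)   # per-token FiLM multipliers
        self.beta = nn.Embedding(vocab_size, channels)    # per-token FiLM biases

    def forward(self, token: torch.Tensor, h_l: torch.Tensor, h_r: torch.Tensor):
        # token: [B] token ids; h_l, h_r: [B, C, H, W] child outputs (leaves receive h_s twice).
        h = self.conv(torch.cat([h_l, h_r], dim=1))
        g = self.gamma(token).unsqueeze(-1).unsqueeze(-1)  # broadcast over H, W
        b = self.beta(token).unsqueeze(-1).unsqueeze(-1)
        return F.relu((1.0 + g) * h + b)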
Named GridLU-Relations, the task involves five spatial relationships (NorthFrom, SouthFrom, EastFrom, WestFrom, SameLocation), whose arguments can be either the blocks, which are referred to by their shapes and colors, or the agent itself. To generate the full set of possible instructions spanned by these relations and our grid objects, we define a formal grammar that generates strings such as: DISPLAYFORM0 This string carries the meaning'put a red circle north from (above) a blue square'. In general, when a block is the argument to a relation, it can be referred to by specifying both the shape and the color, like in the example above, or by specifying just one of these attributes. In addition, the AGENT constant can be an argument to all relations, in which case the agent itself must move into a particular spatial relation with an object. FIG2 shows two examples of GridLU-Relations instructions and their respective goal states. There are 990 possible instructions in the GridLU-Relations task, and the number of distinct training instances can be loosely lower-bounded by 1.8 · 10 7 (see Appendix E for details).Notice that, even for the highly concrete spatial relationships in the GridLU-Relations language, the instructions are underspecified and somewhat ambiguous-is a block in the top-right corner of the grid above a block in the bottom left corner? We therefore decided (arbitrarily) to consider all relations to refer to immediate adjacency (so that Instruction equation 3 is satisfied if and only if there is a red circle in the location immediately above a blue square). Notice that the commands are still underspecified in this case (since they refer to the relationship between two entities, not their absolute positions), even if the degree of ambiguity in their meaning is less than in many real-world cases. The policy and reward model trained within AGILE then have to infer this specific sense of what these spatial relations mean from goal-state examples, while the baseline agent is allowed to access our programmed ground-truth reward. The binary ground-truth reward (true if the state is a goal state) is also used as the success criterion for evaluating AGILE.Having formally defined the semantics of the relationships and programmed a reward function, we compared the performance of an AGILE-A3C agent against a priviliged baseline A3C agent trained using ground-truth reward. Interestingly, we found that AGILE-A3C learned the task more easily than standard A3C (see the respective curves in FIG3). We hypothesize this is because the modeled rewards are easy to learn at first and become more sparse as the reward model slowly improves. This naturally emerging curriculum expedites learning in the AGILE-A3C when compared to the A3C-trained policy that only receives signal upon reaching a perfect goal state. We did observe, however, that the A3C algorithm could be improved significantly by applying the auxiliary task of reward prediction (RP; BID13, which was applied to language learning tasks by BID11 (see the A3C and A3C-RP curves in FIG3). This objective reinforces the association between instructions and states by having the agent replay the states immediately prior to a non-zero reward and predict whether or not it the reward was positive (i.e. the states match the instruction) or not. This mechanism made a significant difference to the A3C performance, increasing performance to 99.9%. AGILE-A3C also achieved nearly perfect performance (99.5%). 
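As an illustration of the programmed ground-truth checker described above, with immediate adjacency as the meaning of the spatial relations, here is a small sketch. The dictionary-based state representation and helper names are hypothetical; the actual environment exposes the state as a 56x56 image, so this only mirrors the logic of the reward checker.

```python
# Hypothetical symbolic state: {(row, col): (color, shape), ...} on a 5x5 grid.
def find(state, color=None, shape=None):
    """All grid cells holding a block that matches the given attributes."""
    return [pos for pos, (c, s) in state.items()
            if (color is None or c == color) and (shape is None or s == shape)]

def north_from(state, top, bottom):
    """NorthFrom(top, bottom): some matching 'top' block sits immediately
    above (north of) some matching 'bottom' block."""
    tops = find(state, *top)
    bottoms = find(state, *bottom)
    return any((r_t + 1, c_t) == (r_b, c_b)
               for (r_t, c_t) in tops for (r_b, c_b) in bottoms)

state = {(1, 2): ("red", "circle"), (2, 2): ("blue", "square")}
# NorthFrom(Color('red', Shape('circle', SCENE)), Color('blue', Shape('square', SCENE)))
print(north_from(state, ("red", "circle"), ("blue", "square")))  # True
```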
We found this near-perfect performance to be a very promising result, since within AGILE we induce the reward function from a limited set of examples. The best results with AGILE-A3C were obtained using the anticipated negative rate ρ = 25%. When we used larger values of ρ, AGILE-A3C training started quicker, but after 100-200 million steps the performance started to deteriorate (see AGILE curves in FIG3), while it remained stable with ρ = 25%.

Data efficiency These results suggest that the AGILE reward model was able to induce a near-perfect reward function from a limited set of instruction, goal-state pairs. We therefore explored how small this training set of examples could be while still achieving reasonable performance. We found that with a training set of only 8000 examples, the AGILE-A3C agent could reach a performance of 60% (massively above chance). However, the optimal performance was achieved with more than 100,000 examples. The full results are available in Appendix D. In the experiments reported so far, the AGILE agent was trained on all 990 possible GridLU-Relations instructions. In order to test generalization to unseen instructions, we held out 10% of the instructions as the test set and used the remaining 90% as the training set. Specifically, we restricted the training instances and instruction, goal-state pairs to only contain instructions from the training set. The performance of the trained model on the test instructions was the same as on the training set, showing that AGILE did not just memorise the training instructions but learnt a general interpretation of GridLU-Relations instructions.

AGILE with Structure-Agnostic Models We report the results for AGILE with a structure-agnostic FiLM-LSTM model in FIG3 (middle). AGILE with ρ = 25% achieves a high 97.5% success rate, and notably it trains almost as fast as an RL-RP agent with the same architecture.

Analyzing the reward model We compare the binary reward provided by the reward model with the ground truth from the environment during training on the GridLU-Relations task. With ρ = 25% the accuracy of the reward model peaks at 99.5%. As shown in FIG3 (right), the reward model learns faster in the beginning with larger values of ρ but then deteriorates, which confirms our intuition about why ρ is an important hyperparameter and is aligned with the success rate learning curves in FIG3 (left). We also observe during training that the false negative rate is always kept reasonably low (<3% of rewards), whereas the reward model is initially more generous with false positives (20-50% depending on ρ during the first 20M steps of training) and produces an increasing number of false positives for insufficiently small values of ρ (see plots in Appendix E). We hypothesize that early false positives may facilitate the policy's training by providing it with a sort of curriculum, possibly explaining the improvement over agents trained from ground-truth reward, as shown above.

The reward model as a general reward function An instruction-following agent should be able to carry out known instructions in a range of different contexts, not just settings that identically match the specific setting in which those skills were learned. To test whether the AGILE framework is robust to (semantically unimportant) changes in the environment dynamics, we first trained the policy and reward model as normal and then modified the effective physics of the world by making all red square objects immovable.
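The false-positive and false-negative statistics reported in the reward-model analysis above can be computed with a straightforward comparison between modelled and ground-truth rewards. The sketch below assumes both are available as binary arrays over a batch of visited (c, s) pairs; such statistics require a programmed checker and are used for analysis only.

```python
import numpy as np

def reward_model_stats(predicted, ground_truth):
    """Compare binary modelled rewards with binary ground-truth rewards.

    predicted, ground_truth: 0/1 labels for a batch of visited (c, s) pairs.
    Returns accuracy, the fraction of states rewarded although they are not
    goal states (false positives), and the fraction of goal states that the
    model fails to reward (false negatives).
    """
    predicted = np.asarray(predicted, dtype=bool)
    ground_truth = np.asarray(ground_truth, dtype=bool)
    accuracy = np.mean(predicted == ground_truth)
    fp_rate = np.mean(predicted & ~ground_truth)   # rewarded non-goal states
    fn_rate = np.mean(~predicted & ground_truth)   # missed goal states
    return accuracy, fp_rate, fn_rate
```

In practice these quantities can only be monitored in domains where a ground-truth checker happens to exist, as it does for GridLU-Relations.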
In this case, following instructions correctly is still possible in almost all cases, but not all solutions available during training are available at test time. As expected, this change impaired the policy and the agent's success rate on the instructions referring to a red square dropped from 98% to 52%. However, after fine-tuning the policy (additional training of the policy on the test episodes using the reward from the previously-trained-then-frozen reward model), the success rate went up to 69.3% (FIG4). This experiment suggests that the AGILE reward model learns useful and generalisable linguistic knowledge. The knowledge can be applied to help policies adapt in scenarios where the high-level meaning of commands is familiar but the low-level physical dynamics is not. The experiments thus far demonstrate that even without directly using the reward function AGILE-A3C performs comparably to its pure A3C counter-part. However, the principal motivation for the AGILE framework is to avoid programming the reward function. To model this setting more explicitly, we developed the task GridLU-Arrangements, in which each instruction is associated with multiple viable goal-states that share some (more abstract) common form. The complete set of instructions and forms is illustrated in FIG2. To get training data, we built a generator to produce random instantiations (i.e. any translation, rotation, reflection or color mapping of the illustrated forms) of these goal-state classes, as positive examples for the reward model. In the real world, this process of generating goal-states could be replaced by finding, or having humans annotate, labelled images. In total, there are 36 possible instructions in GridLU-Arrangements, which together refer to a total of 390 million correct goal-states (see Appendix F for details). Despite this enormous space of potentially correct goal-states, we found that for good performance it was necessary to train AGILE on only 100,000 (less than 0.3%) of these goal-states, sampled from the same distribution as observed in the episodes. To replicate the conditions of a potential AGILE application as close as possible, we did not write a reward function for GridLU-Arrangements (even though it would have been theoretically possible), and instead carried out all evaluation manually. Half of the episodes began with four square blocks (all of the same color), and the agent, in random unique positions, and an instruction sampled uniformly from the list of possible arrangement words. In the other half of the episodes, four square blocks of one color and four square blocks of a different color were initially each positioned randomly. The instruction in these episodes specified one of the two colors together with an arrangement word. We trained policies and reward models using AGILE with 10 different seeds for each level, and selected the best pair based on how well the policy maximised modelled reward. We then manually assessed the final state of each of 200 evaluation episodes, using human judgement that the correct shape has been produced as success criterion to evaluate AGILE. We found that the agent made the correct arrangement in 58% of the episodes. The failure cases were almost always in the episodes involving eight blocks 1. In these cases, the AGILE agent tended towards building the correct arrangement, but was impeded by the randomly positioned non-target-color blocks and could not recover. 
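The generator of goal-states for GridLU-Arrangements is described only informally above (random translations, rotations, reflections and color mappings of the template forms). The following sketch shows one possible instantiation procedure; the template coordinates, grid size and color set are assumptions made for illustration.

```python
import random

GRID = 5
COLORS = ["red", "blue", "green"]
# Hypothetical template: relative block offsets for one arrangement form.
SNAKE = [(0, 0), (0, 1), (1, 1), (1, 2)]

def random_instantiation(template):
    """Random translation/rotation/reflection/colour mapping of a template."""
    cells = template
    for _ in range(random.randint(0, 3)):                 # rotate by 0-3 quarter turns
        cells = [(c, -r) for r, c in cells]
    if random.random() < 0.5:                             # horizontal reflection
        cells = [(r, -c) for r, c in cells]
    min_r = min(r for r, _ in cells); min_c = min(c for _, c in cells)
    cells = [(r - min_r, c - min_c) for r, c in cells]    # shift into the grid's corner
    max_r = max(r for r, _ in cells); max_c = max(c for _, c in cells)
    dr = random.randint(0, GRID - 1 - max_r)              # random translation
    dc = random.randint(0, GRID - 1 - max_c)
    color = random.choice(COLORS)
    return {(r + dr, c + dc): color for r, c in cells}

print(random_instantiation(SNAKE))
```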
Nonetheless, these scores, and the compelling behaviour observed in the video (https://www.youtube.com/watch?v=07S-x3MkEoQ), demonstrate the potential of AGILE for teaching agents to execute semantically vague or underspecified instructions. Learning to follow language instructions has been approached in many different ways, for example by reinforcement learning using a reward function programmed by a system designer. consider instruction-following in 2D or 3D environments and reward the agent for arriving at the correct location or object. BID14 and BID18 train RL agents to produce goal-states given instructions. As discussed, these approaches are constrained by the difficulty of programming language-related reward functions, a task that requires an programming expert, detailed access to the state of the environment and hard choices above how language should map to the world. Agents can be trained to follow instructions using complete demonstrations, that is sequences of correct actions describing instruction execution for given initial states.; BID3 train semantic parsers to produce a formal representation of the query that when fed to a predefined execution model matches exactly the sequence of actions from the demonstration. BID1; BID17 sidestep the intermediate formal representation and train a Conditional Random Field (CRF) and a sequenceto-sequence neural model respectively to directly predict the actions from the demonstrations. A underlying assumption behind all these approaches is that the agent and the demonstrator share the same actuation model, which might not always be the case. In the case of navigational instructions the trajectories of the agent and the demonstrators can sometimes be compared without relying on the actions, like e.g. BID28, but for other types of instructions such a hard-coded comparison may be infeasible. train a log-linear model to map instruction constituents into their groundings, which can be objects, places, state sequences, etc. Their approach requires access to a structured representation of the world environment as well as intermediate supervision for grounding the constituents. Our work can be categorized as apprenticeship (imitation) learning, which studies learning to perform tasks from demonstrations and feedback. Many approaches to apprenticeship learning are variants of inverse reinforcement learning (IRL), which aims to recover a reward function from expert demonstrations BID0 BID36. As stated at the end of Section 2, the method most closely related to AGILE is the GAIL algorithm from the IRL family BID12. There have been earlier attempts to use IRL-style methods for instruction following BID16 BID30, but unlike AGILE, they relied on the availability of a formal reward specification language. To our knowledge, ours and the concurrent work by BID9 are the first works to showcase learning reward models for instructions from pixels directly. Besides IRL-style approaches, other apprenticeship learning methods involve training a policy BID15 BID29 or a reward function BID31 BID7 directly from human feedback. Several recent imitation learning works consider using goal-states directly for defining the task BID10 BID23. AGILE differs from these approaches in that goal-states are only used to train the reward module, which we show generalises to new environment configurations or instructions, relative to those seen in the expert data. 
We have proposed AGILE, a framework for training instruction-conditional RL agents using rewards from learned reward models, which are jointly trained from data provided by both experts and the agent being trained, rather than rewards provided by an instruction interpreter within the environment. This opens up new possibilities for training language-aware agents: in the real world, and even in rich simulated environments BID4 BID34, acquiring such data via human annotation would often be much more viable than defining and implementing reward functions programmatically. Indeed, programming rewards to teach robust and general instruction-following may ultimately be as challenging as writing a program to interpret language directly, an endeavour that is notoriously laborious BID32, and some say, ultimately futile BID33.

As well as being a means to learn from a potentially more prevalent form of data, our experiments demonstrate that policies trained in the AGILE framework perform comparably with, and can learn as fast as, those trained against ground-truth reward and additional auxiliary tasks. Our analysis of the reward model's classifications gives a sense of how this is possible; the false positive decisions that it makes early in training help the policy to start learning. The fact that AGILE's objective attenuates learning issues due to the sparsity of reward states within episodes, in a manner similar to reward prediction, suggests that the reward model within AGILE learns some form of shaped reward BID21, and could serve not only in cases where a reward function needs to be learned in the absence of true reward, but also in cases where the environment reward is defined but sparse. As these cases are not the focus of this study, we note this here but leave such investigation for future work. As the policy improves, false negatives can cause the reward model's accuracy to deteriorate. We determined a simple method to mitigate this, however, leading to robust training that is comparable to RL with reward prediction and unlimited access to a perfect reward function.

Another attractive aspect of AGILE is that learning "what should be done" and "how it should be done" is performed by two different model components. Our experiments confirm that the "what" kind of knowledge generalizes better to new environments. When the dynamics of the environment changed at test time, fine-tuning the policy against a frozen reward model allowed it to recover some of its original capability in the new setting. While there is a large gap to be closed between the sort of tasks and language experimented with in this paper and those which might be presented in "real world" situations or more complex environments, our results provide an encouraging first step in this direction. Indeed, it is interesting to consider how AGILE could be applied to more realistic learning settings, for instance involving first-person vision of 3D environments. Two issues would need to be dealt with, namely training the agent to factor out the difference in perspective between the expert data and the agent's observations, and training the agent to ignore its own body parts if they are visible in the observations. Future work could focus on applying third-person imitation learning methods recently proposed by BID26 to learn the aforementioned invariances.
Most of our experiments were conducted with a formal language with a known structure; however, AGILE also performed very well when we used a structure-agnostic FiLM-LSTM model which processed the instruction as a plain sequence of tokens. This suggests that in future work AGILE could be used with natural language instructions.

Algorithm 1 (AGILE: discriminator worker)
Require: the policy network π_θ, the discriminator network D_φ, the anticipated negative rate ρ, a dataset D, a replay buffer B, the batch size BS, a stream of training instances G, the episode length T, the rollout length R.
1:  while not converged do
2:      Sample a training instance (c, s_0) ∈ G.
3:      t ← 0
4:      while t < T do
5:          Act with π_θ(c, s) and produce a rollout (c, s_t...t+R).
6:          Add (c, s) pairs from (c, s_t...t+R) to the replay buffer B; remove old pairs from B if it is overflowing.
7:          Compute the discriminator loss L_D on a batch of BS examples drawn from D and B, using the anticipated negative rate ρ.
8:          Compute the gradient of L_D with respect to φ and use it to update φ.
9:          Synchronise θ and φ with the other workers.
10:         t ← t + R
11:     end while
12: end while

Algorithm 2 (AGILE: policy worker)
Require: the policy network π_θ, the discriminator network D_φ, a dataset D, a replay buffer B, a stream of training instances G, the episode length T.
1:  while not converged do
2:      Sample a training instance (c, s_0) ∈ G.
3:      t ← 0
4:      while t < T do
5:          Act with π_θ(c, s) and produce a rollout (c, s_t...t+R).
6:          Use the discriminator D_φ to compute the rewards r_τ = [D_φ(c, s_τ) > 0.5].
7:          Perform an RL update for θ using the rewards r_τ.
8:          Synchronise θ and φ with the other workers.
9:          t ← t + R
10:     end while
11: end while

We trained the policy π_θ and the discriminator D_φ concurrently using RMSProp as the optimizer and Asynchronous Advantage Actor-Critic (A3C) as the RL method. A baseline predictor (see Appendix G for details) was trained to predict the discounted return by minimizing the mean square error. The RMSProp hyperparameters were different for π_θ and D_φ, see TAB0. A designated worker was used to train the discriminator (see Algorithm 1). Other workers trained only the policy (see Algorithm 2). We tried having all workers write to the replay buffer B that was used for the discriminator training and found that this gave the same performance as using (c, s) pairs produced by the discriminator worker only. We found it crucial to regularize the discriminator by clipping the columns of all weight matrices to have an L2 norm of at most 1. In particular, we multiply the incoming weights w_u of each unit u by min(1, 1/||w_u||_2) after each gradient update, as proposed by BID25. We linearly rescaled the policy's rewards to the [0; 0.1] interval for both RL and AGILE. When using RL with reward prediction, we fetch a batch from the replay buffer and compute the extra gradient for every rollout. For the exact values of hyperparameters for the GridLU-Relations task we refer the reader to TAB0. The hyperparameters for GridLU-Arrangements were mostly the same, with the exception of the episode length and the rollout length, which were 45 and 30 respectively. For training the RL baseline for GridLU-Relations we used the same hyperparameter settings as for the AGILE policy.

The GridLU world is a 5 × 5 gridworld surrounded by walls. The cells of the grid can be occupied by blocks of 3 possible shapes (circle, triangle, and square) and 3 possible colors (red, blue, and green). The grid also contains an agent sprite. The agent may carry a block; when it does so, the agent sprite changes color. When the agent is free, i.e. when it does not carry anything, it is able to enter cells with blocks. A free agent can pick up a block in the cell where both are situated.
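The max-norm regularization of the discriminator described above, multiplying the incoming weights w_u of each unit by min(1, 1/||w_u||_2) after every gradient update, can be written in a few lines of PyTorch. The restriction to fully connected layers is a simplifying assumption; the text applies the clipping to all weight matrices.

```python
import torch

@torch.no_grad()
def clip_weight_columns(module: torch.nn.Module, max_norm: float = 1.0):
    """Rescale each unit's incoming weight vector w_u to have L2 norm <= max_norm.

    For torch.nn.Linear, row i of `weight` holds the incoming weights of output
    unit i, so we renormalize along dim=1. Intended to be called after every
    optimizer step, mirroring the regularization described above.
    """
    for layer in module.modules():
        if isinstance(layer, torch.nn.Linear):
            norms = layer.weight.norm(p=2, dim=1, keepdim=True)
            scale = torch.clamp(max_norm / (norms + 1e-12), max=1.0)
            layer.weight.mul_(scale)
```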
An agent that carries a block cannot enter non-empty cells, but it can instead drop the block that it carries in any non-empty cell. Both picking up and dropping are realized by the INTERACT action. Other available actions are LEFT, RIGHT, UP and DOWN and NOOP. The GridLU agent can be seen as a cursor (and this is also how it is rendered) that can be moved to select a block or a position where the block should be released. FIG7 illustrates the GridLU world and its dynamics. We render the state of the world as a color image by displaying each cell as an 8 × 8 patch 3 and stitching these patches in a 56 × 56 image 4. All neural networks take this image as an input. Every experiment was repeated 5 times and the average is reported. RL vs. AGILE All agents were trained for 5 · 10 8 steps. Data Efficiency We trained AGILE policies with datasets D of different sizes for 5 · 10 8 steps. For each policy we report the maximum success rate that it showed in the course of training. GridLU-Arrangements We trained the agent for 100M time steps, saving checkpoints periodically, and selected the checkpoint that best fooled the discriminator according to the agent's internal reward. Data Efficiency We measure how many examples of instructions and goal-states are required by AGILE in order to understand the semantics of the GridLU-Relations instruction language. The are reported in FIG8. The AGILE-trained agent succeeds in more than 50% of cases starting from 8000 examples, but as many as 130000 is required for the best performance. All GridLU instructions can be generated from <instruction> using the following Backus-Naur form, with one exception: The first expansion of <obj> must not be identical to the second expansion of <obj> in <bring to instruction>. There are 15 unique possibilities to expand the nonterminal <obj>, so there are 150 unique possibilities to expand <go to instruction> and 840 unique possibilities to expand <bring to instruction> (not counting the exceptions mentioned above). Hence there are 990 unique instructions in total. However, several syntactically different instructions can be semantically equivalent, such as EastFrom(AGENT, Shape(rect, SCENE)) and WestFrom(Shape(rect, SCENE), AGENT).Every instruction partially specifies what kind of objects need to be available in the environment. For go-to-instructions we generate one object and for bring-to-instructions we generate two objects according to this partial specification (unspecified shapes or colors are picked uniformly at random). Additionally, we generate one "distractor object". This distractor object is drawn uniformly at random from the 9 possible objects. All of these objects and the agent are each placed uniformly at random into one of 25 cells in the 5x5 grid. The instance generator does not sample an instruction uniformly at random from a list of all possible instructions. Instead, it generates the environment at the same time as the instruction according to the procedure above. Afterwards we impose two'sanity checks': are any two objects in the same location or are they all identical? If any of these two checks fail, the instance is discarded and we start over with a new instance. Because of this rejection sampling technique, go-to-instructions are ultimately generated with approximately 25% probability even though they only represent ≈ 15% of all possible instructions. The number of different initial arrangements of three objects can be lower-bounded by 9 3 = 2300 if we disregard their permutation. 
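The rejection-sampling step of the instance generator (the two 'sanity checks' described above) can be sketched as follows. The sample_instruction_and_objects callable and the toy example sampler are hypothetical placeholders for the grammar-driven generator.

```python
import random

def sample_instance(sample_instruction_and_objects, grid_size=5):
    """Sample (instruction, objects, positions), rejecting degenerate layouts.

    `sample_instruction_and_objects` is assumed to return an instruction and the
    list of (color, shape) blocks it requires; a random distractor block is
    added, and the instance is rejected if any two objects share a cell or if
    all objects are identical.
    """
    while True:
        instruction, objects = sample_instruction_and_objects()
        distractor = (random.choice(["red", "blue", "green"]),
                      random.choice(["circle", "triangle", "square"]))
        objects = objects + [distractor]
        cells = [(random.randrange(grid_size), random.randrange(grid_size))
                 for _ in objects]
        if len(set(cells)) < len(cells):        # two objects in the same location
            continue
        if len(set(objects)) == 1:              # all objects identical
            continue
        return instruction, objects, cells

def example_sampler():
    # Toy stand-in: the instruction below requires one red circle on the grid.
    return "NorthFrom(AGENT, Color('red', Shape('circle', SCENE)))", [("red", "circle")]

print(sample_instance(example_sampler))
```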
Hence every bring-to-instruction has at least K = 2300 · 9 ≈ 2 · 10 4 associated initial arrangements. Therefore the total number of task instances can be lower-bounded with 840 · K ≈ 1.7 · 10 7, disregarding the initial position of the agent. During the training on GridLU-Relations we compared the predictions of the discriminator with those of the ground-truth reward checker. This allowed us to monitor several performance indicators of the discriminator, see FIG10. Instruction Syntax We used two types of instructions in the GridLU-Arrangements task, those referring only to the arrangement and others that also specified the color of the blocks. Examples Connected(AGENT, SCENE) and Snake(AGENT, Color('yellow', SCENE)) illustrate the syntax that we used for both instruction types. Number of Distinct Goal-States TAB1 presents our computation of the number of distinct goal-states in the GridLU-Arrangements Task. In this section we explain in detail the neural architectures that we used in our experiments. We will use * to denote convolution,, ⊕ to denote element-wise addition of a vector to a 3D tensor with broadcasting (i.e. same vector will be added/multiplied at each location of the feature map). We used ReLU as the nonlinearity in all layers with the exception of LSTM.FiLM-NMN We will first describe the FiLM-NMN discriminator D φ. The discriminator takes a 56x56 RGB image s as the representation of the state. The image s is fed through a stem convnet that consisted of an 8x8 convolution with 16 kernels and a 3x3 convolution with 64 kernels. The ing tensor h stem had a 5x5x64 shape. As a Neural Module Metwork BID2, the FiLM-NMN is constructed of modules. The module m x corresponding to a token x takes a left-hand side input h l and a right-hand side input h r and performs the following computation with them: DISPLAYFORM0 where γ x and β x are FiLM coefficients corresponding to the token x, W m is a weight tensor for a 3x3 convolution with 128 input features and 64 output features. Zero-padding is used to ensure that the output of m x has the same shape as h l and h r. The equation above describes a binary module that takes two operands. For the unary modules that received only one input (e.g. m red, m square) we present the input as h l and zeroed out h r. This way we are able to use the same set of weights W m for all modules. We have 12 modules in total, 3 for color words, 3 for shape words, 5 for relations words and one m AGEN T module used in go-to instructions. The modules are selected and connected based on the instructions, and the output of the root module is used for further processing. For example, the following computation would be performed for the instruction c 1 =NorthFrom(Color('red', Shape('circle', SCENE)), Color('blue', Shape('square', SCENE))): h nmn = m N orthF rom (m red (m circle (h stem)), m blue (m square (h stem))),and the following one for c 2 =NorthFrom(AGENT, Shape('triangle', SCENE)): h nmn = m N orthF rom (m AGEN T (h stem), m triangle (h stem)).Finally, the output of the discriminator is computed by max-pooling the output of the FiLM-NMN across spatial dimensions and feeding it to an MLP with a hidden layer of 100 units: DISPLAYFORM1 where w, W and b are weights and biases, σ(x) = e x /(1 + e x) is the sigmoid function. The policy network π φ is similar to the discriminator network D θ. 
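Before turning to the policy head, here is a PyTorch sketch of the module computation just described: a 3x3 convolution with weights shared across all modules, followed by token-specific FiLM. The exact placement of the nonlinearity and the parameterization of γ_x and β_x are assumptions, since the equation itself is not reproduced in this text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FiLMNMNModule(nn.Module):
    """One NMN module: shared 3x3 convolution plus token-specific FiLM.

    The convolution weights are shared by all modules; each token x only owns
    its FiLM multipliers gamma_x and biases beta_x. Unary modules pass zeros
    as the right-hand input, so the same weights serve unary and binary
    modules.
    """
    def __init__(self, shared_conv: nn.Conv2d, num_features: int = 64):
        super().__init__()
        self.conv = shared_conv                      # Conv2d(128, 64, 3, padding=1), shared
        self.gamma = nn.Parameter(torch.ones(num_features))
        self.beta = nn.Parameter(torch.zeros(num_features))

    def forward(self, h_left, h_right=None):
        if h_right is None:                          # unary module: zeroed right input
            h_right = torch.zeros_like(h_left)
        h = self.conv(torch.cat([h_left, h_right], dim=1))
        g = self.gamma.view(1, -1, 1, 1)
        b = self.beta.view(1, -1, 1, 1)
        return F.relu(g * h + b)                     # token-specific FiLM, then ReLU

shared = nn.Conv2d(128, 64, kernel_size=3, padding=1)
m_red, m_circle = FiLMNMNModule(shared), FiLMNMNModule(shared)
h_stem = torch.randn(1, 64, 5, 5)                    # stem output: 5x5x64 feature map
out = m_red(m_circle(h_stem))                        # composed along the instruction tree
```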
The policy network differs from the discriminator in that it outputs softmax probabilities over the 5 actions instead of a single real number, and in that we use an additional convolutional layer to combine the output of the FiLM-NMN with h_stem: h_merge = ReLU(W_merge * [h_nmn; h_stem] + b_merge), π(c, s) = softmax(W_2 ReLU(W_1 maxpool(h_merge) + b_1) + b_2), where h_merge is used in the policy network in place of h_nmn. FIG11 illustrates our FiLM-NMN policy and discriminator networks.

FiLM-LSTM For our structure-agnostic models we use an LSTM of 100 hidden units to predict FiLM biases and multipliers for a 5-layer convnet. More specifically, let h_LSTM be the final state of the LSTM after it consumes the instruction c. We compute FiLM coefficients γ_k and β_k for each layer k ∈ [1; 5] from h_LSTM and use them to modulate the output of the k-th convolutional layer, where W_k are the convolutional weights and h_0 is set to the pixel-level representation of the world state s. The characteristics of the 5 layers are the following: (8x8, 16, VALID), (3x3, 32, VALID), (3x3, 64, SAME), (3x3, 64, SAME), (3x3, 64, SAME), where (mxm, n_out, p) stands for a convolutional layer with mxm filters, n_out output features, and padding strategy p ∈ {SAME, VALID}. Layers with p = VALID do not use padding, whereas in those with p = SAME zero padding is added in order to produce an output with the same shape as the input. Layer 5 is also connected to layer 3 by a residual connection. Similarly to the FiLM-NMN, the output h_5 of the convnet is max-pooled and fed into an MLP with 100 hidden units to produce the outputs.

Baseline prediction In all policy networks the baseline predictor is a linear layer that takes the same input as the softmax layer. The gradients of the baseline predictor are allowed to propagate through the rest of the network.

Reward prediction We use the output h_maxpool of the max-pooling operation (which is a part of all models that we considered) as the input to the reward prediction pathway of our model. h_maxpool is fed through a linear layer and a softmax to produce probabilities of the reward being positive or zero (the reward is never negative in AGILE). Weights are initialized based on fan_in, where fan_in is the product of the kernel width, the kernel height, and the number of input features.
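A PyTorch sketch of the structure-agnostic FiLM-LSTM model described above: an LSTM consumes the instruction and its final state predicts per-layer FiLM coefficients for the 5-layer convnet. The class name, the linear maps producing (γ_k, β_k), the stride settings, and the omission of the residual connection and output heads are simplifying assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FiLMLSTMStub(nn.Module):
    """Structure-agnostic model: the instruction conditions a convnet via FiLM.

    An LSTM reads the instruction tokens; its final state h_LSTM is mapped to
    (gamma_k, beta_k) for each convolutional layer, which modulate that layer's
    output feature map.
    """
    def __init__(self, vocab_size: int, lstm_size: int = 100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, lstm_size)
        self.lstm = nn.LSTM(lstm_size, lstm_size, batch_first=True)
        specs = [(3, 16, 8, 0), (16, 32, 3, 0), (32, 64, 3, 1),
                 (64, 64, 3, 1), (64, 64, 3, 1)]        # (in, out, kernel, padding)
        self.convs = nn.ModuleList(
            nn.Conv2d(i, o, k, padding=p) for i, o, k, p in specs)
        self.film = nn.ModuleList(
            nn.Linear(lstm_size, 2 * o) for _, o, _, _ in specs)

    def forward(self, instruction_tokens, image):
        _, (h_lstm, _) = self.lstm(self.embed(instruction_tokens))
        h_lstm = h_lstm[-1]                              # final LSTM state
        h = image                                        # h_0: pixel-level world state
        for conv, film in zip(self.convs, self.film):
            gamma, beta = film(h_lstm).chunk(2, dim=-1)
            h = conv(h)
            h = F.relu(gamma[..., None, None] * h + beta[..., None, None])
        return h                                         # h_5, later max-pooled by the heads

model = FiLMLSTMStub(vocab_size=20)
out = model(torch.randint(0, 20, (1, 7)), torch.randn(1, 3, 56, 56))
```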
H1xsSjC9Ym
We propose AGILE, a framework for training agents to perform instructions from examples of respective goal-states.
We present Multitask Soft Option Learning (MSOL), a hierarchical multi-task framework based on Planning-as-Inference. MSOL extends the concept of Options, using separate variational posteriors for each task, regularized by a shared prior. The learned soft-options are temporally extended, allowing a higher-level master policy to train faster on new tasks by making decisions with lower frequency. Additionally, MSOL allows fine-tuning of soft-options for new tasks without unlearning previously useful behavior, and avoids problems with local minima in multitask training. We demonstrate empirically that MSOL significantly outperforms both hierarchical and flat transfer-learning baselines in challenging multi-task environments. A key challenge in Reinforcement Learning (RL) is to scale current approaches to higher complexity tasks without requiring a prohibitive number of environmental interactions. However, for many tasks, it is possible to construct or learn efficient exploration priors that allow to focus on more relevant parts of the state-action space, reducing the number of required interactions. These include, for example, reward shaping , curriculum learning , some meta-learning algorithms (; ; ; ;), and transfer learning (; ; ; ; ;). One promising way to capture prior knowledge is to decompose policies into a hierarchy of subpolicies (or skills) that can be reused and combined in novel ways to solve new tasks (; ; ; ;). The idea of Hierarchical RL (HRL) is also supported by findings that humans appear to employ a hierarchical mental structure when solving tasks . In such a hierarchical RL policy, lower-level, temporally extended skills yield directed behavior over multiple time steps. This has two advantages: i) it allows efficient exploration, as the target states of skills can be reached without having to explore much of the state space in between, and ii) directed behavior also reduces the variance of the future reward, which accelerates convergence of estimates thereof. On the other hand, while a hierarchical approach can therefore significantly speed up exploration and training, it can also severely limit the expressiveness of the final policy and lead to suboptimal performance when the temporally extended skills are not able to express the required policy for the task at hand . Many methods exist for constructing and/or learning skills for particular tasks (; ; ; ; Ş imşek & ; ; ; ; a). Training on multiple tasks simultaneously is one promising approach to learn skills that are both relevant and generalise across tasks (; ; ; ;). Ideally, the entire hierarchy can be trained end-to-end on the obtained return, obviating the need to specify proxy rewards for skills . However, learning hierarchical policies end-to-end in a multitask setting poses two major challenges: i) because skills optimize environmental rewards directly, correctly updating them relies on already (nearly) converged master policies that use them similarly across all tasks, requiring complex training schedules , and ii) the end-to-end optimization is prone to local minima in which multiple skills have learned similar behavior. This second points is explained in more detail in Appendix A. In this paper, we propose Multitask Soft Option Learning (MSOL), a novel approach to learning hierarchical policies in a multi-task setting that extends Options , a common definition for skills, and casts the concept into the Planning as Inference (PAI) framework (see, e.g., , for a review). 
MSOL brings multiple advantages: i) it stabilizes end-to-end multitask training, removing the need for complex training schedules like in , ii) it gives rise to coordination between master policies, avoiding local minima of the type described in Appendix A, iii) it allows fine-tuning of options, i.e. adapting them to new tasks at test-time without the risk of unlearning previously acquired useful behavior, thereby avoiding suboptimal performance due to restricted expressiveness, iv) and lastly, we show how the soft option framework gives rise to a natural solution to the challenging task of learning option-termination policies. MSOL differentiates between a prior policy for each option, shared across all tasks, and a flexible task-specific posterior policy. The option prior can be fixed once it is fully trained, preventing unlearning of useful behavior even when the posteriors are updated. On new tasks, the option posteriors are initialized to the priors and regularized towards them, but are still adaptable to the specific task. This allows the same accelerated training as with'hard' options, but can solve more tasks due to the adjustable posteriors. Furthermore, during option learning, we can train prior and posterior policies simultaneously , all without the need for complex training schedules : training is stabilized because only the priors are shared across tasks. Our experiments demonstrate that MSOL outperforms previous hierarchical and transfer learning algorithms during transfer tasks in a multitask setting. MSOL only modifies the regularized reward and loss function, but does not require any specialized architecture. In particular, it also does not require artificial restrictions on the expressiveness of either the higher-level or intra-option policies. 2 PRELIMINARIES An agent's task is formalized as a MDP (S, A, ρ, P, r), consisting of the state space S, the action space A, the initial state distribution ρ, the transition probability P (s t+1 |s t, a t) of reaching state s t+1 by executing action a t in state s t, and the reward r(s t, a t) an agent receives for this transition. Planning as inference (PAI) (; ;) frames RL as a probabilistic-inference problem . The agent learns a distribution q φ (a|s) over actions a given states s, i.e., a policy, parameterized by φ, which induces a distribution over trajectories τ of length T, i.e., τ = (s 1, a 1, s 2, . . ., a T, s T +1): This can be seen as a structured variational approximation of the optimal trajectory distribution. Note that the true initial state probability ρ(s 1) and transition probability P (s t+1 |s t, a t) are used in the variational posterior, as we can only control the policy, not the environment. An advantage of this formulation is that we can incorporate information both from prior knowledge, in the form of a prior policy distribution, and the task at hand through a likelihood function that is defined in terms of the achieved reward. The prior policy p(a t |s t) can be specified by hand or, as in our case, learned (see Section 3). To incorporate the reward, we introduce a binary optimality variable O t , whose likelihood is highest along the optimal trajectory that maximizes return: p(O t = 1|s t, a t) = exp r(s t, a t)/β. The constraint r ∈ (−∞, 0] can be relaxed without changing the inference procedure . For brevity, we denote If a given prior policy p(a t |s t) explores the state-action space sufficiently, then p(τ, O 1:T) is the distribution of desirable trajectories. 
PAI aims to find a policy such that the variational posterior in approximates this distribution by minimizing the Kullback-Leibler (KL) divergence 2.2 MULTI-TASK LEARNING In a multi-task setting, we have a set of different tasks i ∈ T, drawn from a task distribution with probability ξ(i). All tasks share state space S and action space A, but each task has its own initialstate distribution ρ i, transition probability P i (s t+1 |s t, a t), and reward function r i. Our goal is to learn n tasks concurrently, distilling common information that can be leveraged to learn faster on new tasks from T. In this setting, the prior policy p θ (a t |s t) can be learned jointly with the taskspecific posterior policies q φi (a t |s t) . To do so, we simply extend to where is the regularised reward. Minimizing the loss in is equivalent to maximizing the regularized reward R reg i,t. Moreover, minimizing the term E τ ∼q ln implicitly minimizes the expected KL-divergence. In practise (see Appendix C.1) we will also make use of a discount factor γ ∈. For details on how γ arises in the PAI framework we refer to. Options are skills that generalize primitive actions and consist of three components: i) an intra-option policy p(a t |s t, z t) selecting primitive actions according to the currently active option z t, ii) a probability p(b t |s t, z t−1) of terminating the previously active option z t−1, and iii) an initiation set I ⊆ S, which we simply assume to be S. Note that by construction, the higherlevel (or master-) policy p(z t |z t−1, s t, b t) can only select a new option z t if the previous option z t−1 has terminated. 3 METHOD We aim to learn a reusable set of options that allow for faster training on new tasks from a given distribution. We learn both intra-option and termination policies, while preventing multiple options from learning the same behavior. To differentiate ourselves from classical'hard' options, which, once learned, do not change during new tasks, we call our novel approach soft-options (this is further discussed in Appendix B). Each soft-option consists of an option prior, which is shared across all tasks, and a taskspecific option posterior. The priors of both the intraoption policy and the termination policy capture how an option typically behaves and remain fixed once they are fully learned. At the beginning of training on a new task, they are used to initialize the task-specific posterior distribution. During training, the posterior is then regularized against the prior to prevent inadvertent unlearning. However, if maximizing the reward on certain tasks is not achievable with the prior policy, the posterior is free to deviate from it. We can thus speed up training using options, while remaining flexible enough to solve any task. Additionally, this soft option framework also allows for learning good priors in a multitask setting while avoiding local minima in which several options learn the same behavior. See Figure 1 for an overview over the hierarchical prior-posterior architecture that we explain further below. To express options in the PAI framework, we introduce two additional variables at each time step t: option selections z t, representing the currently selected option, and decisions b t to terminate them and allow the higher-level (master) policy to choose a new option. The agent's behavior depends on the currently selected option z t, by drawing actions a t from the intra-option posterior policy q L φi (a t |s t, z t). 
The selection z t itself is drawn from a master policy q, which conditions on b t ∈ {0, 1}, drawn by the termination posterior policy q T φi (b t |s t, z t−1). The master policy either continues with the previous z t−1 or draws a new option, where we set b 1 = 1 at the beginning of each episode. We slightly abuse notation by referring by δ(z t − z t−1) to the Kronecker delta δ zt,zt−1 for discrete and the Dirac delta distribution for continuous z t. The joint posterior policy is While z t can be a continuous variable, we consider only z t ∈ {1 . . . m}, where m is the number of available options. The induced distribution q φi (τ) over trajectories of task i, τ = (Our framework transfers knowledge between tasks by a shared prior p θ (a t, z t, b t |s t, z t−1) over all joint policies: H, and p L θ correctly, we can learn useful temporally extended options. The parameterized priors p T θ (b t |s t, z t−1) and p L θ (a t |s t, z t) are structurally equivalent to the posterior policies q T φi and q L φi so that they can be used as initialization for the latter. Optimizing the regularized return (see next section) w.r.t. θ distills the common behavior into the prior policy and softly enforces similarity across posterior distributions of each option amongst all tasks i. m selects the previous option z t−1 if b t = 0, and otherwise draws options uniformly to ensure exploration. Because the posterior master policy is different on each task, there is no need to distill common behavior into a joint prior. We extend the multitask objective in by substituting p θ (τ, O 1:T) and p φi (τ) with those induced by our hierarchical posterior policy in and the corresponding prior. The ing objective has the same form but with a new regularized reward that is maximized: As we maximize E q [R reg i,t], this corresponds to maximizing the expectation over, along the on-policy trajectories drawn from q φi (τ). Term 1 of the regularization encourages exploration in the space of options. It can also be seen as a form of deliberation cost as it is only nonzero whenever we terminate an option and the master policy needs to select another to execute. Term 2 softly enforces similarity between option posteriors across tasks and updates the prior towards the'average' posterior. It also encourages the master policy to pick the most specialized option whose posteriors across all tasks are most similar. In other words, the master policy q H φi is encouraged to pick option z t which maximizes r i, but minimizes term 2 by picking the option z t for which prior p L θ and posterior q L φi are the most similar. Because the prior is the average posterior, this rewards the master policy to pick the most specialized option (that still achieves high reward). As discussed in more detail in Appendix A, this allows us to escape the local optimization minima that hard options face in multitask learning, while still having fully specialized options after training. Lastly, we can use 3 to also encourage temporal abstraction of options. To do so, during option learning, we fix the termination prior p Choosing a large α encourages prolonged execution of one option, but allows switching whenever necessary. This is also similar to deliberation costs but with a more flexible cost model. Additionally, we can still distill a termination prior p T θ which can be used on future tasks. 
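Since the joint-policy and regularized-reward equations are not reproduced in this extraction, the following sketch only illustrates the control flow and the two regularization terms described above: a termination decision, an option choice from the master posterior (against a uniform prior) when the previous option terminates, an action from the intra-option posterior, and a KL-style penalty towards the shared intra-option prior. The dictionary-of-callables interface and the omission of the fixed termination-prior term are assumptions made for brevity.

```python
import numpy as np

def soft_option_step(s, z_prev, post, prior, rng):
    """One environment step of the hierarchical posterior policy (schematic).

    `post` / `prior` are dictionaries of callables returning probabilities
    (termination, master, intra-option). Returns the action, the active
    option, and the per-step regularization terms added to the reward.
    """
    p_term = post["term"](s, z_prev)                  # q^T(b=1 | s, z_prev)
    b = rng.random() < p_term
    if b:
        z_probs = post["master"](s, z_prev)           # q^H(z | s, z_prev, b=1)
        z = rng.choice(len(z_probs), p=z_probs)
    else:
        z = z_prev                                    # option continues
    a_probs = post["option"](s, z)                    # q^L(a | s, z)
    a = rng.choice(len(a_probs), p=a_probs)

    # Term 2: pull the intra-option posterior towards the shared prior.
    reg = np.log(prior["option"](s, z)[a]) - np.log(a_probs[a])
    # Term 1: master vs. uniform prior, only paid when a new option is chosen.
    if b:
        reg += np.log(1.0 / len(z_probs)) - np.log(z_probs[z])
    return a, z, reg   # the policy maximizes the task reward plus (scaled) reg terms

rng = np.random.default_rng(0)
uniform = lambda n: np.full(n, 1.0 / n)
post = {"term": lambda s, z: 0.3,
        "master": lambda s, z: uniform(4),
        "option": lambda s, z: uniform(5)}
prior = {"option": lambda s, z: uniform(5)}
a, z, reg = soft_option_step(s=None, z_prev=0, post=post, prior=prior, rng=rng)
```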
Instead of learning p T θ by minimizing the KL against the posterior termination policies, we can get more decisive terminations by minimizing andq φi (b = 1|s t, z t−1) = zt =zt−1 q H φi (z t |s t, z t−1, b t = 1) i.e., the learned termination prior distills the probability that the tasks' master policies would change the active option if they had the opportunity. Details on how we optimized the MSOL objective are given in Appendix C. Most hierarchical approaches rely on proxy rewards to train the lower level components and their terminations. Some of them aim to reach pre-specified subgoals , which are often found by analyzing the structure of the MDP (; ; Ş imşek et al., 2005; Ş imşek &), previously learned policies or predictability . Those methods typically require knowledge, or a sufficient approximation, of the transition model, both of which are often infeasible. Recently, several authors have proposed unsupervised training objectives for learning diverse skills based on their distinctiveness (; ; ;). However, those approaches don't learn termination functions and cannot guarantee that the required behavior on the downstream task is included in the set of learned skills. also incorporate reward information, but do not learn termination policies and are therefore restricted to learning multiple solutions to the provided task instead of learning a decomposition of the task solutions which can be re-composed to solve new tasks. A third usage of proxy rewards is by training lower level policies to move towards goals defined by the higher levels. When those goals are set in the original state space (a), this approach has difficulty scaling to high dimensional state spaces like images. Setting the goals in a learned embedding space (; ; b) can be difficult to train, though. In both cases, the temporal extension of the learned skills are set manually. On the other hand, also learn a hierarchical agent, but not to transfer skills, but to find decisions states based on how much information is encoded in the latent layer. also take an inference motivated approach to learning options. In particular propose a similarly structured hierarchical policy, albeit in a single task setting. However, they do not utilize learned prior and posterior distributions, but instead use expectation maximization to iteratively infer a hierarchical policy to explain the current rewardweighted trajectory distribution. Several previous works try to overcome the restrictive nature of options that can lead to sub-optimal solutions by allowing the higher-level actions to modulate the behavior of the lower-level policies;;. However, this significantly increases the required complexity of the higher-level policy and therefore the learning time. The multitask-and transfer-learning setup used in this work is inspired by and who suggest extracting options by using commonalities between solutions to multiple tasks. Prior multitask approaches often rely on additional human supervision like policy sketches or desirable sub-goals (; ;) in order to learn skills which transfer well between tasks. In contrast, our work aims at finding good termination states without such supervision. investigate the use of different priors for the higher-level policy while we are focussing on learning transferrable option priors. 
Closest to our work is Meta Learning of Shared Hierarchies (MLSH) which, however, shares the lower-level policies across all tasks without distinguishing between prior and posterior and does not learn termination policies. As discussed, this leads to local minima and insufficient diversity in the learned options. Similarly to us, differentiate between prior and posterior policies on multiple tasks and utilize a KLdivergence between them for training. However, they do not consider termination probabilities and instead only choose one option per task. Instead of transferring option policies between tasks, aim to share behavior through a latent embedding. Another interesting approach to multitask learning is which learns decision regions that are linear in the state instead of learning nonlinear master-and termination policies. Our approach is closely related to DISTRAL with which we share the multitask learning of prior and posterior policies. However, DISTRAL has no hierarchical structure and applies the same prior distribution over primitive actions, independent of the task. As a necessary hierarchical heuristic, the authors propose to also condition on the last primitive action taken. This works well when the last action is indicative of future behavior; however, in Section 5 we show several failure cases where a learned hierarchy is needed. We conduct a series of experiments to show: i) when learning hierarchies in a multitask setting, MSOL successfully overcomes the local minimum of insufficient option diversity, as described in Appendix A; ii) MSOL can learn useful termination policies; iii) MSOL is equally applicable to discrete as well as continuous domains; and iv) using soft options yields fast transfer learning while still reaching optimal performance, even on new, out-of-distribution tasks. All architectural details and hyper-parameters can be found in the appendix. For all experiments, we first train the exploration priors and options on n tasks from the available task distribution T (training phase is plotted in Appendix E). Subsequently, we test how quickly we can learn new tasks from T (or another distribution T). We compare the following algorithms: MSOL is our proposed method that utilizes soft options both during option learning and transfer. MSOL(frozen) uses the soft options framework during learning to find more diverse skills, but does not allow fine-tuning the posterior sub-policies after transfer. DISTRAL is a strong non-hierarchical transfer learning algorithm that also utilizes prior and posterior distributions. DISTRAL(+action) utilizes the last action as option-heuristic which works well in some tasks but fails when the last action is not sufficiently informative. MLSH is a multitask option learning algorithm like MSOL, but utilizes'hard' options for both learning and transfer, i.e., sub-policies that are shared exactly across tasks. It also relies on fixed option durations and requires a complex training schedule between master and intra-option policies to stabilize training. We use the author's MLSH implementation. Lastly, we compare against Option Critic (OC) , which takes the task-id as additional input in order to apply it to a multitask setting. We start with the 2D Moving Bandits environment proposed and implemented by , which is similar to the example in Appendix A. In each episode, the agent receives a reward of 1 for each time step it is sufficiently close to one of two randomly sampled, distinguishable, marked positions in the environment. 
The agent can take actions that move it in one of the four cardinal directions. Which position is not signaled in the observation. Each episode lasts 50 time steps. We compare against MLSH and DISTRAL to highlight challenges that arise in multitask training. We allow MLSH and MSOL to learn two options. During transfer, optimal performance can only be achieved when both options successfully learned to reach different marked locations, i.e., when they are diverse. In Figure 2 (a) we can see that MSOL is able to do so but the options learned by MLSH are not sufficiently diverse, for the reason explain in Appendix A. DISTRAL, even with the last action provided as additional input, is not able to quickly utilize the prior knowledge. The last action only conveys meaningful information when taking the goal locations into account: DISTRAL agents need to infer the intention based on the last action and the relative goal positions. While this is possible, in practice the agent was not able to do so, even with a much larger network. However, longer training allows DISTRAL to perform as well as MSOL, since its posterior is flexible, denoted by "DISTRAL(+action) limit". Lastly, MSOL(frozen) also outperforms DISTRAL(+action) and MLSH, but performs worse that MSOL. This highlights the utility of making options soft, i.e. adaptable. Next, we use a slightly modified version of the original Taxi domain to show learning of termination functions as well as transfer-and generalization capabilities. To solve the task, the agent must pick up a passenger on one of four possible locations by moving to their location and executing a special'pickup/drop-off' action. Then, the passenger must be dropped off at one of the other three locations, again using the same action executed at the corresponding location. The domain has a discrete state space with 30 locations arranged on a grid and a flag indicating whether the passenger was already picked up. The observation is a one-hot encoding of the discrete state, excluding passenger-and goal location. This introduces an information-asymmetry between the task-specific master policy, and the shared options, allowing them to generalize well . Walls (see Figure 3) limit the movement of the agent and invalid actions. We investigate two versions of Taxi. In the original (, just called Taxi), the action space consists of one no-op, one'pickup/drop-off' action and four actions to move in all cardinal directions. In Directional Taxi, we extend this setup: the agent faces in one of the cardinal directions and the available movements are to move forward or rotate either clockwise or counter-clockwise. In both environments the set of tasks T are the 12 different combinations of pickup/drop-off locations. Episodes last at most 50 steps and there is a reward of 2 for delivering the passenger to its goal and a penalty of -0.1 for each time step. During training, the agent is initialized to any valid state. During testing, the agent is always initialized without the passenger on board. We allow four learnable options in MLSH and MSOL. This necessitates the options to be diverse, i.e., one option to reach each of the four pickup/drop-off locations. Importantly, it also requires the options to learn to terminate when a passenger is picked up. As one can see in Figure 2 (b), MLSH struggles both with option-diversity and due to its fixed option duration which is not flexible enough for this environment. 
DISTRAL(+action) performs well in the original Taxi environment, as seen in Figure 2 (b), since here the last action is a good indicator for the agent's intention. However, in the directional case shown in Figure 2 (c), the actions are less informative and make it much harder for DISTRAL to use prior knowledge. By contrast, MSOL performs well in both taxi environments. Comparing its performance with MSOL(frozen) shows the utility of adaptable soft options during transfer. Figure 3, which visualizes the options learned by MSOL, shows that it successfully learns useful movement primitives and termination functions. The same soft option represents different behavior depending on whether it already picked up the passenger. This is expected as this behavior does not need to terminate the current option on three of the 12 tasks. Next we show how learning information-asymmetric soft options can help with transfer to unseen tasks. In Figure 2 (d) we show learning on four tasks from T using options that were trained on the remaining eight, comparing against A2C and OC. Note that in OC, there is no informationasymmetry: We share the same networks across all tasks and provide the task-id as additional input, including to the option-policies. This prevents them from generalizing well to unseen tasks. On the other hand, withholding the task-information from them would be similar to MLSH, which we already showed to struggle with local minima. The strong performance of MSOL on this task shows that we need soft options to be able to train information-asymmetric options that generalize well. We also investigate the utility of flexible soft options: In Figure 2 (e) we show learning performance on twelve changed tasks in which the pickup/dropoff locations where moved by one cell while the options were trained with the original locations. As expected, hard options are not able to solve this tasks. Even with additional access to primitive actions, exploration is inefficient . On the other hand, MSOL is able to quickly learn this new task by adapting the previously learned options, outperforming hard options and flat policies learned from scratch. Lastly, we show that MSOL can also be applied to continuous multitask domains. In particular, we investigate the MuJoCo environment'Swimmer' . Instead of rewarding forward movement as in the original implementation, now the rewarded movement direction depends on the task from T = {up, down, lef t, right}. We also include a small amount of additive action noise (details in the Appendix). We show that MSOL performs competitive even in the absence of known failure cases of DISTRAL (see Figure 2 (f)). Multitask Soft Option Learning (MSOL) proposes reformulating options using the perspective of prior and posterior distributions. This offers several key advantages. First, during transfer, it allows us to distinguish between fixed, and therefore knowledge-preserving option priors, and flexible option posteriors that can adjust to the reward structure of the task at hand. This effects a similar speed-up in learning as the original options framework, while avoiding sub-optimal performance when the available options are not perfectly aligned to the task. Second, utilizing this'soft' version of options in a multitask learning setup increases optimization stability and removes the need for complex training schedules between master and lower level policies. 
Furthermore, this framework naturally allows master policies to coordinate across tasks and avoid local minima of insufficient option diversity. It also allows for autonomously learning option-termination policies, a very challenging task which is often avoided by fixing option durations manually. Lastly, this formulation allows inclusion of prior information in a principled manner without imposing too rigid a structure on the resulting hierarchy. We utilize this advantage to explicitly incorporate the bias that good options should be temporally extended. In future research, other types of information can be explored. As an example, one could investigate sets of tasks which would benefit from a learned master prior, like walking on different types of terrain. In the situation discussed in Appendix A, all posteriors of z_1 move towards target A, but for z_2 the posteriors on different tasks a and b move towards different targets. Crucially, to maximize the regularized reward, the KL-divergences between priors and posteriors should be minimized along the trajectories, i.e. weighted by how likely they are to occur. Consequently, the KL cost is lower when z_1 is always used to reach A and z_2 is always used to reach B, allowing the corresponding priors to also lead to A and B respectively. Assume we are faced with a new task and are given some prior knowledge in the form of a set of skills that we can use. Using those skills and their termination probabilities as prior policies p^T and p^L in the soft option framework, we can see β as a temperature parameter determining how closely we are restricted to following them. For β → ∞ we recover the classical 'hard' option case and our posterior option policies are restricted to the prior. For β = 0 the priors only initialize the otherwise unconstrained policy, quickly unlearning behavior that may be useful down the line. Lastly, for 0 < β < ∞ we use the prior information to guide exploration but are only softly restricted to the given skills and can also explore and use policies 'close' to them. Even though R^reg_i depends on φ_i, its gradient w.r.t. φ_i vanishes. Consequently, we can treat the regularized reward as a classical RL reward and use any RL algorithm to find the optimal hierarchical policy parameters φ_i. In the following, we explain how to adapt A2C to soft options; the extension to PPO is straightforward. The joint posterior policy depends on the current state s_t and the previously selected option z_{t−1}. The expected sum of regularized future rewards of task i, the value function V_i(s_t, z_{t−1}), must therefore also condition on this pair. As V_i(s_t, z_{t−1}) cannot be directly observed, we approximate it with a parametrized model V_{φ_i}(s_t, z_{t−1}). The k-step advantage estimate at time t of trajectory τ is A^(k)_t = Σ_{l=0}^{k−1} γ^l R^reg_{t+l} + γ^k V^−_{φ_i}(s_{t+k}, z_{t+k−1}) − V_{φ_i}(s_t, z_{t−1}), where the superscript '−' indicates treating the term as a constant. The approximate value function V_{φ_i} can be optimized towards its bootstrapped k-step target by minimizing the squared advantage L_V(φ_i, τ). As per A2C, k ∈ [1 . . . n_s], depending on the state. The corresponding policy gradient loss uses the same advantage estimate. The gradient w.r.t. the prior parameters θ is computed with b̄_t = δ_{z_{t−1}}(z_t) and z_t ∼ q^H(z_t | s_t, z_{t−1}, b_t = 1).
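To make the regularized-reward construction above concrete, the following is a minimal sketch (not the authors' implementation) of how a per-step regularized reward could be computed for a discrete option space. It assumes the KL penalty is simply weighted by β, which is consistent with β → ∞ recovering hard options and β = 0 removing the constraint; the exact composition of terms in the paper's R^reg may differ, and all function and variable names are illustrative.

```python
import numpy as np

def kl_categorical(q, p, eps=1e-8):
    """KL(q || p) for two categorical distributions given as probability vectors."""
    q = np.asarray(q, dtype=np.float64)
    p = np.asarray(p, dtype=np.float64)
    return float(np.sum(q * (np.log(q + eps) - np.log(p + eps))))

def regularized_reward(r_t, q_low, p_low, q_term, p_term, beta):
    """Task reward minus a beta-weighted KL between posterior and prior policies.

    r_t            : extrinsic task reward at time t
    q_low, p_low   : posterior / prior action distributions of the active option
    q_term, p_term : posterior / prior termination probabilities in [0, 1]
    beta           : temperature; larger beta ties the posterior more tightly to the prior
    """
    kl_actions = kl_categorical(q_low, p_low)
    kl_term = kl_categorical([q_term, 1.0 - q_term], [p_term, 1.0 - p_term])
    return r_t - beta * (kl_actions + kl_term)

# Example: a posterior that deviates from its prior pays a KL penalty.
print(regularized_reward(1.0,
                         q_low=[0.7, 0.1, 0.1, 0.1],
                         p_low=[0.25, 0.25, 0.25, 0.25],
                         q_term=0.1, p_term=0.05, beta=0.2))
```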
To encourage exploration in all policies of the hierarchy, we also include an entropy maximization loss L_H(φ_i, τ). Note that the first term in the regularized reward already encourages maximizing L_H(φ_i, τ) for the master policy, since we chose a uniform prior p^H(z_t | b_t = 1). As both terms serve the same purpose, we are free to drop either one of them. In our experiments, we chose to drop the term for q^H in R^reg_t, which proved slightly more stable to optimize than the alternative. We can optimize all parameters jointly with a combined loss over all tasks i, based on sampled trajectories τ_i := τ_{i,1:T} ∼ q_{φ_i} and the corresponding sampled values b̄_i := b̄_{i,1:T}.
C.2 TRAINING SCHEDULE
For faster training, it is important to prevent the master policies q^H from converging too quickly, to allow sufficient updating of all options. On the other hand, a lower exploration rate leads to more clearly defined options. We consequently anneal the exploration bonus λ_H with a linear schedule during training. Similarly, a high value of β leads to better options but can prevent finding the extrinsic reward r_i(s_t, a_t) early on in training. Consequently, we increase β over the course of training, also using a linear schedule. All policies and value functions share the same encoder network, with two fully connected hidden layers of size 64 for the Moving Bandits environment and three hidden layers of sizes 512, 256, and 512 for the Taxi environments. DISTRAL was tested with both model sizes on the Moving Bandits task to make sure that limited capacity is not the problem. Both models resulted in similar performance; the results shown in the paper are for the larger model. On Swimmer the encoder model size is 1024 × 256 × 512. Master policies, as well as all prior and posterior policies and value functions, consist of only one layer which takes the latent embedding produced by the encoder as input. Furthermore, the encoder is shared across tasks, allowing for much faster training since observations can be batched together. Options are specified as an additional one-hot encoded input to the corresponding network that is passed through a single 128-dimensional fully connected layer and concatenated to the state embedding before the last hidden layer. We implement the single-column architecture of DISTRAL as a hierarchical policy with just one option and with a modified loss function that does not include terms for the master and termination policies. Our implementation builds on an existing A2C/PPO implementation, and we use the MLSH implementation provided by the authors (https://github.com/openai/mlsh). We use 2λ_V = λ_A = λ_P = 1 in all experiments. Furthermore, we train on all tasks from the task distribution, regularly resetting individual tasks by resetting the corresponding master policy and re-initializing the posterior policies. Optimizing β for MSOL and DISTRAL was done over {0.01, 0.02, 0.04, 0.1, 0.2, 0.4}. We use γ = 0.95 for Moving Bandits and Taxi and γ = 0.995 for Swimmer. For MLSH, we use the original hyper-parameters. The duration of each option is fixed to 10. The required warm-up duration is set to 9 and the training duration is set to 1. We also use 30 parallel environments split between 10 tasks. This and the training duration are the main differences from the original paper. Originally, MLSH was trained on 120 parallel environments, which we were unable to do due to hardware constraints. Training is done over 6 million frames per task. For MSOL and DISTRAL we use the same number of 10 tasks and 30 processes.
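The training-schedule description above anneals λ_H down and β up with linear schedules. A minimal sketch of such schedules follows; the concrete endpoints are taken from the Taxi setup described below and should be treated as placeholders rather than prescribed values.

```python
def linear_schedule(start, end, total_steps):
    """Return a function mapping a training step to a linearly interpolated value."""
    def value(step):
        frac = min(max(step / float(total_steps), 0.0), 1.0)
        return start + frac * (end - start)
    return value

# Example: anneal beta up from 0.02 to 0.1 and the exploration bonus
# lambda_H down from 0.1 to 0.05 over one million frames.
beta_schedule = linear_schedule(0.02, 0.1, total_steps=1_000_000)
lambda_h_schedule = linear_schedule(0.1, 0.05, total_steps=1_000_000)

for step in (0, 500_000, 1_000_000):
    print(step, beta_schedule(step), lambda_h_schedule(step))
```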
The durations of options are learned and we do not require a warm-up period. We set the learning rate to 0.01 and β = 0.2, α = 0.95, λ_H = 0.05. Training is done over 0.6 million frames per task. For DISTRAL we use β = 0.04, λ_H = 0.05 and also 0.6 million frames per task. For MSOL we anneal β from 0.02 to 0.1 and λ_H from 0.1 to 0.05. For DISTRAL we use β = 0.04. We use 3 processes per task to collect experience, for a batch size of 15 per task. Training is done over 1.4 million frames per task for Taxi and 4 million frames per task for Directional Taxi. MLSH was trained on 0.6 million frames for Taxi because, due to its long runtime of several days, using more frames was infeasible; training had already converged. As shown in the main experiments, soft options can still be useful even when the task distribution changes. It is unsurprising that hard options are not able to solve the task. However, interestingly, providing hard options and primitive actions to the master policy performs much worse than just learning from scratch. This phenomenon was investigated in previous work and further justifies using soft options for transfer to out-of-distribution tasks. Whether training from scratch or re-using misspecified options that were trained on a different set of tasks learns faster mainly depends on i) how strongly the options are misspecified and ii) how difficult the exploration problem is in the environment. This tradeoff is shown in Figure 6: on a smaller 8x8 grid (left), learning from scratch performs competitively because exploration is sufficiently simple. On the other hand, on a 10x10 grid (right, from the main paper), exploration is harder and soft options allow for significantly faster learning because they can guide the exploration in a helpful way. For training DISTRAL and MSOL we use PPO instead of A2C, as it generally achieves better performance on continuous tasks. We use λ_H = 0.0004 for both MSOL and DISTRAL for primitive actions and λ_H = 0.1 for the master and termination policies in MSOL. We use a learning rate of 0.0002 and GAE with τ = 0.98. We collect 2000 steps in parallel on 6 processes per task, resulting in a batch size of 12,000 per task. Training is done over 6 million frames with a linearly scheduled increase of β from 0 to 0.04 for MSOL and to 0.01 for DISTRAL. We set α = 0.98. MSOL and MSOL(frozen) share the same training, as they only differ during testing. Further, note that the highest achievable performance for Taxi and Directional Taxi is higher during training, as the agent can be initialized closer to the final goal (i.e. with the passenger on board). (Figure 6 caption: on small grids, learning from scratch performs competitively; for larger grid sizes, soft options can accelerate training through faster exploration, even if they are misspecified because they were trained on a different set of tasks.)
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
BkeDGJBKvB
In Hierarchical RL, we introduce the notion of a 'soft', i.e. adaptable, option and show that this helps learning in multitask settings.
We propose an algorithm, guided variational autoencoder (Guided-VAE), that is able to learn a controllable generative model by performing latent representation disentanglement learning. The learning objective is achieved by providing signal to the latent encoding/embedding in VAE without changing its main backbone architecture, hence retaining the desirable properties of the VAE. We design an unsupervised and a supervised strategy in Guided-VAE and observe enhanced modeling and controlling capability over the vanilla VAE. In the unsupervised strategy, we guide the VAE learning by introducing a lightweight decoder that learns latent geometric transformation and principal components; in the supervised strategy, we use an adversarial excitation and inhibition mechanism to encourage the disentanglement of the latent variables. Guided-VAE enjoys its transparency and simplicity for the general representation learning task, as well as disentanglement learning. On a number of experiments for representation learning, improved synthesis/sampling, better disentanglement for classification, and reduced classification errors in meta learning have been observed. The resurgence of autoencoders (AE) (; ;) is an important component in the rapid development of modern deep learning. Autoencoders have been widely adopted for modeling signals and images . Its statistical counterpart, the variational autoencoder (VAE) , has led to a recent wave of development in generative modeling due to its two-in-one capability, both representation and statistical learning in a single framework. Another exploding direction in generative modeling includes generative adversarial networks (GAN) , but GANs focus on the generation process and are not aimed at representation learning (without an encoder at least in its vanilla version). Compared with classical dimensionality reduction methods like principal component analysis (PCA) and Laplacian eigenmaps , VAEs have demonstrated their unprecedented power in modeling high dimensional data of real-world complexity. However, there is still a large room to improve for VAEs to achieve a high quality reconstruction/synthesis. Additionally, it is desirable to make the VAE representation learning more transparent, interpretable, and controllable. In this paper, we attempt to learn a transparent representation by introducing guidance to the latent variables in a VAE. We design two strategies for our Guided-VAE, an unsupervised version (Fig. 1 .a) and a supervised version (Fig. 1.b). The main motivation behind Guided-VAE is to encourage the latent representation to be semantically interpretable, while maintaining the integrity of the basic VAE architecture. Guided-VAE is learned in a multi-task learning fashion. The objective is achieved by taking advantage of the modeling flexibility and the large solution space of the VAE under a lightweight target. Thus the two tasks, learning a good VAE and making the latent variables controllable, become companions rather than conflicts. In unsupervised Guided-VAE, in addition to the standard VAE backbone, we also explicitly force the latent variables to go through a lightweight encoder that learns a deformable PCA. As seen in Fig. 1.a, two decoders exist, both trying to reconstruct the input data x: Dec main. The main decoder, denoted as Dec main, functions regularly as in the standard VAE ; the secondary decoder, denoted as Dec sub, explicitly learns a geometric deformation together with a linear sub-space. 
In supervised Guided-VAE, we introduce a subtask for the VAE by forcing one latent variable to be discriminative (minimizing the classification error) while making the rest of the latent variables adversarially discriminative (maximizing the minimal classification error). This subtask is achieved using an adversarial excitation and inhibition formulation. Similar to the unsupervised Guided-VAE, the training process is carried out in an end-to-end multi-task learning manner. The result is a regular generative model that keeps the original VAE properties intact, while having the specified latent variable semantically meaningful and capable of controlling/synthesizing a specific attribute. We apply Guided-VAE to data modeling and few-shot learning problems and show favorable results on the MNIST, CelebA, and Omniglot datasets. The contributions of our work can be summarized as follows: • We propose a new generative model disentanglement learning method by introducing latent variable guidance to variational autoencoders (VAE). Both unsupervised and supervised versions of Guided-VAE have been developed. • In unsupervised Guided-VAE, we introduce deformable PCA as a subtask to guide the general VAE learning process, making the latent variables interpretable and controllable. • In supervised Guided-VAE, we use an adversarial excitation and inhibition mechanism to encourage the disentanglement, informativeness, and controllability of the latent variables. Guided-VAE is able to keep the attractive properties of the VAE and it is easy to implement. It can be trained in an end-to-end fashion. It significantly improves the controllability of the vanilla VAE and is applicable to a range of problems in generative modeling and representation learning. Related work can be discussed along several directions. Generative model families such as generative adversarial networks (GAN) and variational autoencoders (VAE) have received a tremendous amount of attention lately. Although GAN produces higher quality synthesis than VAE, GAN is missing the encoder part and hence is not directly suited for representation learning. Here, we focus on disentanglement learning by making VAE more controllable and transparent. Disentanglement learning has recently become a popular topic in representation learning, and adversarial training has been adopted in several approaches. Various methods have imposed constraints/regularizations/supervision on the latent variables, but these existing approaches often involve an architectural change to the VAE backbone, and their additional components are not provided as a secondary decoder for guiding the main encoder. A closely related work is the β-VAE approach, in which a balancing term β is introduced to control the capacity and the independence prior. β-TCVAE further extends β-VAE by introducing a total correlation term. From a different angle, the principal component analysis (PCA) family (Candès et al., 2011) can also be viewed as representation learning. Connections between robust PCA (Candès et al., 2011) and VAE have been observed. Although PCA is a widely adopted method, it nevertheless has limited modeling capability due to its linear subspace assumption. To alleviate the strong requirement that the input data be prealigned, RASL deals with unaligned data by estimating a hidden transformation for each input. Here, we take advantage of the transparency of PCA and the modeling power of VAE by developing a sub-encoder (see Fig.
1.a), deformable PCA, that guides the VAE training process in an integrated end-to-end manner. After training, the sub-encoder can be removed, keeping only the main VAE backbone. To achieve disentanglement learning in supervised Guided-VAE, we encourage one latent variable to directly correspond to an attribute while making the rest of the variables uncorrelated. This is analogous to the excitation-inhibition mechanism or the explaining-away phenomenon. Existing approaches impose supervision as a conditional model for an image translation task, whereas our supervised Guided-VAE model targets the generic generative modeling task by using an adversarial excitation and inhibition formulation. This is achieved by minimizing the discriminative loss for the desired latent variable while maximizing the minimal classification error for the rest of the variables. Our formulation is connected to domain-adversarial neural networks (DANN), but the two methods differ in purpose and classification formulation. Supervised Guided-VAE is also related to the adversarial autoencoder approach, but the two methods differ in objective, formulation, network structure, and task domain. The domain-invariant variational autoencoder (DIVA) method differs from ours by enforcing disjoint sectors to explain certain attributes. Our model also has connections to deeply-supervised nets (DSN), where intermediate supervision is added to a standard CNN classifier. There are also approaches in which latent variable constraints are added, but they have different formulations and objectives than Guided-VAE. Recent efforts in fairness disentanglement learning also bear some similarity, but there is still a large difference in formulation. In this section, we present the main formulations of our Guided-VAE models. The unsupervised Guided-VAE version is presented first, followed by the supervised version. Following the standard definition of the variational autoencoder (VAE), a set of input data is denoted as X = (x_1, ..., x_n), where n denotes the total number of input samples. The latent variables are denoted by the vector z. The encoder network, with network and variational parameters φ, produces the variational probability model q_φ(z|x). The decoder network is parameterized by θ to reconstruct a sample x̂ = f_θ(z). The log-likelihood log p(x) is estimated by maximizing the Evidence Lower BOund (ELBO): ELBO(θ, φ) = E_{q_φ(z|x)}[log p_θ(x|z)] − KL(q_φ(z|x) || p(z)). The first term corresponds to a reconstruction loss ∫ q_φ(z|x) ||x − f_θ(z)||² dz (i.e., the negative of the reconstruction loss between the input x and the reconstruction x̂) under a Gaussian parameterization of the output. The second term is the KL divergence between the variational distribution q_φ(z|x) and the prior distribution p(z). The training process thus tries to find the optimal (θ, φ)* = argmax_{θ,φ} ELBO(θ, φ). In our unsupervised Guided-VAE, we introduce a deformable PCA as a secondary decoder to guide the VAE training. An illustration can be seen in Fig. 1.a. This secondary decoder is called Dec_sub. Without loss of generality, we let z = (z_def, z_cont). z_def determines a deformation/transformation field, e.g. an affine transformation denoted as τ(z_def). z_cont determines the content of a sample image for transformation. The PCA model consists of K basis vectors B = (b_1, ..., b_K). We define a deformable PCA loss as the reconstruction error between x and the transformed linear reconstruction, where ∘ denotes a transformation (affine in our experiments) operator decided by τ(z_def); a constraint on ||b_j||_2 can optionally be added to force the basis vectors to be unit vectors.
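Since the deformable-PCA loss is only described in words above, the following is a hedged sketch of one plausible instantiation in PyTorch: z_cont is treated directly as the coefficients of the K basis vectors, and z_def is mapped to a 2×3 affine matrix that warps the linear reconstruction before comparing it to the input. The linear map `to_affine`, the image size, and the use of `affine_grid`/`grid_sample` are assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformablePCADecoder(nn.Module):
    """Secondary decoder: linear (PCA-like) reconstruction followed by an affine warp."""

    def __init__(self, k_basis=8, z_def_dim=2, img_size=28):
        super().__init__()
        self.img_size = img_size
        # K basis "images", flattened; analogous to B = (b_1, ..., b_K).
        self.basis = nn.Parameter(torch.randn(k_basis, img_size * img_size) * 0.01)
        # Maps z_def to the 6 parameters of a 2x3 affine matrix (assumption).
        self.to_affine = nn.Linear(z_def_dim, 6)

    def forward(self, z_def, z_cont):
        n = z_cont.shape[0]
        # Linear combination of basis vectors: content reconstruction.
        flat = z_cont @ self.basis                      # (N, H*W)
        content = flat.view(n, 1, self.img_size, self.img_size)
        # Affine warp decided by tau(z_def).
        theta = self.to_affine(z_def).view(n, 2, 3)
        grid = F.affine_grid(theta, content.shape, align_corners=False)
        return F.grid_sample(content, grid, align_corners=False)

def deformable_pca_loss(decoder, x, z_def, z_cont):
    """Squared reconstruction error of the warped linear reconstruction."""
    recon = decoder(z_def, z_cont)
    return ((x - recon) ** 2).flatten(1).sum(dim=1).mean()
```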
We follow the spirit of PCA optimization; a general formulation for learning PCA can be found in (Candès et al., 2011). To keep the method simple we learn a fixed basis B, though one could also adopt a probabilistic PCA model. Thus, learning the unsupervised Guided-VAE amounts to maximizing the ELBO while minimizing the deformable PCA loss of the secondary decoder. For training data X = (x_1, ..., x_n), suppose there exists a total of T attributes with ground-truth labels. For the t-th attribute, let z = (z_t, z_rst_t), where z_t is a scalar variable deciding the t-th attribute and z_rst_t represents the remaining latent variables. Let y_t(x_i) be the ground-truth label for the t-th attribute of sample x_i; y_t(x_i) ∈ {−1, +1}. For each attribute, we use an adversarial excitation and inhibition method. The excitation term is a hinge term: it is an excitation process, since we want the latent variable z_t to directly correspond to the attribute label. Notice the minus sign before the summation, since this term is combined with the ELBO for maximization. The inhibition term uses a classifier C_t(z_rst_t) that predicts the t-th attribute from the remaining latent variables z_rst_t; −log p_{C_t}(y = y(x) | z_rst_t) is a cross-entropy term for minimizing the classification error. This is an inhibition process, since we want the remaining variables z_rst_t to be as independent as possible of the attribute label. Note that the term L_Inhibition(φ, t), included for maximization, is an adversarial term that makes z_rst_t as uninformative about attribute t as possible, by making even the best possible classifier C_t undiscriminative. This formulation bears a certain similarity to domain-adversarial neural networks, in which the label classification loss is minimized while the domain classifier is adversarially maximized. Here, however, we respectively encourage and discourage different parts of the features to make the same type of classification. In this section, we first present qualitative results demonstrating that our proposed unsupervised Guided-VAE (Figure 1a) is capable of disentangling the latent embedding in a more favourable way than VAE and previous disentanglement methods on the MNIST dataset. We also show that our learned latent representation can later be used to improve classification performance. Next, we extend this idea to a supervised guidance approach in an adversarial excitation and inhibition fashion, where a discriminative objective for certain image properties is given (Figure 1b), on the CelebA dataset. Further, we show that our method can be applied to few-shot classification tasks, achieving competitive performance on the Omniglot dataset. 4.1 UNSUPERVISED GUIDED-VAE 4.1.1 QUALITATIVE EVALUATION We present qualitative results on the MNIST dataset by traversing the latent variables that receive the affine transformation guiding signal. Here, we applied Guided-VAE with a bottleneck size of 10 (i.e. the latent variables z ∈ R^10). The first latent variable z_1 represents the rotation information and the second latent variable z_2 represents the scaling information. The rest of the latent variables z_3:10 represent the content information. Thus, the latent variables z ∈ R^10 are represented by z = (z_def, z_cont) = (z_1:2, z_3:10). (Figure caption fragment: β-VAE with controlled capacity increase (CCβ-VAE), Joint-VAE and our Guided-VAE on the MNIST dataset; z1 and z2 in Guided-VAE are controlled. Figure 3: PCA basis learned by the secondary decoder in unsupervised Guided-VAE.)
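Because the equations for the excitation and inhibition terms above did not survive extraction, here is a hedged sketch of one common way to instantiate them: a hinge loss that pushes y_t·z_t above a margin (excitation), and a cross-entropy loss for an adversarial classifier on z_rst whose error the encoder tries to maximize (inhibition). The exact hinge form, the margin value, and the classifier architecture are assumptions, not the paper's equations.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def excitation_loss(z_t, y_t, margin=1.0):
    """Hinge term: encourage the scalar latent z_t to agree in sign with label y_t in {-1, +1}."""
    return F.relu(margin - y_t * z_t).mean()

class AttributeClassifier(nn.Module):
    """Adversarial classifier C_t predicting attribute t from the remaining latents z_rst."""
    def __init__(self, z_rst_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_rst_dim, hidden), nn.ReLU(), nn.Linear(hidden, 2))

    def forward(self, z_rst):
        return self.net(z_rst)

def inhibition_step(classifier, z_rst, y01):
    """Cross-entropy of C_t; minimized w.r.t. the classifier, maximized w.r.t. the encoder.

    y01: attribute labels mapped from {-1, +1} to {0, 1} (long tensor).
    """
    logits_cls = classifier(z_rst.detach())          # train C_t on detached latents
    loss_cls = F.cross_entropy(logits_cls, y01)
    logits_enc = classifier(z_rst)                   # encoder tries to make C_t fail
    loss_enc = -F.cross_entropy(logits_enc, y01)
    return loss_cls, loss_enc
```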
In Figure 2, we show traversal of all latent variables on MNIST dataset for vanilla VAE , β-VAE , JointVAE and our guided VAE (β-VAE, JointVAE are adopted from ). While β-VAE cannot generate meaningful disentangled representations, even with controlled capacity increased, JointVAE is able to disentangle class type from continuous factors. Different from previous methods, our Guided-VAE disentangles geometry properties (z 1 and z 2) like rotation angle and stroke thickness from the rest content information z 3:10. In Figure 3, we visualize the basis B = (b 1, ..., b 8) in the PCA part of Dec sub. The basis primarily capture the content information. For a quantitative evaluation, we first compare the reconstruction error among different models on the MNIST dataset. In this experiment, we set the bottleneck size to 8 in Guided-VAE and use three settings for the deformation/transformation: Rotation, scaling, and both. In Guided-VAE (Rotation) or Guided-VAE (Scaling), we take the first latent variable z 1 to represent the rotation or the scaling information. In Guided-VAE (Rotation and Scaling), we use the first and second latent variables (z 1 and z 2) to represent rotation and scaling respectively. As Table 1 shows, our reconstruction loss is on par with vanilla VAE, whereas the previous disentangling method (β-VAE) has higher loss. Our proposed method is able to achieve added disentanglement while not sacrificing reconstruction capability over vanilla VAE. In addition, we perform classification tasks on latent embeddings of different models. Specifically, for each data point (x, y), we use the pre-trained VAE model to obtain the value of latent variable z given input image x. Here z is a d z -dim vector. We then train a linear classifier f (·) on the embedding-label pairs {(z, y)} in order to predict the class of digits. For the Guided-VAE, we disentangle the latent variables z into deformation variables z def and content variables z cont with same dimensions (i.e. d z def = d zcont) and use affine transformation as τ (z def). We compare the classification errors of different models under multiple choices of dimensions of the latent variables in Table 2. It shows that generally higher dimensional latent variables in lower classification errors. Our Guided-VAE method compares favourably over vanilla VAE and β-VAE. Moreover, we attempt to validate the effectiveness of disentanglement in Guided-VAE. We follow the same classification tasks above but use different parts of latent variables as input features for the classifier f (·): We may choose the deformation variables z def, the content variables z cont, or the whole latent variables z as the input feature vector. To reach a fair comparison, we keep the same dimensions for the deformation variables z def and the content variables z cont. Table 3 shows that the classification errors on z cont are significantly lower than the ones on z def, which indicates the success of disentanglement since the content variables should determine the class of digits while the deformation variables should be invariant to the class. In addition, when the dimensions of latent variables z are higher, the classification errors on z def increase while the ones on z cont decrease, indicating a better disentanglement between deformation and content. We first present qualitative on the CelebA dataset by traversing latent variables of attributes. We select three labeled attributes (emotion, gender and color) in the CelebA dataset as supervised guidance objectives. 
The bottleneck size is set to 16. We use the first three latent variables z_1, z_2, z_3 to represent the attribute information and the rest, z_4:16, to represent the content information. During evaluation, we choose z_t ∈ {z_1, z_2, z_3} while keeping the remaining latent variables z_rst_t fixed. We then obtain a set of images by traversing from the image with the t-th attribute to the image without it (e.g. smiling to non-smiling) and compare them across methods. Figure 4 shows the traversal for β-VAE and our Guided-VAE. β-VAE performs decently for the controlled attribute change, but the individual latent variable in β-VAE is neither fully entangled with nor disentangled from the attribute. Guided-VAE has better disentanglement of the latent variables and is able to better isolate the attributes w.r.t. the corresponding latent variables. In supervised Guided-VAE, we train a classifier to predict the attributes using either the disentangled attribute latent variable z_t or the rest of the latent variables z_rst_t as input features. We perform adversarial excitation and inhibition by encouraging the target latent variable to best predict the corresponding t-th attribute and discouraging the rest of the variables from predicting that attribute. Figure 5 (left) shows that the classification errors on z_t are significantly lower than the ones on z_rst_t, which indicates the effectiveness of disentanglement during the training procedure. (Figure 5 caption fragment: external classifiers for attribute classification predict the probability of being negative on the generated images; we traverse z1 (gender) and z2 (smile) separately to generate images for the classification test, traversing each latent z from −3.0 to 3.0 with a stride length of 0.1.) Furthermore, we attempt to validate that the generated images from the supervised Guided-VAE can actually be controlled by the disentangled attribute variables. Thus, we pre-train an external binary classifier for the t-th attribute on the CelebA training set and then use this classifier to test the generated images from Guided-VAE. Each test includes 10,000 generated images randomly sampled on all latent variables except for the particular latent variable z_t we decide to control. As Figure 5 (right) shows, we can draw the confidence-z curves of the t-th attribute where z = z_t ∈ [−3.0, 3.0]. For the gender and the smile attributes, it can be seen that the corresponding z_t is able to enable (z_t < −1) and disable (z_t > 1) the attribute of the generated image. Besides, for all the attributes, the probability monotonically decreases when z_t increases, which shows the ability to control the t-th attribute by tuning the corresponding latent variable z_t. Previously, we have shown that Guided-VAE can generate images and be used as a representation to perform classification tasks. In this section, we apply the proposed method to the few-shot classification problem. Specifically, we use our adversarial excitation and inhibition method in the Neural Statistician by adding a supervised guidance network after the statistic network. The supervised guidance signal is the label of each input. We also apply the Mixup method in the supervised guidance network. However, we could not reproduce the exact results reported for the Neural Statistician, which has also been noted in other work. For comparison, we mainly consider Matching Nets and Bruno. While it cannot outperform Matching Nets, our proposed Guided-VAE reaches performance equivalent to Bruno (discriminative), where a discriminative objective is fine-tuned to maximize the likelihood of correct labels.
We conduct a series of ablation experiments to validate our proposed Guided-VAE model. In this part, we conduct an experiment by excluding the geometry-guided part from the unsupervised Guided-VAE. In this way, the nudging decoder is just a PCA-like decoder but not a deformable PCA. The setting of this experiment is exactly same as described in the unsupervised Guided-VAE section. The bottleneck size of our model is set to 10 of which the first two latent variables z 1, z 2 represent the rotation and scaling information separately. In the ablation part, we drop off the geometry-guided part so all 10 latent variables are controlled by the PCA-like light decoder. In this part, we conduct an experiment of using the adversarial excitation method. We design the experiment using the exact same setting described in the supervised Guided-VAE part. As Figure 7 shows, though the traversal still show the traversed on some latent variables. The from the adversarial excitation method outperforms the from the discriminative method. While traversing the latent variable controlling the smiling information, the left part (a) also changes in the smiling status but it's controlled by another latent variable. shows the traversed images from the supervised Guided-VAE without adversarial inhibition. The right part shows the traversed images from the supervised Guided-VAE using adversarial excitation and inhibition. Both images are traversed on the latent variable that is supposed to control the gender information. In this paper we have presented a new representation learning method, guided variational autoencoder (Guided-VAE), for disentanglement learning. Both versions of Guided-VAE utilize lightweight guidance to the latent variables to achieve better controllability and transparency. Improvements on disentanglement, image traversal, and meta-learning over the competing methods are observed. Guided-VAE maintains the backbone of VAE and can be applied to other generative modeling applications. A.1 PERCENTAGE OF DATA PARTICIPATING IN THE GUIDED SUB-NETWORK In this part, we design an experiment to show how the percentage of data participating in the guided sub-network can influence the final prediction. We conduct this ablation study on MNIST using unsupervised Guided-VAE. We change the percentage of data participating in the guided sub-network and then present the classification accuracy using the first half latent variables (represent geometry information) and the second half latent variables (represent content information) separately. From Figure 8, we observe consistent improvement for the last half latent variables when adding more samples to guide sub-network. This indicates adding more samples can improve disentanglement, which causes that more content information is represented in the second half latent variables. Similarity, the improvement of disentanglement leads the first half latent variables can represent more geometry information, which is indiscriminative for classes. We also observe accuracy improvement when large amount of samples are used to train sub-network. We hypothesize this is because geometry information is still partially affected by classes.
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SygaYANFPr
Learning a controllable generative model by performing latent representation disentanglement learning.
Neural language models (NLMs) are generative, and they model the distribution of grammatical sentences. Trained on huge corpus, NLMs are pushing the limit of modeling accuracy. Besides, they have also been applied to supervised learning tasks that decode text, e.g., automatic speech recognition (ASR). By re-scoring the n-best list, NLM can select grammatically more correct candidate among the list, and significantly reduce word/char error rate. However, the generative nature of NLM may not guarantee a discrimination between “good” and “bad” (in a task-specific sense) sentences, ing in suboptimal performance. This work proposes an approach to adapt a generative NLM to a discriminative one. Different from the commonly used maximum likelihood objective, the proposed method aims at enlarging the margin between the “good” and “bad” sentences. It is trained end-to-end and can be widely applied to tasks that involve the re-scoring of the decoded text. Significant gains are observed in both ASR and statistical machine translation (SMT) tasks. Language models (LMs) estimate the likelihood of a symbol sequence {s i} n i=0, based on the joint probability, p(s 0, . . ., s n) = p(s 0) n i=1 p(s i |s i−1, s i−2, . . ., s 0). works, BID7 and BID25, propose to predict the next symbol based on a fusion of the hidden states in the ASR/SMT and language model. A gating mechanism is jointly trained to determine how much the language model should contribute. The afore-discussed language models are generative in the sense that they merely model the joint distribution of a symbol sequence (Eq. ). While the research community is mostly focused on pushing the limit of modeling accuracy (lower PPL) (e.g., BID12, very limited attention has been paid to the discrimination ability of language models when they are applied to supervised learning tasks, such as ASR and SMT. Discriminative language modeling aims at enhancing the performance in supervised learning tasks. In specific, existing works BID23 BID10 BID21 often target at improving ASR accuracy. The key motivation underlying them is that the model should be able to discriminate between "good" and "bad" sentences in a task-specific sense, instead of just modeling grammatical ones. The common methodology is to build a binary classifier upon hand-crafted features extracted from the sentences. However, it is not obvious how these methods can utilize large unsupervised corpus, which is often easily available, and the hand-crafted features are also ad hoc and may in suboptimal performance. In this work, we study how to improve the discrimination ability of a neural language model. The proposed method enlarges the difference between the log-likelihoods of "good" and "bad" sentences. In contrast to the existing works BID23 BID10 BID21, our method does not rely on hand-crafted features. It is trained in end-to-end manner and able to take advantage of large external text corpus. We apply the proposed large margin language model to ASR and SMT tasks. It reduces word error rate (WER) and increases bilingual evaluation understudy (BLEU) scores significantly, showing notable advantage over several alternative methods that are well adopted.2 RELATED WORK BID23 BID10 and BID21 proposed to train discriminative language models based on hand crafted features. They essentially build linear classifiers that give high scores on "good" sentences but low scores on "bad" ones. 
These methods all rely on ad hoc choice of features, e.g., counts of n-grams where n varies in a small range (e.g., 1 ∼ 3). Moreover, it is also not clear how these methods would take advantage of an existing language model (trained on large unsupervised corpus). BID28 tries to overcome the above issues by adapting an NLM on the transcriptions of a speech dataset. Although the setup is more similar to ours, their objective is not well-behaved and difficult to optimize when there are multiple beam candidates. An in-depth discussion will be given in Section 3.1. BID15 designed another approach to train a discriminative language model, which is based on bi-grams. Similar to our method, the objective there aims at increasing the difference between the scores of the best candidate and ground-truth. However, since the language model is not end-to-end, there are several issues complicating the training, e.g., handling back-off weight. Our proposed method is based on comparisons between pairs of sentences. Its implementation resembles siamese network architecture BID3 BID29, first proposed for face verification tasks. Recently, siamese network has also been applied to learning similarities on sequences BID19 BID20. In spite of solving different problems, the common methodology is to extract a pair of hidden representations for a pair of input samples (through a shared network). It then manipulates the distance between the hidden representations based on whether the two samples are considered similar or not. Our work also draws some inspirations from information retrieval (IR) BID16. As a representative IR method, ranking SVM BID8 assumes a linear scoring function, and imposes a hinge loss on the difference between the scores of sample pairs. We explicitly define LM score as the log-likelihood of a sentence estimated by an NLM. Existing works on NLM often train to maximize the score on a corpus that are assumed to be grammatical. However, they do not utilize any negative examples, where the "negative" means incompetence for a specific task. For example, in ASR BID0, negative samples may have spelling or grammar errors. In a conversational model BID30, negative samples are noninformative replies like "I don't know". An NLM trained in the maximum-likelihood fashion is not aware of the specific task. It therefore cannot guarantee a larger score for a positive sample than a negative one. This fact may handicap applications that need LMs to distinguish between positive and negative candidates. Examples include automatic speech recognition (ASR) and statistical machine translation (SMT). We aim at enhancing the LM's discrimination in these applications. Interestingly, negative samples are easy to obtain in the aforementioned applications. A beam search decoder can often yield abundant negative sentences (suboptimal beam candidates) that differ from ground-truth text (considered as positive). We motivate our method with an ASR example. A CTCbased BID6 ASR model is trained on Wall Street Journal (WSJ) dataset. We then input an audio utterance whose ground-truth transcription is given in the first row of TAB0. Four extracted beam candidates are listed in the following rows, from the best to the worst. Except beam 0, beam 1 to 3 all make some mistakes compared with the ground-truth. In this case, a language model is supposed to give a high score for beam 0 whereas low scores for beam 1 through 3. However, we observe that the scores of beam 2 and 3 are not sufficiently smaller than beam 0. 
We denote the i-th ground-truth sentence as x i where i = 1,..., N; correspondingly, the j-th beam candidate as x i,j, where j = 1,..., B and B is the beam size. Without loss of generality, we assume that these B candidates all differ from the ground-truth by some mistakes/incompetences. The NLM is desired to assign big log-likelihoods for the x i's but small ones for the x i,j's. A straightforward way is to adopt the following objective: DISPLAYFORM0 Similar formulation is also seen in BID28, where they only utilize one beam candidate, i.e., B = 1. The idea is to maximize the likelihood on the positive samples, and at the same time, minimize the likelihood on the negative samples. Optimization can be carried out by mini-batch stochastic gradient descent (SGD). Each iteration, SGD randomly samples a batch of i's and j's, computes stochastic gradient w.r.t. θ, and takes an update step. However, a potential problem with this formulation is that the second term (corresponding to the negative samples) may dominate the optimization. Specifically, the training is almost always driven by the negative x i,j's, but does not effectively enhance the discrimination. We illustrate this fact in the following experiment. Using the aforementioned ASR system, we extract 256 beam candidates for every training sample in the WSJ dataset. As a baseline for beam rescoring, a conventional NLM is trained on a large corpus, i.e. common-crawl 1. From the pre-trained baseline NLM, we warm start the training and apply SGD to optimize the objective in Eq., with a mini-batch size of 128. The training loss is shown in Figure 1a. We observe that the learning dynamic is very unstable. Using the trained model, we want the ground-truth sentences to have larger scores than the beam candidates. Therefore, we inspect log p θ (x i) − log p θ (x i,j), the margin between the scores of a ground-truth and a candidate. In FIG0, we histogram the margins for all the i, j's in a dev set. The distribution appears to be symmetric around zero, which indicates poor discrimination ability. Given these facts, we conclude that the straightforward formulation in Eq. FORMULA0 is not effective. To effectively utilize all the negative beam candidates, we propose the following objective, DISPLAYFORM0 where log p θ (x i) − log θ (x i,j) is the margin between the scores of a ground-truth x i and a negative candidate x i,j. The hinge loss on the margin encourages the log-likelihood of the ground-truth to be at least τ larger than that of the "bad" candidate. We call the above formulation Large Margin Language Model (LMLM).We repeat the same experiment in section 3.1, but change the objective function to Eq.. We fix τ = 1 across the paper. Figure 1b shows the training loss, which steadily decreases and approaches zero rapidly. Compared with the learning curve of naive formulation (figure 1a), the large margin based training is much more stable. In FIG0, we also examine the histogram of log p θ (x i) − log p θ (x i,j), where p θ (·) is the language model learned by LMLM. Compared with the histogram by the baseline NLM, LMLM significantly moves the distribution to the positive side, indicating more discrimination. In most cases, all beam candidates are imperfect. It would be beneficial to exploit the information that some candidates are relatively better than the others. We consider ranking them according to some metrics w.r.t. the ground-truth sentences. For ASR, the metric is edit distance, and for SMT, the metric is BLEU score. 
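Before turning to the ranking variant below, a minimal sketch of the LMLM hinge objective in Eq. (2): for each ground-truth/candidate pair, a hinge is applied to the score margin. `lm_log_prob` is a placeholder for any function returning the NLM log-likelihood of a sentence as a differentiable tensor; τ = 1 as in the paper.

```python
import torch

def lmlm_loss(lm_log_prob, ground_truths, beam_candidates, tau=1.0):
    """Large-margin LM loss.

    lm_log_prob     : callable mapping a sentence to its log-likelihood under the NLM
                      (stand-in for log p_theta; must return a tensor that tracks gradients)
    ground_truths   : list of N reference sentences x_i
    beam_candidates : list of N lists, each with B negative candidates x_ij
    """
    losses = []
    for x_i, candidates in zip(ground_truths, beam_candidates):
        score_pos = lm_log_prob(x_i)
        for x_ij in candidates:
            margin = score_pos - lm_log_prob(x_ij)
            # hinge: penalize whenever the ground truth does not beat the candidate by tau
            losses.append(torch.clamp(tau - margin, min=0.0))
    return torch.stack(losses).mean()
```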
We define x_{i,0} ≜ x_i and assume that the candidates {x_{i,j}}_{j=1}^{B} in the beam are sorted such that the edit distance to x_i is non-decreasing in j for ASR, and the BLEU score w.r.t. x_i is non-increasing in j for SMT. In other words, x_{i,j−1} has better quality than x_{i,j}. We then enforce the "better" sentences to have a score at least τ larger than the "worse" ones, by applying the same hinge loss on the score margin to ordered pairs of candidates. Compared with the LMLM formulation, this introduces more comparisons among the candidates, and hence more computational cost during training. We call this formulation rank-LMLM. In this section, we study LMLM and rank-LMLM through extensive experiments on ASR and SMT. We demonstrate that both LMLM and rank-LMLM significantly outperform a baseline language model (baseline-LM) and two other domain-adapted models. The baseline-LM architecture, shown in FIG1, starts with a 2048-dimensional embedding layer, followed by two LSTM layers, each with 2048 nodes. The LSTM hidden states are then projected down to dimension 512. Finally, a softmax layer with a 400K-dimensional output is appended to produce a distribution over the vocabulary. The huge vocabulary size incurs a large computational cost, and we use the sampled softmax technique BID11 to accelerate training. This type of neural architecture has been shown effective in conventional language modeling tasks BID12. We trained the baseline-LM on the common-crawl corpus, which has a vocabulary size of about 400K. This large vocabulary ensures a small out-of-vocabulary (OOV) rate for the ASR and SMT datasets used, details of which are summarized in TAB1. The baseline-LM achieves a reasonably good perplexity of 110 on a dev set with 400K sentences, significantly outperforming a 5-gram model, which has a dev perplexity of about 300. The experimental setup is as follows. First we train an ASR/SMT model on the training set. Then we extract B beam candidates for every training sample. This beam set, together with the corresponding ground-truth text, is used as the training data for LMLM and rank-LMLM. We then re-score the beams by linearly combining ASR/SMT and language model scores. The combination weight is found by optimizing the WER/BLEU on the dev set. Finally, WER/BLEU on the test set are reported. For comparison, we also include two other approaches that adapt the baseline-LM to the text in the specific task. One way is to fine-tune the baseline NLM using the task-specific text but still under the minimum-perplexity objective BID13; we call it refine-LM. The other way is to train a smaller NLM from scratch on the task-specific data, and then linearly interpolate with the baseline-LM BID10; we call it interp-LM. In all the experiments with LMLM and rank-LMLM, we set τ = 1. For rank-LMLM, since the total number of pairs is huge, we randomly sample 20% of them. An advantage of LMLM (and rank-LMLM) is being able to utilize huge amounts of unsupervised data. This is achieved by warm-starting the LMLM with a conventional language model trained on any unsupervised corpus. However, one might wonder whether warm-starting is necessary, since LMLM might easily learn to make binary decisions on pairs of "good" and "bad" sentences. Interestingly, we show that warm-starting is very important. Without warm-starting, the LMLM training gets stuck in "bad" local minima and cannot generalize well. Earlier works in various applications BID9 BID5 have observed similar behavior and suggested using an unsupervised model to warm-start supervised training.
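Returning to the rank-LMLM formulation defined at the start of this section, the sketch below extends the same hinge to ordered pairs within a sorted beam and subsamples a fraction of the pairs, as described above. It assumes the candidates are already sorted from best to worst with the ground truth at index 0; the subsampling scheme is illustrative.

```python
import random
import torch

def rank_lmlm_loss(lm_log_prob, sorted_candidates, tau=1.0, pair_fraction=0.2, seed=0):
    """Pairwise large-margin loss over a beam sorted by quality.

    sorted_candidates: [x_{i,0}, x_{i,1}, ..., x_{i,B}] with x_{i,0} the ground truth and
                       x_{i,j-1} of better quality than x_{i,j}.
    lm_log_prob      : callable returning a differentiable log-likelihood tensor per sentence.
    """
    rng = random.Random(seed)
    n = len(sorted_candidates)
    pairs = [(j, k) for j in range(n) for k in range(j + 1, n)]
    pairs = rng.sample(pairs, max(1, int(pair_fraction * len(pairs))))
    losses = []
    for j, k in pairs:
        # the better sentence (index j) should score at least tau higher than the worse one
        margin = lm_log_prob(sorted_candidates[j]) - lm_log_prob(sorted_candidates[k])
        losses.append(torch.clamp(tau - margin, min=0.0))
    return torch.stack(losses).mean()
```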
The observation on LMLM is yet another supportive evidence, elaborated in the following experiment. Using the same ASR system in Section 3.1, we extract 64 beam candidates for every training utterance in the WSJ dataset. We then train LMLM either with or without warm-starting from the baseline-LM. FIG2 compares the learning curves for both cases. With warm starting, training and dev losses start to decay from smaller values, approaching zero at convergence. The stronger generalization ability is further illustrated in FIG2. FIG2 shows the histogram of the margins between the scores for the ground-truths and beam candidates in the dev set. The more positive the margin is, the more the separation between positive and negative examples. In the warm-started case, the distribution is shifted rightwards, indicating more discrimination. We revisit the example in TAB0, in order to understand why LMLM and rank-LMLM can work. We estimate the language model scores using LMLM and rank-LMLM. Scores are listed in TAB2, in comparison with those by the baseline-LM, which we have seen in TAB0. Numbers in the brackets are the margins. A large positive margin indicates effective identification of the erroneous sentence. Overall, with LMLM and rank-LMLM, the margins become significantly more positive. More interestingly, rank-LMLM is able to assign larger score for beam 2 than beam 3, showing more selectivity than LMLM.We also notice that all the scores by LMLM and rank-LMLM are smaller than those by the baseline-LM, since the proposed methods are not guided by the conventional max-likelihood objective. Compared with LMLM, rank-LMLM scores are even smaller. This is due to more pairwise constraints imposed in training, which makes rank-LMLM deviate even more from the max-likelihood objective. However, we argue that the max-likelihood training is not well aligned with beam re-scoring purpose, which we shall see in the reported WERs soon. In this section, we apply the proposed methods for ASR tasks, and report WERs and CERs on test sets. The datasets we used are WSJ and Fisher, whose statistics are summarized in the 3rd and 4th columns of TAB1. Note that for Fisher task, we follow the standard setup BID22, which evaluates on the hub5 set, including two subsets (SWBD and CallHome). In total, there are 4,458 utterances for test. The ASR models for WSJ has one convolutional layer, followed by 5 layers of RNNs. The ASR model for Fisher has two convolutional layers, followed by 6 layers of GRUs. The final hidden representations are input into fully connected layers and then the models are trained using CTC loss BID6. Note that the ASR models are trained using only in-domain data. That is, the training utterances of WSJ or Fisher respectively. Using a language model during decoding may significantly improve the performance. Therefore, during decoding, we also applied an n-gram model learned from the in-domain training text. The beam search decoder has a beam width of 2000. The top-1 beam candidates give us strong baseline WERs, e.g., for WSJ task, the test WER is 7.58. The extracted beams candidates are then re-scored using the baseline-LM, refine-LM, interp-LM, LMLM and rank-LMLM. Training of these language models follow the experimental protocol in section 4.1. We report the WERs of the rescored top-1 candidates on the test set. The same exper-iment is repeated on WSJ and Fisher tasks. Results are listed in TAB3 respectively. Bold numbers are the best and italics are runner-ups. 
Among all the language models for re-scoring, rank-LMLM achieves the best WER and CER, and LMLM is also very competitive with rank-LMLM. They both significantly outperform the other methods, all of which are generative language models. Based on these observations, we argue that a language model has to be adapted in a way that is suitable for re-scoring purpose in supervised tasks, rather than just maximizing likelihood. LMLM and rank-LMLM are general in the sense that they can be applied to any re-scoring problem on text data. In this section, we show a proof-of-concept example on an SMT task. We experiment with IWSLT15 Vietnamese-to-English dataset 2. We use "tst2012" as dev set and "tst2013" as test set. The dataset is simplified by decapitalizing all the words and removing punctuations. This in a vocabulary size of about 44K, and only a tiny fraction of them are out of the vocabulary of common crawl. Details of the cleaned dataset are listed in the last column of TAB1.We train an SMT model based on the attention mechanism in BID17. The encoder and decoder both have two layers of LSTMs, each with 128 activations. We follow the experimental protocol outlined in Section 4.1. TAB5 reports the BLEU scores on the test set. LMLM and rank-LMLM both significantly outperform the other methods. In contrast, all the generative language models (3rd to 5th column in TAB5) do not improve upon the case without re-scoring. In fact, for these three cases, we found that the best weight to combine the language model is zero, meaning that the language model is not providing any complementary information to the SMT model. A key point to understand this phenomenon is that the decoder in a seq-to-seq model implicitly works as a conventional language model. This in very grammatical beam candidates, but their qualities differ in some semantic sense. A generative language model acts more or less like a grammar checker, and has no power of discrimination in the semantic field. Conventional language models are guided by minimizing perplexity, and they are generative models. This work proposes an approach to enhance the discrimination ability of language models. It is trained end-to-end by maximizing the margin between "good" and "bad" (in a task-specific sense) sentences. The method is general and can be applied to various tasks that require re-scoring of text data. Experiments on ASR and SMT have shown a consistent gain over several baselines. These facts argue that min-perplexity is not necessarily an appropriate guideline when we want to apply language models in some supervised learning problems. A future direction is to apply the proposed method to conversation generation. The goal is to discriminate between boring (e.g., "I don't know") and informative replies, thus deprecating the former. Another interesting future work is to apply the LMLM/rank-LMLM to lattices during decoding.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
H1g-gk5EuQ
Enhance the language model for supervised learning tasks
Conventionally, convolutional neural networks (CNNs) process different images with the same set of filters. However, the variations in images pose a challenge to this fashion. In this paper, we propose to generate sample-specific filters for convolutional layers in the forward pass. Since the filters are generated on-the-fly, the model becomes more flexible and can better fit the training data compared to traditional CNNs. In order to obtain sample-specific features, we extract the intermediate feature maps from an autoencoder. As filters are usually high dimensional, we propose to learn a set of coefficients instead of a set of filters. These coefficients are used to linearly combine the base filters from a filter repository to generate the final filters for a CNN. The proposed method is evaluated on MNIST, MTFL and CIFAR10 datasets. Experiment demonstrate that the classification accuracy of the baseline model can be improved by using the proposed filter generation method. Variations exist widely in images. For example, in face images, faces present with different head poses and different illuminations which are challenges to most face recognition models. In the conventional training process of CNNs, filters are optimized to deal with different variations. The number of filters increases if more variations are added to the input data. However, for a test image, only a small number of the neurons in the network are activated which indicates inefficient computation BID13 ).Unlike CNNs with fixed filters, CNNs with dynamically generated sample-specific filters are more flexible since each input image is associated with a unique set of filters. Therefore, it provides possibility for the model to deal with variations without increasing model size. However, there are two challenges for training CNNs with dynamic filter generation. The first challenge is how to learn sample-specific features for filter generation. Intuitively, filter sets should correspond to variations in images. If the factors of variations are restricted to some known factors such as face pose or illumination, we can use the prior knowledge to train a network to represent the variation as a feature vector. The main difficulty is that besides the factors of variations that we have already known, there are also a number of them that we are not aware of. Therefore, it is difficult to enumerate all the factors of variations and learn the mapping in a supervised manner. The second challenge is that how to map a feature vector to a set of new filters. Due to the high dimension of the filters, a direct mapping needs a large number of parameters which can be infeasible in real applications. In response, we propose to use an autoencoder for variation representation leaning. Since the objective of an autoencoder is to reconstruct the input images from internal feature representations, each layer of the encoder contains sufficient information about the input image. Therefore, we extract features from each layer in the encoder as sample-specific features. For the generation of filters, given a sample-specific feature vector, we firstly construct a filter repository. Then we learn a matrix that maps the feature vector to a set of coefficients which will be used to linearly combine the base filters in the repository to generate new filters. Our model has several elements of interest. 
Firstly, our model bridges the gap between the autoencoder network and the prediction network by mapping the autoencoder features to the filters in the prediction network. Therefore, we embed the knowledge from unsupervised learning to supervised learning. Secondly, instead of generating new filters directly from a feature vector, we facilitate the generation with a filter repository which stores a small number of base filters. Thirdly, we use linear combination of the base filters in the repository to generate new filters. It can be easily implemented as a convolution operation so that the whole pipeline is differentiable with respect to the model parameters. The essential part of the proposed method is the dynamical change of the parameters of a CNN. In general, there are two ways to achieve the goal including dynamically changing the connection and dynamically generating the weights, both of which are related to our work. In this section, we will give a brief review of the works from these two aspects. There are several works in which only a subset of the connections in a CNN are activated in a forward pass. We term this kind of strategy dynamic connection. Since the activation of connections depends on input images, researchers try to find an efficient way to select subsets of the connections. The benefit of using dynamical connection is the reduction in computation cost. BID13 propose a conditional convlutional neural network to handle multimodal face recognition. They incorporate decision trees to dynamically select the connections so that images from different modalities activate different routes. BID6 present deep neural decision forests that unify classification trees with representation learning functionality. Each node of the tree performs routing decisions via a decision function. For each route, the input images are passed through a specific set of convolutional layers. BID5 and BID0 also propose similar frameworks for combining decision forests and deep CNNs. Those hybrid models fuse the high representation learning capability of CNNs and the computation efficiency of decision trees. We refer to weights that are dynamically generated as dynamic weights. Furthermore, since the weights are the parameters of a CNN, learning to generate those weights can also be viewed as a meta learning approach. BID1 propose to use dynamic weights in the scenario of one-shot learning. They construct a learnet to generate the weights of another deep model from a single exemplar. A number of factorizations of the parameters are proposed to reduce the learning difficulty. BID4 present hypernetworks which can also generate weights for another network, especially a deep convolutional network or a long recurrent network. The hypernetworks can generate non-shared weights for LSTM and improve its capability of sequence modelling. There are several other similar architectures BID11 BID3 ).Results from those works demonstrate that dynamical weights help learn feature representation more effectively. The work that most resembles ours is the work of De BID3. However, our work is different in the following aspects. (i) The feature vectors we used for filter generation are extracted from the feature maps of an autoencoder network. (ii) New filters are generated by the linear combination of base filters in a filter repository. The rest of the paper is structured as follows. Section 3 presents the details of the proposed method. Section 4 shows the experiment and Section 5 concludes the paper. 
The framework of the proposed method is illustrated in FIG0. The description of our model is divided into three parts, i.e. sample-specific feature learning, filter generation, and final prediction. FIG0 (caption): The framework of the proposed method. The autoencoder network in the first row is used to extract features from the input image. The obtained feature maps are fed to a dimension reduction module to reduce the dimension of the feature maps. Then the reduced features are used to generate new filters in the filter generation module. Finally, the prediction network takes in the same input image and the generated filters to make the final prediction for high-level tasks such as detection, classification and so on. "*" indicates the convolution operation. It is difficult to quantify variations in an image sample. Thus, we adopt an autoencoder to learn sample-specific features. Typically, an autoencoder consists of an encoder and a decoder. The encoder extracts features from the input data layer by layer, while the decoder plays the role of image reconstruction. Therefore, we use the features from each layer of the encoder as representations of the input image. Since the feature maps from the encoder are three-dimensional, we use dimension reduction modules to reduce the dimension of the feature maps. Each dimension reduction module consists of several convolutional layers with stride larger than 1 that reduce the spatial size of the feature maps to 1 × 1. After dimension reduction, we obtain the sample-specific features at different levels. The loss function for the autoencoder network is the binary cross entropy loss L_rec = −(1/N_pix) Σ_i [t_i log o_i + (1 − t_i) log(1 − o_i)], where N_pix is the number of pixels in the image, o_i is the value of the ith element in the reconstructed image, and t_i is the value of the ith element in the input image. Both the input image and the output image are normalized to [0, 1]. The filter generation process is shown in FIG1. The input to the filter generation module is the sample-specific feature vector and the output is a set of generated filters. If we ignore the bias term, a filter can be flattened into a vector. Given an input feature vector, the naive way to generate filters is to use a fully connected layer to directly map the input vector to the filters. However, this is infeasible when the number of filters is large. Let the length of each filter be L_k and the length of the feature vector be L_f. If we need to generate N filter vectors from the feature vector, the direct mapping requires N × L_f × L_k parameters. In order to tackle this problem, we refactor each filter vector k_i as a linear combination k_i = Σ_{j=1..M} w_j b_j (Equation 2), where w_j is the coefficient of the base filter b_j, which comes from a filter repository, and M is the number of filters in the filter repository. Equation 2 assumes that each filter vector can be generated by a set of base filters. The assumption holds true if M = L_k and those base filters are orthogonal. However, in real applications of CNNs, each convolutional layer has a limited number of filters, which indicates that, compared to the large dimension of the filter vector space, only a small subspace is used in the final trained model. Based on this observation, we set M << L_k in this work. The total number of parameters in the transformation matrix is N × L_f × M, which is much smaller than the original size. The filters in the repositories are orthogonally initialized and optimized during the training process. The prediction network is the network for the high-level task, such as image classification, recognition, detection and so on.
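To make the filter generation step concrete, the following is a minimal PyTorch sketch under the assumptions above; the class and variable names (FilterGenerator, to_coeffs, and the example shapes) are illustrative choices, not the authors' code. It maps a sample-specific feature vector of length L_f to N × M coefficients and mixes M orthogonally initialized base filters into N convolutional filters.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FilterGenerator(nn.Module):
    """Illustrative sketch: coefficients derived from a sample-specific feature
    vector linearly combine M base filters from a repository into N conv filters."""
    def __init__(self, feat_dim, num_filters, repo_size, in_channels, ksize):
        super().__init__()
        self.num_filters, self.repo_size = num_filters, repo_size
        self.in_channels, self.ksize = in_channels, ksize
        # filter repository: M base filters of length L_k = in_channels * ksize^2
        self.repository = nn.Parameter(torch.empty(repo_size, in_channels * ksize * ksize))
        nn.init.orthogonal_(self.repository)
        # the N x L_f x M mapping, realized as one linear layer producing N*M coefficients
        self.to_coeffs = nn.Linear(feat_dim, num_filters * repo_size)

    def forward(self, feat):                       # feat: (L_f,) for one sample
        w = self.to_coeffs(feat).view(self.num_filters, self.repo_size)
        filters = w @ self.repository              # (N, L_k): linear combination of base filters
        return filters.view(self.num_filters, self.in_channels, self.ksize, self.ksize)

# usage for a single sample (hypothetical shapes): generate filters, then convolve
gen = FilterGenerator(feat_dim=64, num_filters=5, repo_size=5, in_channels=1, ksize=3)
feat_vec = torch.randn(64)                         # sample-specific feature vector
img = torch.randn(1, 1, 28, 28)                    # e.g. an MNIST image
out = F.conv2d(img, gen(feat_vec), padding=1)      # sample-specific convolution
```

In this sketch the N × L_f × M mapping is a single linear layer, matching the parameter count discussed above.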
The filters used in the prediction network are provided by the filter generation module, while the weights of the classifier in the prediction network are learned during the training process. Loss functions for high-level tasks are task-dependent. In this work, we use the classification task for demonstration, and the loss is the negative log likelihood loss L_cls = −log p_t, where t is the image label and p_t is the softmax probability of the tth label. Therefore, the entire loss function for training our model is L = L_rec + L_cls. The proposed method aims at generating dynamic filters to deal with variations and improve the performance of a baseline network. In the following experiments, we evaluate our method on three tasks, i.e. digit classification on the MNIST dataset (Section 4.1), facial landmark detection on the MTFL dataset (Section 4.2) and image classification on the CIFAR10 dataset (Section 4.3). The number of base filters in each filter repository is the same as the number of filters in each layer of the prediction network if not specified. We also present further analysis on the generated filters in Section 4.4. Details of all network structures are given in Appendix A.1. To begin our evaluation, we first set up a simple experiment on digit classification using the MNIST dataset (BID8). We show the accuracy improvement brought by our dynamic filters by comparing the performance of a baseline network with and without them. We also analyze how the size of the encoder network and the size of the filter repository (the number of filters in the repository) affect the accuracy of digit classification. The baseline model used in this experiment is a small network with two convolutional layers followed by a fully connected layer that outputs ten dimensions. For simplicity, we only use five filters in each convolutional layer. Details of the network structures are shown in Appendix A.1.1. To evaluate the effect of the size of the encoder network, we compare the classification accuracy obtained when the encoder network has different numbers of filters in each layer. Let n_enc be the number of filters in each layer of the encoder network. We choose n_enc from {5, 10, 20}. We also choose different repository sizes s from {2, 5, 10}. In the evaluation of the effect of s, we fix n_enc = 20, and we fix s = 5 to evaluate the effect of n_enc. We train this baseline model with and without filter generation for 20 epochs respectively. We show the classification accuracy on the test set in TAB0. The first row shows the test accuracy after training the network for only one epoch and the second row shows the final test accuracy. From both tables, we find that the final test accuracy of the baseline model using our dynamically generated filters is higher than that using fixed filters. The highest accuracy obtained with our generated filters is 99.1%, while the accuracy of the fixed filters is 98.25%. Interestingly, the test accuracies after the first epoch (first row in TAB0) show that our dynamically generated filters help the network fit the data better than the original baseline model. This could be attributed to the flexibility of the generated filters. Though there are only a small number of base filters in the repository, linear combinations of those base filters can provide filters that efficiently extract discriminative features from the input images. In TAB0, when s = 5, the classification accuracy increases as the encoder network has more filters.
It is straightforward because, with more filters, the encoder network can better capture the variations in the input image and can therefore provide more information for the generation of filters. Based on the observation from TAB1, it seems that the final classification accuracy is less dependent on the repository size given n_enc = 20. In this section, we apply our filter generation to the task of facial landmark detection. To give a more straightforward understanding of the usefulness of our filter generation, we first investigate the performance difference of a baseline model before and after some explicit variations are added to the dataset. Then we show the detection performance improvement with respect to the size of the detection network. Dataset. The MTFL dataset (BID15) contains 10,000 face images with ground truth landmark locations. In order to compare the performance difference of baseline models with respect to variations, we construct two datasets from the original MTFL dataset. Rotation variation is used here since it can easily be introduced by manually rotating the face images. Dataset D-Align. We follow BID14 to align all face images and crop the face region to the size of 64 × 64. Dataset D-Rot. This dataset is constructed based on D-Align: we randomly rotate all face images within [−45°, 45°]. Some image samples for both datasets are shown in Appendix A.2, Figure 6. We split both datasets into a training set containing 9,000 images and a test set containing 1,000 images. Note that the train-test splits in D-Align and D-Rot are identical. Models. Here we train two baseline models based on UNet (BID10): Model 32 with 32 filters in each convolutional layer and Model 64 with 64 filters in each convolutional layer. Model 32 and Model 64 share the same architecture. Details of the network structures are shown in Appendix A.1.2. We first trained Model 32 and Model 64 on D-Align and D-Rot without our filter generation module. Then we train them on D-Rot with our filter generation module. For evaluation, we use two metrics. One is the mean error, defined as the average landmark distance to the ground truth, normalized as a percentage with respect to the interocular distance (Burgos-Artizzu et al.). The other is the maximal normalized landmark distance. Since there are more rotation variations in D-Rot than in D-Align, the landmark detection task on D-Rot can be considered more challenging than that on D-Align. This is also evidenced by the increase in detection error when the dataset is switched from D-Align to D-Rot, as shown in FIG2 and TAB2. However, when we train the same baseline model on D-Rot with our generated filters, the detection error decreases compared to the same model trained on D-Rot without them. There is also a large drop in the maximal detection error. These results indicate that using filters conditioned on the input image can reduce the effect of variations in the dataset. Comparing the averaged mean errors in FIG2, we find that the performance gain brought by filter generation is larger on Model 32 than on Model 64. This can be explained by the capacity of the baseline models: the capacity of Model 64 is larger than that of Model 32, so Model 64 can handle more variations than Model 32, and the performance gain on Model 64 is smaller. The CIFAR10 dataset consists of natural images with more variations.
We evaluate models on this dataset to show the effectiveness of our dynamic filter generation in this challenging scenario. We construct a small baseline network with only four convolutional layers followed by a fully connected layer. We train this model on CIFAR10 first without filter generation and then with filter generation. We also train a VGG11 model (BID12) on this dataset. The results are shown in FIG3. From the training accuracy curves, we observe that the baseline model trained without filter generation does not fit the data as well as the other models. This is because there are only five layers in the network, which limits the network's capacity. When the baseline model is trained with filter generation, the model can fit the data well, reaching more than 98% training accuracy. VGG11 also achieves high training accuracy, which is not surprising since there are more layers (eleven) in the model. The test accuracy curves also show the benefit of adopting our dynamic filter generation. The baseline classification accuracy is improved by ∼1% by using filter generation, and the test accuracy is comparable to VGG11. Based on the above evaluations on different datasets, we claim that dynamically generated filters can help improve the performance of the baseline models. Using linear combinations of base filters from filter repositories can generate effective filters for high-level tasks. In this section, we visualize the distributions of the coefficients, the generated filters and the feature maps using the MNIST dataset. Then we conduct another experiment on the CIFAR10 dataset to demonstrate that the generated filters are sample-specific. Figure 5 (caption): Visualization of the distributions of the generated coefficients, filters, and feature maps from the first (top row) and the second (bottom row) convolutional layer. The model used for visualization is the baseline model trained in the MNIST experiment with n_enc = 20 and s = 5. t-SNE (BID9) is applied to project the high-dimensional features into a two-dimensional space. The visualizations are shown in Figure 5. In the first row, we show the distributions of the coefficients, the filters, and the feature maps from the first convolutional layer of the model. We observe that the generated filters are shared by certain categories but not all categories. It is clear in Figure 5a that the coefficients generated by some digits are far away from those generated by other digits. Nevertheless, the feature maps from the first convolutional layer show some separability. In the second row, the generated coefficients and the generated filters form clusters, which means that digits from different categories activate different filters. This behavior makes the final feature maps more separable. We further analyze the generated filters to show that those filters are sample-specific, taking the CIFAR10 dataset as an example. The model used here is the same trained model used in the CIFAR10 experiment (Section 4.3). In this experiment, we feed a test image A to the classification network and a different image B to the filter generation module. In other words, the filters used in the classification network are generated not from A but from B. In this setting the classification accuracy falls to 15.24%, which is close to random guessing. This accuracy drop demonstrates that the generated filters are sample-specific: filters generated from one image do not work on another image.
In this paper, we propose to learn to generate filters for convolutional neural networks. The filter generation module transforms features from an autoencoder network into sets of coefficients which are then used to linearly combine base filters in filter repositories. Dynamic filters increase model capacity, so that a small model with dynamic filters can be competitive with a deep model. Evaluations on three tasks show the accuracy improvement brought by our filter generation. In this section, we show the details of the network structures used in our experiments. When we extract sample-specific features, we directly take the convolution feature maps (before the LReLU layer) from the autoencoder network as input and feed them to the dimension reduction network. The entire process of sample-specific feature extraction is split into the autoencoder network and the dimension reduction network for the purpose of plain and straightforward illustration. The networks used in the MNIST experiment are shown from TAB3 to TAB6. The networks used in the MTFL experiment are shown from TAB0. The networks used in the CIFAR10 experiment are shown from TAB0.
A.2 IMAGE SAMPLES FROM DATASET D-Align AND DATASET D-Rot
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rJa90ceAb
dynamically generate filters conditioned on the input image for CNNs in each forward pass
We propose a new anytime neural network which allows partial evaluation by subnetworks with different widths as well as depths. Compared to conventional anytime networks only with the depth controllability, the increased architectural diversity leads to higher resource utilization and consequent performance improvement under various and dynamic resource budgets. We highlight architectural features to make our scheme feasible as well as efficient, and show its effectiveness in image classification tasks. When we deploy deep neural network models on resource-constrained mobile devices or autonomous vehicles with a strict real-time latency requirement, it is essential to develop a model which makes the best of available resources. Although many network compaction techniques including distillation, pruning and quantization have been proposed BID7 BID12 BID4 BID3 BID9 BID6 BID13, this goal is still challenging because the resource availabilities are continuously changing over time while these resources are being shared with other program instances BID1, and multiple resources with different characteristics (e.g. computational capacity, memory usage) should be considered together. Anytime machine learning algorithms have addressed the first issue, how to get the optimal performance under dynamic resource budgets, by allowing us to activate only a part of the model with graceful output quality degradation BID14 BID2. Most anytime algorithms based on deep neural networks appear in the form of early termination of forward processing according to the current resource budget or the difficulty of the given task BID8 BID0 BID10. In other words, the conventional anytime networks are trained to embed many potential sub-networks with different effective depths so that the best one can be chosen according to the current budget. In this work, we propose a new type of the anytime neural network, doubly nested network, to solve the other issue, more efficient utilization of multiple heterogeneous resources. The proposed network can be sliced along the width as well as the depth to generate more diverse sub-networks than the conventional anytime networks allowing the sub-network extraction only along the depth-wise direction. As depicted in FIG0, the increased degree of freedom enables us to get higher resource utilization in the devices constrained by dynamically changing resource budget with multiple criteria. Causal convolution It is straightforward to form a sub-network along the depth by appending a separate output generation stage to the final convolution layer of the sub-network. Since one specific layer's output does not depend on the following (upper) layers' outputs, the jointly trained subnetwork does not suffer from the performance degradation even after the extraction. However, this approach does not work along the width because of the interdependence between two nodes at different horizontal locations (e.g. different channels). To address this issue, we propose a channelcausal convolution where i-th channel group in one layer is calculated only with activation values from the channel groups from the first to i-th channel group in the previous layer as shown in the right of FIG1. The circle indicates the feature map while the square indicates the classifier. Color refers each channel. Our network based on the causal convolution allows us to extract the sub-network easily along any directions by making both horizontal and vertical data flow unidirectionally. 
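To illustrate the channel-causal constraint described above, here is a small PyTorch sketch; it is a simplified assumption about the layer's structure (one convolution per channel group), not the authors' implementation. Output channel group i is computed only from input channel groups 1 through i, so a width slice of the layer can be evaluated without ever touching the discarded groups.

```python
import torch
import torch.nn as nn

class ChannelCausalConv2d(nn.Module):
    """Sketch of a channel-causal convolution: output channel group i depends
    only on input channel groups 1..i, enabling width-wise slicing."""
    def __init__(self, in_per_group, out_per_group, num_groups, kernel_size=3):
        super().__init__()
        self.in_per_group = in_per_group
        self.convs = nn.ModuleList([
            nn.Conv2d(in_per_group * (i + 1), out_per_group,
                      kernel_size, padding=kernel_size // 2)
            for i in range(num_groups)
        ])

    def forward(self, x, active_groups=None):
        # evaluating only the first `active_groups` groups yields a narrower sub-network
        n = len(self.convs) if active_groups is None else active_groups
        outs = []
        for i in range(n):
            prefix = x[:, : self.in_per_group * (i + 1)]   # channel groups 1..i of the input
            outs.append(self.convs[i](prefix))
        return torch.cat(outs, dim=1)

# a half-width forward pass only needs the first half of the input channels
layer = ChannelCausalConv2d(in_per_group=8, out_per_group=8, num_groups=4)
x = torch.randn(2, 32, 16, 16)
narrow = layer(x[:, :16], active_groups=2)   # shape (2, 16, 16, 16)
full = layer(x)                              # shape (2, 32, 16, 16)
```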
Output generation stage sharing fully-connected layers. Rather than having a separate fully-connected (FC) layer for each sub-network to generate the final output (e.g. a predicted class given an image input), our network is designed so that each FC layer takes only a part of the activations from the preceding convolutional layers, and the final output of one sub-network is produced by averaging multiple FC layers' outputs, as depicted in FIG1. Sharing the FC layers between the sub-networks at the same depth lets us keep the computational and memory costs of the FC layers similar to those of the depth-controllable anytime network (BID8), even with many more possible output locations. We can obtain a loss function for each sub-network, the cross-entropy of that sub-network's prediction, ℓ_{l,c} = −Σ_{i=1..N} y_i log ŷ_i^{(l,c)}, where L and C refer to the number of possible vertical and horizontal partitions (l ∈ {1, …, L}, c ∈ {1, …, C}), N is the number of classes, y_i is the target label of class i, and ŷ_i^{(l,c)} is the corresponding prediction of the sub-network with depth index l and width index c; the losses of all L×C sub-networks are combined during training. Experimental setup. We evaluated the proposed method on the CIFAR-10 and the SVHN datasets. Similarly to the ResNet-32 model (BID5), our full network architecture consists of one convolution layer fed by the external input, followed by 15 residual blocks and fully-connected layers for the final output generation. The network has 16 possible output locations along the depth, from the first convolution layer and all residual blocks, and 22 locations along the width. Thus, we can extract 16×22 sub-networks with different widths and depths from the base network. Resource usages of the sub-networks. As the selected sub-network gets deeper or wider, all computational and memory requirements, such as the number of MAC (multiply-accumulate) operations, the number of parameters and the size of the largest feature map, increase. However, their rates of increase differ from each other, as shown in FIG3. This means that our scheme can benefit from a larger diversity of resource usage compared to the conventional anytime methods. Comparison with other methods. One of the key advantages of the proposed architecture is the non-trivial nesting of sub-networks along the width direction. FIG4 shows that our scheme outperforms, to a large extent, two straightforward vertical slicing schemes (Brute-force slicing, Fine-tuning) that can generate sub-networks with different widths, without significant performance degradation compared to the upper bound (Full training). We showed that resource-constrained devices can benefit from the architectural diversity enriched by our anytime prediction scheme. Our future work includes adding adaptive conditioning (BID11), which modulates intermediate activations or weight parameters depending on the current sub-network configuration, to improve the performance with only a small increase in the number of conditioning parameters.
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SygAlRLvoX
We propose a new anytime neural network which allows partial evaluation by subnetworks with different widths as well as depths.
We propose a new model for making generalizable and diverse retrosynthetic reaction predictions. Given a target compound, the task is to predict the likely chemical reactants to produce the target. This generative task can be framed as a sequence-to-sequence problem by using the SMILES representations of the molecules. Building on top of the popular Transformer architecture, we propose two novel pre-training methods that construct relevant auxiliary tasks (plausible reactions) for our problem. Furthermore, we incorporate a discrete latent variable model into the architecture to encourage the model to produce a diverse set of alternative predictions. On the 50k subset of reaction examples from the United States patent literature (USPTO-50k) benchmark dataset, our model greatly improves performance over the baseline, while also generating predictions that are more diverse. This paper proposes a novel approach for one-step retrosynthesis. This task is crucial for material and drug manufacturing and aims to predict which reactants are needed to generate a given target molecule as the main product. For instance, Figure 1 demonstrates that the input molecule "[N-]=[N+]=NCc1ccc(SCCl)cc1", expressed here as a SMILES string , can be generated using reactants "CSc1ccc(CN= [N+] =[N-])cc1" and "ClCCl". For decades, this task has been solved using template-based approaches . Templates encode transformation rules as regular expressions operating on SMILES strings and are typically extracted directly from the available training reactions. The primary limitation of such templates is coverage, i.e., it is possible that none of the templates applies to a test molecule. In order to better generalize to newer or broader chemical spaces, recently developed template-free approaches cast the problem as a sequence-to-sequence prediction task. These approaches were first explored by using LSTM models; the current state-of-the-art performance on this task uses Transformer models . Out-of-the-box Transformers nevertheless do not effectively generalize to rare reactions. For instance, model accuracy drops by 25% on reactions with 10 or fewer representative instances in the Figure 1: An example prediction task: on the left is the input target SMILES, and on the right are the output reactants SMILES. The input is a single molecule, while the output is a set of molecules separated by a period ("."). training set. 1 Another key issue is diversity. Manufacturing processes involve a number of additional criteria -such as green chemistry (having low detrimental effects on the environment). It is therefore helpful to generate a diverse collection of alternative ways of synthesizing the given target molecule. However, predicted reactions are unlikely to encompass multiple reaction classes (see Figure 2) without additional guidance. This is because the training data only provides a single reactant set for each input target, even if this is not the only valid reaction to synthesize the target. Figure 2: For the input target compound shown on the left, three possible reactant predictions are shown on the right. Prediction 1 suggestions a heterocycle formation reaction, while Predictions 2 and 3 both suggest substitution reactions. The only difference between the latter two is the halide functional group (Cl vs Br) highlighted in red. They share similar chemical properties and thus provide no additional insights for chemists. We extend molecular Transformers to address both of these challenges. 
First, we propose a novel pre-training approach to drive molecular representations to better retain alternative reaction possibilities. Our approach is reminiscent of successful pre-training schemes in natural language processing (NLP) applications . However, rather than using conventional token masking methods, we adopt chemically-relevant auxiliary tasks. Each training instance presents a single way to decompose a target molecule into its reactants. Here, we add alternative proxy decompositions for each target molecule by either 1) randomly removing bond types that can possibly break during reactions, or 2) transforming the target based on templates. While neither of these two auxiliary tasks are guaranteed to construct valid chemical reactions, they are closely related to the task of interest. Indeed, representations trained in this manner provide useful initializations for the actual retrosynthesis problem. To improve the diversity of predicted reactions, we incorporate latent variables into the generation process. Specifically, we merge the Transformer architecture with a discrete mixture over reactions. The role of the latent variable is to encode distinct modes that can be related to underlying reaction classes. Even though the training data only presents one reaction for each target molecule, our model learns to associate each reaction with a latent class, and in the process covers multiple reaction classes across the training set. At test time, a diverse collection of reactions is then obtained by collecting together predictions ing from conditioning on each latent class. Analogous mixture models have shown promise in generating diverse predictions in natural language translation tasks . We demonstrate similar gains in the chemical context. We evaluate our model on the benchmark USPTO-50k dataset, and compare it against state-ofthe-art template-free baselines using the Transformer model. We focus our evaluation on top-10 accuracy, because there are many equally valuable reaction transformations for each input target, though only one is presented in the data. Compared to the baseline, we achieve better performance overall, with over 13% increase in top-10 accuracy for our best model. When we create a split of the data based on different reaction templates (a task that any template-based model would fail on), we similarly observe a performance increase for our model. Additionally, we demonstrate that our model outputs exhibit significant diversity through both quantitative and human evaluations. Template-based Models Traditional methods for retrosynthetic reaction prediction use templatebased models. Templates, or rules, denote the exact atom and bond changes for a chemical reaction. applies these templates for a given target compound based on similar reactions in the dataset. Going one step further, learns the associations between molecules and templates through a neural network. uses a hierarchical network to first predict the reaction group and then the correct template for that group. However, to have the flexibility to generalize beyond extracted rules, we explore template-free generative models. Molecule Generation There are two different approaches to generative tasks for molecules, demonstrated through graph and SMILES representations. The graph-generation problem has been explored in as a node-by-node generation algorithm, but this model does not guarantee the validity of the output chemical graph. Jin et al. 
(2018a; b) improves upon this method using a junction-tree encoder-decoder that forces the outputs to be constrained in the valid chemical space; however, these models require complex, structured decoders. We focus on the generative task of SMILES string representations of the molecules, which has been explored in and Gómez-. Pre-training Pre-training methods have been shown to vastly improve the performance of Transformer models in NLP tasks without additional data supervision. use a masked language modeling objective to help their model learn effective representations for downstream tasks. Similar pre-training methods on molecules have been explored by , where they mask out atoms in molecular graphs. Meanwhile, our work does not use a masked objective, but instead creates pre-training tasks that are relevant to the retrosynthesis prediction problem. Given an input target molecule, the task of retrosynthetic reaction prediction is to output likely reactants that can form the target product. Formally, we express a molecule as a text string via its SMILES representation, and cast our task into a sequence-to-sequence (seq2seq) prediction problem (example shown in Figure 1). For this task, the input target is always a single molecule, while the output predictions are usually a set of more than one molecule concatenated by separators ".". To provide more intuition for this generative task, we describe some properties of SMILES strings. Each SMILES string is 1-D encoding of a 2-D molecular graph. If the predicted SMILES does not adhere to the SMILES grammar, then a valid molecular graph cannot be reconstructed. Moreover, each molecule has many equivalent SMILES representations, as a single instance of its SMILES is just a graph traversal starting at some arbitrary node. Therefore, two very different SMILES string can encode the same molecule (see Appendix B), and the model needs to be robust to the given input. One method, proposed by , augments the input data with different SMILES strings of the same input target molecule. For our model architecture, we apply a Transformer model for the seq2seq task, which has an encoder-decoder structure . The encoder maps an input sequence of tokens (from the SMILES string) to a sequence of continuous representations, which are then fed to the decoder to generate an output sequence of tokens one element at a time, auto-regressively. Once the model is trained, a beam search procedure is used at inference time to find likely output sequences. The main building block of the Transformer architecture lies in its global self-attention layers, which are well-suited for predictions of the latent graph structure of SMILES strings. For example, two tokens that are far apart in the SMILES string could be close together topologically in the corresponding molecular graph. The global connectivity of the Transformer model allows it to better leverage this information. Additionally, since SMILES follow a rigid grammar requiring long rangedependencies, these dependencies can be more easily learned through global attention layers (see Appendix B). Despite the flexible architecture of Transformer models, we recognize that there are ways to improve model generalization. Additionally, there is no inductive bias for proposing diverse outputs. We propose two techniques to enhance the base molecular Transformer model, which we describe now. 
In the data, each input target molecule is associated with a single reaction transformation, though there are many equally good chemical reactions. Therefore, for each input target, we construct several new prediction examples that are chemically meaningful, and pre-train the model on these auxiliary examples. We do so without requiring additional data or data supervision. The two variants of our method are described in detail below, with examples shown in Figure 3 (caption: an input target with two automatically generated pre-training targets formed by breaking the bond highlighted in red; the two examples are generated by the random and template-based methods respectively, and the only difference is that the template-based pre-training example adds an additional functional group to the molecule, shown in blue). Random pre-training. For each input target molecule, we generate new examples by selecting a random bond to break. The types of bonds that we consider are acyclic single bonds, because these are the bonds most commonly broken in chemical reactions. As we break an acyclic bond, the input molecule is necessarily broken up into two output molecules, each being a subgraph of the input molecule. Although the examples generated by this method do not cover the entire space of chemical reactions (for instance, some reactions do not break any bonds at all), these examples are easy to generate and cover a diverse range of transformations. Template-based pre-training. Instead of randomly breaking bonds, we can also use the templates extracted from the training data to create reaction examples. An example of a template is shown in Figure 4: each template matches a specific pattern in the input molecule, and transforms that pattern according to the template specifications. When the matched pattern is a single acyclic bond, this method generates outputs similar to those of the random pre-training method, except that templates usually add additional pieces (functional groups) to the output example. Figure 4 (caption): An example of a template, where the exact bond changes are described in red. The "C-N" bond (left) is broken and a "Br" atom is attached to the broken "C" atom (right). As shown in Figure 3, both examples are derived from the same bond broken in the input target molecule, but for the template-based example, an additional functional group was added, matching a more realistic reaction context. On average, for a random input molecule, there are 10 different possible examples that can be extracted with the random pre-training method, while there are over 200 different possible examples that can be extracted with the template-based pre-training method. However, many of these 200 examples represent similar chemical transformations, differing only in the type of functional group added. More broadly speaking, the template-based pre-training method generates more chemically valid reactions than the random pre-training method. The advantage of the random pre-training method, however, is that it can break bonds that are not represented within the templates, thereby perhaps conferring a higher ability to generalize. As is routine, the model is pre-trained on these automatically constructed auxiliary tasks, and then used as initialization for fine-tuning on the actual retrosynthesis data. Next, we tackle the problem of generating diverse predictions. As mentioned earlier, the retrosynthesis problem is a one-to-many mapping, since a target molecule can be formed from different types of reactions.
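Returning briefly to the pre-training construction above, here is a minimal sketch of the random bond-breaking procedure, assuming RDKit is available; the exact filtering rules and output formatting of the authors' pipeline may differ.

```python
# Hedged sketch of "random pre-training" example construction: break one random
# acyclic single bond of the target and treat the two fragments as the auxiliary
# "reactants" string.
import random
from rdkit import Chem

def random_pretrain_example(target_smiles):
    mol = Chem.MolFromSmiles(target_smiles)
    # candidate bonds: acyclic single bonds, as described in the text
    candidates = [b.GetIdx() for b in mol.GetBonds()
                  if b.GetBondType() == Chem.BondType.SINGLE and not b.IsInRing()]
    if not candidates:
        return None                        # nothing to break (e.g. a single ring system)
    frag = Chem.FragmentOnBonds(mol, [random.choice(candidates)], addDummies=False)
    pieces = Chem.GetMolFrags(frag, asMols=True, sanitizeFrags=False)
    return ".".join(Chem.MolToSmiles(p) for p in pieces)

# e.g. random_pretrain_example("[N-]=[N+]=NCc1ccc(SCCl)cc1") returns two fragment
# SMILES joined by "."; the exact strings depend on which bond was chosen.
```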
We would like the model to produce a diverse set of predictions, so that chemists can choose the most feasible and economical one in practice. However, hypotheses generated by a vanilla seq2seq model with beam search typically exhibit low diversity, with only minor differences in the suffix (see Figure 8). To address this, we use a mixture seq2seq model, which has shown success in generating diverse machine translations, to generate diverse retrosynthesis reaction predictions. Specifically, given a target SMILES string x and a reactants SMILES string y, a mixture model introduces a multinomial latent variable z ∈ {1, · · ·, K} to capture different reaction types, and decomposes the marginal likelihood as p(y|x; θ) = Σ_{z=1..K} p(z|x; θ) p(y|z, x; θ). Here, the prior p(z|x; θ) and likelihood p(y|z, x; θ), parameterized by θ, are functions to be learned. We use a uniform prior p(z|x; θ) = 1/K, which is easy to implement and works well in practice. For p(y|z, x; θ), we share the encoder-decoder network among mixture components, and feed the embedding of z as an input to the decoder so that y is conditioned on it. The increase in the parameters of our model over the baseline model is negligible. We train the mixture model with the online hard-EM algorithm. Taking a mini-batch of training examples {(x^(i), y^(i))}, we enumerate all K values of z and compute their losses, −log p(y^(i)|z, x^(i); θ). Then, for each (x^(i), y^(i)), we select the value of z that yields the minimum loss, z^(i) = argmin_z −log p(y^(i)|z, x^(i); θ), and back-propagate through it, so only one component receives gradients per example. An important detail for successfully training a mixture model is that dropout is turned off in the forward passes for latent variable selection, and turned back on at back-propagation time for gradient regularization. Otherwise, even a small amount of dropout noise will corrupt the optimal value of z, making the selection random, and the different latent components will fail to diversify. The hard selection of the latent variable forces different components to specialize on different subsets of the data. As we shall see in the experimental results, our mixture model can learn to represent different reaction types in the training data and shows improved diversity over the baseline. The benchmark dataset we use is a subset of the open source patent database of chemical reactions. Specifically, we use the curated 50k subset (USPTO-50k) from , including the same data splits. Each example reaction in this dataset is labeled with one of ten reaction classes, which describes its transformation type, but we do not use this information in our experiments, similar to . Since we are only interested in the retrosynthesis prediction problem, the examples are processed to remove any reagent molecules (molecules that do not contribute atoms to the reaction). The reactions are tokenized in the same manner as in , with each token being a meaningful subunit of the molecule (i.e., an atom or bond). In addition, we create a separate split of the USPTO-50k data, in which the train and test sets are split by reaction templates. Specifically, we split the data so that no example in the test set can be solved correctly with any template extracted from the training data. We use the template extraction code from , which, to the best of our knowledge, is the only publicly available template extraction code. Accuracy. The evaluation of retrosynthesis is challenging, because each input target has many valid syntheses, but only one is given in the data.
When the model output does not exactly match the single solution in the data, the model is not necessarily wrong, but simply giving a plausible alternative. Therefore, we focus on the top-10 accuracy for our evaluation, but present all from our experiments. We compute the accuracies by matching the canonical SMILES strings of molecule sets. For the mixture model, we output the top 10 predictions for each latent class, and then combine those based on likelihoods to get top 10 predictions overall. Diversity To measure diversity, we provide both quantitative and human evaluations. For the former, we train a model to predict the reaction class given the input target molecule and the predicted output. We use a typical message-passing graph convolution network to embed both the input and output molecules (using weight-sharing) and compute the reaction embedding as a difference of the two embeddings. This predictor is trained on the 10 reaction class labels in the USPTO-50k dataset, and achieves 99% accuracy on the test set, so we can be fairly confident in its ability to predict the reaction class in-domain. Our main baseline is the SMILES transformer (Base), adapted from. We run the same model as other recent works for this task , and we build on top of the Transformer implementation from OpenNMT . We run ablation experiments for pre-training and different mixture models to show the impact of each approach. Random pre-training is referred to as Pre-train (R), while template-based pre-training is referred to as Pre-train (T). For each example, we construct up to 10 new auxiliary examples, and pre-train the model on these examples. Additionally, following , we also augment the training data with variations of the input SMILES string, referred to as Aug. That is, for each training example, we add an extra example using a different input SMILES string, which is trained to predict the same output reactants. This helps the model learn representations robust to the permutation of the input SMILES string. In addition to our experiments, we include a templatebased approach from , and a template-free approach from that adds a syntax predictor on top of the transformer model. Accuracy The accuracy of our model is shown in Table 1. We observe that both pre-training tasks improve over the baseline, and more so when combined with data augmentation. This shows that our pre-training tasks help the model learn the chemical reaction representational space, and are useful for the retrosynthesis prediction problem. However, interestingly, there seem to be marginal differences between the two pre-training methods. We attribute this to the fact that both pre-training methods usually generate very similar sets of examples. Previously shown in Figure 3, one of the main differences of template-based pre-training is just that it adds additional functional groups. But since these generated examples are not always chemically valid, having this extra information may not prove to be very valuable. We do note, however, that constructing additional decompositions of the input targets does actually matter for the pre-training task. We had also experimented with pretraining methods that only used variations of the input SMILES strings as pre-training output targets (because each molecule has many different SMILES representations). However, these methods did not in the same performance gains, because these pre-training targets do not contain much useful information for the actual task. 
Our original motivation for using a mixture model was to improve diversity, but we observe that it also leads to an increase in performance. We try N = {1, 2, 5} for the number of discrete latent classes, and we see that more latent classes generally leads to higher accuracies. The top-1 accuracy does decrease slightly as the number of latent classes increases, but we observe much higher accuracies at top-10 (increase of 7-8%). Importantly, we note that our method of combining outputs from different latent classes is not perfect, as the likelihoods from different latent classes are not totally comparable. That is likely the cause of the decrease in top-1 accuracy; yet as we mentioned in Section 5.2, top-10 accuracies are significantly more meaningful for our problem. Table 2: Prediction accuracies when tested on our template split of the USPTO dataset, for which any template-based model would get 0% accuracy on the test set. We see that our template-free methods can still generalize to this test set. Next, we show our on a different split of the data, which is more challenging to generalize. Using the dataset split on templates described in Section 5.1, we explore the performance of our best mixture model with pre-training compared to the baseline with no pre-training. As mentioned earlier, template-free models confer advantages over template-based models, as template-based models lack the ability to generalize outside of extracted rules. For this dataset, any template-based model would necessarily achieve 0% accuracy based on construction. Diversity We now look at evaluations of diversity for our model. Using the reaction class model described in Section 5.2, we predict the reaction class for every output of our models. Then, we compute the average number of unique reaction classes, holding all other factors constant besides varying the number of latent classes ( shown in Table 3). The number of unique reaction classes is 3.32 for the mixture model compared to the 2.66 for the base model, suggesting that the mixture model predicts a more diverse cast of outputs. The diversity of the predictions can also be examined from an interpretability standpoint for the latent classes of the mixture model. Using the reaction class model, we take the 10 top predictions from each latent class, and count the number of occurrences for each reaction class. Normalizing across reaction classes, we can see from Figure 6 that each latent class learns to predict a different distribution of reaction classes. We also supplement our diversity with human evaluation. To make the problem tractable for a human chemist, we randomly select 100 different reactions from the test set and present the top 5 predicted outputs from both the base and mixture model, where the the task is to determine diversity based on number of different types of reactions. The human chemist is asked to choose which of the two output sets is more diverse, or neither if the two sets do not differ in diversity (see Appendix C). For this task, the human chemist chose the mixture model more than twice as often as the base model (43 times vs 21), see Table 3. Although not perfect, these exemplify that our model does generate more diverse outputs than the baseline. We explored the problem of making one-step retrosynthesis reaction predictions, dealing with the issues of generalizability and making diverse predictions. 
Through pre-training and use of mixture models, we show that our model beats state-of-the-art methods in terms of accuracy and generates more diverse predictions. Even on a challenging task, for which any template-based models would fail, our model still is able to generalize to the test set. To compute a subset of the data with only rare reactions, we extracted all the templates from the entire USPTO-50k dataset, and selected the templates that occurs at most 10 times. The reactions in the test set that have these templates constitute the rare reaction subset, which is around 400 examples. The for this rare reaction subset can be found in Table 4. From this table, we can see that the top-1 accuracy for the baseline model is only 18.6% which is roughly 25% drop from the 42% in Table 1. We also mention that our new models improve over this baseline, showing more generalizability. Table 4: Prediction accuracies on the rare reaction test subset. Each molecule has many different SMILES representation, because each different SMILES string is just a different graph traversal over the molecule (see Figure 7). Although there is often some canonical SMILES string which is consistent, it is still completely arbitrary. Additionally, because SMILES is a 1-D encoding of the 2-D molecular graph, two atoms that are close in the graph may be far apart in the SMILES string, shown in Figure 8. To correctly decode a SMILES string, the decoder has to be aware of long-range dependencies. For instance, numbers in the SMILES string indicate the start and end of a cycle. The decoder has to close all cycles that it starts, and at the right position, or else the output SMILES will be invalid. Figure 8: The carbon atom (red) and the oxygen atom (blue) are neighbors on the molecular graph. However, in the SMILES string, they are far apart.
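To make the point of Appendix B concrete, the short snippet below (assuming a reasonably recent RDKit) shows that two superficially different SMILES strings can encode the same molecule and can be mapped to a shared canonical form; this canonicalization is also how predicted and ground-truth reactant sets are matched when computing accuracy.

```python
from rdkit import Chem

# Two different graph traversals of the same molecule (toluene) ...
a = "Cc1ccccc1"
b = "c1ccccc1C"
canon = lambda s: Chem.MolToSmiles(Chem.MolFromSmiles(s))
assert canon(a) == canon(b)        # ... canonicalize to the identical SMILES string

# randomized traversals give further equivalent, non-canonical SMILES of the same molecule
mol = Chem.MolFromSmiles(a)
print(Chem.MolToSmiles(mol, doRandom=True))
```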
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
BygfrANKvB
We propose a new model for making generalizable and diverse retrosynthetic reaction predictions.
Unsupervised learning of disentangled representations is an open problem in machine learning. The Disentanglement-PyTorch library is developed to facilitate research, implementation, and testing of new variational algorithms. In this modular library, neural architectures, dimensionality of the latent space, and the training algorithms are fully decoupled, allowing for independent and consistent experiments across variational methods. The library handles the training scheduling, logging, and visualizations of reconstructions and latent space traversals. It also evaluates the encodings based on various disentanglement metrics. The library, so far, includes implementations of the following unsupervised algorithms: VAE, Beta-VAE, Factor-VAE, DIP-I-VAE, DIP-II-VAE, Info-VAE, and Beta-TCVAE, as well as conditional approaches such as CVAE and IFCVAE. The library is compatible with the Disentanglement Challenge of NeurIPS 2019, hosted on AICrowd, and was used to compete in the first and second stages of the challenge, where it was ranked among the best few participants. There are two overlapping avenues in representation learning. One focuses on learning task-specific transformations, often optimized towards specific domains and applications. The other approach learns the intrinsic factors of variation, in a disentangled and task-invariant fashion. The unsupervised disentanglement of latent factors, where changes in a single factor of variation shift the latent encoding in a single direction, is an open problem of representation learning. Disentangled representations are valuable in few-shot learning, reinforcement learning, transfer learning, as well as semi-supervised learning. In this work, we developed a library based on the functionalities of the PyTorch framework, which facilitates research, implementation, and testing of new variational algorithms focusing on representation learning and disentanglement. The library branches from the Disentanglement Challenge of NeurIPS 2019, hosted on AICrowd (aicrowd.com), and was used to compete in the first and second stages of the challenge, where it was highly ranked. The Disentanglement-PyTorch library is released under the GNU General Public License at https://github.com/amir-abdi/disentanglement-pytorch. Unsupervised Objectives. Currently, the library includes implementations of the following unsupervised variational algorithms: VAE, β-VAE, β-TCVAE, Factor-VAE, Info-VAE, DIP-I-VAE, and DIP-II-VAE. Algorithms are implemented as plug-ins to the variational Bayesian formulation, and are specified by the loss terms flag. As a result, if the loss terms of two learning algorithms (e.g., A and B) were found to be compatible, they can both be included in the objective function with the flag set as [--loss terms A B]. This enables researchers to mix and match loss terms which optimize towards correlated goals. The library supports conditional approaches such as CVAE, where extra known attributes (i.e., labels) are included in the encoding and decoding processes. It also supports IFCVAE, inspired by the IFcVAE-GAN, which enforces certain latent factors to encode known attributes using a set of positive (auxiliary) and negative (adversarial) discriminators in a supervised fashion. Thanks to the modular implementation of the library, any of the above-mentioned unsupervised loss terms can be used with conditional and information-factorization approaches to encourage disentanglement across attribute-invariant latents.
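As a rough illustration of this plug-in design, the sketch below shows how compatible loss terms could be summed over shared batch statistics; the class names and call signatures here are assumptions made for exposition, not the library's actual API.

```python
import torch

# Hypothetical loss-term plug-ins; each consumes the statistics it needs and
# ignores the rest, so terms can be mixed as with "--loss terms A B".
class VAETerm:
    def __call__(self, recon_loss, kld, **unused):
        return recon_loss + kld

class BetaTCVAETerm:
    def __init__(self, beta=2.0):
        self.beta = beta
    def __call__(self, total_correlation, **unused):
        return self.beta * total_correlation

def objective(loss_terms, **batch_stats):
    # sum of all active loss terms, each evaluated on the shared statistics
    return sum(term(**batch_stats) for term in loss_terms)

# e.g. combining the two terms in a single objective
loss = objective([VAETerm(), BetaTCVAETerm(beta=2.0)],
                 recon_loss=torch.tensor(1.0),
                 kld=torch.tensor(0.5),
                 total_correlation=torch.tensor(0.2))
```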
Neural architectures and the dimensionality of the data and the latent spaces are configurable and decoupled from the training algorithm. Consequently, new architectures for the encoder and decoder networks, such as the auto-regressive models, and support for other data domains, can be independently investigated. We rely on Google's implementation of the disentanglement metrics to evaluate the quality of the learned representations. Thanks to the disentanglement-lib 1 library, the following metrics are currently supported: BetaVAE , FactorVAE , Mutual Information Gap (MIG) , Interventional Robustness Score (IRS) , Disentanglement Completeness and Informativeness (DCP) , and Separated Attribute Predictability (SAP) . Controlled Capacity Increase It is shown that gradually relaxing the information bottleneck during training improves the disentanglement without penalizing the reconstruction accuracy. Following the formulation of , the capacity, defined as the distance between the prior and the latent posterior distributions and denoted with the variable C, is gradually increased during training. To avoid convergence points with high reconstruction loss, training can be started with more emphasis on the reconstruction and gradually relaxing for the disentanglement term to become more relative. Dynamic Learning Rate Scheduling All forms of learning rate schedulers are supported. Researchers are encouraged to leverage the dynamic LR scheduling to gradually decrease the rate when the average objective function over the epoch stops its decremental trend. Logging and Visualization The library leverages the Weights & Biases (W&B) 2 tool to record and visualize the training process and experiments' . Besides the scalar values, we visualize the attribute and condition traversals, latent factor traversals, and input reconstructions, both as static images (logged via W&B) as well as animated GIFs. The β-TCVAE algorithm achieved the best disentanglement on the mpi3d real dataset in the second stage of the disentanglement challenge. Given the limited 8-hour training time for the challenge, the model was pre-trained on the mpi3d toy dataset . The model was trained with the Adam optimizer for 90k iterations on batches of size 64. The β value of the β-TCVAE objective function was set to 2. The learning rate was initialized at 0.001 and reduced on the plateau of the objective function with a factor of 0.95. The capacity parameter, C, was gradually increased from 0 to 25. The dimensionality of the z space was generously set to 20. The encoder consisted of 5 convolutional layers with strides of 2, kernel sizes of 3 × 3, and number of kernels gradually increasing from 32 to 256. The encoder ended with a dense linear layer which estimated the posterior latent distribution as a parametric Gaussian. The decoder network consisted of one convolutional followed with 6 deconvolutional (transposed convolutional) layers, with kernel sizes of 4, strides of 2, and the number of kernels gradually decreasing from 256 down to the number of channels of the image space. ReLU activations were used except for the last layers of the encoder and decoder networks. Model's performance on the unseen objects of mpi3d realistic and mpi3d real datasets are presented in Table 1. The configurations of the two experiments are the same and the model consistently performed better on the mpi3d real dataset. This was unexpected as the model was initialized on the mpi3d toy dataset. 
Disentanglement performance on the available samples of the mpi3d realistic dataset is visualized in Appendix A. Figure 1: Latent factor traversal of the trained β-TCVAE model on a random sample of the mpi3d realistic dataset. As demonstrated, the disentanglement is not complete and some features are encoded in the same latent factor. A latent space of size 20 was used; however, changes in the other 13 latent factors had no effect on the reconstruction, so these feature-invariant factors are omitted for brevity.
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rJgUsFYnir
Disentanglement-PyTorch is a library for variational representation learning
Trust region methods, such as TRPO, are often used to stabilize policy optimization algorithms in reinforcement learning (RL). While current trust region strategies are effective for continuous control, they typically require a large amount of on-policy interaction with the environment. To address this problem, we propose an off-policy trust region method, Trust-PCL, which exploits an observation that the optimal policy and state values of a maximum reward objective with a relative-entropy regularizer satisfy a set of multi-step pathwise consistencies along any path. The introduction of relative entropy regularization allows Trust-PCL to maintain optimization stability while exploiting off-policy data to improve sample efficiency. When evaluated on a number of continuous control tasks, Trust-PCL significantly improves the solution quality and sample efficiency of TRPO. The goal of model-free reinforcement learning (RL) is to optimize an agent's behavior policy through trial and error interaction with a black box environment. Value-based RL algorithms such as Q-learning BID36 and policy-based algorithms such as actor-critic BID15 have achieved well-known successes in environments with enumerable action spaces and predictable but possibly complex dynamics, e.g., as in Atari games BID19 BID34. However, when applied to environments with more sophisticated action spaces and dynamics (e.g., continuous control and robotics), success has been far more limited. In an attempt to improve the applicability of Q-learning to continuous control, BID32 and BID16 developed an off-policy algorithm DDPG, leading to promising on continuous control environments. That said, current off-policy methods including DDPG often improve data efficiency at the cost of optimization stability. The behaviour of DDPG is known to be highly dependent on hyperparameter selection and initialization BID18; even when using optimal hyperparameters, individual training runs can display highly varying outcomes. On the other hand, in an attempt to improve the stability and convergence speed of policy-based RL methods, BID13 developed a natural policy gradient algorithm based on , which subsequently led to the development of trust region policy optimization (TRPO) BID28. TRPO has shown strong empirical performance on difficult continuous control tasks often outperforming value-based methods like DDPG. However, a major drawback is that such methods are not able to exploit off-policy data and thus require a large amount of on-policy interaction with the environment, making them impractical for solving challenging real-world problems. Efforts at combining the stability of trust region policy-based methods with the sample efficiency of value-based methods have focused on using off-policy data to better train a value estimate, which can be used as a control variate for variance reduction BID8 b).In this paper, we investigate an alternative approach to improving the sample efficiency of trust region policy-based RL methods. We exploit the key fact that, under entropy regularization, the optimal policy and value function satisfy a set of pathwise consistency properties along any sampled path BID21, which allows both on and off-policy data to be incorporated in an actor-critic algorithm, PCL. The original PCL algorithm optimized an entropy regularized maximum reward objective and was evaluated on relatively simple tasks. Here we extend the ideas of PCL to achieve strong on standard, challenging continuous control benchmarks. 
The main observation is that by alternatively augmenting the maximum reward objective with a relative entropy regularizer, the optimal policy and values still satisfy a certain set of pathwise consistencies along any sampled trajectory. The ing objective is equivalent to maximizing expected reward subject to a penalty-based constraint on divergence from a reference (i.e., previous) policy. We exploit this observation to propose a new off-policy trust region algorithm, Trust-PCL, that is able to exploit off-policy data to train policy and value estimates. Moreover, we present a simple method for determining the coefficient on the relative entropy regularizer to remain agnostic to reward scale, hence ameliorating the task of hyperparameter tuning. We find that the incorporation of a relative entropy regularizer is crucial for good and stable performance. We evaluate Trust-PCL against TRPO, and observe that Trust-PCL is able to solve difficult continuous control tasks, while improving the performance of TRPO both in terms of the final reward achieved as well as sample-efficiency. Trust Region Methods. Gradient descent is the predominant optimization method for neural networks. A gradient descent step is equivalent to solving a trust region constrained optimization, DISPLAYFORM0 which yields the locally optimal update dθ = −η∇ (θ) such that η = √ / ∇ (θ); hence by considering a Euclidean ball, gradient descent assumes the parameters lie in a Euclidean space. However, in machine learning, particularly in the context of multi-layer neural network training, Euclidean geometry is not necessarily the best way to characterize proximity in parameter space. It is often more effective to define an appropriate Riemannian metric that respects the loss surface , which allows much steeper descent directions to be identified within a local neighborhood (e.g., ; BID17). Whenever the loss is defined in terms of a Bregman divergence between an (unknown) optimal parameter θ * and model parameter θ, i.e., (θ) ≡ D F (θ *, θ), it is natural to use the same divergence to form the trust region: DISPLAYFORM1 The natural gradient is a generalization of gradient descent where the Fisher information matrix F (θ) is used to define the local geometry of the parameter space around θ. If a parameter update is constrained by dθ DISPLAYFORM2 is obtained. This geometry is especially effective for optimizing the log-likelihood of a conditional probabilistic model, where the objective is in fact the KL divergence D KL (θ *, θ). The local optimization is, DISPLAYFORM3 Thus, natural gradient approximates the trust region by DISPLAYFORM4, which is accurate up to a second order Taylor approximation. Previous work BID13 BID4 BID24 BID28 has applied natural gradient to policy optimization, locally improving expected reward subject to variants of dθ T F (θ)dθ ≤. Recently, TRPO BID28 has achieved state-of-the-art in continuous control by adding several approximations to the natural gradient to make nonlinear policy optimization feasible. Another approach to trust region optimization is given by proximal gradient methods BID23. The class of proximal gradient methods most similar to our work are those that replace the hard constraint in with a penalty added to the objective. These techniques have recently become popular in RL BID35 BID11 BID31, although in terms of final reward performance on continuous control benchmarks, TRPO is still considered to be the state-of-the-art. 
BID22 make the observation that entropy regularized expected reward may be expressed as a reversed KL divergence D KL (θ, θ *), which suggests that an alternative to the constraint in should be used when such regularization is present: DISPLAYFORM5 Unfortunately, this update requires computing the Fisher matrix at the endpoint of the update. The use of F (θ) in previous work can be considered to be an approximation when entropy regularization is present, but it is not ideal, particularly if dθ is large. In this paper, by contrast, we demonstrate that the optimal dθ under the reverse KL constraint D KL (θ + dθ, θ) ≤ can indeed be characterized. Defining the constraint in this way appears to be more natural and effective than that of TRPO.Softmax Consistency. To comply with the information geometry over policy parameters, previous work has used the relative entropy (i.e., KL divergence) to regularize policy optimization; ing in a softmax relationship between the optimal policy and state values BID25 BID3 BID2 BID7 BID26 under single-step rollouts. Our work is unique in that we leverage consistencies over multi-step rollouts. The existence of multi-step softmax consistencies has been noted by prior work-first by BID21 in the presence of entropy regularization. The existence of the same consistencies with relative entropy has been noted by BID30. Our work presents multi-step consistency relations for a hybrid relative entropy plus entropy regularized expected reward objective, interpreting relative entropy regularization as a trust region constraint. This work is also distinct from prior work in that the coefficient of relative entropy can be automatically determined, which we have found to be especially crucial in cases where the reward distribution changes dramatically during training. Most previous work on softmax consistency (e.g., BID7 ; Azar et al. FORMULA0 ; BID21) have only been evaluated on relatively simple tasks, including grid-world and discrete algorithmic environments. BID26 conducted evaluations on simple variants of the CartPole and Pendulum continuous control tasks. More recently, BID10 showed that soft Qlearning (a single-step special case of PCL) can succeed on more challenging environments, such as a variant of the Swimmer task we consider below. By contrast, this paper presents a successful application of the softmax consistency concept to difficult and standard continuous-control benchmarks, ing in performance that is competitive with and in some cases beats the state-of-the-art. We model an agent's behavior by a policy distribution π(a | s) over a set of actions (possibly discrete or continuous). At iteration t, the agent encounters a state s t and performs an action a t sampled from π(a | s t). The environment then returns a scalar reward r t ∼ r(s t, a t) and transitions to the next state s t+1 ∼ ρ(s t, a t). When formulating expectations over actions, rewards, and state transitions we will often omit the sampling distributions, π, r, and ρ, respectively. Maximizing Expected Reward. The standard objective in RL is to maximize expected future discounted reward. We formulate this objective on a per-state basis recursively as DISPLAYFORM0 The overall, state-agnostic objective is the expected per-state objective when states are sampled from interactions with the environment: DISPLAYFORM1 Most policy-based algorithms, including REINFORCE BID37 and actorcritic BID15, aim to optimize O ER given a parameterized policy. Path Consistency Learning (PCL). 
Inspired by BID37, BID21 augment the objective O ER in with a discounted entropy regularizer to derive an objective, DISPLAYFORM2 where τ ≥ 0 is a user-specified temperature parameter that controls the degree of entropy regularization, and the discounted entropy H(s, π) is recursively defined as DISPLAYFORM3 Note that the objective O ENT (s, π) can then be re-expressed recursively as, BID21 show that the optimal policy π * for O ENT and V * (s) = O ENT (s, π *) mutually satisfy a softmax temporal consistency constraint along any sequence of states s 0,..., s d starting at s 0 and a corresponding sequence of actions a 0,..., a d−1: DISPLAYFORM4 DISPLAYFORM5 This observation led to the development of the PCL algorithm, which attempts to minimize squared error between the LHS and RHS of to simultaneously optimize parameterized π θ and V φ. Importantly, PCL is applicable to both on-policy and off-policy trajectories. Trust Region Policy Optimization (TRPO). As noted, standard policy-based algorithms for maximizing O ER can be unstable and require small learning rates for training. To alleviate this issue, BID28 proposed to perform an iterative trust region optimization to maximize O ER. At each step, a prior policyπ is used to sample a large batch of trajectories, then π is subsequently optimized to maximize O ER while remaining within a constraint defined by the average per-state KL-divergence withπ. That is, at each iteration TRPO solves the constrained optimization problem, DISPLAYFORM6 The prior policy is then replaced with the new policy π, and the process is repeated. To enable more stable training and better exploit the natural information geometry of the parameter space, we propose to augment the entropy regularized expected reward objective O ENT in with a discounted relative entropy trust region around a prior policyπ, DISPLAYFORM0 where the discounted relative entropy is recursively defined as DISPLAYFORM1 This objective attempts to maximize entropy regularized expected reward while maintaining natural proximity to the previous policy. Although previous work has separately proposed to use relative entropy and entropy regularization, we find that the two components serve different purposes, each of which is beneficial: entropy regularization helps improve exploration, while the relative entropy improves stability and allows for a faster learning rate. This combination is a key novelty. Using the method of Lagrange multipliers, we cast the constrained optimization problem in into maximization of the following objective, DISPLAYFORM2 Again, the environment-wide objective is the expected per-state objective when states are sampled from interactions with the environment, DISPLAYFORM3 A key technical observation is that the O RELENT objective has a similar decomposition structure to O ENT, and one can cast O RELENT as an entropy regularized expected reward objective with a set of transformed rewards, i.e., DISPLAYFORM0 where DISPLAYFORM1 is an expected reward objective on a transformed reward distribution functioñ r(s, a) = r(s, a) + λ logπ(a|s). Thus, in what follows, we derive a corresponding form of the multi-step path consistency in.Let π * denote the optimal policy, defined as π * = argmax π O RELENT (π). 
As in PCL BID21, this optimal policy may be expressed as DISPLAYFORM2 where V * are the softmax state values defined recursively as DISPLAYFORM3 We may re-arrange to yield DISPLAYFORM4 This is a single-step temporal consistency which may be extended to multiple steps by further expanding V * (s t+1) on the RHS using the same identity. Thus, in general we have the following softmax temporal consistency constraint along any sequence of states defined by a starting state s t and a sequence of actions a t,..., a t+d−1: DISPLAYFORM5 We propose to train a parameterized policy π θ and value estimate V φ to satisfy the multi-step consistencies in. Thus, we define a consistency error for a sequence of states, actions, and rewards s t:t+d ≡ (s t, a t, r t, . . ., s t+d−1, a t+d−1, r t+d−1, s t+d) sampled from the environment as DISPLAYFORM0 We aim to minimize the squared consistency error on every sub-trajectory of length d. That is, the loss for a given batch of episodes (or sub-episodes) S = {s DISPLAYFORM1 We perform gradient descent on θ and φ to minimize this loss. In practice, we have found that it is beneficial to learn the parameter φ at least as fast as θ, and accordingly, given a mini-batch of episodes we perform a single gradient update on θ and possibly multiple gradient updates on φ (see Appendix for details).In principle, the mini-batch S may be taken from either on-policy or off-policy trajectories. In our implementation, we utilized a replay buffer prioritized by recency. As episodes (or sub-episodes) are sampled from the environment they are placed in a replay buffer and a priority p(s 0:T) is given to a trajectory s 0:T equivalent to the current training step. Then, to sample a batch for training, B episodes are sampled from the replay buffer proportional to exponentiated priority exp{βp(s 0:T)} for some hyperparameter β ≥ 0.For the prior policy πθ, we use a lagged geometric mean of the parameters. At each training step, we updateθ ← αθ + (1 − α)θ. Thus on average our training scheme attempts to maximize entropy regularized expected reward while penalizing divergence from a policy roughly 1/(1 − α) training steps in the past. The use of a relative entropy regularizer as a penalty rather than a constraint introduces several difficulties. The hyperparameter λ must necessarily adapt to the distribution of rewards. Thus, λ must be tuned not only to each environment but also during training on a single environment, since the observed reward distribution changes as the agent's behavior policy improves. Using a constraint form of the regularizer is more desirable, and others have advocated its use in practice BID28 specifically to robustly allow larger updates during training. To this end, we propose to redirect the hyperparameter tuning from λ to. Specifically, we present a method which, given a desired hard constraint on the relative entropy defined by, approximates the equivalent penalty coefficient λ. This is a key novelty of our work and is distinct from previous attempts at automatically tuning a regularizing coefficient, which iteratively increase and decrease the coefficient based on observed training behavior BID31 BID11.We restrict our analysis to the undiscounted setting γ = 1 with entropy regularizer τ = 0. Additionally, we assume deterministic, finite-horizon environment dynamics. An additional assumption we make is that the expected KL-divergence over states is well-approximated by the KL-divergence starting from the unique initial state s 0. 
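Before turning to the choice of λ, the training objective above can be summarized in a short sketch. The consistency error below follows the transformed-reward view (environment reward plus λ log π̃, with an effective temperature of τ + λ); the tensor layout, treatment of terminal states, and helper signatures are simplifications rather than the reference implementation.

```python
import torch

def consistency_error(values, log_pi, log_pi_prior, rewards, gamma, tau, lam):
    """Multi-step consistency error for one sub-trajectory of length d.
    values:       tensor [d + 1], V_phi at s_t, ..., s_{t+d}
    log_pi:       tensor [d], log pi_theta(a_i | s_i) along the sub-trajectory
    log_pi_prior: tensor [d], log-probabilities under the lagged prior policy
    rewards:      tensor [d], environment rewards."""
    d = rewards.shape[0]
    discounts = gamma ** torch.arange(d, dtype=rewards.dtype)
    transformed = rewards + lam * log_pi_prior - (tau + lam) * log_pi
    return -values[0] + (gamma ** d) * values[-1] + (discounts * transformed).sum()

def trust_pcl_loss(batch, gamma, tau, lam):
    # Squared consistency error averaged over the sampled sub-trajectories.
    errors = torch.stack([consistency_error(*traj, gamma, tau, lam) for traj in batch])
    return 0.5 * (errors ** 2).mean()

def update_prior(prior_params, online_params, alpha=0.99):
    # Lagged prior policy: theta~ <- alpha * theta~ + (1 - alpha) * theta.
    with torch.no_grad():
        for p_prior, p in zip(prior_params, online_params):
            p_prior.mul_(alpha).add_((1.0 - alpha) * p)
```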
Although in our experiments these restrictive assumptions are not met, we still found our method to perform well for adapting λ during training. In this setting the optimal policy of FORMULA0 is proportional to exponentiated scaled reward. Specifically, for a full episode s 0:T = (s 0, a 0, r 0, . . ., s T −1, a T −1, r T −1, s T), we have DISPLAYFORM0 where π(s 0: DISPLAYFORM1 We would like to approximate the trajectory-wide KL-divergence between π * andπ. We may express the KL-divergence analytically: DISPLAYFORM2 DISPLAYFORM3 Since all expectations are with respect toπ, this quantity is tractable to approximate given episodes sampled fromπ Therefore, in Trust-PCL, given a set of episodes sampled from the prior policy πθ and a desired maximum divergence, we can perform a simple line search to find a suitable λ which yields KL(π * ||πθ) as close as possible to.The preceding analysis provided a method to determine λ given a desired maximum divergence. However, there is still a question of whether should change during training. Indeed, as episodes may possibly increase in length, KL(π * ||π) naturally increases when compared to the average perstate KL(π * (−|s)||π(−|s)), and vice versa for decreasing length. Thus, in practice, given an and a set of sampled episodes S = {s DISPLAYFORM4, we approximate the best λ which yields a maximum divergence of N N k=1 T k . This makes it so that corresponds more to a constraint on the lengthaveraged KL-divergence. To avoid incurring a prohibitively large number of interactions with the environment for each parameter update, in practice we use the last 100 episodes as the set of sampled episodes S. While this is not exactly the same as sampling episodes from πθ, it is not too far off since πθ is a lagged version of the online policy π θ . Moreover, we observed this protocol to work well in practice. A more sophisticated and accurate protocol may be derived by weighting the episodes according to the importance weights corresponding to their true sampling distribution. We evaluate Trust-PCL against TRPO on a number of benchmark tasks. We choose TRPO as a baseline since it is a standard algorithm known to achieve state-of-the-art performance on the continuous control tasks we consider (see e.g., leaderboard on the OpenAI Gym website BID5). We find that Trust-PCL can match or improve upon TRPO's performance in terms of both average reward and sample efficiency. We chose a number of control tasks available from OpenAI Gym BID5. The first task, Acrobot, is a discrete-control task, while the remaining tasks (HalfCheetah, Swimmer, Hopper, Walker2d, and Ant) are well-known continuous-control tasks utilizing the MuJoCo environment BID33.For TRPO we trained using batches of Q = 25, 000 steps (12, 500 for Acrobot), which is the approximate batch size used by other implementations ). Thus, at each training iteration, TRPO samples 25, 000 steps using the policy πθ and then takes a single step within a KL-ball to yield a new π θ.Trust-PCL is off-policy, so to evaluate its performance we alternate between collecting experience and training on batches of experience sampled from the replay buffer. Specifically, we alternate between collecting P = 10 steps from the environment and performing a single gradient step based on a batch of size Q = 64 sub-episodes of length P from the replay buffer, with a recency weight of β = 0.001 on the sampling distribution of the replay buffer. 
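Returning to the λ line search described above, the following is a rough sketch under the stated simplifying assumptions (undiscounted, deterministic dynamics, τ = 0), where the optimal trajectory distribution is proportional to the prior policy's trajectory distribution reweighted by exp(R/λ). The bisection bounds and iteration count are arbitrary assumptions; in a full implementation this would be re-run periodically over the most recent episodes (the text uses the last 100).

```python
import numpy as np

def estimate_kl(episode_rewards, lam):
    """Monte-Carlo estimate of KL(pi* || pi~) from total rewards of episodes sampled
    from the prior policy, assuming pi*(traj) is proportional to pi~(traj) * exp(R / lam)."""
    x = np.asarray(episode_rewards, dtype=np.float64) / lam
    y = x - x.max()                        # shift for numerical stability (cancels exactly)
    w = np.exp(y) / np.exp(y).sum()        # self-normalized importance weights
    return float(np.sum(w * y) - np.log(np.mean(np.exp(y))))

def solve_lambda(episode_rewards, episode_lengths, eps, lam_min=1e-6, lam_max=1e6, iters=60):
    """Find lam whose implied divergence matches eps scaled by the average episode length,
    exploiting that the estimated KL decreases monotonically as lam grows."""
    target = eps * float(np.mean(episode_lengths))
    lo, hi = lam_min, lam_max
    for _ in range(iters):
        mid = float(np.sqrt(lo * hi))      # bisect in log-space
        if estimate_kl(episode_rewards, mid) > target:
            lo = mid                       # divergence still too large: increase lam
        else:
            hi = mid
    return float(np.sqrt(lo * hi))
```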
To maintain stability we use α = 0.99 and we modified the loss from squared loss to Huber loss on the consistency error. Since our policy is parameterized by a unimodal Gaussian, it is impossible for it to satisfy all path consistencies, and so we found this crucial for stability. For each of the variants and for each environment, we performed a hyperparameter search to find the best hyperparameters. The plots presented here show the reward achieved during training on the best hyperparameters averaged over the best 4 seeds of 5 randomly seeded training runs. Note that this reward is based on greedy actions (rather than random sampling).Experiments were performed using Tensorflow BID0. Although each training step of Trust-PCL (a simple gradient step) is considerably faster than TRPO, we found that this does not have an overall effect on the run time of our implementation, due to a combination of the fact that each environment step is used in multiple training steps of Trust-PCL and that a majority of the run time is spent interacting with the environment. A detailed description of our implementation and hyperparameter search is available in the Appendix. We present the reward over training of Trust-PCL and TRPO in FIG0. We find that Trust-PCL can match or beat the performance of TRPO across all environments in terms of both final reward and sample efficiency. These are especially significant on the harder tasks (Walker2d and Ant). We additionally present our compared to other published in Table 1. We find that even when comparing across different implementations, Trust-PCL can match or beat the state-of-the-art. The most important hyperparameter in our method is, which determines the size of the trust region and thus has a critical role in the stability of the algorithm. To showcase this effect, we present the reward during training for several different values of in FIG1. As increases, instability increases as well, eventually having an adverse effect on the agent's ability to achieve optimal reward. The of Trust-PCL against a TRPO baseline. Each plot shows average greedy reward with single standard deviation error intervals capped at the min and max across 4 best of 5 randomly seeded training runs after choosing best hyperparameters. The x-axis shows millions of environment steps. We observe that Trust-PCL is consistently able to match and, in many cases, beat TRPO's performance both in terms of reward and sample efficiency. The of Trust-PCL across several values of, defining the size of the trust region. Each plot shows average greedy reward across 4 best of 5 randomly seeded training runs after choosing best hyperparameters. The x-axis shows millions of environment steps. We observe that instability increases with, thus concluding that the use of trust region is crucial. Note that standard PCL BID21 corresponds to → ∞ (that is, λ = 0). Therefore, standard PCL would fail in these environments, and the use of trust region is crucial. The main advantage of Trust-PCL over existing trust region methods for continuous control is its ability to learn in an off-policy manner. The degree to which Trust-PCL is off-policy is determined by a combination of the hyparparameters α, β, and P. To evaluate the importance of training off-policy, we evaluate Trust-PCL with a hyperparameter setting that is more on-policy. We set α = 0.95, β = 0.1, and P = 1, 000. In this setting, we also use large batches of Q = 25 episodes of length P (a total of 25, 000 environment steps per batch). 
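As a small aside on the Huber-loss modification mentioned at the start of this passage, the squared batch loss from the earlier sketch could be swapped for a Huber penalty on the consistency errors; the threshold value here is an assumption, not one reported in the text.

```python
import torch
import torch.nn.functional as F

def huber_consistency_loss(errors, delta=1.0):
    # Huber loss on the per-sub-trajectory consistency errors (target is zero);
    # quadratic near zero and linear for large errors, which damps the effect of
    # sub-trajectories whose consistencies a unimodal Gaussian policy cannot satisfy.
    return F.huber_loss(errors, torch.zeros_like(errors), delta=delta)
```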
Figure 3 shows the results of Trust-PCL with our original parameters and this new setting. We note a dramatic advantage in sample efficiency when using off-policy training. Although Trust-PCL (on-policy) can achieve state-of-the-art reward performance, it requires an exorbitant amount of experience. On the other hand, Trust-PCL (off-policy) can be competitive in terms of reward while providing a significant improvement in sample efficiency. Figure 3: The results of Trust-PCL when varying the degree of on/off-policy training. We see that Trust-PCL (on-policy) behaves similarly to TRPO, achieving good final reward but requiring an exorbitant amount of experience collection. When collecting less experience per training step, Trust-PCL (off-policy) improves sample efficiency while still achieving a competitive final reward. The external results in Table 1 (e.g., BID9) are each from different setups with different hyperparameter searches and in some cases different evaluation protocols (e.g., TRPO (rllab) and IPG were run with a simple linear value network instead of the two-hidden-layer network we use). Thus, it is not possible to make any definitive claims based on this data. However, we do conclude that our results are overall competitive with state-of-the-art external implementations. One last hyperparameter is τ, determining the degree of exploration. Anecdotally, we found τ not to be of high importance for the tasks we evaluated. Indeed, many of our best results use τ = 0. Including τ > 0 had a marginal effect, at best. The reason for this is likely due to the tasks themselves; other works which focus on exploration in continuous control have found the need to propose exploration-advantageous variants of these standard benchmarks BID10. We have presented Trust-PCL, an off-policy algorithm employing a relative-entropy penalty to impose a trust region on a maximum reward objective. We found that Trust-PCL can perform well on a set of standard control tasks, improving upon TRPO both in terms of average reward and sample efficiency. Our best results with Trust-PCL are able to maintain the stability and solution quality of TRPO while approaching the sample efficiency of value-based methods (see e.g., BID18). This gives hope that the goal of achieving both stability and sample efficiency without trading off one for the other is attainable in a single unifying RL algorithm. We thank Matthew Johnson, Luke Metz, Shane Gu, and the Google Brain team for insightful comments and discussions. We have already highlighted the ability of Trust-PCL to use off-policy data to stably train both a parameterized policy and value estimate, which sets it apart from previous methods. We have also noted the ease with which exploration can be incorporated through the entropy regularizer. We elaborate on several additional benefits of Trust-PCL. Compared to TRPO, Trust-PCL is much easier to implement. Standard TRPO implementations perform second-order gradient calculations on the KL-divergence to construct a Fisher information matrix (more specifically, a vector product with the inverse Fisher information matrix). This yields a vector direction for which a line search is subsequently employed to find the optimal step. Compare this to Trust-PCL, which employs simple gradient descent. This makes implementation much more straightforward and easily realizable within standard deep learning frameworks.
Even if one replaces the constraint on the average KL-divergence of TRPO with a simple regularization penalty (as in proximal policy gradient methods BID31 BID35), optimizing the ing objective requires computing the gradient of the KL-divergence. In Trust-PCL, there is no such necessity. The per-state KL-divergence need not have an analytically computable gradient. In fact, the KL-divergence need not have a closed form at all. The only requirement of Trust-PCL is that the log-density be analytically computable. This opens up the possible policy parameterizations to a much wider class of functions. While continuous control has traditionally used policies parameterized by unimodal Gaussians, with Trust-PCL the policy can be replaced with something much more expressive-for example, mixtures of Gaussians or autoregressive policies as in BID18.We have yet to fully explore these additional benefits in this work, but we hope that future investigations can exploit the flexibility and ease of implementation of Trust-PCL to further the progress of RL in continuous control environments. We describe in detail the experimental setup regarding implementation and hyperparameter search. In Acrobot, episodes were cut-off at step 500. For the remaining environments, episodes were cutoff at step 1, 000.Acrobot, HalfCheetah, and Swimmer are all non-terminating environments. Thus, for these environments, each episode had equal length and each batch contained the same number of episodes. Hopper, Walker2d, and Ant are environments that can terminate the agent. Thus, for these environments, the batch size throughout training remained constant in terms of steps but not in terms of episodes. There exists an additional common MuJoCo task called Humanoid. We found that neither our implementation of TRPO nor Trust-PCL could make more than negligible headway on this task, and so omit it from the . We are aware that TRPO with the addition of GAE and enough finetuning can be made to achieve good on Humanoid. We decided to not pursue a GAE implementation to keep a fair comparison between variants. Trust-PCL can also be made to incorporate an analogue to GAE (by maintaining consistencies at varying time scales), but we leave this to future work. We use fully-connected feed-forward neural networks to represent both policy and value. The policy π θ is represented by a neural network with two hidden layers of dimension 64 with tanh activations. At time step t, the network is given the observation s t. It produces a vector µ t, which is combined with a learnable (but t-agnostic) parameter ξ to parametrize a unimodal Gaussian with mean µ t and standard deviation exp(ξ). The next action a t is sampled randomly from this Gaussian. The value network V φ is represented by a neural network with two hidden layers of dimension 64 with tanh activations. At time step t the network is given the observation s t and the component-wise squared observation s t s t. It produces a single scalar value. At each training iteration, both the policy and value parameters are updated. The policy is trained by performing a trust region step according to the procedure described in BID28.The value parameters at each step are solved using an LBFGS optimizer. To avoid instability, the value parameters are solved to fit a mixture of the empirical values and the expected values. That is, we determine φ to minimize s∈batch (V φ (s) − κVφ(s) − (1 − κ)Vφ(s)) 2, where againφ is the previous value parameterization. We use κ = 0.9. 
This method for training φ is according to that used in. At each training iteration, both the policy and value parameters are updated. The specific updates are slightly different between Trust-PCL (on-policy) and Trust-PCL (off-policy).For Trust-PCL (on-policy), the policy is trained by taking a single gradient step using the Adam optimizer BID14 with learning rate 0.001. The value network update is inspired by that used in TRPO we perform 5 gradients steps with learning rate 0.001, calculated with regards to a mix between the empirical values and the expected values according to the previousφ. We use κ = 0.95.For Trust-PCL (off-policy), both the policy and value parameters are updated in a single step using the Adam optimizer with learning rate 0.0001. For this variant, we also utilize a target value network (lagged at the same rate as the target policy network) to replace the value estimate at the final state for each path. We do not mix between empirical and expected values. We found the most crucial hyperparameters for effective learning in both TRPO and Trust-PCL to be (the constraint defining the size of the trust region) and d (the rollout determining how to evaluate the empirical value of a state). For TRPO we performed a grid search over ∈ {0.01, 0.02, 0.05, 0.1}, d ∈ {10, 50}. For Trust-PCL we performed a grid search over ∈ {0.001, 0.002, 0.005, 0.01}, d ∈ {10, 50}. For Trust-PCL we also experimented with the value of τ, either keeping it at a constant 0 (thus, no exploration) or decaying it from 0.1 to 0.0 by a smoothed exponential rate of 0.1 every 2,500 training iterations. We fix the discount to γ = 0.995 for all environments. A simplified pseudocode for Trust-PCL is presented in Algorithm 1. Input: Environment EN V, trust region constraint, learning rates η π, η v, discount factor γ, rollout d, batch size Q, collect steps per train step P, number of training steps N, replay buffer RB with exponential lag β, lag on prior policy α.function Gradients({s // Update auxiliary variables Updateθ = αθ + (1 − α)θ. Update λ in terms of according to Section 4.3. end for
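Since the pseudocode for Algorithm 1 above did not survive extraction intact, here is a rough Python reconstruction of the loop it describes, reusing the loss, prior-update, and λ-search sketches from earlier. The helpers `collect_steps`, the replay-buffer methods, and `policy.prior_parameters()` are hypothetical placeholder names, not the API of any released implementation; the optimizer is assumed to hold both the policy and value parameters.

```python
def train_trust_pcl(env, policy, replay_buffer, optimizer, eps,
                    P=10, Q=64, num_steps=1_000_000, alpha=0.99, beta=0.001,
                    gamma=0.995, tau=0.0):
    """Sketch of the Trust-PCL loop: collect a little experience, take one gradient step
    on the consistency loss, lag the prior policy, and periodically re-solve for lambda."""
    lam = 1.0
    for step in range(num_steps):
        # 1. Collect P environment steps with the current policy; store them with a
        #    recency priority equal to the current training step.
        episodes = collect_steps(env, policy, num_steps=P)
        replay_buffer.add(episodes, priority=step)

        # 2. Sample Q sub-episodes with probability proportional to exp(beta * priority).
        batch = replay_buffer.sample(Q, temperature=beta)

        # 3. Single gradient step on theta and phi to reduce the consistency loss.
        loss = trust_pcl_loss(batch, gamma=gamma, tau=tau, lam=lam)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # 4. Lag the prior policy and refresh lambda from the most recent episodes.
        update_prior(policy.prior_parameters(), policy.parameters(), alpha=alpha)
        if step % 100 == 0:
            rewards, lengths = replay_buffer.recent_episode_stats(n=100)
            lam = solve_lambda(rewards, lengths, eps)
```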
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HyrCWeWCb
We extend recent insights related to softmax consistency to achieve state-of-the-art results in continuous control.
Deep reinforcement learning has achieved many recent successes, but our understanding of its strengths and limitations is hampered by the lack of rich environments in which we can fully characterize optimal behavior, and correspondingly diagnose individual actions against such a characterization. Here we consider a family of combinatorial games, arising from work of Erdos, Selfridge, and Spencer, and we propose their use as environments for evaluating and comparing different approaches to reinforcement learning. These games have a number of appealing features: they are challenging for current learning approaches, but they form (i) a low-dimensional, simply parametrized environment where (ii) there is a linear closed form solution for optimal behavior from any state, and (iii) the difficulty of the game can be tuned by changing environment parameters in an interpretable way. We use these Erdos-Selfridge-Spencer games not only to compare different algorithms, but also to compare approaches based on supervised and reinforcement learning, to analyze the power of multi-agent approaches in improving performance, and to evaluate generalization to environments outside the training set. Deep reinforcement learning has seen many remarkable successes over the past few years BID5 BID9. But developing learning algorithms that are robust across tasks and policy representations remains a challenge. Standard benchmarks like MuJoCo and Atari provide rich settings for experimentation, but the specifics of the underlying environments differ from each other in many different ways, and hence determining the principles underlying any particular form of sub-optimal behavior is difficult. Optimal behavior in these environments is generally complex and not fully characterized, so algorithmic success is generally associated with high scores, making it hard to analyze where errors are occurring in any sort of fine-grained sense. An ideal setting for studying the strengths and limitations of reinforcement learning algorithms would be (i) a simply parametrized family of environments where (ii) optimal behavior can be completely characterized, (iii) the inherent difficulty of computing optimal behavior is tightly controlled by the underlying parameters, and (iv) at least some portions of the parameter space produce environments that are hard for current algorithms. To produce such a family of environments, we look in a novel direction -to a set of two-player combinatorial games with their roots in work of Erdos and Selfridge BID3, and placed on a general footing by BID10. Roughly speaking, these Erdos-Selfridge-Spencer (ESS) games are games in which two players take turns selecting objects from some combinatorial structure, with the feature that optimal strategies can be defined by potential functions derived from conditional expectations over random future play. These ESS games thus provide an opportunity to capture the general desiderata noted above, with a clean characterization of optimal behavior and a set of instances that range from easy to very hard as we sweep over a simple set of tunable parameters. We focus in particular on one of the best-known games in this genre, Spencer's attacker-defender game (also known as the "tenure game"; BID10, in which -roughly speaking -an attacker advances a set of pieces up the levels of a board, while a defender destroys subsets of these pieces to try prevent any of them from reaching the final level ( FIG0). An instance of the game can be parametrized by two key quantities. 
The first is the number of levels K, which determines both the size of the state space and the approximate length of the game; the latter is directly related to the sparsity of win/loss signals as rewards. The second quantity is a potential function φ, whose magnitude characterizes whether the instance favors the defender or attacker, and how much "margin of error" there is in optimal play. The environment therefore allows us to study learning by the defender, or by the attacker, or in a multi-agent formulation where the defender and attacker are learning concurrently. Because we have a move-by-move characterization of optimal play, we can go beyond simple measures of reward based purely on win/loss outcomes and use supervised learning techniques to pinpoint the exact location of the errors in a trajectory of play. In the process, we are able to develop insights about the robustness of solutions to changes in the environment. These types of analyses have been long-standing goals, but they have generally been approached much more abstractly, given the difficulty in characterizing step-by-step optimally in non-trivial environments such as this one. The main contributions of this work are thus the following:1. The development of these combinatorial games as environments for studying the behavior of reinforcement learning algorithms, with sensitive control over the difficulty of individual instances using a small set of natural parameters.2. A comparison of the performance of an agent trained using deep RL to the performance of an agent trained using supervised learning on move-by-move decisions. Exploiting the fact that we can characterize optimal play at the level of individual moves, we find an intriguing phenomenon: while the supervised learning agent is, not surprisingly, more accurate on individual move decisions than the deep RL agent, the deep RL agent is better at playing the game! We further interpret this by studying fatal mistakes.3. An investigation of the way in which the success of one of the two players (defender or attacker) in training turns out to depend crucially on the algorithm being used to implement the other player. We explore properties of this other player's algorithm, and also properties of mulitagent learning, that lead to more robust policies with better generalization. This is a largely empirical paper, building on a theoretically grounded environment derived from a combinatorial game. We present learning and generalization experiments for a variety of commonly used model architectures and learning algorithms. We aim to show that despite the simple structure of the game, it provides both significant challenges for standard reinforcement learning approaches and a number of tools for precisely understanding those challenges. We first introduce the family of Attacker-Defender Games BID10, a set of games with two properties that yield a particularly attractive testbed for deep reinforcement learning: the ability to continuously vary the difficulty of the environment through two parameters, and the existence of a closed form solution that is expressible as a linear model. An Attacker-Defender game involves two players: an attacker who moves pieces, and a defender who destroys pieces. An instance of the game has a set of levels numbered from 0 to K, and N pieces that are initialized across these levels. The attacker's goal is to get at least one of their pieces to level K, and the defender's is to destroy all N pieces before this can happen. 
In each turn, the attacker proposes a partition A, B of the pieces still in play. The defender then chooses one of the sets to destroy and remove from play. All pieces in the other set are moved up a level. The game ends when either one or more pieces reach level K, or when all pieces are destroyed. FIG0 shows one turn of play. With this setup, varying the number of levels K or the number of pieces N changes the difficulty for the attacker or the defender. One of the most striking aspects of the Attacker-Defender game is that it is possible to make this tradeoff precise, and en route to doing so, also identify a linear optimal policy. We start with a simple special case -rather than initializing the board with pieces placed arbitrarily, we require the pieces to all start at level 0. In this special case, we can directly think of the game's difficulty in terms of the number of levels K and the number of pieces N. Theorem 1. Consider an instance of the Attacker-Defender game with K levels and N pieces, with all N pieces starting at level 0. Then if N < 2 K, the defender can always win. There is a simple proof of this fact: the defender simply always destroys the larger one of the sets A or B. In this way, the number of pieces is reduced by at least a factor of two in each step; since a piece must travel K steps in order to reach level K, and N < 2 K, no piece will reach level K.When we move to the more general case in which the board is initialized at the start of the game with pieces placed at arbitrary levels, it will be less immediately clear how to define the "larger" one of the sets A or B. We therefore give a second proof of Theorem 1 that will be useful in these more general settings. This second proof BID10 ) uses Erdos's probabilistic method and proceeds as follows: for any attacker strategy, assume the defender plays randomly. Let T be a random variable for the number of pieces that reach level K. Then T = T i where T i is the indicator that piece i reaches level K. DISPLAYFORM0 as the defender is playing randomly, any piece has probability 1/2 of advancing a level and 1/2 of being destroyed. As all the pieces start at level 0, they must advance K levels to reach the top, which happens with probability 2 −K. But now, by choice of N, we have that i 2 −K = N 2 −K < 1. Since T is an integer random variable, E [T] < 1 implies that the distribution of T has nonzero mass at 0 -in other words there is some set of choices for the defender that guarantees destroying all pieces. This means that the attacker does not have a strategy that wins with probability 1 against random play by the defender; since the game has the property that one player or the other must be able to force a win, it follows that the defender can force a win. Now consider the general form of the game, in which the initial configuration can have pieces at arbitrary levels. Thus, at any point in time, the state of the game can be described by a K-dimensional vector S = (n 0, n 1, ..., n K), with n i the number of pieces at level i. Extending the argument used in the second proof above, we note that a piece at level l has a 2 DISPLAYFORM1 chance of survival under random play. This motivates the following potential function on states: Definition 1. Potential Function: Given a game state S = (n 0, n 1, ..., n K), we define the potential of the state as DISPLAYFORM2 Note that this is a linear function on the input state, expressible as φ(S) = w T · S for w a vector with w l = 2 −(K−l). 
We can now state the following generalization of Theorem 1.Theorem 2. Consider an instance of the Attacker-Defender game that has K levels and N pieces, with pieces placed anywhere on the board, and let the initial state be S 0. Then (a) If φ(S 0) < 1, the defender can always win (b) If φ(S 0) ≥ 1, the attacker can always win. One way to prove part (a) of this theorem is by directly extending the proof of Theorem 1, with This definition of the potential function gives a natural, concrete strategy for the defender: the defender simply destroys whichever of A or B has higher potential. We claim that if φ(S 0) < 1, then this strategy guarantees that any subsequent state S will also have φ(S) < 1. Indeed, suppose (renaming the sets if necessary) that A has a potential at least as high as B's, and that A is the set destroyed by the defender. Since φ(B) ≤ φ(A) and φ(A) + φ(B) = φ(S) < 1, the next state has potential 2φ(B) (double the potential of B as all pieces move up a level) which is also less than 1. In order to win, the attacker would need to place a piece on level K, which would produce a set of potential at least 1. Since all sets under the defender's strategy have potential strictly less than 1, it follows that no piece ever reaches level K. DISPLAYFORM3 If φ(S 0) ≥ 1, we can devise a similar optimal strategy for the attacker. The attacker picks two sets A, B such that each has potential ≥ 1/2. The fact that this can be done is shown in Theorem 3, and in BID10. Then regardless of which of A, B is destroyed, the other, whose pieces all move up a level, doubles its potential, and thus all subsequent states S maintain φ(S) ≥ 1, ing in an eventual win for the attacker. The Atari benchmark BID5 is a well known set of tasks, ranging from easy to solve (Breakout, Pong) to very difficult (Montezuma's Revenge). BID2 proposed a set of continuous environments, implemented in the MuJoCo simulator BID13. An advantage of physics based environments is that they can be varied continuously by changing physics parameters BID7, or by randomizing rendering BID12. Deepmind Lab BID0 ) is a set of 3D navigation based environments. OpenAI Gym BID1 contains both the Atari and MuJoCo benchmarks, as well as classic control environments like Cartpole BID11 and algorithmic tasks like copying an input sequence. The difficulty of algorithmic tasks can be easily increased by increasing the length of the input. Our proposed benchmark merges properties of both the algorithmic tasks and physics-based tasks, letting us increase difficulty by discrete changes in length or continuous changes in potential. From Section 2, we see that the Attacker-Defender games are a family of environments with a difficulty knob that can be continuously adjusted through the start state potential φ(S 0) and the number of levels K. In this section, we describe a set of baseline on Attacker-Defender games that motivate the exploration in the remainder of this paper. We set up the Attacker-Defender environment as follows: the game state is represented by a K + 1 dimensional vector for levels 0 to K, with coordinate l representing the number of pieces at level l. For the defender agent, the input is the concatenation of the partition A, B, giving a 2(K + 1) dimensional vector. The game start state S 0 is initialized randomly from a distribution over start states of a certain potential. 
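The state representation and the potential-based optimal defender described above are simple enough to write down directly; a minimal sketch follows (the action encoding, 0 for destroying A and 1 for destroying B, is an arbitrary convention).

```python
import numpy as np

def potential(state, K):
    """phi(S) = sum_l n_l * 2^{-(K - l)} for a state vector of piece counts over levels 0..K."""
    return float(np.dot(state, 2.0 ** (np.arange(K + 1) - K)))

def defender_observation(A, B):
    """The 2(K+1)-dimensional defender input: the two proposed sets, concatenated."""
    return np.concatenate([A, B]).astype(np.float32)

def optimal_defender_action(A, B, K):
    """Theorem 2's strategy: always destroy the set with the higher potential."""
    return 0 if potential(A, K) >= potential(B, K) else 1
```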
We first look at training a defender agent against an attacker that randomly chooses between (mostly) playing optimally, and (occasionally) playing suboptimally, with the Disjoint Support Strategy. This strategy unevenly partitions the occupied levels between A, B so that one set has higher potential than the other, with the proportional difference between the two sets being sampled randomly. Note that this strategy gives rise to very different states A, B (uneven potential, disjoint occupied levels) than the optimal strategy, and we find that the model learns a much more generalizable policy when mixing between the two (Section 6).When testing out reinforcement learning, we have two choices of difficulty parameters. The potential of the start state, φ(S 0), changes how optimally the defender has to play, with values close to 1 giving much less leeway for mistakes in valuing the two sets. Changing K, the number of levels, directly affects the sparsity of the reward, with higher K ing in longer games and less feedback. Additionally, K also greatly increases the number of possible states and game trajectories (see Theorem 4). theoretically expressive enough to learn the optimal policy for the defender agent. In practice, we see that for many difficulty settings and algorithms, RL struggles to learn the optimal policy and performs more poorly than when using deeper models (compare to Figure 3). An exception to this is DQN which performs relatively well on all difficulty settings. Recall that the optimal policy can be expressed as a linear network, with the weights given by the potential function, Definition 1. We therefore first try training a linear model for the defender agent. We evaluate Proximal Policy Optimization (PPO), Advantage Actor Critic (A2C) BID6, and Deep Q-Networks (DQN) BID5, using the OpenAI Baselines implementations BID4. Both PPO and A2C find it challenging to learn the harder difficulty settings of the game, and perform better with deeper networks FIG2 ). DQN performs surprisingly well, but we see some improvement in performance variance with a deeper model. In summary, while the policy can theoretically be expressed with a linear model, empirically we see gains in performance and a reduction in variance when using deeper networks (c.f. Figures 3, 4.) Having evaluated the performance of linear models, we try a deeper model for our policy net: a fully connected neural network with two hidden layers of width 300. (Hyperparameters were chosen without extensive tuning and by trying a few different possible settings. We found that two hidden layers generally performed best and the width of the network did not have much effect on the resutls.) Identically to above, we evaluate PPO, A2C and DQN on varying start state potentials and K. Each algorithm is run with 3 random seeds, and in all plots we show minimum, mean, and maximum performance. of potential (top and bottom row) with a deep network. All three algorithms show a noticeable variation in performance over different difficulty settings, though we note that PPO seems to be more robust to larger K (which corresponds to longer episodes). A2C tends to fare worse than both PPO and DQN.. Unsurprisingly, we see that supervised learning is better on average at getting the ground truth correct move. However, RL is better at playing the game: a policy trained through RL significantly outperforms a policy trained through supervised learning (right pane), with the difference growing for larger K. 
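For reference, the two defender policy parameterizations compared above amount to the following sketch; the choice of ReLU activations in the deeper network is an assumption, since the text only specifies two hidden layers of width 300.

```python
import torch.nn as nn

def linear_defender_policy(K):
    """A single linear map from the 2(K+1)-dimensional observation to two logits
    (destroy A vs. destroy B); this suffices to represent the optimal potential-based rule,
    since the logit difference can equal phi(A) - phi(B)."""
    return nn.Linear(2 * (K + 1), 2)

def mlp_defender_policy(K, width=300):
    """The deeper variant described above: two hidden layers of width 300."""
    return nn.Sequential(
        nn.Linear(2 * (K + 1), width), nn.ReLU(),
        nn.Linear(width, width), nn.ReLU(),
        nn.Linear(width, 2),
    )
```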
One remarkable aspect of the Attacker-Defender game is that not only do we have an easily expressible optimal policy, but we also know the ground truth on a per-move basis. We can thus compare RL to a supervised learning setup, where we classify the correct action on a large set of sampled states. To carry out this test in practice, we first train a defender policy with reinforcement learning, saving all observations seen to a dataset. We then train a supervised network (with the same architecture as the defender policy) to classify the optimal action. This ensures both methods see exactly the same data points. We then test the supervised network on how well it can play. The results, shown in FIG4, are counterintuitive. Supervised learning (unsurprisingly) has a higher proportion of correct moves: keeping count of the ground-truth correct move for each turn in the game, the trained supervised policy network has a higher proportion of ground-truth correct moves in play. However, despite this, reinforcement learning is better at playing the game, winning a larger proportion of games. These results are shown in FIG4 for varying choices of K. We conjecture that reinforcement learning is learning to focus most on moves that matter for winning. To investigate this conjecture, we perform two further experiments. Define a fatal mistake to be when the defender moves from a winning state (potential < 1) to a losing state (potential > 1) due to an incorrect move. We count the number of fatal mistakes made by the trained supervised policy and the trained RL policy. The results are shown in the left pane of FIG5. We see that supervised learning is much more prone to making fatal mistakes, with a sharp increase in fatal mistakes for larger K, supporting its sharp decrease in performance. We also look at where mistakes are made by RL and supervised learning based on the distance of the move from the end of the game. We find that RL is better at the final couple of moves, and then consistently better in most of the earlier parts of the game. This contrast forms an interesting counterpart to recent findings of BID9, who in the context of Go also compared reinforcement learning to supervised approaches. A key distinction is that their supervised work was relative to a heuristic objective, whereas in our domain we are able to compare to provably optimal play. Returning to our RL defender agent, we would like to know how robust its learned policy is. In particular, as we have so far been training our agent with a randomized but hard-coded attacker, we would like to test how sensitive a defender agent is to the particular attacker strategy. We investigate this in FIG7, where we first train a defender agent on the optimal attacker and test on the disjoint support attacker, and we notice a large drop in performance. FIG5: (left pane) number of fatal mistakes (moving from a winning state, potential less than 1.0, to a losing state, potential greater than 1.0) made by supervised learning compared to RL. We find that supervised learning makes many more fatal mistakes, explaining its collapse in performance. (right pane) plot showing when (measured as distance to the end of the game) RL and supervised learning make mistakes. RL is more accurate than supervised learning at predicting the right action for the final couple of moves, and then drops quickly to a constant, whereas supervised learning is less accurate right at the very end and drops more slowly but much further, having lower accuracy than RL for many of the earlier moves.
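The fatal-mistake analysis defined above is also straightforward to reproduce from the potential function; a minimal sketch follows (the exact bookkeeping used for the reported figures may differ).

```python
import numpy as np

def count_fatal_mistakes(states_after_defender_moves, K):
    """Count transitions from a winning position (potential < 1) to a losing one
    (potential >= 1, the boundary from Theorem 2) along one game's defender moves."""
    phi = lambda s: float(np.dot(s, 2.0 ** (np.arange(K + 1) - K)))
    potentials = [phi(s) for s in states_after_defender_moves]
    return sum(1 for p0, p1 in zip(potentials[:-1], potentials[1:]) if p0 < 1.0 <= p1)
```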
and then tested on (a) another optimal attacker environment (b) the disjoint support attacker environment. The left pane shows the ing performance drop when switching to testing on the same opponent strategy as in training to a different opponent strategy. The right pane shows the of testing on an optimal attacker vs a disjoint support attacker during training. We see that performance on the disjoint support attacker converges to a significantly lower level than the optimal attacker. Figure 8: Performance of PPO and A2C on training the attacker agent for different difficulty settings. DQN performance was very poor (reward < −0.8 at K = 5 with best hyperparams). We see much greater variation of performance with changing K, which now affects the sparseness of the reward as well as the size of the action space. There is less variation with potential, but we see a very high performance variance (top right pane) with lower (harder) potentials. the disjoint support attacker. As we know there exists an optimal policy which generalizes perfectly across all attacker strategies, this suggests that the defender is overfitting to the particular attacker strategy. One way to mitigate this overfitting issue is to set up a method of also training the attacker, with the goal of training the defender against a learned attacker, or even better, in the multiagent setting. However, determining the correct setup to train the attacker agent first requires devising a tractable parametrization of the action space. A naive implementation of the attacker would be to have the policy output how many pieces should be allocated to A for each of the K + 1 levels (as described in BID10). This can grow exponentially in K, which is clearly impractical. To address this, we first prove a theorem that enables us to show that we can parametrize an optimal attacker with a much smaller action space. Theorem 3. For any Attacker-Defender game with K levels, start state S 0 and φ(S 0) ≥ 1, there exists a partition A, B such that φ(A) ≥ 0.5, φ(B) ≥ 0.5, and for some l, A contains pieces of level i > l, and B contains all pieces of level i < l. Proof. For each l ∈ {0, 1, . . ., K}, let A l be the set of all pieces from levels K down to and excluding level l, with A K = ∅. We have φ(A i+1) ≤ φ(A i), φ(A K) = 0 and φ(A 0) = φ(S 0) ≥ 1. Thus, there exists an l such that φ(A l) < 0.5 and φ(A l−1) ≥ 0.5. If φ(A l−1) = 0.5, we set A l−1 = A and B the complement, and are done. So assume φ(A l) < 0.5 and φ(A l−1) > 0.5Since A l−1 only contains pieces from levels K to l, potentials φ(A l) and φ(A l−1) are both integer multiples of 2 −(K−l), the value of a piece in level l. Letting φ(A l) = n · 2 −(K−l) and φ(A l−1) = m · 2 −(K−l), we are guaranteed that level l has m − n pieces, and that we can move k < m − n pieces from A l−1 to A l such that the potential of the new set equals 0.5. Figure 9: Performance of attacker and defender agents when learning in a multiagent setting. In the top panes, solid lines denote attacker performance. In the bottom panes, solid lines are defender performance. The sharp changes in performance correspond to the times we switch which agent is training. We note that the defender performs much better in the multiagent setting: comparing the top and bottom left panes, we see far more variance and lower performance of the attacker compared to the defender performance below. Furthermore, the attacker loses to the defender for potential 1.1 at K = 15, despite winning against the optimal defender in Figure 8. 
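To make the action space implied by Theorem 3 concrete (its use as an attacker parametrization is described in the next paragraph), here is a small sketch of the level-threshold partition: the attacker only outputs a level l, pieces above l form one set, pieces below l form the other, and level l itself is split so the two potentials end up as close as possible. Which side is labelled A versus B is immaterial.

```python
import numpy as np

def split_at_level(state, l, K):
    """Partition implied by Theorem 3 for a state vector of piece counts over levels 0..K."""
    state = np.asarray(state)
    phi = lambda s: float(np.dot(s, 2.0 ** (np.arange(K + 1) - K)))   # potential of a set
    A, B = np.zeros_like(state), np.zeros_like(state)
    A[l + 1:] = state[l + 1:]                  # all pieces above level l
    B[:l] = state[:l]                          # all pieces below level l
    unit = 2.0 ** (l - K)                      # value of a single piece at level l
    for _ in range(int(state[l])):             # assign level-l pieces one by one,
        # placing each on whichever side keeps the potential gap smaller
        if abs(phi(A) + unit - phi(B)) <= abs(phi(A) - phi(B) - unit):
            A[l] += 1
        else:
            B[l] += 1
    return A, B
```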
We also see (right panes) that the attacker has higher variance and sharper changes in its performance even under conditions when it is guaranteed to win. This theorem gives a different attacker parametrization. The attacker outputs a level l. The environment assigns all pieces before level l to A, all pieces after level l to B, and splits level l among A and B to keep the potentials of A and B as close as possible. Theorem 3 guarantees the optimal policy is representable, and the action space linear in K instead of exponential in K.With this setup, we train an attacker agent against the optimal defender with PPO, A2C, and DQN. The DQN were very poor, and so we show for just PPO and A2C. In both algorithms we found there was a large variation in performance when changing K, which now affects both reward sparsity and action space size. We observe less outright performance variability with changes in potential for small K but see an increase in the variance (Figure 8). With this attacker training, we can now look at learning in a multiagent setting. We first explore the effects of varying the potential and K as shown in Figure 9. Overall, we find that the attacker fares worse in multiagent play than in the single agent setting. In particular, note that in the top left pane of Figure 9, we see that the attacker loses to the defender even with φ(S 0) = 1.1 for K = 15. We can compare this to Figure 8 where with PPO, we see that with K = 15, and potential 1.1, the single agent attacker succeeds in winning against the optimal defender. defender. The figure single agent defender trained on the optimal attacker and then tested on the disjoint support attacker and a multiagent defender also tested on the disjoint support attacker for different values of K. We see that multiagent defender generalizes better to this unseen strategy than the single agent defender. Finally, we return again to our defender agent, and test generalization between the single and multiagent settings. We train a defender agent in the single agent setting against the optimal attacker, and test on a an attacker that only uses the Disjoint Support strategy. We also test a defender trained in the multiagent setting (which has never seen any hardcoded strategy of this form) on the Disjoint Support attacker. The are shown in FIG0. We find that the defender trained as part of a multiagent setting generalizes noticeably better than the single agent defender. We show the over 8 random seeds and plot the mean (solid line) and shade in the standard deviation. In this paper, we have proposed Erdos-Selfridge-Spencer games as rich environments for investigating reinforcement learning, exhibiting continuously tunable difficulty and an exact combinatorial characterization of optimal behavior. We have demonstrated that algorithms can exhibit wide variation in performance as we tune the game's difficulty, and we use the characterization of optimal behavior to expose intriguing contrasts between performance in supervised learning and reinforcement learning approaches. Having reformulated the to enable a trainable attacker, we have also been able to explore insights on overfitting, generalization, and multiagent learning. We also develop further in the Appendix, including an analysis of catastrophic forgetting, generalization across different values of the game's parameters, and a method for investigating measures of the model's confidence. 
We believe that this family of combinatorial games can be used as a rich environment for gaining further insights into deep reinforcement learning. On the left we train on different potentials and test on potential 0.99. We find that training on harder games leads to better performance, with the agent trained on the easiest potential generalizing worst and the agent trained on a harder potential generalizing best. This is consistent across different choices of test potentials. The right pane shows the effect of training on a larger K and testing on smaller K. We see that performance appears to be inversely proportional to the difference between the train K and test K. In the main text we examined how our RL defender agent performance varies as we change the difficulty settings of the game, either the potential or K. Returning again to the fact that the AttackerDefender game has an expressible optimal that generalizes across all difficulty settings, we might wonder how training on one difficulty setting and testing on a different setting perform. Testing on different potentials in this way is straightforwards, but testing on different K requires a slight reformulation. our input size to the defender neural network policy is 2(K + 1), and so naively changing to a different number of levels will not work. Furthermore, training on a smaller K and testing on larger K is not a fair test -the model cannot be expected to learn how to weight the lower levels. However, testing the converse (training on larger K and testing on smaller K) is both easily implementable and offers a legitimate test of generalization. We find (a subset of plots shown in FIG0) that when varying potential, training on harder games in better generalization. When testing on a smaller K than the one used in training, performance is inverse to the difference between train K and test K. Recently, several papers have identified the issue of catastrophic forgetting in Deep Reinforcement Learning, where switching between different tasks in destructive interference and lower performance instead of positive transfer. We witness effects of this form in the Attacker-Defender games. As in Section 7, our two environments differ in the K that we use -we first try training on a small K, and then train on larger K. For lower difficulty (potential) settings, we see that this curriculum learning improves play, but for higher potential settings, the learning interferes catastrophically, FIG0 The significant performance drop we see in FIG7 motivates investigating whether there are simple rules of thumb that the model has successfully learned. Perhaps the simplest rule of thumb is learning the value of the null set: if one of A, B (say A) consists of only zeros and the other (B) has some pieces, the defender agent should reliably choose to destroy B. Surprisingly, even this simple rule of thumb is violated, and even more frequently for larger K, FIG0. We can also test to see if the model outputs are well calibrated to the potential values: is the model more confident in cases where there is a large discrepancy between potential values, and fifty-fifty where the potential is evenly split? The are shown in FIG0. In the main paper, we mixed between different start state distributions to ensure a wide variety of states seen. This begets the natural question of how well we can generalize across start state distribution if we train on purely one distribution. 
The results in FIG0 show the effect of training naively on just one start state distribution. FIG0: Confidence as a function of potential difference between states. The top figure shows true potential differences and model confidences; green dots are moves where the model prefers to make the right prediction, while red dots are moves where it prefers to make the wrong prediction. The right shows the same data, plotting the absolute potential difference and absolute model confidence in its preferred move. Remarkably, an increase in the potential difference is associated with an increase in model confidence over a wide range, even when the model is wrong. In fact, the number of possible starting states for a given K and potential φ(S_0) = 1 grows super-exponentially in the number of levels K. We can state the following theorem: Theorem 4. The number of states with potential 1 for a game with K levels grows like 2 DISPLAYFORM0. We give a sketch proof. Proof. Let such a state be denoted S. A trivial upper bound can be computed by noting that each s_i can take a value up to 2^{(K−i)}, and taking the product over all i gives roughly 2^{K^2/2}. For the lower bound, we assume for convenience that K is a power of 2 (this assumption can be avoided). Then look at the set of non-negative integer solutions of the system of simultaneous equations a_{j−1} 2^{1−j} + a_j 2^{−j} = 1/K, where j ranges over all even numbers between log(K) + 1 and K. The equations don't share any variables, so the solution set is just a product set, and the number of solutions is the product of the solution counts of the individual equations. As the optimal defender policy is expressible as a linear model, we empirically investigate whether depth is helpful. We find that even with a temperature included, linear models perform worse than models with one or two hidden layers.
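To make the repeatedly referenced potential-based play concrete, the sketch below computes the potential of a state and the defender move it prescribes. It assumes the convention used in the proof of Theorem 3 (a piece at level l is worth 2^{-(K-l)}) and the standard Erdos-Selfridge-Spencer rule that the defender destroys whichever of the attacker's two sets has the larger potential; the helper names and the small example are ours, not the paper's code.

```python
import numpy as np

def potential(state, K):
    """Potential of a state given as piece counts over levels 0..K.

    A piece at level l contributes 2**-(K - l), as in the proof of Theorem 3;
    the defender is winning as long as the total stays below 1.
    """
    state = np.asarray(state, dtype=np.float64)
    levels = np.arange(K + 1)
    return float(np.sum(state * 2.0 ** -(K - levels)))

def optimal_defender_move(A, B, K):
    """Destroy the half with the larger potential (the linear, potential-based
    policy the text refers to; surviving pieces then advance one level)."""
    return "destroy_A" if potential(A, K) >= potential(B, K) else "destroy_B"

# Illustrative example with K = 3: one level-0 piece in A, two level-1 pieces in B.
K = 3
A = [1, 0, 0, 0]   # phi(A) = 2**-3   = 0.125
B = [0, 2, 0, 0]   # phi(B) = 2*2**-2 = 0.5
print(potential(A, K), potential(B, K), optimal_defender_move(A, B, K))
```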
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HkCnm-bAb
We adapt a family of combinatorial games with tunable difficulty and an optimal policy expressible as a linear network, developing it as a rich environment for reinforcement learning, showing contrasts in performance with supervised learning, and analyzing multiagent learning and generalization.
Adoption of deep learning in safety-critical systems raise the need for understanding what deep neural networks do not understand. Several methodologies to estimate model uncertainty have been proposed, but these methodologies constrain either how the neural network is trained or constructed. We present Outlier Detection In Neural networks (ODIN), an assumption-free method for detecting outlier observations during prediction, based on principles widely used in manufacturing process monitoring. By using a linear approximation of the hidden layer manifold, we add prediction-time outlier detection to models after training without altering architecture or training. We demonstrate that ODIN efficiently detect outliers during prediction on Fashion-MNIST, ImageNet-synsets and speech command recognition. Thanks to the powerful transformations learned by deep neural networks, deep learning is applied in an increasing number of applications. But deep neural networks, as all data-driven models, tend to fail when input data differs from training data. Adopting deep learning in increasingly complex and possibly safety-critical systems makes it crucial to know not only whether the model's predictions are accurate, but also whether the model should predict at all. If the model is able to detect outlier observations post-training directly, the system can fall back to safe behaviour minimizing negative consequences of faulty predictions. By understanding the limits of models' learned representations and detecting when observations are not recognized, autonomous decision making based on deep learning can be improved. In manufacturing process control, predictive models have long been used to predict process outcomes as well as detecting outlier input for decades BID8 BID17 BID9. A widely used model for this purpose is Partial Least Squares regression (PLS) BID33, which project input data onto a set of linear latent variables prior to prediction. In the latent variable space, the distance from new observations to the training data distribution is used to detect outlier observations. The latent variables can also be used to approximate input observations, meaning that outliers can also be detected by measuring the distance to the latent variable subspace itself. However, the layers of a neural network learn a non-linear mapping from input to output data spaces rather than a single linear subspace. This makes it difficult to directly determine the limits of a neural network model's knowledge in the same manner as for a PLS model. In this paper we present Outlier Detection In Neural networks (ODIN), a method for detecting outliers during prediction in deep neural networks. Based on principles long used in manufacturing process control, we propose using a linear approximation of intermediate activations to provide a fixed representation of the training data. By comparing new data to this fixed representation we are able to detect outliers during prediction without imposing constraints on architecture or training. This allows us to use ODIN as a plug-in method allowing reliable and safe autonomous decision making based on deep learning. A wide collection of methods allowing neural networks to describe uncertainty in predictions, which can be used to determine if new observations are outliers, have been proposed. For decades, many methods have been formulated within a Bayesian framework BID7 BID21 allowing neural networks to predict probability distributions rather than point inferences. 
The predictive uncertainty can then be estimated by the entropy or variance of the predicted distribution. BID10 proposed MC-dropout, using prediction time dropout and Monte-Carlo sampling. In summary, MC-dropout make multiple predictions per inference while the network is randomly perturbed by drop-out which in a predicted distribution. A number of alternatives to using dropout to perturb Monte-Carlo samples have been proposed in recent years including: sampling based on batch-normalization parameters BID29, model ensembles BID18, multiple prediction heads in a shared base network BID25 BID14 BID22, variational inference of weight distribution instead of regular point weights BID0 and Laplace approximation of distributions from existing weights BID26. However, the mentioned methods constrain either how the network is constructed BID18 BID25 BID14 BID22 or how the network is trained BID10 BID29 limiting their use in systems already in production. Several methods also rely on multiple inferences per prediction BID10 BID29 BID0 BID26. This limits their use in real-time systems or systems with limited computational resources. An alternative approach for estimating uncertainty in classification problems is presented by where linear classifiers are trained to classify the target output given intermediate layers of a given base model. The linear classifier outputs are then fed to a meta-model that is trained to estimate whether or not the base model is correct. Another alternative approach is proposed by BID19 that leverage Generative Adversarial Networks (GANs) BID12 to augment the original dataset with border-line outliers. A deep neural classifier is then trained to output high uncertainty for outlier observations and low uncertainty for the original observations. This method does however involve training a GAN that can be difficult to train to convergence BID23.Anomaly detection is closely related to prediction time outlier detection, and there are many methods for flagging deviating observations. Non neural-methods include one-class support vector machines BID27, local observation density BID2, distances BID16, isolation forests BID20 and many others. A multitide of methods based on deep neural networks, typically autoencoders have been developed as well BID35 BID3 BID36 BID24. Of particular relevance to this work is BID24, that use reconstruction residual as metric to flag outliers. Important to note is that outlier detection systems are based on training a separate model to detect deviant observations. Prediction time outlier detection, on the other hand, describes the limits of a predictive model's knowledge. In this section we briefly describe the Partial Least Squares regression model, and how its latent variable approximation of the input data space is used to detect outliers after training. We then describe how we can apply similar principles in neural networks by using a linear approximation of the hidden layer manifold, in a method we call Outlier Detection In Neural networks (ODIN). Partial least squares regression (PLS) BID32 BID11 ) is a widely used regression model within manufacturing process control. Similar to Principal Component Analysis (PCA), PLS assumes that high-dimensional data resides in a sub-space of the original data space spanned by so called latent variables and formulated in terms of matrix decomposition. The PLS model is summarized as: DISPLAYFORM0 where the n × m input matrix X is decomposed into n × k latent variable matrix T = [t 1 ... 
t k] and m × k loading matrix P = [p 1 ... p k] with residual matrix E. The n × p response matrix Y is predicted using T multiplied with response weight matrix C = [c 1 ... c k] with residuals F. The latent variable-matrix T spans a orthogonal subspace of the data-matrix X and maximize the covariance between X and Y. Note that the PLS model of X is similar to how PCA approximates the data through matrix decomposition but PCA finds latent variables t i that maximize the variance in X rather than the covariance between two matrices. The columns in T are typically calculated sequentially, where the first column t 1 is found through basis-vectors w 1 ∈ R m and c 1 ∈ R p solving the optimization problem: DISPLAYFORM1 The corresponding loading vector p 1 is then chosen so thatX 1 = X − t 1 p T 1 is uncorrelated with t 1. This is achieved by selecting p 1 as: DISPLAYFORM2 The subsequent vectors t i, w i, p i where i ∈ [2, ..., k] are then calculated by repeating equations 2 and 3 usingX DISPLAYFORM3 The latent variable formulation means that PLS carries its own model of the training data and provides two ways of detecting outliers during prediction. Since new observations are projected to the low-dimensional sub-space spanned by the latent variables, both distance to the sub-space itself and distance to training observations within the sub-space can be used to detect outliers. Distance to sub-space is typically measured using the residual sum of squares (RSS). RSS for a new observation row vector x new ∈ R p is given by: DISPLAYFORM4 where x new is approximated as DISPLAYFORM5 There are several ways to estimate the distance to training observations within the sub-space and a common choice is the Mahalanobis distance. The Mahalanobis distance is a well-used statistical distance measuring how many standard deviations away an observations is from the origin in a multivariate probability normal distribution. Given a fitted PLS model, the training data projections can be approximated as a multivariate normal distribution with covariance matrix C T = E(T T T). Then the Mahalanobis distance for x new is given by: DISPLAYFORM6 Alternatively, to compensate for using a linear model of a possibly non-linear manifold, a density based metric within the latent variable space may be used. For instance, by using the Local Outlier Factor (LOF) BID2, observations within low-density regions may be flagged as outliers instead of only using the Mahalanobis distance. In contrast to PLS, data are not typically linearly mapped to a single sub-space in deep neural networks. Instead, a neural network performs as a nested series of non-linear transformations. That is, the activation vector a i of an observation vector x from a layer i is given by: DISPLAYFORM0 with weight-matrices W k and activation functions f k.According to the manifold hypothesis, data is assumed to reside near a region of low dimensionality that may be highly entangled. One possible explanation for why deep neural networks work well is that they are able to disentangle complicated manifolds BID1. If deep neural networks disentangle manifolds, we may find a transformation from a complicated data manifold to a manifold that is approximately linear. If this hypothesis holds true, we can apply a simple trick to add prediction time outlier detection to neural network models by building a model of the data representation within the model. 
Given n × m activation matrix DISPLAYFORM1 T of training data X, where the row vectors a i,k are given by equation 6, we approximate the activation manifold using PCA as: DISPLAYFORM2 where the n × k latent variable matrix T Ai contain the projections of the A i onto the orthonormal sub-space spanned by columns of the m × k loading matrix P Ai. Now we have a fixed orthogonal approximation of the activation manifold that we can use to detect outliers during prediction analogously to PLS (see 3.1). Meaning that we can measure the distance from new observations to the activation manifold as the residual sum of squares similar to equation 4 using the observation activation a i,new with projection t ai,new = a i,new P Ai: DISPLAYFORM3 Similarily, the distance from training observations within the manifold can be measured using Mahalanobis distance or density based approaches as the Local Outlier Factor within the linear approximation. For the Mahalanobis distance, the covariance matrix of the activation projections DISPLAYFORM4 We choose to call our method Outlier Detection In Neural networks, ODIN, since we use the intermediate data representations within the neural network itself. In contrast to common Bayesian approaches, we do not perturb predictions to produce prediction distributions. We simply measure the distance from new observations to the training data to determine whether they are outliers or not. In the following sections we demonstrate how to detect outliers during prediction time using ODIN on different classification tasks. We choose classification tasks for demonstration since it is straightforward to simulate outliers by excluding a subset of the classes. We also explore how to choose what layer's activations to use and rank of PCA approximation. For comparison, we also perform outlier detection using MC-Dropout BID10 since it is well-established and straightforward to implement even though it has received criticism BID25. To provide a simple classification problem with outliers encountered during prediction, we use the Fashion-MNIST BID34 dataset. Fashion-MNIST consists of 70 000 greyscale 28x28 pixel images, out of which 10 000 are test set images, of ten categories of fashion products. We excluded five classes to use as outliers, including all shoes (sandals, ankle boots and sneakers) and two clothing classes (pullover and shirts). The intuition is that shoe-images are strong outliers since all shoe-related information is absent from training data, and excluded clothes are more subtle outliers since the training data contain other upper body garments. We trained a small convolutional neural network (CNN, for architecture see Figure A .1) on five out of the ten classes. We used rmsprop BID30 ) optimization, categorical cross entropy loss function, batch size 128 for 10 epochs and kept 10 % of the images as validation set and achieved a test set accuracy of 97 %. To use for outlier detection, we extracted features from both max-pooling layers (without global average pooling) for all images. We evaluate ODIN using different outlier metrics (RSS, Mahalanobis distance and LOF), five levels of explained variance (R2) of the PCA model (50-99 %) and different layers of extracted features using the area under the receiver operating characteristic curve (ROC-AUC) as performance metric, (see FIG0, complete in TAB1 .1). We calculate the ROC-AUC comparing how well test set observations are separated from outlier observations. 
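As a rough illustration of the pipeline just described, the sketch below fits a PCA approximation of one layer's training-set activations and scores new activations by the residual sum of squares and the Mahalanobis distance defined above. It uses scikit-learn's PCA for convenience; the helper names, the 9th-decile threshold placement, and the ROC-AUC bookkeeping in the usage comments are our own reading of the text rather than the authors' code.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics import roc_auc_score

def fit_odin(train_activations, explained_variance=0.99):
    """Fit the linear approximation of the activation manifold on training data.

    train_activations: (n, m) array of one layer's activations, e.g. globally
    average-pooled feature maps. Returns the PCA model and the inverse
    covariance of the training projections (for the Mahalanobis distance).
    """
    pca = PCA(n_components=explained_variance)   # keep components up to the R2 target
    T = pca.fit_transform(train_activations)
    cov_inv = np.linalg.pinv(np.cov(T, rowvar=False))
    return pca, cov_inv

def odin_scores(pca, cov_inv, activations):
    """RSS (distance to the sub-space) and Mahalanobis distance (within it)."""
    T = pca.transform(activations)
    recon = pca.inverse_transform(T)
    rss = np.sum((activations - recon) ** 2, axis=1)
    maha = np.sqrt(np.einsum("ij,jk,ik->i", T, cov_inv, T))
    return rss, maha

# Usage sketch: threshold at the 9th decile of the training-set distances and
# report ROC-AUC for separating test-set inliers from held-out outlier classes.
# pca, cov_inv = fit_odin(act_train)
# rss_train, _ = odin_scores(pca, cov_inv, act_train)
# threshold = np.quantile(rss_train, 0.9)
# rss_eval, _ = odin_scores(pca, cov_inv, np.vstack([act_test, act_outlier]))
# labels = np.r_[np.zeros(len(act_test)), np.ones(len(act_outlier))]
# print(roc_auc_score(labels, rss_eval))
```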
For comparison, we also used MCdropout to calculate the image-wise entropy from 50 Monte Carlo samples per image and evaluated the using ROC-AUC in the same way as ODIN.All metrics clearly separate strong outliers (shoes) from the test set images FIG0 left) with RSS being most sucessful (ROC-AUC 0.97 compared to Mahalanobis 0.94 and LOF 0.91). There is a trend that they benefit from increased PCA R2. Surprisingly MC-dropout failed to detect shoe outliers (ROC-AUC 0.495). The subtle outliers (non-shoes) are significantly more difficult to detect FIG0, and LOF is most successful doing so (best ROC-AUC 0.71 compared to RSS 0.60, Mahalanobis 0.61 and MC-Dropout 0.63).To conclude, the Fashion-MNIST experiment show that ODIN successfully detect outliers in a simple image classification problem. Strong outliers seem to be best detected by measuring distance to manifold while subtle outliers are better detected in low-density regions of the linear approximation using LOF. In order to provide a more complex example we demonstrate prediction time outlier detection using a pre-trained CNN on image synsets from ImageNet BID6 ). We train a cat vs. dog classifier on the cat-and dog-synsets from ImageNet and used the car-and horse-synsets as outliers. We used an Inception v3-network BID28 pre-trained on ImageNet, freezing all Inception module weights during training. We replaced the penultimate layer with a hidden layer of 128 ReLu units and a single sigmoid output with 50 % dropout before and after the ReLu-layer and trained for 50 epochs using the Adam optimizer BID15 and achieved a test set accuracy of 93 %.We extracted features from each inception module in the Inception-v3 network and pooled them feature-map wise using global average pooling. For each layer of features, we performed outlier detection with five levels of explained variance (R2) for the PCA model (50-99 %) and different outlier We are able to convincingly detect cars using our cats vs. dogs classifier (best ROC-AUC for RSS, 0.96, Mahalanobis distance 0.94 and LOF 0.93). Horses are also detected as outliers even though they share visual features with cats and dogs (best ROC-AUC for RSS 0.76, Mahalanobis distance 0.75 and LOF 0.69). Since we used dropout for the last fully connected layers, we also performed MC-dropout achieving similar (ROC-AUC: 0.86 for cars, 0.61 for horses). The degree of explained variance was not as influental in this experiment as in the Fashion-MNIST experiment, but both Mahalanobis distance and LOF fail to detect both cars and horses using 50 % R2. Interestingly, the performance of all metrics peak at inception module 8 where an auxilliary output was used during training on ImageNet BID28.To conclude, the experiment on cats and dogs show that ODIN reliably detect outliers using a pretrained CNN on real-world images. ODIN performs slightly better than MC-dropout but does not rely on using dropout, or any type of constraint on the training procedure. In line with the from the Fashion-MNIST experiment, higher PCA R2 produce more reliable . To show that ODIN for prediction time outlier detection works for not only CNN-based image classification, we perform a speech command recognition experiment using a LSTM-based model. We use the Speech Commands dataset BID31 ) that consists of 105 000 short utterances of 35 words recorded at 16 kHz sampling-rate. The words includes digits zero to nine, command words yes, no, up, down, left, right, on, off, stop, go, backward, forward, follow, learn and visual. 
The dataset also include a set of arbitrary words bed, bird, cat, dog, happy, house, marvin, sheila, tree and wow. In our experiment, we train a classification model of both digits and command words and use the arbitrary words as outliers. We transform the utterances into 64 Mel-Frecuency Cepstral Coefficients BID5, using a frame-length of 2048 samples and frame-stride of 512 samples. We train a three layer bi-directional LSTM-model with 30 % dropout after each LSTM-layer and softmax output (see architecture in Figure C .1) for 30 epochs, using the Adam-optimizer BID15 and batch-size 512 ing in test-set accuracy of 78 % for the 25 classes. The classification accuracy is lower than the 88 % accuracy of the baseline CNN:s BID31 ), but we believe it is sufficient for demonstrating prediction time outlier detection. For outlier detection, we extracted training set features from the third LSTM-layer and fitted a PCAmodel explaining 99 % of the variance and chose RSS and Mahalanobis distance limits to be the 9th deciles of the training set distances. We then extracted features from the test set and outlier classes and projected them onto the PCA-model and calculated RSS and Mahalanobis distances. Using precision, recall and F1-score we evaluated outlier detection at the 9th deciles (see FIG2, complete in TAB1 .1). We also combined RSS and Mahalanobis distance classifications using OR and AND combined classification. For comparison, we also used MC-dropout with 10 Monte Carlo samples per utterance and calculated the sample-wise Shannon entropy. We performed outlier detection using the 9th decile of training set entropies as threshold, and evaluated MC-dropout in the same manner as ODIN.Detecting outliers in the speech-commands dataset is difficult for both ODIN and MC-dropout with best word-wise F1-scores ranging from 0.25 for tree, which is phonetically similar to three, to 0.47 for house. ODIN consistently outperform MC-dropout. Additionally, since two metrics are used, we also have the opportunity to raise precision or recall by using AND-or OR-combination of classification according to the two metrics. Depending on the application, either precision or recall may be more important than the other. The Speech Command experiment shows that ODIN performs well for recurrent neural networks on a speech recognition task in addition to image classification. We also demonstrate how it can be used in practice, by selecting classification threshold and evaluating our choice using precision and recall. We also show how combinations of the different metrics available may be used to tune the precision/recall ratio. Deep neural networks are powerful transformers that have shown great success in many applications. But, in order to adopt deep learning in safety-critical applications it is crucial to understand when new observations do not match the data used during training. To imitate linear latent variable models used in manufacturing process monitoring, we use a linear approximation of the hidden layer manifolds to measure distance to and within the manifold. We compare our to MC-dropout, a well established Bayesian approach, and consistently detect outliers post-training without imposing any constraints on either architecture or training procedure. We demonstrate our method in two image classification experiments, with and without a pre-trained network, and a speech recognition example using a recurrent neural network. 
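The decile-based thresholding and the AND/OR combination of the two metrics used in the speech experiment can be expressed in a few lines; the snippet below is a sketch of how such flags might be combined and scored, with variable names of our choosing rather than the authors' evaluation code.

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score, f1_score

def decile_flags(train_scores, scores, q=0.9):
    """Flag observations whose outlier score exceeds the q-th quantile of the
    training-set scores (the 9th decile used in the text)."""
    return scores > np.quantile(train_scores, q)

# rss_flags and maha_flags come from the two ODIN metrics; y_true marks the
# held-out outlier words (1) versus in-vocabulary test utterances (0).
# and_flags = rss_flags & maha_flags   # stricter: tends to raise precision
# or_flags  = rss_flags | maha_flags   # looser: tends to raise recall
# print(precision_score(y_true, and_flags),
#       recall_score(y_true, or_flags),
#       f1_score(y_true, or_flags))
```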
By defining the limits of our neural networks' knowledge, ODIN contributes to safer use of deep learning. APPENDIX A: model architecture diagrams (input-to-softmax layer listings for the models used in the experiments).
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rkGqLoR5tX
An add-on method for deep learning to detect outliers at prediction time
This work introduces a simple network for producing character aware word embeddings. Position agnostic and position aware character embeddings are combined to produce an embedding vector for each word. The learned word representations are shown to be very sparse and facilitate improved on language modeling tasks, despite using markedly fewer parameters, and without the need to apply dropout. A final experiment suggests that weight sharing contributes to sparsity, increases performance, and prevents overfitting. When processing text for Natural Language Processing (NLP), one important decision to make is how to represent the words for a given model or system. For many tasks tackled by deep learning such as language modeling, language understanding, and translation, the use of word embeddings has become the standard approach. BID20 BID4 BID0 This is in part due to their ability to represent complex syntactic and semantic relationships between words as spatial relationships within the embedding dimensions BID13.Embeddings are generally implemented as a lookup table for computational efficiency. However for those unfamiliar with their use it may be beneficial to formulate them as the output of the first layer in a neural network. This is true for a layer that has one-hot feature vectors as inputs, no bias, and no activation function. For a given one-hot feature vector x, the activations of such a layer can be computed by xW, which is equivalent to selecting the row W i of the weight matrix, where x i == 1. The weight matrix or embedding lookup matrix can then be optimized via typical techniques such as gradient descent, including from subsequent layers of a DNN through back propagation. BID15 For word embeddings, the basic approach is to assign a unique vector of trainable parameters to each word in a vocabulary. These vectors are referred to in this paper as token embeddings. Token embeddings present a number of limitations. For example, any out-of-vocabulary word cannot be represented except as a pre-defined < U N K > token. A corollary of this is that the number of embeddings (and therefore trainable parameters) grows linearly with the size of the vocabulary. Furthermore, characters are ignored, meaning that potentially useful morphological information is thrown out. To get around these limitations, researchers have explored building word embeddings from lower level character representations. A variety of techniques have been presented, including the use of feedforward multi layer perceptrons (MLPs) BID3, convolutional neural networks (CNNs) BID9 BID10, and recurrent neural networks (RNNs) BID12. These character level representations of words have the advantage over token embeddings of allowing an open vocabulary, usually having fewer parameters, and improving performance by making use of information available in sub-word level features. The most successful approaches for building word embeddings from characters use CNNs. BID9 However, the architecture of CNNs is designed to identify positioninvariant features, not the specific ordering of characters that make up a word's spelling. Here we ask whether such ordering is a valuable source of information. A number of convolutional features of varying size can be used to capture some ordering, for example within each feature independently. However as the vocabulary is expanded, the number convolutional features must be increased to compensate BID9. 
Once convolution is performed, the used of a deep highway network, as introduced by BID18, is then needed to produce the final word embedding. The current study presents a simple fully connected architecture for combining characters. In this framework, each character is represented both by position-agnostic character embeddings and position-aware character embeddings, which we call spelling embeddings. The combination of the two allows the model to learn both position invariant features and positional features. A word embedding is then constructed by combining both the character and spelling embeddings of the word, for example by summing or by averaging them together. The ing vector is then passed through a nonlinear MLP that combines the character and spelling information to produce the final word embedding. This MLP, along with the spelling and character embeddings, were trained via gradient descent as inputs to a Recurrent Neural Network (RNN) being trained for a language modeling task. Results show that including the spelling information facilitates improvement over token embeddings despite requiring far fewer parameters. Without the position information, character embeddings alone are not sufficient in this fully connected architecture. An analysis of the learned representations at the word embedding level shows much greater sparsity for spelling embeddings than for token embeddings, and demonstrates some of the negative impacts of dropout on the representations. Finally, we compare token based models with a fully connected layer of shared weights to raw token embeddings with no weight sharing. Passing the token embeddings through a layer of shared weights is shown to drastically increase representation sparsity and prevent overfitting. Given that the character and spelling weights are heavily shared among word embeddings, this is presented as possible explanation for the spelling aware model's robustness against overfitting. Many architectures have been explored for composing word embeddings from lower level features, including the use of recurrent neural networks BID12, BID11 convolutional networks BID10, BID16, BID9, character n-grams BID1, as well as combinations of word tokens with morphological features BID2, BID3.One such architecture is to enhance token based word embeddings of Chinese words by including character embeddings BID3. Multiple approaches were explored, the simplest of which was to embed characters and build a word embedding by combining a traditional token embedding with the average of the embeddings for each character in the word: DISPLAYFORM0 Where e(i) is the character enhanced embedding for word i, T is the token embedding lookup table, T i is the token embedding vector for the word, c j is the character embedding vector for the jth letter of the word, and L i is the total number of letters in the word. There are a number of drawbacks with this approach. First, character ordering is not taken into account so the token embeddings are needed to ensure uniqueness. Second, the character embeddings were not included for words that were pre-screened for ambiguous or misleading character information, which requires a manual or heuristic pre-processing step. Finally, simply averaging the character embeddings doesnt provide an opportunity to build richer non-linear combinations such as would be possible with an MLP.Convolution neural networks (CNNs) have also been used to create word embeddings from character representations. 
BID9 Their character aware CNN architecture was based on a previous publication by BID10, but used more convolution features to cope with larger datasets. This approach was found to give state of the art when applied to language modeling with the popular One Billion Word Benchmark, despite using far fewer parameters than a traditional token embedding model. The use of a fully connected network with explicit positional information was not reported on. The inclusion of positional information can be handled in a variety of ways. An interesting method not explored in this work is provided by BID19, who combine positional information with each symbol in the form of unlearned sin and cosine dependant functions of varying frequencies. These functions produce repeating waveforms that allow their model to capture information about relative positions. This differs from the current study which uses learned, explicit and distinct representations for each position of each character. The task in language modeling is to assign probabilities to sentences or sequences of words. That is, we want to model the probability of of the next word in a sequence conditional on the ordered sequence of all previous words. DISPLAYFORM0 This was accomplished with the use of an RNN which produces a context vector v from previous words. The RNN was implemented with Gated Recurrent Units (GRUs), which we denote here by the function g for simplicity. A fully connected layer with weights W (s) and biases b (s) was used to project the GRU's output, v, to the target vocabulary. A softmax activation was then applied to produce a valid probability distribution, q, over the vocabulary: DISPLAYFORM1 Gradients were computed from the cross entropy between the softmax layer and expected next word in the sequence. Each batch contained sequences with a fixed length, and each sequence followed from the previous batch. Gradients were time truncated to the fixed sequence length. Gradients were also clipped to a maximum global norm to prevent them from exploding. BID14 The initial states of the GRU were only reset at the beginning of each epoch. Dropout was applied to the outputs of each RNN layer in order to regularize the RNN weights of the models. BID17 For the embedding layers, two dropout configurations were compared. The first applied dropout to the final word embedding layer and the second did not. Two datasets were evaluated. The first is a relatively small dataset consisting of the novels of The Wheel of Time, by Jordan & Sanderson (1990 BID13 . It has vocabulary of 34,594 words and was split into train/test partitions with 5,007,362 and 444,576 words respectively. The second dataset is a subset of works from Project Gutenberg Canada, which is a collection of writings that are in the Canadian public domain. This larger dataset has a vocabulary of 205,027 words and was split into train/test partitions with 63,319,830 and 7,136,409 words respectively. This is about 10% the size of the popular One Billion Word Benchmark. Both datasets were pre-processed as follows. All texts were lower-cased and separated by whitespace into words. Each word was then parsed again so that any stream of consecutive alphabetical characters were considered a token, and any stream of non-alphabetical characters were considered a token. Converting the vocabulary to lowercase removes potentially valuable information, and was done only to reduce the vocabulary size. 
This allowed for a speed up in experimentation and hyper-parameter tuning, as well as to fit larger models on available hardware. The token embeddings consist of a V × N lookup table of trainable parameters followed by a fully connected layer with rectified linear units (ReLUs). A graphical representation is provided in figure 1. Given a lookup table T, word index i, and a fully connected layer with matrix W (t) and bias vector b (t), the embedding funtion is: DISPLAYFORM0 An additional configuration for the tokens, referred to as raw token embeddings, was investigated with the larger dataset. These were simply presented directly the the RNN, rather than passed through a fully connected layer first. Hence: DISPLAYFORM1 As shown in figure 2, the spelling embeddings are built up from the characters in the word as follows. Two lookup tables are used. One contains position agnostic character embeddings and is of size C × N c. The other contains positional character embeddings and is of size C × L × N s. Where C is the number of characters, L is the maximum word length, N c is the size of the embedding dimension for position agnostic character embeddings, and N s is the embedding dimension for spelling embeddings. To embed a word, the embeddings for the characters in the word are first pulled from each table separately and averaged. The ing vectors from these averages are then concatenated together to produce a vector of dimensionality N c + N s. This vector is then used as input to an MLP with two ReLU layers. We denote the lookup tables for the position aware and position agnostic character embeddings as U and V, respectively. Then for a word indexed by i, the vector w (i) contains the indices corresponding to the position agnostic characters of that word. Then if L (i) is the length of the word and j indexes the character position, we formulate the concatenation of the position aware and position agnostic character representations of the word as: DISPLAYFORM2 The models were also run without the position aware spelling embeddings in order to determine the value of this information for the task. All embedding methods presented have a final embedding layer with dimensionality M, in order to ensure that the language model is given an equal capacity for information about incoming words. The number of parameters required to embed the entire vocabulary was controlled in order to prevent spelling embeddings from gaining an unfair advantage over tokens. This was accomplished by limiting the number of nodes in each layer. One of the main benefits of spelling embeddings is that the number of parameters does not grow necessarily with the vocabulary size as it does with token embeddings. The number of parameters needed to embed the vocabulary using token embeddings is computed by (V × N t) + (N t × M). The dominant term is generally the vocabulary size, V, which is much larger than the embedding dimension. For spelling embeddings an upper bound is considered because not all characters must appear in all possible word positions. This is computed by: DISPLAYFORM3 where M is the size of the fully connected layer placed between the character embeddings and the final embedding layer. TAB0 shows the specific values for the number of parameters used in our experiments. The spelling embeddings do not use significantly more parameters for the larger dataset than for the smaller because they depend on the number of characters and the lengths of words rather than on the number of words. 
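A compact sketch of the spelling-embedding construction just described is given below: position-agnostic and position-aware character embeddings are each averaged over the word, concatenated, and passed through a two-ReLU-layer MLP. It is written in PyTorch purely for brevity; the layer sizes, helper names, and padding/masking details are illustrative assumptions rather than the paper's implementation. The resulting word vectors would then feed the recurrent language model described earlier.

```python
import torch
import torch.nn as nn

class SpellingEmbedding(nn.Module):
    """Word embeddings built from characters and their positions (a sketch)."""

    def __init__(self, n_chars, max_len, n_c=32, n_s=32, hidden=256, out_dim=200):
        super().__init__()
        self.char_table = nn.Embedding(n_chars, n_c)              # position agnostic (V)
        self.spell_table = nn.Embedding(n_chars * max_len, n_s)   # one row per (char, position) (U)
        self.max_len = max_len
        self.mlp = nn.Sequential(
            nn.Linear(n_c + n_s, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim), nn.ReLU(),
        )

    def forward(self, char_ids, lengths):
        """char_ids: (batch, max_len) padded character indices; lengths: (batch,).
        Padded positions are masked out, so the pad id only needs to be in range."""
        positions = torch.arange(self.max_len, device=char_ids.device)
        mask = (positions.unsqueeze(0) < lengths.unsqueeze(1)).unsqueeze(-1).float()
        denom = lengths.float().clamp(min=1).unsqueeze(1)
        # average the position-agnostic character embeddings over the word
        char_avg = (self.char_table(char_ids) * mask).sum(dim=1) / denom
        # index the position-aware table with a distinct row per (character, position)
        spell_ids = char_ids * self.max_len + positions
        spell_avg = (self.spell_table(spell_ids) * mask).sum(dim=1) / denom
        # concatenate and combine non-linearly to produce the final word embedding
        return self.mlp(torch.cat([char_avg, spell_avg], dim=-1))
```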
On the larger Gutenberg dataset, spelling embeddings outperform token embeddings despite using far fewer parameters to embed each word (∼ 13M vs ∼ 82M). On the smaller Wheel of Time dataset they are on par with token embeddings. Position agnostic character alone embeddings perform worse than tokens. Performance curves are plotted in figure 3. Final performance of each model is listed in table 2. The token embeddings overfit the training data on the Wheel of Time dataset. On the Gutenberg dataset, only the raw token embeddings exhibited overfitting. spelling embeddings are far more sparse than those of token embedding. Raw token embeddings exhibit the least amount of sparsity. To get a more comprehensive view of sparsity, the Gini coefficient was applied to the embeddings of the entire vocabulary for each model run on the Gutenberg dataset. BID6 The Gini coefficient was chosen as a measure of sparsity because it has been shown to be robust under a number of metrics. BID7 FIG3 shows the distribution of sparsity across the vocabulary as measured by the Gini coefficient. Raw token embeddings are the least sparse. Token embeddings passed through a fully connected layer increase dramatically in sparsity, followed by the spelling embeddings which are the most sparse. Sparsity is also affected by dropout. Whereas dropout in greater sparsity for the majority of the token embeddings, it causes a few to lose all sparsity and become completely homogeneous. Dropout also has this homogenizing effect on some of the spelling embeddings. This work shows that a simple fully connected network is able to produce character aware word embeddings that outperform traditional token embeddings. The architecture is relatively simple compared to previous approaches that use CNNs or RNNs to combine character information. This work lacks a direct comparison to these other character aware methods, which is an obvious direction for future work. Investigation into the word embeddings produced by the presented architectures reveal a number of interesting properties. Spelling embeddings are especially resistant to overfitting compared to token embeddings, and are also significantly more sparse in their activations. Furthermore, dropout is shown to have some negative impacts on the word representations, and weight sharing is presented as a better way to regularize word embeddings. Spelling embeddings exhibit the most weight sharing, because each character embedding is shared across many words in the vocabulary. This may be a contributing factor to their increased sparsity and resistance to overfitting. Additional evidence for this is provided in the comparison of raw token embeddings to those passed through a fully connected layer. Whereas raw token embeddings share none of their weights with other words in the vocabulary, token embeddings passed through a fully connected layer share all the weights in that layer across the entire vocabulary. Not only do token embeddings enjoy increased resistance to overfitting when passed though a shared weight layer, they also become drastically more sparse. Whereas dropout is a popular technique for regularization in NLP, it can have a negative impact on the word embeddings, causing some of them to gain a Gini coefficient of 0. This suggests that these particular words have completely homogeneous representations and are indistinguishable from each other. 
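For reference, the Gini coefficient used above as a sparsity measure can be computed with the standard Hurley-and-Rickard-style formulation; the snippet below is a generic implementation (our own code), where 0 corresponds to a perfectly uniform vector and values approaching 1 to a maximally sparse one.

```python
import numpy as np

def gini(v):
    """Gini coefficient of a vector's absolute values as a sparsity measure."""
    v = np.sort(np.abs(np.asarray(v, dtype=np.float64)))
    n = v.size
    if v.sum() == 0:
        return 0.0
    k = np.arange(1, n + 1)
    return float(1.0 - 2.0 * np.sum((v / v.sum()) * (n - k + 0.5) / n))

print(gini(np.ones(4)))                  # 0.0: perfectly homogeneous vector
print(gini(np.array([0, 0, 0, 1.0])))    # 0.75: a one-hot vector attains the maximum (n-1)/n
```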
On the smaller dataset the number of shared parameters in the fully connected layer of the token embeddings is large compared to the vocabulary size. In this case, dropout is needed to prevent overfitting. On the larger dataset, the number of shared parameters is much smaller relative to the vocabulary size. In this case dropout is not needed for the token embeddings and actually hinders them. The spelling embeddings perform worse with dropout on both datasets. The architecture presented here should be compared to the state of the art character CNN obtained on the One Billion Word benchmark. BID9 ) Also, whereas a number hyper-parameters governing the number and size of the layers were tried before the ones presented in this paper were found, other techniques such as highway networks BID18 have not yet been investigated. Furthermore, extending the concept of character aware word embeddings to the output softmax layer is another open area of research that has been tried with character CNNs BID9, but not to our knowledge with a spelling network as presented in this work.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rJ8rHkWRb
A fully connected architecture is used to produce word embeddings from character representations, outperforming traditional embeddings and providing insight into sparsity and dropout.
Neural networks with low-precision weights and activations offer compelling efficiency advantages over their full-precision equivalents. The two most frequently discussed benefits of quantization are reduced memory consumption, and a faster forward pass when implemented with efficient bitwise operations. We propose a third benefit of very low-precision neural networks: improved robustness against some adversarial attacks, and in the worst case, performance that is on par with full-precision models. We focus on the very low-precision case where weights and activations are both quantized to $\pm$1, and note that stochastically quantizing weights in just one layer can sharply reduce the impact of iterative attacks. We observe that non-scaled binary neural networks exhibit a similar effect to the original \emph{defensive distillation} procedure that led to \emph{gradient masking}, and a false notion of security. We address this by conducting both black-box and white-box experiments with binary models that do not artificially mask gradients. The ability to fool machine learning models by making small changes to their input severely limits their potential for safe use in many real-world scenarios. Example vulnerabilities include a seemingly innocuous audio broadcast that is interpreted by a speech recognition model in a smartphone, with the intent to trigger an e-transfer, as well as pictures or identity documents that are automatically tagged as someone other than the real individual. The two most common threat models when evaluating the security of a system are the black-box and white-box assumptions, which represent varying degrees of information that an adversary may possess. In a black-box threat model, an adversary has similar abilities to a normal user that interacts with a system by providing inputs and observing the corresponding outputs. Under this threat model, an adversary generally does not know details of the model architecture or dataset used to train the model. Of course, an adversary is free to assume that a convolutional architecture was likely used if the input domain is images, or a recurrent model for speech or text. In a white-box threat model, an adversary has complete access to the model architecture and parameters. In the case of neural networks, white-box attacks frequently rely on gradient information to craft especially strong adversarial examples, where strong means that the example is very close to the original input as defined by some distance norm (e.g. L 0 -number of features modified, L 2 -mean squared distance), yet is very likely to cause the model to yield the incorrect output. For both threat types, targeted attacks where a model is made to fail in a specific way (e.g. causing a handwritten '7' look like a '3') represents a stronger attack than simple misclassification. The problem with deploying machine learning systems that are secured in a traditional sense, is that adversarial examples have been shown to generalize well between models with different source and target architectures BID19 BID16 Tramèr et al., 2017). This means that a secured model can be compromised in an approximately white-box setting by training and attacking a substitute model that approximates the decision boundary of the model under attack BID16. Thus, to make strong about the robustness of a machine learning model to adversarial attacks, both threat models should be considered. 
Tangent to research on defences against adversarial attacks, significant progress has been made towards training very low-precision deep neural networks to accuracy levels that are competitive with full-precision models BID4 BID22 BID20. The current motivation for extreme quantization is the ability to deploy these models under hardware resource constraints, acceleration, or reduced power consumption. Ideally, 32× compression is possible by using 1-bit to represent single-precision floating point parameters. By similarly quantizing activations, we can reduce run-time memory consumption as well. These savings enable large scale deployment of neural networks on the billions of existing embedded devices. Very low-precision models were designed with deployment in mind, and may be responsible for making critical decisions in embedded systems, all subject to reverse engineering and a diverse set of real world attacks. With much at stake in applications like autonomous navigation, robotics, and network infrastructure, understanding how very low-precision neural networks behave in adversarial settings is essential. To that end, we make the following contributions:• To the best of our knowledge, we are the first to formally evaluate and interpret the robustness of binary neural networks (BNNs) to adversarial attacks on the MNIST BID10 ) and CIFAR-10 datasets.• We compare and contrast the properties of low-precision neural networks that confer adversarial robustness to previously proposed defense strategies. We then combine these properties to propose an optimal defense strategy.• We attempt to generalize and make recommendations regarding the suitability of lowprecision neural networks against various classes of attacks (e.g. single step vs. iterative). Since the initial disclosure of adversarial examples by BID19 and BID2, many defense strategies have been proposed and subsequently defeated. It is generally accepted that strategies for mitigating the impact of these examples still lag behind state of the art attacks, which are capable of producing adversarial examples that are indistinguishable from unmodified inputs as perceived by humans. In general, there are two approaches to defending against adversarial examples: reactive-detecting the presence of adversarial examples, such as through some notion of confidence-based outlier detection. On the other hand, a proactive approach aims to improve the robustness of the underlying model, which may involve adding an extra class to which malicious inputs should be assigned BID13. The latter approach is important for building reliable systems where a sensible decision must be made at all times. In this work, we focus solely on the proactive approach. To define adversarial examples, we require some measurement of distance that can be computed between perturbed inputs and naturally occurring inputs. In the visual domain, it is convenient if the metric approximates human perceptual similarity, but is not required. Various L p norms have been used in the literature: L 0 -number of features modified, L 2 -mean squared distance, L ∞ -limited only in the maximum perturbation applied to any feature. We evaluate at least one attack that is cast in terms of each respective distance metric, and leave discussion of the optimal distance metric to future work. The most compelling explanation for the existence of adversarial examples proposed to date is the linearity hypothesis. 
The elementary operators used at each layer of a neural network, matrix dot-products and convolutions, are fundamentally too linear. Furthermore, the non-linearity applied at each layer is usually itself either piecewise linear (e.g. ReLU), or we have specifically encouraged the network through initialization or regularization to have small weights and activations such that its units (e.g. sigmoid, tanh) operate in their linear regions. By adding noise to inputs which is highly correlated with the sign of the model parameters, a large swing in activation can be induced. Additionally, the magnitude by which this noise must be scaled to have this effect tends to diminish as the input dimensionality grows. This piecewise linearity also makes neural networks easy to attack using the gradient of the output with respect to the input, and consequently, the resulting incorrect predictions are made with high confidence. Fortunately, we are reminded that the universal approximation theorem suggests that, given sufficient capacity, a neural network should at least be able to represent the type of function that resists adversarial examples. The most successful defense mechanism to date, adversarial training, is based on this premise, and attempts to learn such a function. The fast gradient sign method (FGSM) is one such procedure for crafting this damaging noise, and is still used today despite not being state-of-the-art in the white-box setting, as it is straightforward to compute and yields examples that transfer well between models.

The linearity hypothesis was one of the main reasons for initially considering binarized neural networks as a natural defense against adversarial examples. Not only are they highly regularized by default through severely quantized weights, but they appear to be more non-linear and discontinuous than conventional deep neural networks (DNNs). Additionally, we suspect that the same characteristics that make them challenging to train make them difficult to attack with an iterative procedure. At the same time, assumptions regarding the information required by an effective adversary have become more and more relaxed, to the extent that black-box attacks can be especially damaging with just a small set of labeled input-output pairs BID16.

Perhaps the most striking feature of adversarial examples is how well they generalize between models with different architectures trained on different datasets BID16. It was shown by BID9 that 2/3 of adversarial ImageNet examples survive various camera and perspective transformations after being printed on paper and subsequently photographed and classified by a mobile phone. The most successful black-box attacks have the secured model (Oracle) assign labels to a set of real or synthetic inputs, which can be used to train a substitute model that mimics the Oracle's decision boundary BID16. A single step attack, such as FGSM, can be used on the smooth substitute model to generate examples that transfer, without having access to the original training data, architecture, or training procedure used by the Oracle. BID16 showed they are able to compromise machine learning models 80% of the time on small datasets like MNIST using various shallow MLP-based substitute models. There is not a particularly high correlation between test accuracy and transferability of adversarial examples; therefore, despite not attaining great results on the original MNIST task, a simple substitute learns enough to compromise the Oracle.
This technique was shown to overcome gradient masking approaches, such as models that either obscure or have no gradient information, e.g., k-nearest neighbors or decision trees. With strong adversarial training of the model to be defended, attacks generated using the substitute model do not transfer as well. Therefore, to be compelling, BNNs should be able to handle training with large ε while maintaining competitive test accuracy on clean inputs relative to full-precision. The strongest white-box attacks all use an iterative procedure; however, the resulting examples do not transfer as well as those of single step methods. An iterative attack using the Adam optimizer was proposed by BID3 that outperforms other expensive optimization-based approaches BID19, the Jacobian-based saliency map attack (JSMA) BID14, and DeepFool BID12 in terms of the three Lp norms previously used as adversarial example distance metrics in the literature. We have made our best attempt to use state-of-the-art attacks in our experiments.

Figure 1: Blocks used in the binary convolution architecture.

In Figure 1, we depict the quantization scheme applied to the base convolutional neural network provided in the CleverHans library tutorials BID15. In the first layer, we retain weights and activations in single-precision floating point. Weights in hidden layers are binarized either deterministically or stochastically, as in BID4, and activations were always binarized deterministically. Unlike in BID4, we stochastically quantize weights at test time as a possible defense against iterative attacks. Under the stochastic binarization scheme, weights are sampled once per forward pass from a Bernoulli distribution with probability given by passing the real-valued weight through the hard sigmoid function from BID4. Lastly, we map the Bernoulli samples ∈ {0, 1} to ±1 by multiplying by 2 and subtracting 1 (in TensorFlow, this can be accomplished with 2 * Bernoulli(probs=tf.clip_by_value((x + 1.) / 2., 0., 1.)).sample() - 1). We do not find that this significantly slows down training with TensorFlow BID0 on a modern GPU, but these networks take between 3-4× as many epochs as a deterministically quantized binary network to converge. We use the straight-through estimator (STE) to back-propagate gradients through the quantization step BID1. We optionally insert a small (e.g., 1e-2) tunable scalar after the ReLU in hidden layers, to compensate for an increase in the L1 norm of the activations due to binarization. BID20 also used this approach to reach similar accuracy gains as those conferred by the more expensive XNOR-Net channel-wise normalization scheme BID17. Convolution kernels were initialized from a truncated normal distribution with σ=0.2 for accumulating full-precision weight updates, and were quantized to ±1 in the forward pass. Batch normalization was applied before quantizing activations to ensure they were centered around zero BID6.

We report test error rates for these models on MNIST with varying capacity in Table 6 of Appendix A. Capacity is denoted by the number of kernels in the first layer, K_Layer1. All subsequent layers had exactly double this number of kernels. Models were trained for 15 epochs unless indicated otherwise. In general, models with full-precision weights and activations under-fit the naturally occurring data less than a binary equivalent, with error rates of approximately 1% and 2%, respectively.
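As an illustration of the quantization scheme just described (hard sigmoid, Bernoulli sampling, mapping {0, 1} to ±1, and the straight-through estimator), the following is a minimal NumPy sketch; the function names and shapes are ours, not the TensorFlow implementation used in the experiments.

```python
import numpy as np

def hard_sigmoid(w):
    # Map real-valued weights to probabilities in [0, 1].
    return np.clip((w + 1.0) / 2.0, 0.0, 1.0)

def binarize_stochastic(w, rng):
    # Sample {0, 1} from Bernoulli(hard_sigmoid(w)), then map to {-1, +1}.
    return 2.0 * (rng.random(w.shape) < hard_sigmoid(w)).astype(w.dtype) - 1.0

def binarize_deterministic(w):
    # Threshold at zero.
    return np.where(w >= 0.0, 1.0, -1.0)

def ste_backward(grad_wrt_binary, w_real):
    # Straight-through estimator: pass the gradient through to the real-valued
    # weights, zeroing it where the real-valued weight lies outside [-1, 1].
    return grad_wrt_binary * (np.abs(w_real) <= 1.0)

rng = np.random.default_rng(1234)
w_real = rng.normal(0.0, 0.2, size=(3, 3))    # accumulated full-precision weights
w_bin = binarize_stochastic(w_real, rng)      # resampled once per forward pass
```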
With the addition of the small learned scaling factor, the binary models converge to approximately the same error rate as the full-precision model on MNIST and CIFAR-10.

We experiment with three different types of adversarial training, depending on the combination of dataset and attack: FGSM with fixed ε, FGSM with ε sampled from a truncated normal distribution, and projected gradient descent (PGD) BID11, which is the state-of-the-art adversarial training procedure for MNIST. We do not necessarily pair all training methods against all attacks. The model's own best prediction is used as the true label in adversarial training, unless otherwise noted, to prevent the label leaking effect. We first attempt to fool our binarized networks with single step attacks in a white-box setting, and progressively scale up to stronger state-of-the-art attacks. All experiments were conducted in TensorFlow, with the random number generator seeded with the value 1234, and used either v2.0.0 of CleverHans BID15 or Foolbox, a Python toolbox for creating adversarial examples BID18. All attacks were clipped to the anticipated input range during adversarial training and evaluation. For single step attacks, we fix the magnitude ε of the perturbation and attack the whole test set, then report accuracy on the new test set. The general procedure for iterative attacks is to fix the step size per iteration or learning rate, and the number of iterations. We then report accuracy on the perturbed test set after this many iterations while keeping other hyper-parameters constant.

The FGSM is a simple but effective single step attack, defined in the equation below. The attack linearly approximates the gradient of the loss used to train the model with respect to the input. The gradient is thresholded by taking its sign, scaled by a uniform constant ε, and added to, or subtracted from, the input, depending on whether we wish to move away from the current class or in the direction of a target class:

x_adv = x ± ε · sign(∇_x J(θ, x, y)).

To confer robustness to more than one value of ε with which an adversary may attack, the adversarial training procedure samples a unique ε for each training example from a truncated normal distribution. We set the standard deviation to σ = ceil(ε_max · 255/2). We consider up to ε_max = 0.3, as this is a common upper limit for an L∞ norm perturbation that is not easily perceived by humans, and corresponds to a 30% change in pixel intensity for an arbitrary number of pixels.

Table 1: Accuracy on adversarial examples generated with an FGSM misclassification attack on the MNIST test set with three values of ε. Three different models were evaluated: A is full-precision, B is binary, and C is binary with a learned scalar. Models trained with, and without, adversarial training are shown. The '+' suffix indicates the model was trained for the last 5 epochs with the ε-sampling procedure described above. All values are averaged over four runs for models trained from scratch.

In Table 1, it can be observed that a plain binary network without adversarial training (B) achieves the best robustness to FGSM, with nearly 90% accuracy at ε = 0.1 for the highest capacity model. We postpone a formal explanation of this outlier for the discussion.
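For reference, a minimal sketch of the FGSM step defined above; the gradient of the training loss with respect to the input is taken as given here (a placeholder array), since it depends on the model under attack.

```python
import numpy as np

def fgsm(x, grad_loss_wrt_input, eps, targeted=False, clip=(0.0, 1.0)):
    """Single-step FGSM: move each feature by eps in the direction that increases
    the loss (untargeted) or decreases the loss of a chosen target class (targeted)."""
    step = eps * np.sign(grad_loss_wrt_input)
    x_adv = x - step if targeted else x + step
    return np.clip(x_adv, *clip)

x = np.zeros((28, 28))
fake_grad = np.random.default_rng(0).normal(size=x.shape)   # stand-in for dJ/dx
x_adv = fgsm(x, fake_grad, eps=0.1)
```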
Our results for large ε agree with observations made by BID11, who found FGSM to be suboptimal for adversarial training as it yields a limited set of adversarial examples. We suspect that the reason neither scaled nor unscaled binary models performed well when trained with an adversary and tested on larger values of ε is that, by the time adversarial training was introduced at epoch 10, both had entered into a state of decreased learning. Our binary weight implementation makes updates to real-valued weights during training, which are binarized in the forward pass. The real-valued weights tend to polarize as the model converges, resulting in fewer sign changes. Regularization schemes that actually encourage the underlying real-valued weights to polarize around ±1 have been proposed BID20, but we do not find this to be particularly helpful after sweeping a range of settings for the regularization constant λ. Regardless, in this case, the binary models did not benefit from adversarial training to the same extent that the full-precision models did.

We find that adversarial training with binary models is somewhat of a balancing act. If a strong adversary is introduced to the model too early, it may fail to converge for natural inputs. If introduced too late, it may be difficult to bring the model back into its malleable state, where it is willing to flip the sign of its weights. Despite this challenge, the scaled binary model (C+) (see Figure 1 for the location of the optional scalar) reaped significant benefits from adversarial training, and its accuracy was on par with the full-precision model for ε = 0.1.

To investigate the low performance observed against large ε in Table 1, models A and C were trained from scratch with 40 iterations of PGD BID11. TAB1 shows the results of this new training; the subsequent FGSM attack was performed identically to that of Table 1. A similar trend was found in TAB1, where the lowest capacity models struggle to become robust against large ε. Once the scaled binary model had sufficient capacity, it actually slightly outperforms its full-precision equivalent for all values of ε. With this, we have demonstrated that not only can BNNs achieve competitive accuracy on clean inputs with significantly fewer resources, but they can also allocate excess capacity in response to state-of-the-art adversaries.

The Carlini-Wagner L2 attack (CWL2) is an iterative process, guided by an optimizer such as Adam, that produces strong adversarial examples by simultaneously minimizing distortion and manipulating the logits per the attack goal. We use the implementation from CleverHans BID15 and show results in TAB2 and FIG0. Only binary models are shown in TAB2 because all but two full-precision models had zero accuracy after running CWL2 for 100 iterations. The best full-precision model was A256+ with 1.8±0.9% accuracy. We note that the stochastically quantized binary models with scaling to prevent gradient masking ('S' prefix) underfit somewhat on the training set, and had test error rates of 8±1%, 5±2%, and 3±1% for S64, S128, and S256 respectively, averaged over four runs. For S256, this test error can be compared with an unscaled binary model which only achieves 22±3% accuracy with gradient masking, compared to 46±3% without. In FIG0, it can be observed that binary and full-precision models perform somewhat similarly for the first few iterations of the CWL2 attack, but beyond 10-20 iterations, the accuracy of full-precision models drops off quickly, regardless of having performed adversarial training.
We note that PGD, defined with respect to the L∞ norm, makes no claim of increasing robustness to L2 attacks, such as CWL2. Interestingly, it can be seen that the binary model benefited from adversarial training considerably when evaluated at 10 to 100 attack iterations, while the full-precision model did not. These benefits eventually disappear to within the margin of random error after continuing to 1000 iterations, as recommended by BID3. At this point, both B and B+ had accuracy of 19±3%, by which time the full-precision models had long flatlined at zero. Meanwhile, S64 maintained 38±3% accuracy after 1000 iterations, nearly double that of the deterministically quantized models. Running these attacks to 1000 iterations was two orders of magnitude more time consuming than training these models from scratch (without PGD training); therefore we believe this targeted attack represents a fairly substantial level of effort on behalf of the adversary.

We run the substitute model training procedure from BID16 using CleverHans v2.0.0, for both the MNIST and CIFAR-10 datasets, with and without FGSM adversarial training. As a substitute model, we use a two-layer MLP with 200 hidden units and ReLU activations. The substitute is trained on 150 images withheld from the test set, and augmented by perturbing the images in the direction of maximal variability of the substitute model, as defined by the Jacobian. Six epochs of data augmentation with λ = 0.1 were used in combination with 10 substitute model training epochs after each augmentation step. The oracle was again trained for 15 epochs for MNIST, and 20 epochs for CIFAR-10. Results for the black-box experiment on the MNIST dataset are shown in TAB3. Full-precision networks had a moderate advantage over the undefended binary models B and C. Only the highest capacity full-precision model benefited from FGSM adversarial training, while the scaled binary model benefited regardless of capacity. There was a small positive relationship between accuracy and capacity for both A and C when trained with PGD, and there was almost no loss in accuracy in this setting after binarization. PGD was more effective than stochasticity here, as it leads to learning a more optimal decision boundary rather than confusing an adversary with dynamic gradient information.

We suspect that plain BNNs implement two different kinds of gradient masking. We discovered the first by tracking the L1 norm of the hidden layer activations and unscaled logits. BNNs operate with larger range and variance than 'normal' networks, which can be explained by virtue of convolving inputs with weights of greater magnitude (±1) compared with the typically small values taken by weights and activations. For our 64 kernel CNN, the logits were about 4× larger than those of the scaled or full-precision networks. This is analogous to the more complex defensive distillation procedure, in which the model to be secured is trained with soft labels generated by a teacher model. When training the teacher, a softmax temperature T ≫ 1 is used. The distilled model is trained on the labels assigned by the teacher, using the same T. At test time, the model is deployed with T = 1, which causes the logits to explode with respect to their learned values. The logits saturate the softmax function and cause gradients to vanish, leading FGSM and JSMA to fail at a higher rate.
However, this defense is defeated with a close enough guess for T, or via a black-box attack BID3. The second type of gradient masking is less easily overcome, and has to do with gradients being inherently discontinuous and non-smooth, as seen in FIG1 of Appendix B. We believe that this effect is what gives scaled BNNs an advantage over full-precision models with respect to targeted attacks. Even more importantly, through a regularization effect, the decision boundary for the MLP with binary units (FIG1) better represents the actual function to be learned, and is less susceptible to adversarial examples.

But why does gradient masking have a disproportionate effect when attacking compared with training on clean inputs? Models 'A' and 'B' were trained to within 1.2% test accuracy, while 'B' had improvements of 9.0% and 29.5% on the JSMA and CWL2 attacks respectively, corresponding to 8× and 25× differences in accuracy for adversarial vs. clean inputs. For JSMA, the performance gap can be attributed to the sub-optimality of the attack, as it uses logits rather than softmax probabilities. Furthermore, to achieve its L0 goal, pairs of individual pixels are manipulated, which is a noisy process in a binarized model. The success of model 'S' with stochastically quantized weights in its third convolutional layer against iterative attacks is more easily explained. Adversarial examples are not random noise, and do not occur in random directions. In fact, neural networks are extremely robust to large amounts of benign noise. An iterative attack that attempts to fool our stochastically quantized model faces a unique model at every step, with unique gradients. Thus, the direction that minimizes the probability of the true class in the first iteration is unlikely to be the same in the second. An iterative attack making n steps is essentially attacking an ensemble of n models. By making a series of small random steps, the adversary is sent on the equivalent of a wild goose chase and has a difficult time making progress in any particularly relevant direction to cause an adversarial example.

We have shown that for binarized neural networks, difficulty in training leads to difficulty when attacking. Although we did not observe a substantial improvement in robustness to single step attacks through binarization, by introducing stochasticity we have reduced the impact of the strongest attacks. Stochastic quantization is clearly far more computationally and memory efficient than a traditional ensemble of neural networks, and could be run entirely on a micro-controller with a pseudo-random number generator. Our adversarial accuracy on MNIST against the best white-box attack (CWL2) is 71±2% (S64+), compared with 1.8±0.9% for the best full-precision model (A256+). Black-box results were competitive between binary and full-precision models on MNIST, and binary models were slightly more robust for CIFAR-10, which we attribute to their improved regularization. Beyond their favourable speed and resource usage, we have demonstrated another benefit of deploying binary neural networks in industrial settings. Future work will consider other types of low-precision models as well as other adversarial attack methods.

Table 6: Error on the clean MNIST test set for models with varying capacity and precision. A is full-precision, B is binary, and C is binary with a learned scalar applied to the ReLU in hidden layers. All models were trained with Adam for 15 epochs with a batch size of 128 and a learning rate of 1e-3.
For adversarially trained models, we used 20 iterations of PGD BID11.

We reproduce the toy problem from BID14 of learning the two-input logical AND function with a simple MLP having two neurons in each layer. The only difference between our experiment and the original is that we train a 3-hidden-layer MLP (as opposed to 2 layers) with the Adam optimizer for 1k epochs, with a learning rate of 0.1. We use 3 layers since this is the smallest number of layers where the middle one can be quantized without directly touching the input or output, which would adversely impact learning. Here, a "quantized" layer means that its weights and activations are thresholded to +1 and -1, and a straight-through estimator BID1 is used to backpropagate gradients for learning. All configurations in the AND experiment learn a reasonable decision boundary; however, the MLPs with a single quantized hidden layer had highly non-linear forward gradients, as can be seen in FIG1. As training progressed, the forward derivative was highly dynamic and took on a variety of different shapes with sharp edges and peaks. When the MLP was allowed more capacity by doubling the number of hidden units (see Figure 4), the forward derivative was almost entirely destroyed. If one were to use this information to construct a saliency map, only two regions would be proposed (with poor directional information), and once exhausted there would be no further choices more insightful than random guessing.

In Figure 5 we compare the logits of full-precision and binary networks under varying degrees of FGSM perturbation. We noticed that for a softmax temperature T between 0.6 and 0.7, the direction in which increasing the perturbation causes an adversarial example flips. We observe no similar effect for full-precision models. Additionally, the full-precision logits respond to scaling in an approximately linear manner, whereas there is very little change in the logits for the binary case apart from the 180 degree flip. We used values of ε in the range of the actual attacks conducted in the paper; however, the piecewise linear effect is still there for ε with large absolute value.
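To illustrate the Appendix B observation on forward derivatives, the following toy sketch compares the gradient of an output unit with respect to the input for a small MLP when the middle layer is kept full-precision versus binarized with a straight-through estimator; the random weights and two-unit layers are illustrative stand-ins, not the trained models behind the figures.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2, W3 = (rng.normal(size=(2, 2)) for _ in range(3))

def forward_derivative(x, binarize_middle):
    # Forward pass of a tiny 3-layer MLP with tanh units; the middle layer
    # optionally uses sign() for both its weights and its activations.
    a1 = np.tanh(W1 @ x)
    W2_eff = np.sign(W2) if binarize_middle else W2
    z2 = W2_eff @ a1
    a2 = np.sign(z2) if binarize_middle else np.tanh(z2)

    # Backward pass for d out[0] / d x. sign() has zero gradient almost
    # everywhere, so the straight-through estimator (identity inside |z| <= 1)
    # is used when the middle layer is binarized.
    g = W3[0]                                  # d out[0] / d a2
    if binarize_middle:
        g = g * (np.abs(z2) <= 1.0)            # STE through the sign activation
    else:
        g = g * (1.0 - a2 ** 2)                # through tanh
    g = W2_eff.T @ g                           # d / d a1
    g = g * (1.0 - a1 ** 2)                    # through tanh
    return W1.T @ g                            # d / d x

x = np.array([0.3, -0.7])
print(forward_derivative(x, binarize_middle=False))
print(forward_derivative(x, binarize_middle=True))
```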
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HkTEFfZRb
We conduct adversarial attacks against binarized neural networks and show that we reduce the impact of the strongest attacks, while maintaining comparable accuracy in a black-box setting
Unsupervised bilingual dictionary induction (UBDI) is useful for unsupervised machine translation and for cross-lingual transfer of models into low-resource languages. One approach to UBDI is to align word vector spaces in different languages using generative adversarial networks (GANs) with linear generators, achieving state-of-the-art performance for several language pairs. For some pairs, however, GAN-based induction is unstable or completely fails to align the vector spaces. We focus on cases where linear transformations provably exist, but the performance of GAN-based UBDI depends heavily on the model initialization. We show that the instability depends on the shape and density of the vector sets, but not on noise; it is the result of local optima, but neither over-parameterization nor changing the batch size or the learning rate consistently reduces instability. Nevertheless, we can stabilize GAN-based UBDI through best-of-N model selection, based on an unsupervised stopping criterion.

A word vector space, also sometimes referred to as a word embedding, associates similar words in a vocabulary with similar vectors. Learning a projection of one word vector space into another, such that similar words across the two word embeddings are associated with similar vectors, is useful in many contexts, with the most prominent example being the alignment of vocabularies of different languages. This is a key step in machine translation of low-resource languages. An embedding of English words may associate thoughtful, considerate, and gracious with similar vectors, for example, but for English-Icelandic translation, it would be useful to have access to a cross-lingual word embedding space in which hugulsamur (lit.: 'thoughtful') was also associated with a similar vector. Such joint embeddings of words across languages can also be used to extract bilingual dictionaries.

Projections between word vector spaces have typically been learned from dictionary seeds. In seminal papers such as BID22 and BID11, these seeds would comprise thousands of words, but BID31 showed that we can learn reliable projections from as little as 50 words. BID15 subsequently showed that the seed can be replaced with words that are identical across languages, and BID1 showed that numerals can also do the job in some cases; both proposals remove the need for an actual dictionary. Even more recently, a handful of papers have proposed an entirely unsupervised approach to projecting word vector spaces onto each other, based on generative adversarial networks (GANs) BID12. We present the core idea behind such approaches in §3, but briefly put, GANs are used to learn a linear transformation to minimize the divergence between a target distribution (say the Icelandic embeddings) and a source distribution (the English embeddings projected into the Icelandic space).

The possibility of unsupervised bilingual dictionary induction (UBDI) has seemingly removed the data bottleneck in machine translation, evoking the idea that we can now learn to translate without human supervision. Yet, it remains an open question whether the initial, positive results extrapolate to real-world scenarios of learning translations between low-resource language pairs. Recent work presented results suggesting that UBDI is challenged by some language pairs exhibiting very different morphosyntactic properties, as well as by cases where the monolingual corpora are very different.
In this paper, we identify easy, hard, and impossible instances of GAN-based UBDI, and apply a simple test for discriminating between them. The hard cases exhibit instability, i.e. their success depends heavily on initialization. We set up a series of experiments to investigate these hard cases.

Our contributions We introduce a distinction between easy, hard, and impossible alignment problems over pairs of word vector spaces and show that a simple linearity test can be used to tell these cases apart. We show that the impossible cases are caused not necessarily by linguistic differences, but rather by properties of the corpora and the embedding algorithms. We also show that in the hard cases, the likelihood of being trapped in local minima depends heavily on the shape and density of the vector sets, but not on noise. Changes in the number of parameters, batch size, and learning rate do not alleviate the instability. Yet, using an unsupervised model selection method over N different initializations to select the best generators leads to a 6.74% average error reduction over standard MUSE.

Structure of the paper §2 presents MUSE BID6, an approach to GAN-based UBDI. Here we also discuss theoretical results from the GAN literature relevant to our case, and show a relation to a common point set registration method. In §3, we use a test based on Procrustes analysis to discriminate between easy, hard, and impossible cases, discussing its relation with tests of isomorphism and isospectrality. We then focus on the hard cases, where linear transformations provably exist, but GANs exhibit considerable instability. Through a series of experiments, we analyze what affects the instability of GAN-based UBDI. §4 presents our unsupervised best-of-N model selection method for stabilizing GAN-based UBDI.

In this section, we discuss the dynamics of GAN-based UBDI and how the training behavior of GANs can help us understand their limitations as applied to UBDI. Two families of approaches to UBDI exist: using GANs BID3 BID6 BID34 and using iterative closest point BID16. We focus on GAN-based UBDI, and more specifically on MUSE BID6, but at the end of this section we establish a relation between the two families of algorithms. A GAN consists of a generator and a discriminator. The generator G is trained to fool the discriminator D. The generator can be any differentiable function; in MUSE, it is a linear transform Ω. Let e ∈ E be an English word vector, and f ∈ F a French word vector, both of dimensionality d. The goal of the generator is then to choose Ω ∈ R^{d×d} such that ΩE has a distribution close to F. The discriminator is a map D_w : X → {0, 1}, implemented in MUSE as a multi-layered perceptron. The objective of the discriminator is to discriminate between the vector spaces F and ΩE. During training, the model parameters Ω and w are optimized using stochastic gradient descent, alternately updating the parameters of the discriminator based on the gradient of the discriminator loss and the parameters of the generator based on the gradient of the generator loss, which, by definition, is the inverse of the discriminator loss. The loss function used in MUSE and in our experiments below is cross-entropy.
In each iteration, we sample N vectors e ∈ E and N vectors f ∈ F and update the discriminator parameters w by minimizing the cross-entropy loss

L_D(w) = −(1/N) Σ_{i=1..N} [ log D_w(f_i) + log(1 − D_w(Ωe_i)) ].

Theoretically, the optimal parameters are a solution to the min-max problem

min_Ω max_w E_{f∼F}[ log D_w(f) ] + E_{e∼E}[ log(1 − D_w(Ωe)) ].

If a generator wins the game against an ideal discriminator on a very large number of samples, then F and ΩE can be shown to be close in Jensen-Shannon divergence, and thus the model has learned the true data distribution. This result, referring to the distribution of the data, p_data, and the distribution, p_g, that G is sampling from, is from BID12: if G and D have enough capacity, and at each step of training the discriminator is allowed to reach its optimum given G, and p_g is updated so as to improve the criterion

E_{x∼p_data}[ log D(x) ] + E_{x∼p_g}[ log(1 − D(x)) ],

then p_g converges to p_data. This relies on a number of assumptions that do not hold in practice. The generator in MUSE, which learns a linear transform Ω, has very limited capacity, for example, and we are updating Ω rather than p_g. In practice, therefore, during training, MUSE alternates between k steps of optimizing the discriminator and one step of optimizing the generator. Another common problem with training GANs is that the discriminator loss quickly drops to zero when there is no overlap between p_g and p_data; but note that in our case, the discriminator is initially presented with IE and F, for which there is typically no trivial solution, since the embedding spaces are likely to overlap. We show in §4 that discriminator and generator loss are poor model selection criteria, however; instead we propose a simple criterion based on cosine similarities between nearest neighbors in the learned alignment.

From ΩE and F, we can extract a bilingual dictionary using nearest neighbor queries, i.e., by asking what is the nearest neighbor of ΩE in F, or vice versa. MUSE uses a normalized nearest neighbor retrieval method to reduce the influence of hubs BID24 BID9. The method is called cross-domain similarity local scaling (CSLS) and is used to expand high-density areas and condense low-density ones. The mean similarity of a source language embedding Ωe to its k nearest neighbors in the target language (k = 10 suggested) is defined as

µ_E(Ωe) = (1/k) Σ_{f ∈ N_F(Ωe)} cos(Ωe, f),

where cos is the cosine similarity and N_F(Ωe) denotes the k nearest neighbors of Ωe in F. µ_F(f_i) is defined in an analogous manner for every i. CSLS(e, f_i) is then calculated as 2cos(Ωe, f_i) − µ_E(Ωe) − µ_F(f_i). MUSE uses an unsupervised validation criterion based on CSLS. The translations of the top 10k most frequent words in the source language are obtained with CSLS, and the average pairwise cosine similarity is computed over them. This metric is considered indicative of the closeness between the projected source space and the target space, and is found to correlate well with supervised evaluation metrics. After inducing a bilingual dictionary, E_d and F_d, by querying ΩE and F with CSLS, MUSE performs a refinement step based on the Procrustes algorithm BID26, whereby the singular value decomposition UΣV^T of F_d E_d^T yields a refined transformation Ω = UV^T.

The idea of minimizing nearest neighbor similarity for unsupervised model selection is also found in point set registration and lies at the core of iterative closest point (ICP) optimization BID4. ICP typically minimizes the L2 distance (mean squared error) between nearest neighbor pairs. The ICP optimization algorithm works by assigning each transformed vector to its nearest neighbor and then computing the new relative transformation that minimizes the cost function with respect to this assignment.
ICP can be shown to converge to local optima BID4, in polynomial time BID10. ICP easily gets trapped in local optima, however; exact algorithms only exist for two- and three-dimensional point set registration, and these algorithms are slow BID33. Generally, it holds that the optimal solution to the GAN min-max problem is also optimal for ICP. To see this, note that a GAN minimizes the Jensen-Shannon divergence between F and ΩE. The optimal solution to this is F = ΩE. As the sample size goes to infinity, this means the L2 loss in ICP goes to 0. In other words, the ICP loss is minimal if an optimal solution to the UBDI min-max problem is found. ICP was independently proposed for UBDI in BID16. They report their method only works using PCA initialization. We explored PCA initialization for MUSE, but observed the opposite effect, namely that PCA initialization leads to a degradation in performance.

A function Ω from E to F is a linear transformation if Ω(f + g) = Ω(f) + Ω(g) and Ω(kf) = kΩ(f) for all elements f, g of E, and for all scalars k. An invertible linear transformation is called an isomorphism. The two vector spaces E and F are called isomorphic if there is an isomorphism from E to F. Equivalently, if the kernel of a linear transformation between two vector spaces of the same dimensionality contains only the zero vector, it is invertible and hence an isomorphism. Most work on supervised or unsupervised alignment of word vector spaces relies on the assumption that they are approximately isomorphic, i.e., isomorphic after removing a small set of vertices BID22 BID3 BID34 BID6. In this section, we show that word vector spaces are not necessarily approximately isomorphic. We will refer to cases of non-approximately isomorphic word vector spaces as impossible cases. The possible cases can be further divided into easy and hard cases, corresponding to the cases where GAN-based UBDI is stable and unstable (i.e., performance is highly dependent on initialization), respectively.

It is not difficult to see why hard cases may arise when using GANs for unsupervised alignment of vector spaces. One example of a hard (but not impossible) problem instance is the case of two smoothly populated vector spaces on unit spheres. In this case, there is an infinite set of equally good linear transformations (rotations) that achieve the same training loss. Similarly for two binary-valued, n-dimensional vector spaces with one vector in each possible position. Here the number of local optima would be 2^n, but since the loss is the same in each of them, the loss landscape is highly non-convex, and the basin of convergence is therefore very small BID33. The chance of aligning the two spaces using gradient descent optimization would be 1/2^n. In other words, minimizing the Jensen-Shannon divergence between the word vector distributions, even in the easy case, is not always guaranteed to uncover an alignment between translation equivalents. From the above, it follows that alignments between linearly alignable vector spaces cannot always be learned using UBDI methods. In §3.1, we test for approximate isomorphism to decide whether two vector spaces are linearly alignable. §3.2-3.3 are devoted to analyzing when alignments between linearly alignable vector spaces can be learned.

In our experiments in §3 and 4, Bengali and Cebuano embeddings are pretrained by FastText; all others are trained using FastText on Polyglot. In the experiments in §5, we use FastText embeddings pretrained on Wiki and Common Crawl data.
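As a reference point for the tests that follow, here is a minimal NumPy sketch of the CSLS retrieval, the mean-cosine validation criterion, and the SVD-based Procrustes refinement described in §2; the matrices, dimensions, and seed dictionary are illustrative placeholders rather than our experimental setup.

```python
import numpy as np

def normalize(M):
    return M / np.linalg.norm(M, axis=1, keepdims=True)

def csls_scores(proj_E, F, k=10):
    """CSLS(e, f) = 2*cos(Omega e, f) - mu_E(Omega e) - mu_F(f); rows are word vectors."""
    S = normalize(proj_E) @ normalize(F).T              # cosine similarities
    mu_E = np.sort(S, axis=1)[:, -k:].mean(axis=1)      # mean sim. to k nearest targets
    mu_F = np.sort(S, axis=0)[-k:, :].mean(axis=0)      # mean sim. to k nearest sources
    return 2.0 * S - mu_E[:, None] - mu_F[None, :]

def mean_cosine_criterion(proj_E, F, k=10, top=10000):
    """Unsupervised validation: mean cosine similarity of the CSLS translations
    of the most frequent source words."""
    n = min(top, len(proj_E))
    nn = csls_scores(proj_E[:n], F, k).argmax(axis=1)
    return np.mean(np.sum(normalize(proj_E[:n]) * normalize(F)[nn], axis=1))

def procrustes(E_d, F_d):
    """Refinement step: Omega = U V^T, with U S V^T the SVD of F_d^T E_d."""
    U, _, Vt = np.linalg.svd(F_d.T @ E_d)
    return U @ Vt

rng = np.random.default_rng(0)
E, F = rng.normal(size=(50, 8)), rng.normal(size=(60, 8))   # toy "embeddings"
Omega = np.eye(8)                                           # stand-in generator
print(mean_cosine_criterion(E @ Omega.T, F, k=5, top=50))
Omega_refined = procrustes(E[:40], F[:40])                  # fabricated seed dictionary
```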
If not indicated otherwise, we use MUSE with default parameters BID6. The Procrustes fit is a simple linearity test which, as we find, captures the dynamics of GAN-based UBDI well. Compared to isomorphism and isospectrality tests, the Procrustes fit is inexpensive and can be run with bigger dictionary seeds.

Procrustes fit The idea behind this test is to apply a Procrustes analysis (see §2) to a sizeable dictionary seed (5000 tokens) and to measure the training fit. Since UV^T E = F if and only if E and F are isomorphic, the Procrustes fit tests whether a linear alignment between two embedding spaces exists. We can correlate the Procrustes fit measure with the performance of UBDI. While UBDI is motivated by cases where dictionary seeds are not available, and the Procrustes fit relies on dictionary seeds, a strong correlation can act as a sanity check on UBDI, as well as a tool to help us understand its limitations. The relationship between Procrustes fit and UBDI performance is presented in FIG0 and shows a very strong correlation. One immediate consequence is that the poor UBDI performance on languages such as Bengali and Cebuano is not a result of GANs being a poor estimator of the linear transforms, but rather of there not being a good linear transform from English into these languages.

Isomorphism and isospectrality We briefly compare the Procrustes fit to two similarity measures for nearest neighbor graphs of vector spaces introduced in previous work. The nearest neighbor graph of a word vector space is obtained by adding edges between any word vertex and its nearest neighbor. Note that only cycles of length 2 are possible in a nearest neighbor graph. Two nearest neighbor graphs are graph isomorphic if they contain the same number of vertices connected in the same way. Two isomorphic vector spaces have isomorphic nearest neighbor graphs, but not vice versa. We say that the nearest neighbor graphs are k-subgraph isomorphic if the nearest neighbor graphs for the most frequent k words (in the source language and their translations) are isomorphic. There are exact algorithms, e.g., VF2 BID7, for checking whether two nearest neighbor graphs are graph isomorphic. These algorithms do not scale easily to graphs with hundreds of thousands of nodes, however. Also, the algorithms do not identify approximate isomorphism, unless run on all subgraphs with k vertices removed. Such tests are therefore impractical. An alternative is a spectral metric based on eigenvalues of the Laplacian of the nearest neighbor graphs, similar to metrics used for graph matching problems in computer vision BID25 and biology BID20. The metric quantifies to what extent the nearest neighbor graphs are isospectral. Note that (approximately) isospectral graphs need not be (approximately) isomorphic, but (approximately) isomorphic graphs are always (approximately) isospectral. Let A_1 and A_2 be the adjacency matrices of the nearest neighbor graphs G_1 and G_2 of our two word embeddings, respectively. Let L_1 = D_1 − A_1 and L_2 = D_2 − A_2 be the Laplacians of the nearest neighbor graphs, where D_1 and D_2 are the corresponding diagonal matrices of degrees. We then compute the eigensimilarity of the Laplacians of the nearest neighbor graphs, L_1 and L_2. For each graph, we find the smallest k such that the sum of the k largest Laplacian eigenvalues is <90% of the sum of all Laplacian eigenvalues. We take the smaller k of the two, and use the sum of the squared differences between the largest k Laplacian eigenvalues, ∆ = Σ_{i=1..k} (λ_{1i} − λ_{2i})^2.
Note that ∆ = 0 means the graphs are isospectral, and the metric is unbounded above. Thus, the higher ∆ is, the less similar the graphs (i.e., their Laplacian spectra). Isospectrality varies with the Procrustes fit; to see this, note that a perfect Procrustes fit means UV^T E = F, in which case the nearest neighbor graphs of the aligned spaces are isomorphic. Two isomorphic graphs also have the same set of sorted eigenvalues, i.e., ∆ = 0. In general, it holds that if we add an edge to a graph G, to form G', its spectrum changes monotonically BID29. Since the Procrustes fit evaluates the nearest neighbor graph, it follows that a change in the nearest neighbor graph leading to a drop in Procrustes fit will also lead to a drop in eigenvector similarity. However, isomorphism and isospectrality tests are computationally expensive, and in practice, we have to sample subgraphs and run the tests on multiple subgraphs, which leads to a poor approximation of the similarities of the two embedding graphs. In practice, the Procrustes fit, k-subgraph isomorphism, and k-subgraph isospectrality thus all rely on a dictionary. The tests are therefore not diagnostic tools, but means to understand the dynamics of UBDI. The Procrustes fit is more discriminative (since vector space isomorphism entails nearest neighbor graph isomorphism, not vice versa) and computationally more efficient. In our experiments, it also correlates much better with UBDI performance (MUSE in Table 2): the correlation coefficient is 96%, compared to 0% for k-subgraph isomorphism (not listed in Table 2) and -27% for k-subgraph isospectrality with k = 10.

Observation 1 Impossible cases are not (solely) the result of linguistic differences, but also of corpus characteristics. English-Bengali and English-Cebuano are not linearly alignable according to our Procrustes fit tests. There can be two explanations for such an observation: linguistic differences between the two languages, or variance in the monolingual corpora for Bengali and Cebuano, i.e. noise and little support per word. We test for this by applying the Procrustes fit test to the word vector spaces of Bengali and a higher-resource related language, Hindi. The Procrustes fit for Bengali-Hindi is even lower than the 46.25 obtained for English-Bengali. This finding is surprising, as we would expect Bengali and Hindi to align well due to their relatedness. The result thus suggests that the Bengali embeddings are of insufficient quality, which can largely explain the poor alignment found by the GAN. This is further supported by follow-up experiments we ran aligning a word vector space for English and a word vector space induced from scrambled English sentences (learned on two different 10% samples of Wikipedia), which can be thought of as a sample from a synthetic language that completely diverges from English in its syntactic properties. GAN-based UBDI was able to near-perfectly recover the word identities without supervision, showing that its success is not easily impeded by linguistic differences.

Observation 2 Impossible cases can also be the result of the inductive biases of the underlying word embedding algorithms. One observation made in BID6 is that the performance of MUSE degrades a little when using alternative embedding algorithms, but that alignment is still possible. We, however, observe that this is not the case when using different monolingual embedding algorithms for the two languages, e.g., FastText for English and Hyperwords for Spanish. While such embeddings are still linearly alignable (as verified by computing their Procrustes fits), GAN-based UBDI consistently fails on such cases.
This also holds for the case of aligning FastText for English with Hyperwords for English, as observed in BID14. In order to better understand the dynamics of GAN-based UBDI in hard cases, i.e., when the GAN suffers from local minima, we introduce three ablation transformations designed to control for properties of the word vector spaces: unit length normalization, PCA-based pruning, and noising. The results of GAN-based UBDI after applying these transforms are reported in Table 2.

Observation 3 GAN-based UBDI becomes more unstable and performance deteriorates with unit length normalization. This ablation transform performs unit length normalization (ULN) of all vectors x, i.e., x' = x/||x||_2, and is often used in supervised bilingual dictionary induction BID32 BID1. We use this transform to project word vectors onto a sphere, to control for shape information. If vectors are distributed smoothly over two spheres, there is no way to learn an alignment in the absence of a dictionary seed; in other words, if UBDI is unaffected by this transform, UBDI learns from density information alone. While supervised methods are insensitive to or benefit from ULN, we find that UBDI is very sensitive to such normalization (see Table 2, M-unit). We verify that supervised alignment is not affected by ULN by checking the Procrustes fit (§3.1), which remains unchanged under this transformation.

Observation 4 GAN-based UBDI becomes more unstable and performance deteriorates with PCA pruning. In order to control for density, we apply PCA to our word vector spaces, reducing them to 25 dimensions, and prune our vocabularies to remove density clusters by keeping all but one of any set of nearest neighbor vectors on an integer grid. This removes about 10% of our vocabularies. We then apply UBDI to the original vectors for the remaining words. This smoothening of the embeddings results in highly unstable and reduced performance (see Table 2, M-PCA). In other words, density information, while less crucial than shape information, is important for the stability of UBDI, possibly by reducing the chance of getting stuck in local optima. This is in contrast with the results on using ICP for UBDI in BID16, which relied on PCA initialization with 50 dimensions. We ran experiments with 25, 50, and 100 dimensions, with or without pruning, observing significant drops in performance across the board.

Table 2: Main experiments; average performance and stability across 10 runs. We consider a P@1 score below 1% a fail. MUSE is the MUSE system with default parameters. Ablation transforms: M-unit uses unit length normalization to evaluate the impact of shape; M-PCA uses PCA-based pruning to evaluate the impact of density; M-noise uses 25% random vectors injected in the target language space to evaluate the impact of noise. M-discr uses discriminator loss for model selection, as a baseline for M-cosine; M-cosine uses our model selection criterion. The macro-averaged error reduction of M-cosine over MUSE for the HARD languages is 7%, and 4% across all language pairs.

Observation 5 GAN-based UBDI is largely unaffected by noise injection. We add 25% random vectors, randomly sampled from a hypercube bounding the vector set. GAN-based UBDI results are not consistently affected by noise injection (see Table 2, M-noise). This is because the injected vectors rarely end up in the seed dictionaries used for the Procrustes analysis step.
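For concreteness, a small sketch (ours, purely illustrative) of two of the ablation transforms just described, operating on a matrix with one word vector per row; the PCA-based pruning step is omitted here.

```python
import numpy as np

def unit_length_normalize(X):
    """M-unit: project every word vector onto the unit sphere (controls for shape)."""
    return X / np.linalg.norm(X, axis=1, keepdims=True)

def inject_noise(X, fraction=0.25, rng=None):
    """M-noise: append fraction * len(X) random vectors sampled uniformly from
    the axis-aligned hypercube bounding the vector set."""
    rng = rng if rng is not None else np.random.default_rng(0)
    lo, hi = X.min(axis=0), X.max(axis=0)
    noise = rng.uniform(lo, hi, size=(int(fraction * len(X)), X.shape[1]))
    return np.vstack([X, noise])

F = np.random.default_rng(1).normal(size=(1000, 300))   # stand-in target embeddings
F_unit = unit_length_normalize(F)
F_noisy = inject_noise(F, fraction=0.25)
```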
In follow-up experiments on Greek and Hungarian, we find that GAN-based UBDI gets stuck in local optima in hard cases, and that over-parameterization, increasing the batch size, or decreasing the learning rate does not help.

Observation 6 In the hard cases, GAN-based UBDI gets stuck in local optima. In cases where linear alignment is possible, but UBDI is unstable, the model might get stuck in a local optimum, which is the result of the discriminator loss not increasing in the region around the current discriminator model. We analyze the discriminator loss in these areas by plotting it as a function of the generator parameters for the failure cases of two of the hard alignment cases, namely English-Greek and English-Hungarian. We plot the loss surface along its intersection with a line segment connecting two sets of parameters BID13 BID21. In our case, we interpolate between the model induced by GAN-based UBDI and the (oracle) model obtained using supervised Procrustes analysis. Results are shown in FIG2. The green loss curves represent the current discriminator's loss along all the generators between the current generator and the generator found by Procrustes analysis. In all cases, we see that while performance (P@1 and mean cosine similarity) goes up, there is an initial drop in the discriminator loss, which suggests there is no learning signal in this direction for GAN-based UBDI. This is along a line segment representing the shortest path from the failed generator to the oracle generator, of course; linear interpolation provides no guarantee there are no almost-as-short paths with plenty of signal. A more sophisticated sampling method is to sample along two random direction vectors BID13 BID21. We used an alternative strategy of sampling from normal distributions with fixed variance that were orthogonal to the line segment. We observed the same pattern, leading us to the conclusion that instability is caused by local optima.

Observation 7 Over-parameterization does not consistently help in the hard cases. Recent work has observed that over-parameterization leads to smoother loss landscapes and makes optimization easier BID5. We experiment with widening our discriminators to smoothen the loss landscape, but the results are inconsistent: for Hungarian, this made GAN-based UBDI more stable; for Greek, less stable (see FIG2).

Observation 8 Changing the batch size or the learning rate to hurt the discriminator also does not help. Previous work has shown that a large learning rate and a small batch size contribute towards SGD finding flatter minima BID17, but in our experiments, we are interested in the discriminator not ending up in flat regions, where there is no signal to update the generator. We therefore experiment with a smaller learning rate and larger batch sizes. The motivation behind both is decreasing the scale of random fluctuations in the SGD dynamics BID27 BID2, enabling the discriminator to explore narrower regions in the loss landscape. See FIG2 for results. Increasing the batch size or varying the learning rate (up or down) clearly comes at a cost, and it seems the MUSE default hyperparameters are close to optimal.

In this section, we compare two unsupervised model selection criteria. We train three models with different random seeds in parallel and use the selection criterion to select one of these models to train for the remaining epochs. The first criterion is the discriminator loss during training, which is used in BID8, for example.
In contrast, we propose to use the mean cosine similarity between all translations predicted by the CSLS method (see §2), which was used as an unsupervised stopping criterion by BID6.

Observation 9 In the hard cases, model selection with cosine similarity can stabilize GAN-based UBDI. As we see in Table 2, the selection criterion based on discriminator loss (M-discr) increases the instability of UBDI, leading to 4/10 failed alignments for Greek compared to 2/10 without model selection, for example. Cosine similarity (M-cosine), in contrast, leads to perfectly stable UBDI. Note that if the probability of getting stuck in a local optimum that leads to a poor alignment is β, then using n random restarts and oracle model selection we increase the probability of finding a good alignment to 1 − β^n. In our experiments, n = 3.

Some pairs of word vector spaces are not alignable based on distributional information alone. For other pairs, GANs can be used to induce such an alignment, but the degree of instability is very susceptible to the shape and density of the word vector spaces, albeit not to noise. Instability is caused by local optima, but not remedied by standard techniques such as over-parameterization, increasing the batch size, or decreasing the learning rate. We propose an unsupervised model selection criterion that enables stable learning, leading to a ~7% error reduction over MUSE, and present further observations about the alignability of word vector distributions.
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SJxbps09K7
An empirical investigation of GAN-based alignment of word vector spaces, focusing on cases, where linear transformations provably exist, but training is unstable.
Owing to the ubiquity of computer software, software vulnerability detection (SVD) has become an important problem in the software industry and in the field of computer security. One of the most crucial issues in SVD is coping with the scarcity of labeled vulnerabilities in projects, which requires the laborious manual labeling of code by software security experts. One possible way to address this is to employ deep domain adaptation, which has recently witnessed enormous success in transferring learning from structural labeled to unlabeled data sources. The general idea is to map both source and target data into a joint feature space and close the discrepancy gap of those data in this joint feature space. The generative adversarial network (GAN) is a technique that attempts to bridge the discrepancy gap and has also emerged as a building block for developing deep domain adaptation approaches with state-of-the-art performance. However, deep domain adaptation approaches using the GAN principle to close the discrepancy gap are subject to the mode collapsing problem, which negatively impacts predictive performance. Our aim in this paper is to propose the Dual Generator-Discriminator Deep Code Domain Adaptation Network (Dual-GD-DDAN) for tackling the problem of transfer learning from labeled to unlabeled software projects in the context of SVD, in order to resolve the mode collapsing problem faced in previous approaches. The experimental results on real-world software projects show that our proposed method outperforms state-of-the-art baselines by a wide margin.

In the software industry, software vulnerabilities relate to specific flaws or oversights in software programs which allow attackers to expose or alter sensitive information, disrupt or destroy a system, or take control of a program or computer system. The software vulnerability detection problem has become an important issue in the software industry and in the field of computer security. Computer software development employs a vast variety of technologies and different software development methodologies, and much computer software contains vulnerabilities. This has necessitated the development of advanced automated techniques and tools that can efficiently and effectively detect software vulnerabilities with a minimal level of human intervention. To respond to this demand, many vulnerability detection systems and methods, ranging from open source to commercial tools, and from manual to automatic methods, have been proposed and implemented. Most of the previous works in software vulnerability detection (SVD) have been developed based on handcrafted features which are manually chosen by knowledgeable domain experts who may have outdated experience and underlying biases. In many situations, handcrafted features normally do not generalize well. For example, features that work well in a certain software project may not perform well in other projects. To alleviate the dependency on handcrafted features, the use of automatic features in SVD has been studied recently. These works have shown the advantages of automatic features over handcrafted features in the context of software vulnerability detection. However, most of these approaches lead to another crucial issue in SVD research, namely the scarcity of labeled projects. Labeled vulnerable code is needed to train these models, and the process of labeling vulnerable source code is very tedious, time-consuming, error-prone, and challenging even for domain experts.
This has led to few labeled projects compared with the vast volume of unlabeled ones. A viable solution is to apply transfer learning or domain adaptation which aims to devise automated methods that make it possible to transfer a learned model from the source domain with labels to the target domains without labels. Studies in domain adaptation can be broadly categorized into two themes: shallow and deep domain adaptations (; ; . These recent studies have shown the advantages of deep over shallow domain adaptation (i.e., higher predictive performance and capacity to tackle structural data). Deep domain adaptation encourages the learning of new representations for both source and target data in order to minimize the divergence between them (; ; . The general idea is to map source and target data to a joint feature space via a generator, where the discrepancy between the source and target distributions is reduced. Notably, the work of employed generative adversarial networks (GANs) to close the discrepancy gap between source and target data in the joint space. However, most of aforementioned works mainly focus on transfer learning in the computer vision domain. The work of is the first work which applies deep domain adaptation to SVD with promising predictive performance on real-world source code projects. The underlying idea is to employ the GAN to close the gap between source and target domain in the joint space and enforce the clustering assumption to utilize the information carried in the unlabeled target samples in a semi-supervised context. GANs are known to be affected by the mode collapsing problem . In particular, recently studied the mode collapsing problem and further classified this into the missing mode problem i.e., the generated samples miss some modes in the true data, and the boundary distortion problem i.e., the generated samples can only partly recover some modes in the true data. It is certain that deep domain adaptation approaches that use the GAN principle will inherently encounter both the missing mode and boundary distortion problems. Last but not least, deep domain adaptation approaches using the GAN principle also face the data distortion problem. The representations of source and target examples in the joint feature space degenerate to very small regions that cannot preserve the manifold/clustering structure in the original space. Our aim in this paper is to address not only deep domain adaptation mode collapsing problems but also boundary distortion problems when employing the GAN as a principle in order to close the discrepancy gap between source and target data in the joint feature space. Our two approaches are: i) apply manifold regularization for enabling the preservation of manifold/clustering structures in the joint feature space, hence avoiding the degeneration of source and target data in this space; and ii) invoke dual discriminators in an elegant way to reduce the negative impacts of the missing mode and boundary distortion problems in deep domain adaptation using the GAN principle as mentioned before. We name our mechanism when applied to SVD as Dual Generator-Discriminator Deep Code Domain Adaptation Network (Dual-GD-DDAN). 
We empirically demonstrate that our Dual-GD-DDAN can overcome the missing mode and boundary distortion problems which is likely to happen as in Deep Code Domain Adaptation (DDAN) in which the GAN was solely applied to close the gap between the source and target domain in the joint space (see the discussion in Sections 2.4 and 3.3, and the visualization in Figure 3). In addition, we incorporate the relevant approaches -minimizing the conditional entropy and manifold regularization with spectral graph -proposed in to enforce the clustering assumption and arrive at a new model named Dual Generator-Discriminator Semi-supervised Deep Code Domain Adaptation Network (Dual-GD-SDDAN). We further demonstrate that our Dual-GD-SDDAN can overcome the mode collapsing problem better than SCDAN in, hence obtaining better predictive performance. We conducted experiments using the data sets collected by , that consist of five real-world software projects: FFmpeg, LibTIFF, LibPNG, VLC and Pidgin to compare our proposed Dual-GD-DDAN and Dual-GD-SDDAN with the baselines. The baselines consider to include VULD (i.e., the model proposed in without domain adaptation), MMD, DIRT-T, DDAN and SCDAN as mentioned and D2GAN (a variant of the GAN using dual-discriminator to reduce the mode collapse for which we apply this mechanism in the joint feature space). Our experimental show that our proposed methods are able to overcome the negative impact of the missing mode and boundary distortion problems inherent in deep domain adaptation approaches when solely using the GAN principle as in DDAN and SCDAN. In addition, our method outperforms the rival baselines in terms of predictive performance by a wide margin. is also a sequence of L embedding vectors. We wish to bridge the gap between the source and target domains in the joint feature space. This allows us to transfer a classifier trained on the source domain to predict well on the target domain. We preprocess data sets before inputting into the deep neural networks. Firstly, we standardize the source code by removing comments, blank lines and non-ASCII characters. Secondly, we map user-defined variables to symbolic names (e.g., "var1", "var2") and user-defined functions to symbolic names (e.g., "func1", "func2"). We also replace integers, real and hexadecimal numbers with a generic <num> token and strings with a generic <str> token. Thirdly, we embed statements in source code into vectors. In particular, each statement x consists of two parts: the opcode and the statement information. We embed both opcode and statement information to vectors, then concatenate the vector representations of opcode and statement information to obtain the final vector representation i of statement x. For example, in the following statement (C programming language) "if(func3(func4(num,num),&var2)!=var11)", the opcode is if and the statement information is (func3(func4(num,num),&var2)!=var11). To embed the opcode, we multiply the one-hot vector of the opcode by the opcode embedding matrix. To embed the statement information, we tokenize it to a sequence of tokens (e.g., (,func3,(,func4,(,num,num,),&,var2,),!=,var11,)), construct the frequency vector of the statement information, and multiply this frequency vector by the statement information embedding matrix. In addition, the opcode embedding and statement embedding matrices are learnable variables. 
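To make the statement-embedding step concrete, the following is a minimal sketch of the forward computation just described: the opcode one-hot vector and the statement-information frequency vector are each multiplied by a learnable embedding matrix and the two results are concatenated. The toy vocabularies, dimensions and function names are illustrative assumptions, not the authors' TensorFlow implementation.

```python
import numpy as np

# Illustrative sizes; the experiments embed opcode and statement information
# into 150-dimensional spaces each.
OPCODE_VOCAB = ["if", "for", "while", "return", "call"]            # assumed toy vocabulary
TOKEN_VOCAB = ["(", ")", ",", "&", "!=", "func3", "func4", "num", "var2", "var11"]
D_OP, D_INFO = 150, 150

rng = np.random.default_rng(0)
W_opcode = rng.normal(size=(len(OPCODE_VOCAB), D_OP))              # learnable in the real model
W_info = rng.normal(size=(len(TOKEN_VOCAB), D_INFO))               # learnable in the real model

def embed_statement(opcode, info_tokens):
    """Embed one statement as [opcode embedding ; statement-information embedding]."""
    onehot = np.zeros(len(OPCODE_VOCAB))
    onehot[OPCODE_VOCAB.index(opcode)] = 1.0
    opcode_vec = onehot @ W_opcode                                  # one-hot times embedding matrix

    freq = np.zeros(len(TOKEN_VOCAB))
    for tok in info_tokens:                                         # frequency vector of the tokens
        freq[TOKEN_VOCAB.index(tok)] += 1.0
    info_vec = freq @ W_info                                        # frequency vector times embedding matrix

    return np.concatenate([opcode_vec, info_vec])                   # final statement representation

# Statement from the example above: if(func3(func4(num,num),&var2)!=var11)
x = embed_statement("if", ["(", "func3", "(", "func4", "(", "num", ",", "num", ")",
                           ",", "&", "var2", ")", "!=", "var11", ")"])
print(x.shape)  # (300,)
```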
To handle sequential data in the context of domain adaptation of software vulnerability detection, the work of proposed an architecture referred to as the Code Domain Adaptation Network (CDAN). This network architecture recruits a Bidirectional RNN to process the sequential input from both source and target domains (i.e., x . A fully connected layer is then employed to connect the output layer of the Bidirectional RNN with the joint feature layer while bridging the gap between the source and target domains. Furthermore, inspired by the Deep Domain Adaptation approach , the authors employ the source classifier C to classify the source samples, the domain discriminator D to distinguish the source and target samples and propose Deep Code Domain Adaptation (DDAN) whose objective function is as follows: where seeking the optimal generator G *, the domain discriminator D *, and the source classifier C * is found by solving: Figure 1: An illustration of the missing mode and boundary distortion problems of DDAN. In the joint space, the target distribution misses source mode 2, while the source distribution can only partly cover the target mode 2 in the target distribution and the target distribution can only partly cover the source mode 1 in the source distribution. We observe that DDAN suffers from several shortcomings. First, the data distortion problem (i.e., the source and target data in the joint space might collapse into small regions) may occur since there is no mechanism in DDAN to circumvent this. Second, since DDAN is based on the GAN approach, DDAN might suffer from the mode collapsing problem . In particular, has recently studied the mode collapsing problem of GANs and discovered that they are also subject to i) the missing mode problem (i.e., in the joint space, either the target data misses some modes in the source data or vice versa) and ii) the boundary distortion problem (i.e., in the joint space either the target data partly covers the source data or vice versa), which makes the target distribution significantly diverge from the source distribution. As shown in Figure 1, both the missing mode and boundary distortion problems simultaneously happen since the target distribution misses source mode 2, while the source distribution can only partly cover the target mode 2 in the target distribution and the target distribution can only partly cover the source mode 1 in the source distribution. We employ two discriminators (namely, D S and D T) to classify the source and target examples and vice versa and two separate generators (namely, G S and G T) to map the source and target examples to the joint space respectively. In particular, D S produces high values on the source examples in the joint space (i.e., G S x S) and low values on the target examples in the joint space (i.e., G T x T), while D T produces high values on the target examples in the joint space (i.e., G T x T) and low values on the source examples (i.e., G S x S). The generator G S is trained to push G S x S to the high value region of D T and the generator G T is trained to push G T x T to the high value region of D S. Eventually, both D S G S x S and D S G T x T are possibly high and both are possibly high. This helps to mitigate the issues of missing mode and boundary distortion since as in Figure 1, if the target mode 1 can only partly cover the source mode 1, then D T cannot receive large values from source mode 1. 
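Because the displayed objectives did not survive extraction here, the following is only a schematic sketch, under our own reading, of the dual generator-discriminator idea just described: each discriminator scores its own domain high and the other domain low, the source classifier is trained with cross-entropy, and each generator is pushed toward the high-value region of the opposite discriminator. The loss shapes, weights and function names are assumptions rather than the exact Dual-GD-DDAN equations, which additionally include the manifold regularization term introduced below.

```python
import numpy as np

def bce(p, y):
    """Binary cross-entropy for predicted probabilities p against labels y."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def discriminator_losses(ds_on_src, ds_on_tgt, dt_on_src, dt_on_tgt):
    """D_S scores G_S(x_S) high and G_T(x_T) low; D_T does the opposite."""
    loss_ds = bce(ds_on_src, np.ones_like(ds_on_src)) + bce(ds_on_tgt, np.zeros_like(ds_on_tgt))
    loss_dt = bce(dt_on_tgt, np.ones_like(dt_on_tgt)) + bce(dt_on_src, np.zeros_like(dt_on_src))
    return loss_ds, loss_dt

def generator_losses(dt_on_src, ds_on_tgt, source_clf_loss):
    """G_S is pushed toward the high-value region of D_T, G_T toward that of D_S.

    source_clf_loss is the cross-entropy of the source classifier C on labeled
    source data; Dual-GD-DDAN further adds a manifold regularization term
    (introduced below), which this sketch omits.
    """
    loss_gs = bce(dt_on_src, np.ones_like(dt_on_src)) + source_clf_loss
    loss_gt = bce(ds_on_tgt, np.ones_like(ds_on_tgt))
    return loss_gs, loss_gt

# toy usage with random discriminator outputs on a batch of 8 examples
rng = np.random.default_rng(0)
p = lambda: rng.uniform(0.05, 0.95, size=8)
print(discriminator_losses(p(), p(), p(), p()))
print(generator_losses(p(), p(), source_clf_loss=0.7))
```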
Another important aspect of our approach is to maintain the cluster/manifold structure of source and target data in the joint space via the manifold regularization to avoid the data distortion problem. To address the two inherent problems in the DDAN mentioned in Section 2.4, we employ two different generators G S and G T to map source and target domain examples to the joint space and two discriminators D S and D T to distinguish source examples against target examples and vice versa together with the source classifier C which is used to classify the source examples with labels as shown in Figure 2. We name our proposed model as Dual Generator-Discriminator Deep Code Domain Adaptation Network (Dual-GD-DDAN). Updating the discriminators The two discriminators D S and D T are trained to distinguish the source examples against the target examples and vice versa as follows: where θ > 0. Note that a high value of θ encourages D s and D T place higher values on G S x S and G T x T respectively. Updating the source classifier The source classifier is employed to classify the source examples with labels as follows: min, y i where specifies the cross-entropy loss function for the binary classification (e.g., using crossentropy). Updating the generators The two generators G S and G T are trained to i) maintain the manifold/cluster structures of source and target data in their original spaces to avoid the data distortion problem and ii) move the target samples toward the source samples in the joint space and resolve the missing mode and boundary distortion problems in the joint space. To maintain the manifold/cluster structures of source and target data in their original spaces, we propose minimizing the manifold regularization term as: min where M (G S, G T) is formulated as: where the weights are defined as are the last hidden states of the bidirectional RNN with input x. To move the target samples toward the source samples and resolve the missing mode and boundary distortion problems in the joint space, we propose minimizing the following objective function: where K (G S, G T) is defined as: Moreover, the source generator G S has to work out the representation that is suitable for the source classifier, hence we need to minimize the following objective function: GD-DDAN). The generators G S and G T take the sequential code tokens of the source domain and target domain in vectorial form respectively and map this sequence to the joint layer (i.e., the joint space). The discriminators D S and D T are invoked to discriminate the source and target data. The source classifier C is trained on the source domain with labels. We note that the source and target networks do not share parameters and are not identical. Finally, to update G S and G T, we need to minimize the following objective function: where α, β > 0 are two non-negative parameters. Below we explain why our proposed Dual-GD-DDAN is able to resolve the two critical problems that occur with the DDAN approach. First, if x 4). This increases the chance of the two representations residing in the same cluster in the joint space. Therefore, Dual-GD-DDAN is able to preserve the clustering structure of the source data in the joint space. By using the same argument, we reach the same for the target domain. Second, following Eqs., the discriminator D S is trained to encourage large values for the source modes (i.e., G S x S), while the discriminator D T is trained to produce large values for the target modes (i.e., G T x T). 
Moreover, as in Eq., G s is trained to move the source domain examples x S to the high-valued region of D T (i.e., the target modes or G T x T) and G T is trained to move the target examples x T to the high-valued region of D S (i.e., the source modes or G S x S). As a consequence, eventually, the source modes (i.e., G S x S) and target modes (i.e., G T x T) overlap, while D S and D T place large values on both source (i.e., G S x S) and target (i.e., G T x T) modes. The mode missing problem is less likely to happen since, as shown in Figure 1, if the target data misses source mode 2, then D T cannot receive large values from source mode 2. Similarly, the boundary distortion problem is also less likely to happen since as in Figure 1, if the target mode 1 can only partly cover the source mode 1, then D T cannot receive large values from source mode 1. Therefore, Dual-GD-DDAN allows us to reduce the impact of the missing mode and boundary distortion problems, hence making the target distribution more identical to the source distribution in the joint space. When successfully bridging the gap between the source and target domains in the joint layer (i.e., the joint space), the target samples can be regarded as the unlabeled portion of a semi-supervised learning problem. Based on this observation, Nguyen et al. proposed to enforce the clustering assumption by minimizing the conditional entropy and using the spectral graph to inspire the smoothness of the source classifier C. Using our proposed Dual-GD-DDAN, the conditional entropy H (C, G S, G T) is defined as: Let SG = (V, E) where the set of vertices V = S ∪ T be the spectral graph defined as in. The smoothness-inspired term is defined as: where B u specifies the Bernoulli distribution with P (y = 1 | u) = C (u) and, and KL (B u, B v) specifies the Kullback-Leibler divergence between two distributions. Here we note that u = G S x S and v = G T x T are two representations of the source sample x S and the target sample x T in the joint space. We incorporate these two terms into our Dual Generator-Discriminator mechanism to propose Dual Generator-Discriminator Semi-supervised Deep Code Domain Adaptation Network (Dual-GD-SDDAN) with the following objective function: where γ, λ are two non-negative parameters. We present experimental of applying our Dual-GD-DDAN approach to five real-world software projects . We compare our proposed Dual-GD-DDAN with VulDeePecker without domain adaptation, MMD, D2GAN, DIRT-T and DDAN using the architecture CDAN proposed in. We further compare our proposed Dual Generator-Discriminator Semi-supervised Deep Code Domain Adaptation (Dual-GD-SDDAN) and Semi-supervised Deep Code Domain Adaptation (SCDAN) introduced in. We use the real-world data sets collected by , which contain the source code of vulnerable and non-vulnerable functions obtained from five real-world software projects, namely FFmpeg (#vul-funcs: 187, #non-vul-funcs: 5,427), LibTIFF (#vul-funcs: 81, #non-vul-funcs: 695), LibPNG (#vul-funcs: 43, #non-vul-funcs: 551), VLC (#vul-funcs: 25, #non-vul-funcs: 5,548) and Pidgin (#vul-funcs: 42, #non-vul-funcs: 8,268) where #vul-funcs and #non-vul-funcs is the number of vulnerable and non-vulnerable functions respectively. The data sets contain both multimedia (FFmpeg, VLC, Pidgin) and image (LibPNG, LibTIFF) application categories. 
In our experiment, some of the data sets from the multimedia category were used as the source domain whilst other data sets from the image category were used as the target domain (see Table 1). For training the eight methods -VulDeePecker, MMD, D2GAN, DIRT-T, DDAN, Dual-GD-DDAN, SCDAN and Dual-GD-SDDAN -we use one-layer bidirectional recurrent neural networks with LSTM cells where the size of hidden states is in {128, 256} for the generators. For the source classifier and discriminators, we use deep feed-forward neural networks with two hidden layers in which the size of each hidden layer is in {200, 300}. We embed the opcode and statement information in the {150, 150} dimensional embedding spaces respectively. We employ the Adam optimizer with an initial learning rate in {0.001, 0.0001}. The mini-batch size is 64. The trade-off parameters α, β, γ, λ are in {10 −1, 10 −2, 10 −3}, θ is in {0, 1} and 1/(2σ 2) is in {2 −10, 2 −9}. We split the data of the source domain into two random partitions containing 80% for training and 20% for validation. We also split the data of the target domain into two random partitions. The first partition contains 80% for training the models of VulDeePecker, MMD, D2GAN, DIRT-T, DDAN, Dual-GD-DDAN, SCDAN and Dual-GD-SDDAN without using any label information while the second partition contains 20% for testing the models. We additionally apply gradient clipping regularization to prevent over-fitting in the training process of each model. We implement eight mentioned methods in Python using Tensorflow which is an open-source software library for Machine Intelligence developed by the Google Brain Team. We run our experiments on a computer with an Intel Xeon Processor E5-1660 which had 8 cores at 3.0 GHz and 128 GB of RAM. For each method, we run the experiments 5 times and then record the average predictive performance. Quantitative Results We first investigate the performance of our proposed Dual-GD-DDAN compared with other methods including VulDeePecker (VULD) without domain adaptation , DDAN, MMD, D2GAN and DIRT-T with VAP applied in the joint feature layer using the architecture CDAN introduced in. The VulDeePecker method is only trained on the source data and then tested on the target data, while the MMD, D2GAN, DIRT-T, DDAN and Dual-GD-DDAN methods employ the target data without using any label information for domain adaptation. Quantitative Results To quantitatively demonstrate the efficiency of our proposed Dual-GD-DDAN in alleviating the boundary distortion problem caused by using the GAN principle, we reuse the experimental setting in Section 5.2 . The basic idea is, given two data sets S 1 and S 2, to quantify the degree of cover of these two data sets. We train a classifier C 1 on S 1, then test on S 2 and another classifier C 2 on S 2, then test on S 1. If these two data sets cover each other well with reduced boundary distortion, we expect that if C 1 predicts well on S 1, then it should predict well on S 2 and vice versa if C 2 predicts well on S 2, then it should predict well on S 1. This would seem reasonable since if boundary distortion occurs (i.e., assume that S 2 partly covers S 1), then C 2 trained on S 2 would struggle to predict S 1 well which is much larger and possibly more complex. Therefore, we can utilize the magnitude of the accuracies and the accuracy gap of C 1 and C 2 when predicting their training and testing sets to assess the severity of the boundary distortion problem. 
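As a concrete illustration of this cross-evaluation protocol, the sketch below trains one classifier per representation set, evaluates each on both sets, and reports the four accuracies and the two gaps. Scikit-learn's logistic regression is an assumed stand-in for whichever classifier is preferred, and the array names are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def boundary_distortion_gap(S1_x, S1_y, S2_x, S2_y):
    """Train on one joint-space representation set, test on the other, report accuracy gaps."""
    c1 = LogisticRegression(max_iter=1000).fit(S1_x, S1_y)
    c2 = LogisticRegression(max_iter=1000).fit(S2_x, S2_y)
    acc = {
        "C1 on S1": accuracy_score(S1_y, c1.predict(S1_x)),
        "C1 on S2": accuracy_score(S2_y, c1.predict(S2_x)),
        "C2 on S2": accuracy_score(S2_y, c2.predict(S2_x)),
        "C2 on S1": accuracy_score(S1_y, c2.predict(S1_x)),
    }
    # Small gaps (together with high accuracies) suggest the two sets cover each
    # other well, i.e. little boundary distortion between them.
    gaps = (abs(acc["C1 on S1"] - acc["C1 on S2"]), abs(acc["C2 on S2"] - acc["C2 on S1"]))
    return acc, gaps

# toy usage on random 128-dimensional joint-space features
rng = np.random.default_rng(0)
S1_x, S2_x = rng.normal(size=(200, 128)), rng.normal(size=(150, 128))
S1_y, S2_y = rng.integers(0, 2, 200), rng.integers(0, 2, 150)
print(boundary_distortion_gap(S1_x, S1_y, S2_x, S2_y)[1])
```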
Figure 3: A 2D t-SNE projection for the case of the FFmpeg → LibPNG domain adaptation. The blue and red points represent the source and target domains in the joint space respectively. In both cases of the source and target domains, data points labeled 0 stand for non-vulnerable samples and data points labeled 1 stand for vulnerable samples. Inspired by this observation, we compare our Dual-GD-DDAN with DDAN using the representations of the source and target samples in the joint feature space corresponding to their best models. In particular, for a given pair of source and target data sets and for comparing each method, we train a neural network classifier on the best representations of the source data set in the joint space, then predict on the source and target data set and do the same but swap the role of the source and target data sets. We then measure the difference of the corresponding accuracies as a means of measuring the severity of the boundary distortion. We choose to conduct such a boundary distortion analysis for two pairs of the source (FFmpeg and Pidgin) and target (LibPNG) domains. As shown in Table 2, all gaps obtained by our Dual-GD-DDAN are always smaller than those obtained by DDAN, while the accuracies obtained by our proposed method are always larger. We can therefore conclude that our Dual-GD-DDAN method produces a better representation for source and target samples in the joint space and is less susceptible to boundary distortion compared with the DDAN method. Visualization We further demonstrate the efficiency of our proposed Dual-GD-DDAN in alleviating the boundary distortion problem caused by using the GAN principle. Using a t-SNE projection, with perplexity equal to 30, we visualize the feature distributions of the source and target domains in the joint space. Specifically, we project the source and target data in the joint space (i.e., G (x)) into a 2D space with domain adaptation (DDAN) and with dual-domain adaptation (Dual-GD-DDAN). In Figure 3, we observe these cases when performing domain adaptation from a software project (FFmpeg) to another (LibPNG). As shown in Figure 3, with undertaking domain adaptation (DDAN, the left figure) and dual-domain adaptation (Dual-GD-DDAN, the right figure), the source and target data sampled are intermingled especially for Dual-GD-DDAN. However, it can be observed that DDAN when solely applying the GAN is seriously vulnerable to the boundary distortion issue. In particular, in the clusters/data modes 2, 3 and 4 (the left figure), the boundary distortion issue occurs since the blue data only partly cover the corresponding red ones (i.e., the source and target data do not totally mix up). Meanwhile, for our Dual-GD-DDAN, the boundary distortion issue is much less vulnerable, and the mixing-up level of source and target data is significantly higher in each cluster/data mode. In this section, we compare the performance of our Dual Generator-Discriminator Semi-supervised Deep Code Domain Adaptation (Dual-GD-SDDAN) with Semi-supervised Deep Code Domain Adaptation (SCDAN) on four pairs of source and target domain including FFmpeg → LibTIFF, FFmpeg → LibPNG, VLC→ LibPNG and Pidgin → LibTIFF. In Table 3, the experimental show that our Dual-GD-SDDAN achieves a higher performance than SCDAN for detecting vulnerable and non-vulnerable functions in terms of FPR, Precision and F1-measure in almost cases of the source and target domains, especially for F1-measure. 
For example, to the case of the source domain (VLC) and target domain (LibPNG), our Dual-GD-SDDAN achieves an F1-measure of 76.19% compared with an F1-measure of 72.73% obtained with SCDAN. These further demonstrate the ability of our Dual-GD-SDDAN for dealing with the mode collapsing problem better than SCDAN, hence obtaining better predictive performance in the context of software domain adaptation. Software vulnerability detection (SVD) is an important problem in the software industry and in the field of computer security. One of the most crucial issues in SVD is to cope with the scarcity of labeled vulnerabilities in projects that require the laborious labeling of code by software security experts. In this paper, we propose the Dual Generator-Discriminator Deep Code Domain Adaptation Network (Dual-GD-DDAN) method to deal with the missing mode and boundary distortion problems which arise from the use of the GAN principle when reducing the discrepancy between source and target data in the joint space. We conducted experiments to compare our Dual-GD-DDAN method with the state-of-the-art baselines. The experimental show that our proposed method outperforms these rival baselines by a wide margin in term of predictive performances. We give an example of source code functions obtained from the VLC and LibPNG projects, to demonstrate that transfer learning for software vulnerability detection between different projects is plausible and promising. Both C language functions obtained from the VLC and LibPNG projects depicted in Figure 4 invoke the memcpy function which is used to copy one memory buffer to another. The misuse of this function can cause a buffer overflow error if insufficient memory is allocated in the target buffer for all of the data to be copied from the source buffer. Furthermore, these functions also share rather similar semantic and syntactic relationships (i.e. the C language programming syntax, loop structure etc). Therefore, a model that can capture the characteristics of the first function in the first project should be able to confidently predict the second function in the second project. It therefore makes sense to undertake transfer learning from the first project to the second project. Figure 4: An example of two source code functions (with some parts omitted for brevity) in the C programming language obtained from the VLC (Left) and LibPNG project (Right). These two source code examples highlight the same vulnerability due to the misuse of the memcpy function. In this section, we introduce work related to ours. First, we present the recent work in automatic feature learning for software vulnerability detection. Finally, we present the recent work in deep domain adaptation. Automatic feature learning in software vulnerability detection minimizes intervention from security experts (; ;). Particularly, shared the same approach employing a Recurrent Neutral Network (RNN) to transform sequences of code tokens to vectorial features for automatic feature learning, which are then fed to a separate classifier (e.g., Support Vector Machine or Random Forest ) for classification purposes. However, owing to the independence of learning the vector representations and training the classifier, it is likely that the ing vector representations of may not fit well with classifiers to enhance the predictive performance. To deal with this problem, the study introduced in combined the learning of the vector representations and the training of a classifier in a deep neural network. 
This work
[ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
BkglepEFDS
Our aim in this paper is to propose a new approach for tackling the problem of transfer learning from labeled to unlabeled software projects in the context of SVD in order to resolve the mode collapsing problem faced in previous approaches.
Representation learning is one of the foundations of Deep Learning and allowed important improvements on several Machine Learning tasks, such as Neural Machine Translation, Question Answering and Speech Recognition. Recent works have proposed new methods for learning representations for nodes and edges in graphs. Several of these methods are based on the SkipGram algorithm, and they usually process a large number of multi-hop neighbors in order to produce the context from which node representations are learned. In this paper, we propose an effective and also efficient method for generating node embeddings in graphs that employs a restricted number of permutations over the immediate neighborhood of a node as context to generate its representation, thus ego-centric representations. We present a thorough evaluation showing that our method outperforms state-of-the-art methods in six different datasets related to the problems of link prediction and node classification, being one to three orders of magnitude faster than baselines when generating node embeddings for very large graphs. Many important problems involving graphs require the use of learning algorithms to make predictions about nodes and edges. These predictions and inferences on nodes and edges from a graph are typically done using classifiers with carefully engineered features BID13. These features, besides taking time and manual labor to be developed and acquired, usually do not generalize well to other problems or contexts. The field of Natural Language Processing (NLP) has had many advances due to the use of algorithms that learn word representations, instead of manually extracted features. Originally proposed by BID5 and commonly used with Word2Vec algorithms like CBOW and SkipGram (a), word embeddings are used in many state-of-the-art solutions for neural machine translation (; BID12, question answering BID2 and natural language generation . Recent works have proposed new methods for learning representations for nodes and edges in graphs, based on random walks (; BID13 or auto-encoding adjacency vectors .In this paper, we propose a new general purpose method for generating node embeddings in very large graphs, which we call Neighborhood Based Node Embeddings (or simply NBNE). NBNE is based on the SkipGram algorithm and uses nodes neighborhoods as contexts. NBNE outperforms state-of-the-art DeepWalk and Node2Vec BID13 for the tasks of link prediction and node classification on six collections, being one to three orders of magnitude faster. In this work, we considered DeepWalk and Node2Vec as baselines. The main reason for this improvement on effectiveness and efficiency is that we concentrate learning on the "predictable" parts of the graph. A study by Facebook research BID8 found that each person in the world (at least among the people active on Facebook) is connected to every other person by an average 3.57 other people. In a graph of this magnitude and connectedness, learning node embeddings by maximizing the log-probability of predicting nearby nodes in a random walk (with a window size of 5) can be highly inefficient and make it'harder' for the embeddings to be constructed, even if these random walks are biased like in Node2Vec. We suspect this can also make them more unstable, which would explain why they need more iterations before embedding convergence. The definition of node similarity and finding general purpose node and/or edge representations are non-trivial challenges (Lü &). 
Many definitions of similarity in graphs use the notion of first and second order proximity. First-order proximity is the concept that connected nodes in a graph should have similar properties, while the second-order proximity indicates that nodes with similar neighborhoods should have common characteristics. Some earlier works on finding these embeddings use various matrix representations of the graph, together with dimensionality reduction techniques, to obtain the nodes' representations . A problem with these approaches is that they usually depend on obtaining the matrix' eigenvectors, which is infeasible for large graphs (O(n 2.376)) with the Coppersmith-Winograd algorithm BID6 ). Recent techniques attempt to solve this problem by dynamically learning representations for nodes in a graph using non-linear techniques based either on first and second order proximities or random walks (; BID13 .Other recent works focus on finding representations for specific types of graphs. TriDNR uses a graph structure together with node content and labels to learn node representations in two citation networks. Their work can be directly applied to any graph where nodes have labels and/or text contents. TEKE and KR-EAR find representations for entities in knowledge graphs and metapath2vec BID7 finds node representations in heterogeneous networks. The method LINE finds a d dimensional representation for each node based on first and second-order graph proximities, not being feasible for large graphs, because its cost function depends on the whole adjacency matrix (O(|V | 2)).Another method, Structural Deep Network Embedding (SDNE) , is also based on first and second order proximities. It uses autoencoders to learn a compact representation for nodes based on their adjacency matrix (second-order proximity), while forcing representations of connected nodes to be similar (first-order proximity) by using an hybrid cost function. SDNE is also not feasible for large graphs, since the autoenconders are trained on the complete adjacency vectors. Each vector has size O(|V |) and is created at least once, creating a lower bound on time complexity DISPLAYFORM0 The method DeepWalk generates k random walks starting on each vertex in the graph to create sentences where each "word" is a node. These sentences are then trained using the SkipGram algorithm to generate node embeddings. This method has a time complexity bounded by O(|V | log |V |).Node2Vec BID13 ) also uses random walks with SkipGram and can be seen as a generalization of DeepWalk. The difference between the two methods is that Node2Vec's walks are random, but biased by two pre-assigned parameters p and q. During the creation of the walks, these parameters are used to increase the chance of the walk returning to a parent node or going farther from it. This method uses a semi-supervised approach which requires several models to be generated and a small sample of labeled nodes to be used so that the best parameters p and q can be chosen. Node2Vec is not efficient for densely connected graphs, since its time and memory dependencies on the graph's branching factor b are O(b 2).In this work, we considered DeepWalk and Node2Vec BID13 as baselines, since they are scalable, having a time complexity (O(|V | log |V |)). 
The main differences between NBNE and the two baselines are: (i) we use a different sentence sampling strategy which is based on a node's neighborhood instead of random walks, (ii) NBNE is more effective than both Node2Vec and DeepWalk, as supported by our experiments on six different datasets, and (iii) NBNE is efficient for both dense and sparse graphs and scalable for very large applications, having a faster training time than both Node2Vec and DeepWalk. The context of a word is not a straightforward concept, but it is usually approximated by the words surrounding it. In graphs, a node's context is an even more complex concept. As explained above, DeepWalk and Node2Vec use random walks as sentences and consequently as contexts in which nodes appear. In this work, the contexts are based solely on the neighborhoods of nodes, defined here as the nodes directly connected to it, focusing mainly on the second-order proximities. Consequently, nodes' representations will be mainly defined by their neighborhoods, and nodes with similar neighborhoods (contexts) will be associated with similar representations. In our Neighborhood Based Node Embedding (NBNE) method, as the name implies, sentences are created based on the neighborhoods of nodes. There are two main challenges in forming sentences from neighborhoods, as follows:
• A sentence containing all the neighbors of a specific highly connected root node might be of little use. Most neighbors would be distant from each other in the sentence, not influencing each other's representations, and not directly influencing the root node.
• There is no explicit order for the nodes in a neighborhood, so there is no clear way to choose the order in which they would appear in a sentence.
In this work, the solution is to form small sentences, with only k neighbors in each, using random permutations of these neighborhoods. Algorithm 1 presents the code for generating sentences (a runnable sketch of this sampling is also given below). As a trade-off between training time and increasing the training dataset, the user can select the number of permutations n. Selecting a higher value for n creates a more uniform distribution on possible neighborhood sentences, but also increases training time.
Algorithm 1 Sentence Sampling
1: procedure GETSENTENCES(graph, n)
2:   sentences ← [ ]
3:   for j in 0 : n do
4:     for node in graph.nodes do
5:       neighbors ← random permutation(node.neighbors)
6:       for i in 0 : len(neighbors)/k do
7:         sentence ← node together with neighbors[i·k : (i+1)·k]
8:         sentences.append(sentence)
9:   return sentences
As described in Section 3.1, Algorithm 1 forms a set of sentences S, where each word is actually a node from the graph. We train the vector representations of nodes by maximizing the log probability of predicting a node given another node in a sentence and given a set of representations r. We use a window of size k, which is equal to the size of the generated sentences, so that each node in a sentence predicts all the others. The log probability maximized by NBNE is given by:

$$\max_{r} \; \sum_{s \in S} \log p(s \mid r),$$

where p(s | r) is the probability of each sentence, which is given by:

$$p(s \mid r) = \prod_{v_i \in s} \; \prod_{v_j \in s,\; v_j \neq v_i} p(v_j \mid v_i, r),$$

where v_i is a vertex in the graph and v_j are the other vertices in the same sentence. The probabilities in this model are learned using the feature vectors r_{v_i}, which are then used as the vertex representations. The probability p(v_j | v_i, r) is given by:

$$p(v_j \mid v_i, r) = \frac{\exp\left(r'^{\top}_{v_j} \, r_{v_i}\right)}{\sum_{v \in V} \exp\left(r'^{\top}_{v} \, r_{v_i}\right)},$$

where r'_{v_j} is the output feature vector of vertex j, whose transpose is used to make predictions. The representations r_v and r'_v are learned simultaneously by optimizing this objective.
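A minimal sketch of the sentence sampling and the SkipGram training is given below. It assumes gensim's Word2Vec as the SkipGram-with-negative-sampling implementation (the authors implemented their model in TensorFlow) and reads each sentence as the root node followed by a chunk of at most k of its shuffled neighbors, which is one plausible reading of Algorithm 1.

```python
import random
from gensim.models import Word2Vec

def get_sentences(graph, n, k=5):
    """graph: dict mapping node -> list of neighbors. Returns NBNE sentences."""
    sentences = []
    for _ in range(n):                                    # n random permutations per node
        for node, neighbors in graph.items():
            shuffled = random.sample(neighbors, len(neighbors))
            for i in range(0, len(shuffled), k):          # chunks of at most k neighbors
                sentences.append([str(node)] + [str(v) for v in shuffled[i:i + k]])
    return sentences

def train_nbne(graph, n=5, k=5, d=128):
    sentences = get_sentences(graph, n, k)
    # window=k so every node in a sentence predicts all the others; sg=1 selects
    # SkipGram, and negative sampling matches the training described above.
    model = Word2Vec(sentences, vector_size=d, window=k, sg=1, negative=5, min_count=1)
    return {node: model.wv[str(node)] for node in graph}

# toy usage on a small graph
toy = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}
emb = train_nbne(toy, n=2, d=16)
print(len(emb), emb[0].shape)
```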
This is done using stochastic gradient ascent with negative sampling (b).By optimizing this log probability, the algorithm maximizes the predictability of a neighbor given a node, creating node embeddings where nodes with similar neighborhoods have similar representations. Since there is more than one neighbor in each sentence, this model also makes connected nodes have similar representations, because they will both predict each others neighbors, ing in representations also having some first order similarities. A trade-off between first and second order proximity can be achieved by changing the parameter k, which simultaneously controls both the size of sentences generated and the size of the window used in the SkipGram algorithm. A further discussion on this effect can be seen in Appendix B.3. When using large values of n (i.e., number of permutations) on graphs with few edges per node, some overfitting can be seen on the representations, as shown in details in Section 5.1 and in Appendix B.2. This overfitting can be avoided by sequentially training on increasing values of n and testing the embeddings on a validation set every few iterations, stopping when performance stops improving, as shown in Algorithm 2.Algorithm 2 NBNE without Overfitting DISPLAYFORM0 sentences ← get sentences(graph, max n) DISPLAYFORM1 for j in 0: log 2 (max n) do brief description of each dataset can be found in Appendix A and an analysis of their assortativity properties in Appendix B.1. We present for the link prediction problem in Section 4.1 and for the node classification problem in Section 4.2. For all experiments we used sentences of size k = 5 and embeddings of size d = 128, while the number of permutations was run for n ∈ {1, 5, 10}. The best value of n was chosen according to the precision on the validation set and we used early stopping, as described in Section 3.3.On both these tasks, DeepWalk and Node2Vec were used as baselines, having been trained and tested under the same conditions as NBNE and using the parameters as proposed in BID13. More specifically, we trained them with the same training, validation and test sets as NBNE and used a window size of 10 (k), walk length (l) of 80 and 10 runs per node (r). For Node2Vec, which is a semi-supervised algorithm, we tuned p and q on the validation set, doing a grid search on values p, q ∈ {0.25; 0.5; 1; 2; 4}. We also evaluated NBNE on two synthetic graphs with different sizes and sparseness, which can be seen on Appendix C, and an author name disambiguation task, on Appendix F. A comparison between NBNE and SDNE can be seen on Appendix D. Setup. Link prediction attempts to estimate the likelihood of the existence of a link between two nodes, based on observed links and the nodes' attributes (Lü &). Typical approaches to this task use similarity metrics, such as Common Neighbors or Adamic-Adar BID1. Instead of these hand made similarity metrics, we propose to train a logistic classifier based on the concatenation of the embeddings from both nodes that possibly form an edge and predict the existence or not of the edge. To train NBNE on this task, we first obtained a sub-graph with 90% randomly select edges from each dataset, and obtained the node embeddings by training NBNE on this sub-graph. We, then, separated a small part of these sub-graph edges as a validation set, using the rest to train a logistic regression with the learned embeddings as features. 
After the training was completed, the unused 10% of the edges were used as a test set to predict new links. 10-fold cross-validation was used on the entire training process to assess the statistical significance of the results, analyzing the statistical difference between the baselines and NBNE. To evaluate the results on this task, we used AUC (area under the ROC curve) and training time as metrics. The logistic regressions were all trained and tested using all available edges (respectively in the training or test set) and an equally sized sample of negative examples, which, during training, included part of the 10% removed edges. Results. TAB1 presents results for this task. Considering AUC scores on the link prediction task, NBNE was statistically better than both DeepWalk and Node2Vec on the Astro and PPI datasets, with more than 7% improvement, also showing a 4.67% performance gain on Wikipedia and a small, but statistically significant, gain on Blog. NBNE only lost by a small percentage on Facebook, and this difference was not statistically significant. In DBLP, NBNE again presents the best AUC score, although this difference was small and its statistical significance could not be verified due to the large training times of the baselines. This dataset contains the largest graph analyzed in this work (317,080 nodes and 1,049,866 edges) and in it, to train a single fold, Node2Vec took 3,285m59s (more than 54 hours) and DeepWalk took 164m34s (approximately 2 hours and 44 minutes), while NBNE took only 14m30s, which represents a 226-fold and 11-fold improvement over Node2Vec and DeepWalk, respectively. Considering training time for this task, NBNE presents the biggest improvements on sparser networks of medium size, like the Astro, PPI and Wikipedia datasets. On these graphs, the best results are for n = 1, resulting in more than 50 times faster training than DeepWalk and more than 1,500 times faster training than Node2Vec, achieving 6,049 times faster training than Node2Vec on Wikipedia. For the Blog and Facebook datasets, the best results are for n = 5, resulting in larger training times, but still more than one order of magnitude faster than DeepWalk and more than 350 times faster than Node2Vec. For the DBLP dataset, the best results were achieved with n = 10, still much faster than the baselines. Setup. Given a partially labeled graph, node classification is the task of inferring the classification of the unknown nodes, using the structure of the graph and/or the properties of the nodes. In this task, the node embeddings were trained using NBNE on the complete graph. After obtaining the node embeddings, 80% of the labeled nodes in the graph were used to train a logistic classifier that predicted the class of each node, while 5% of the nodes were used for validation and the remaining 15% of the nodes were used as a test set. This test was repeated for 10 different random seed initializations to assess the statistical relevance of the results. Results. Results on the Blog, PPI and Wikipedia datasets are shown in TAB2 and are presented in terms of Macro F1 scores and training times. NBNE produces results statistically similar to its baselines, in terms of Macro F1, on both PPI and Wikipedia, while showing a statistically significant 22.45% gain on the Blog dataset, indicating that NBNE's embeddings not only achieved better accuracy on Blog, but also that correct answers were better distributed across the 39 possible classes. Considering training times, NBNE is more than 10 times faster than DeepWalk on these three datasets and is 300 to 600 times faster than Node2Vec.
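Both downstream evaluations reduce to fitting a logistic model on top of the learned embeddings. The sketch below shows the link-prediction variant, where each candidate edge is represented by the concatenation of its endpoint embeddings and scored by a logistic regression; the function names and the choice of scikit-learn are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def edge_features(emb, edges):
    """Concatenate the two endpoint embeddings of each candidate edge."""
    return np.array([np.concatenate([emb[u], emb[v]]) for u, v in edges])

def link_prediction_auc(emb, train_pos, train_neg, test_pos, test_neg):
    """Fit a logistic classifier on positive/negative edges and report test AUC."""
    X_train = np.vstack([edge_features(emb, train_pos), edge_features(emb, train_neg)])
    y_train = np.concatenate([np.ones(len(train_pos)), np.zeros(len(train_neg))])
    X_test = np.vstack([edge_features(emb, test_pos), edge_features(emb, test_neg)])
    y_test = np.concatenate([np.ones(len(test_pos)), np.zeros(len(test_neg))])
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])

# usage (hypothetical edge lists): auc = link_prediction_auc(emb, tr_pos, tr_neg, te_pos, te_neg)
```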
NBNE didn't show statistically worse in any dataset analyzed here 5, while having an order of magnitude faster training time than DeepWalk and more than two orders of magnitude faster training time than Node2Vec. The quality of NBNE's embeddings depends on both the size of the embeddings (d) and the number of permutations (n). For highly connected graphs, larger numbers of permutations should be chosen (n ∈) to better represent distributions, while for sparser graphs, smaller values can be used (n ∈). FIG0 shows AUC scores versus embedding sizes for several values of n on the Facebook link prediction task. Quadratic functions approximating log(auc score) were plotted to allow for a better understanding of the . It can be seen that for larger numbers of permutations (n > 100) improve with embedding size, while for small ones (n = 1) they decrease. The plot also shows that n = 10 gives fairly robust values for all tested embedding sizes. A further analysis can be seen in TAB3, which shows that graphs with more edges per node tend to get better with larger values of n, while graphs with a smaller branching factor have better with only one permutation (n = 1). Other factors also enter into account when choosing n, like graph size. For example, link prediction on the DBLP graph had its best for n = 10, although its branching size was only 3.31. Further experiments on this parameter can be seen in Appendices B.2 and C.1. which is the size of the vocabulary (|V |), the algorithm will take a time bounded by: DISPLAYFORM0 Figure 2 (Top-Left and Top-Right) show training time is indeed linear on both embedding size and number of permutations. It also shows that Node2Vec is considerably slower than DeepWalk, and only has a similar training time to running NBNE with at least n = 1000. NBNE with n < 10 was by far the fastest algorithm. NBNE, Node2Vec and DeepWalk run in a time bounded by O(|V | log |V |), as can be seen in. FIG1 (Bottom-Right) shows that NBNE's time complexity is linear in the branching factor b, while Node2Vec's is quadratic. DeepWalk's running time is constant in this parameter, but for a graph with a larger branching factor, a higher number of walks per node should be used to train this algorithm, which would make it indirectly dependent on this factor. The proposed node embedding method NBNE shows similar or better than the state-of-theart algorithms Node2Vec and DeepWalk on several different datasets. It shows promising in two application scenarios: link prediction and node classification, while being efficient and easy to compute for large graphs, differently from other node embedding algorithms, such as LINE or SDNE . NBNE focuses learning on node's immediate neighbors, creating more ego-centric representations, which we suspect makes them more stable and faster to learn. Empirical show that, although it has a similar time complexity, NBNE can be trained in a fraction of the time taken by DeepWalk (10 to 190 times faster) or Node2Vec (200 to 6,000 times faster), giving fairly robust . Since embeddings are learned using only a node's immediate neighbors, we suspect it should also be easier to implement more stable asynchronous distributed algorithms to train them, and we leave this as future work. We used a total of six graph datasets to evaluate NBNE, with DeepWalk and Node2Vec being used as baselines. Next we briefly describe these datasets:1. Facebook : A snapshot of a subgraph of Facebook, where nodes represent users and edges represent friendships.2. 
Astro BID15: A network that covers scientific collaborations between authors whose papers were submitted to the Astrophysics category in Arxiv.3. Protein-Protein Interactions (PPI) : We use the same subgraph of the PPI network for Homo Sapiens as in BID13. This subgraph contains nodes with labels from the hallmark gene sets BID16 Assortativity, also referred to as homophily in social network analysis, is a preference of nodes to attach themselves to others which are similar in some sense. In this section, we further investigate assortativity properties related to both the representations generated by our algorithm, as of the graphs themselves. In Section B.1, we do a quantitative analysis on the homophily inherent to the datasets considered in this work. In Section B.2, we make a qualitative analysis of how assortativity varies depending on the number of permutations n. In Section B.3, we make a qualitative analysis on the trade-off of first and second order proximities based on the choice of k. There are several ways to quantitatively capture the homophily present in a graph. Jensen & Neville describe relational auto-correlation, which is Pearson's contingency coefficient on the characteristics of nodes which share edges BID14 ). Park & Barabási define dyadicity and heterophilicity, which respectively measure how a graph's nodes share common/different characteristics in edges, compared to a random model. TAB5 presents both degree and label assortativity properties of the six graphs analysed here, calculated using the definition of. We can see in this table that the datasets analyzed in this work cover a broad specter of assortativity properties. PPI, Wikipedia and Blog graphs present negative degree assortativity, which means nodes in these graphs are more likely to connect with nodes of different connectivity degrees. At the same time, Facebook, Astro and DBLP present positive degree assortativity, which indicates that their nodes tend to connect to others with similar degrees. We also analyze graphs with both positive and negative label assortativity in our label classification task. While PPI and Blog datasets present positive label assortativity, with connected nodes more frequently sharing classes, Wikipedia has a negative assortativity, with its connected nodes being more likely to have different classes. Here, we further analyze how the number of permutations (n) influences both homophily and overfitting in our learned representations. We qualitatively measure homophily by comparing either cosine or euclidean distances between nodes on edges to the distances in non-edges. The cosine distances for the PPI dataset, shown by the box plots in FIG4 (top-left), clearly show for larger values of n how the embeddings overfit to the specific graph structure, with the learned similarity on edges not generalizing to the links which were previously removed. In this graph, for larger numbers of permutation the removed edges have a distribution more similar to the non edges than to the edges used during training, which is a tendency that can be observed in the other graphs, although in a smaller scale. The box plots in FIG4 (top-right) show the cosine distance for Facebook nodes. We can see that for n = 5 there is a larger separation between removed edges and non edges, which justifies the algorithm's choice of this value. For larger values of n we can again see an overlap between the distributions, caused by the embeddings overfitting. 
On the other hand, the cosine distances for the DBLP in FIG4 (bottom-left) show the largest separation for n = 10.Finally, the box plots in FIG4 (bottom-right) show cosine distances for the Blog dataset. We can see that for n = 1 and n = 5 there is actually a larger cosine distance between nodes in removed edges than in non edges, with this situation only inverting for n ≥ 10. This happens due to this graph's negative degree homophily. This is also observed for similar graphs in the PPI and Wikipedia datasets, though with a smaller intensity in the PPI graph, which has a smaller absolute value of degree assortativity and where only embeddings for n = 1 present this property. The box plots from FIG4 further support our intuition that graphs with larger branching factors should have larger values of n. At the same time, this choice also depends on the graph size and structure, as shown by the algorithms choice of n = 10 for the DBLP dataset, which contains the largest degree assortativity. The best choice of n depends on the analyzed task, but we believe that, at least for link prediction, this choice is both directly proportional to a graph's branching size and degree assortativity. Nonetheless, the difficulty in analyzing these graphs supports our choice for a semi-supervised approach, automatically choosing n on a per graph instance. Considering again the experiment on the PPI dataset with the number of permutations n = 1 in FIG4 (top-left), in FIG5 we present in detail the euclidean distances between nodes that share or not an edge for this number of permutations. We can see that the distribution of removed edges is a lot closer to the edges used for training than to the non edges. Removed Edge Non Edge Connection type The window size and the number of neighbors in a sentence are both adjusted by a single variable, k, and this variable also controls a trade-off between first and second order proximities in the node embeddings. This can be explained intuitively by analyzing both the sentence sampling method in Algorithm 1 and Equations 1, 2 and 3, in Section 3.2.When a smaller k is chosen, each node's embedding r vi will be mainly used to predict its own neighbors. This causes nodes with shared neighbors to have closer representations (second order proximity). When larger values of k are chosen, nodes will appear more often in its neighbors sentences, and will predict not only its own neighbors, but his neighbors' neighbors. This will in connected nodes to have more similar embeddings, increasing first order similarity. We further analyze this by examining the distribution of cosine distances between nodes at different graph distances. For this analysis, we use three different synthetic graphs: Barabási-Albert BID4; Erdõs-Rényi BID9; Watts-Strogatz . We choose these graphs because of their structural differences, believing they cover an ample specter of different graphs' properties. These graphs were created with |V | = 2000 and b = 20, and Watts-Strogatz graphs had a probability β = 0.2 of generating non-lattice edges. To train our representations we used n = 10 and d = 128. FIG6 shows box plots of these cosine distances of nodes' representations versus their graph distance on these different artificial random graphs. 
In this figure, we can see that, for both Barabàsi-Albert and Erdõs-Rényi graphs, when using a sentence size (k) equal to 1, the cosine similarity is larger for nodes which are two steps away than for nodes which share an edge (second order proximity), while for larger values of k, nodes which share an edge have larger similarity (first order proximity). The box plots in FIG6 also show that the difference in similarity increases with the value of k. The larger the value of k, the larger the difference between similarities of nodes which share an edge and nodes with larger distances, as can be seen in detail in FIG7 for the Barabási-Albert graph. In this section, we analyze how a graph's sparseness (represented here by its branching factor) and size (represented here by its number of vertices) affect the choice of the number of permutations (n) and of the window/sentence size (k). With this purpose we ran several link prediction experiments on two different synthetic graphs: Watts-Stogratz and Barabási-Albert. 6 These graphs were generated for different sizes (|V |) and sparseness (b), and we ran experiments with the same setup as in Section 4.1, with Watts-Stogratz graphs having again β = 0.2.7 Section C.1 presents this analysis for the number of permutations (n) and Section C.2 contains the analysis for different window/sentence sizes (k). We analyze, in this section, how a graph's size and sparseness affect the choice of the number of permutations (n), for both Watts-Stogratz and Barabási-Albert graphs. Analyzing the graphs in Figure 8, we see a correlation between the best choice of n and a graph's number of vertices (|V |) and branching factor (b). In Figure 8a, which contains the experiments in the most sparse graphs, for n = 1 are better for all graph sizes. A random algorithm would return an AUC score of 0.5, so bellow this value clearly expose a problem in the learning algorithm. This is the case for both n = 10 and n = 5 in these graphs, which overfit its representations. In Figure 8b we can see that, when considering a graph with a branching size of b = 4, for smaller graphs a smaller value of n is preferable, while for larger graphs a larger number of permutations gives better (n = 10). In Figure8c we can see that, for a branching size of b = 8, for larger values of n are always better than for n = 1. Notice also that, while for b = 2 and b = 4 were around 0.55 ∼ 0.7, for b = 8 are closer to 0.9, showing that this algorithm is better at learning with more information. Our experiments in link prediction using synthetic Barabási-Albert graphs present slightly more complex . FIG9 shows that for smaller branching factors (b ≤ 8), n = 1 indeed generate better for small graphs, but for larger graphs, a larger number of permutations is necessary. For intermediary branch sizes the best value of n is harder to determine, and only for b = 64 we start to see a tendency of larger number of permutations consistently giving better . We can also see from FIG9 that edges in Barabási-Albert graphs are considerably more difficult to predict, specially for smaller branching sizes. Most of our are around 60% and our best AUC scores in these graphs are around 70%.Again, n's dependency on these graph properties (|V | and b) depends highly on the graph's structure, further supporting our choice of a semi-supervised approach, choosing n on a per graph instance by validating on a small validation set. This can be considered as a form of early stopping when training these node embeddings. 
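The semi-supervised choice of n described here (Algorithm 2 above) amounts to retraining with increasing numbers of permutations and stopping once a validation score stops improving. A small sketch of that loop follows, where train_embeddings and evaluate are assumed callables, for instance NBNE training and link-prediction AUC on held-out edges.

```python
def choose_n(graph, train_embeddings, evaluate, max_n=16):
    """Early stopping over the number of permutations n (cf. Algorithm 2).

    train_embeddings(graph, n) -> node embeddings; evaluate(embeddings) -> validation
    score. Both are assumed callables supplied by the user.
    """
    best_score, best_emb, n = float("-inf"), None, 1
    while n <= max_n:
        emb = train_embeddings(graph, n)
        score = evaluate(emb)
        if score <= best_score:          # performance stopped improving: stop early
            break
        best_score, best_emb = score, emb
        n *= 2                           # train on increasing (doubling) values of n
    return best_emb, best_score
```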
In this section, we again use Watts-Stogratz and Barabási-Albert graphs, this time to analyze how a graph's size and sparseness affect for different window and sentence sizes (k) in our model. For these experiments we keep n = 5 fixed. Barabási-Albert graphs' edges are considerably harder for our algorithm to predict, as shown in the previous section, so we only report for larger values of b (the algorithm, with our choice of hyper-parameters, overfits for smaller values). We can see from FIG0 that larger values of k usually produce better for this graph, but are more propense to overfit, specially when being applied to larger sparse graphs (|V | ≥ 800 and b = 16). Further analysis on the representations' properties for different values of k could provide better motivation on its choice, but we leave this to future studies, keeping our choice of k = 5 constant in this work. Studying if geometric operations between representations have comprehensible meanings would also be interesting, such as was done for Word2Vec algorithms, but this is also left as future work. Structural Deep Network Embedding is another algorithm used for learning node embeddings in graphs. As described in Section 2, SDNE is based on first and second order proximities, using autoencoders to learn compact representations based on a node's adjacency matrix (second-order proximity), while forcing representations of connected nodes to be similar (first-order proximity) by using an hybrid cost function. This algorithm has a time complexity of O(|V | 2), but its main computation, which is calculating the gradients of its cost function and updating model parameters, can be highly parallelized by using modern GPUs and Deep Learning frameworks. In this section, we compare NBNE and SDNE in terms of both efficiency and efficacy, analysing both AUC/Macro F1 scores and training time. With this objective, we trained SDNE embeddings using both a dedicated K40 GPU with CUDA 8.0 and a dedicated 16 core linux server. In their original work, SDNE was run in a semi-supervised setting, finding the best value of α, β and ν by tuning them on a small validation set. In this work we fix α = 0.2 and β = 10, since in their work they state that these values commonly give the best , while only choosing ν in a semi-supervised manner. We use SDNE's architecture with [10,300; 1,000; 128] nodes on each layer and test it on both Link Prediction and Node Classification tasks, using the same steps described in Sections 4.1 and 4.2. We train these embeddings using ν ∈ {0.1, 0.01, 0.001} and choose the best value on the same validation sets used to tune n for NBNE and p and q for Node2vec. TAB6 shows using both NBNE and SDNE embeddings on Link Prediction tasks. In this table we can see that both algorithms produce similar in terms of AUC scores, with each having a statistically significant better on two datasets, and NBNE having a non statistically significant, but slightly better on the fifth. It is clear that even when training SDNE using a K40 GPU, NBNE still has more than an order of magnitude faster training time on all datasets, being more than two orders of magnitude faster on most. When comparing to SDNE trained on a CPU, NBNE has more than three orders of magnitude faster training time. On Astro, the dataset with the largest number of nodes analyzed here, NBNE had a 2,009 times faster training time compared to SDNE on a GPU and 44,896 times faster compared to SDNE on CPU. TAB7 shows the of running NBNE and SDNE on the Node Classification task. 
On this task NBNE gave statistically better on two datasets, with an impressive gain of 29.27% on PPI and 46.94% on Blog, only losing on Wikipedia with an also large gain of −20.20%. We can again see that NBNE has a more than an order of magnitude faster training time than SDNE on a GPU in this dataset, being more than two orders of magnitude faster when SDNE is trained on a CPU. Analyzing both these tables we can also see that the largest gains in training time occur when using NBNE on a large but sparse network, such as Astro. This agrees with our theoretical expectations, since SDNE's time complexity grows quadratically with the number of nodes O(|V 2 |) and NBNE's grows with O(|V | · log(|V |) · b), which is close to linear on the number of nodes for large graphs. In this section, we extend the presented in Section 4, considering now the precision on the training and test sets. We present for the link prediction problem in TAB8 and for the node classification problem in TAB9 NBNE produces statistically similar to its baselines, in terms of Macro F1, on the node classification task using both PPI and Wikipedia datasets, while showing a statistically significant 13% (11.34x, 226 .52x) † average of 10 fold ‡ no statistical tests were run, due to the time necessary to run one fold 22.45% gain in the Blog dataset. Node classification in the PPI dataset had the smallest precision among all datasets, with only approximately 14%, but there are 49 classes in it and a classifier which always guessed the most common class would only get 2.95% precision. NBNE only shows a statistically worse in test precision for node classification on the Wikipedia dataset, losing to DeepWalk, but having an order of magnitude faster training time than DeepWalk and more than two orders of magnitude faster training time than Node2Vec. On all other experiments in either node classification or link prediction it presented either statistically better or similar to its baselines, while showing much faster training times. F AUTHOR NAME DISAMBIGUATION One of the hardest problems faced by current scholarly digital libraries is author name ambiguity BID10. This problem occurs when an author publishes works under distinct names or distinct authors publish works under similar names BID11. Automatic solutions, which are effective, efficient and practical in most situations, are still in need . In this section, we test our algorithm against the case where distinct authors publish works under similar names. Using these co-authorship networks, embeddings were obtained by training on the graphs with 20% of the papers from each ambiguous author removed. After the embeddings had already been learned for each author, the probability of each possible author-coauthors "sentence" was calculated as: s possible author = [v possible author, v coauthor 1, ..., v coauthor j] This probability is given by: DISPLAYFORM0 (log (p (v t+j |v t))) DISPLAYFORM1 where v 1 = author, which comes from the NBNE model itself. As a baseline, we used the typical solution that classifies the closest of the possible ambiguous authors as co-author for each of the test papers. If no path on the graph existed to any of the possible ambiguous authors, or if there was a tie between the distances to two or more of them, a random one was chosen between the possible ones. 
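The two disambiguation strategies can be sketched as follows. The NBNE scorer computes the sentence log-probability of each candidate given its co-authors; here p(u | v) is taken to be a softmax over embedding inner products, as in the skip-gram objective underlying NBNE, which is an assumption about the exact normalisation. The graph baseline picks the closest candidate by shortest-path distance, breaking ties at random.

```python
# Sketch of the NBNE sentence-probability scorer and the graph-distance baseline.
import numpy as np
import networkx as nx
from scipy.special import logsumexp

def log_p(u, v, emb, vocab):
    """log p(u | v) under a softmax over inner products with all node embeddings."""
    logits = np.array([np.dot(emb[w], emb[v]) for w in vocab])
    return np.dot(emb[u], emb[v]) - logsumexp(logits)

def disambiguate_nbne(candidates, coauthors, emb, vocab):
    """Pick the candidate maximising the author-coauthors sentence log-probability."""
    score = lambda a: sum(log_p(c, a, emb, vocab) for c in coauthors)
    return max(candidates, key=score)

def disambiguate_distance(G, candidates, coauthors, rng=np.random.default_rng(0)):
    """Baseline: closest candidate by shortest-path distance, ties broken at random."""
    def dist(a):
        ds = [nx.shortest_path_length(G, c, a) for c in coauthors
              if nx.has_path(G, c, a)]
        return min(ds) if ds else np.inf
    dists = {a: dist(a) for a in candidates}
    best = min(dists.values())
    return rng.choice([a for a, d in dists.items() if d == best])
```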
DeepWalk and Node2Vec were not used as baselines for this task due to the size of the 14 graphs analyzed here, most with more than 100,000 nodes and 500,000 edges, which would result in a prohibitive training time. F.2 EXPERIMENTAL RESULTS TAB0 presents the results for the author name disambiguation task for each chosen author. This experiment was run using NBNE as an unsupervised algorithm with a fixed number of permutations n = 10, having no validation set. We also used sentences of size k = 5 and node embeddings of size d = 128. Disambiguation with NBNE only required computing the probability of each possible author-coauthors "sentence" (p(s)), while the baseline had to dynamically compute the distance between each paper's co-authors and the possible authors. It can be seen in TAB0 that for all but two authors the precision was higher when using the NBNE embeddings than with the graph baseline, while for the other two the precision remained the same.
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SJyfrl-0b
A faster method for generating node embeddings that employs a number of permutations over a node's immediate neighborhood as context to generate its representation.
Orthogonal recurrent neural networks address the vanishing gradient problem by parameterizing the recurrent connections using an orthogonal matrix. This class of models is particularly effective to solve tasks that require the memorization of long sequences. We propose an alternative solution based on explicit memorization using linear autoencoders for sequences. We show how a recently proposed recurrent architecture, the Linear Memory Network, composed of a nonlinear feedforward layer and a separate linear recurrence, can be used to solve hard memorization tasks. We propose an initialization schema that sets the weights of a recurrent architecture to approximate a linear autoencoder of the input sequences, which can be found with a closed-form solution. The initialization schema can be easily adapted to any recurrent architecture. We argue that this approach is superior to a random orthogonal initialization due to the autoencoder, which allows the memorization of long sequences even before training. The empirical analysis show that our approach achieves competitive against alternative orthogonal models, and the LSTM, on sequential MNIST, permuted MNIST and TIMIT. Several sequential problems require the memorization of long sequences of patterns. As an example, a generative model for music should be able to memorize long sequences of notes and be able to repeat them, as it is typically done in musical pieces. RNNs and LSTMs struggle to solve even simple memorization tasks . Therefore, it is important to study alternative solutions to this problem. Orthogonal RNNs are a class of recurrent architectures that solve the vanishing gradient problem by constraining the recurrent connections to be an orthogonal or unitary matrix . They are particularly effective to solve long memorization tasks. In this paper, we address the problem of memorization with orthogonal RNNs and linear autoencoders. Our objective is to find a solution to the problem of memorization of long sequences. The memorization of long sequences with orthogonal models can require a large number of hidden units, increasing the hidden state size and the cost in time and space. If we assume that the input sequences in the training data lay in a low-dimensional manifold, as is typically believed for real-world data, then we can train an autoencoder with a small number of hidden units, sufficient to encode the entire sequence. If we restrict ourselves to the linear case, we can compute the optimal autoencoder with a closedform solution . This can be exploited to initialize recurrent architectures to approximate the linear autoencoder for the input sequences. In our experiments we use the RNN and the Linear Memory Network (LMN) . The LMN with the autoencoder initialization is a recurrent architecture equivalent to the Elman RNN but able to solve the vanishing gradient problem and memorize input sequences with a minimal hidden state. We test our approach on classic benchmarks for orthogonal RNNs, showing that our proposed approach behaves similarly to orthogonal architectures on pure memorization tasks, and even improving the performance on real-world datasets. Finally, we show that the model can also be used in situations where a strict orthogonal parameterization struggles , like the TIMIT benchmark . Work on orthogonal models typically focuses on the properties of an orthogonal parameterization at the backward step, to address the vanishing gradient. 
In our work instead, we focus on the forward step, by investigating the effect of an orthogonal parameterization against an autoencoder-based solution. For these reasons, we are not particularly interested in enforcing exact orthogonality constraints, but in the study of the effectiveness of an autoencoder-based memorization mechanism. Our proposed approach requires the memorization of the entire sequence within the hidden state activations. It is necessary to note that this approach can be quite inefficient whenever the underlying task does not require complete memorization of the input sequences. In this situation, the hidden state size necessary to encode the entire sequence could be much larger than the minimal hidden state size necessary to solve the problem. Therefore, it is fundamental to allow the model to diverge from the orthogonality or the autoencoder solution by only imposing soft orthogonality constraints. Nonetheless, we believe that even when complete full memorization is not necessary, the autoencoder initialization can help to speed up training convergence by allowing the model to remember long sequences, which can then be gradually forgotten during training if deemed uninformative. In summary, the main contributions of the paper are: • the proposal of a novel initialization schema designed for the explicit memorization of long sequences; • highlighting the connection between orthogonal models and linear autoencoders for sequences; • an empirical analysis that shows the effectiveness of our proposed initialization schema to solve tasks that require long memorization. The Linear Memory Network (LMN) is a recurrent neural network that computes a hidden state h t and a separate memory state m t. The hidden state h t is computed with a fully connected layer from the input and the previous state followed by an activation function. The memory state m t is computed with a linear transformation of the current hidden state and the previous memory state. The equations for the model update are the following: where W xh, W mh, W hm, W mm are the model's parameters and σ is a non-linear activation function (tanh in our experiments), and m t is the final output of the layer. The linear recurrence can be used to control the memorization properties of the model, as we will show in the subsequent sections. The linearity of the memory update can be exploited to guarantee a constant propagation of the gradient by constraining W mm to be an orthogonal matrix. This property allows the LMN to avoid the vanishing gradient problem. Notice that to guarantee the constant gradient propagation it is also necessary to truncate the gradient between m t−1 and h t by setting ∂h t ∂m t−1 = 0. This approach is similar to the truncated backpropagation proposed originally for the LSTM , which is necessary to guarantee that an LSTM without a forget gate propagates the gradients without vanishing effects. The modification to the training algorithm is trivial to implement with the current automatic differentiation packages, as done in . However, in this work, we focus on the effect of the orthogonality on the forward propagation and therefore we only use the full gradient in our experimental . The linearity of the memory update of the LMN might seem a major limitation of the architecture. However, it is easy to show that an RNN can always be rewritten as an equivalent LMN. 
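For reference, a minimal sketch of the LMN update described above (biases omitted): the feedforward state h_t is nonlinear, while the memory state m_t is updated purely linearly.

```python
# Minimal NumPy sketch of the LMN update: nonlinear hidden state, linear memory.
import numpy as np

def lmn_forward(xs, W_xh, W_mh, W_hm, W_mm):
    """xs: sequence of input vectors; returns the sequence of memory states m_t."""
    d_m = W_mm.shape[0]
    m = np.zeros(d_m)
    memories = []
    for x in xs:
        h = np.tanh(W_xh @ x + W_mh @ m)   # nonlinear feedforward state
        m = W_hm @ h + W_mm @ m            # linear memory update
        memories.append(m)
    return memories
```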
Given an RNN with parameters V and U such that h rnn ), we can define an equivalent LMN such that m t = h t rnn ∀t by setting the parameters in the following way: Therefore, the linearity does not limit the expressivity of the LMN. Differences between the two architectures however become important during training. The linear update of the LMN without proper regularization can be affected by the exploding gradient. This problem is less frequent in RNNs due to the non-linearity (which often favors the vanishing gradient). In preliminary experiments, we found that training LMNs without any kind of regularization can lead to instability during the training. Fortunately, the gradient propagation can be controlled in the linear recurrence by tuning the spectral radius ρ of the recurrent matrix W mm, for example, by using an orthogonal parameterization. In our experiments we use soft orthogonality constraints by adding a regularization term λ W W − I 2 as in. Alternatively, it is possible to build a fading memory system by imposing ρ < 1 . Another possible cause of instability is the uncontrolled growth of the norm of the memory state m t. The LSTM solves this problem by using a forget gate to reset the memory cell activations. This is not an issue in RNNs with a bounded nonlinear activation. We overcome this problem by adding to the loss function a regularization term β 2 that penalizes the norm of the hidden activations as in. The linear recurrence of the LMN allows to exploit from the field of linear autoencoder for sequences (LAES) (; 2015;). In this section, we show how to initialize the LMN and RNN models using a linear autoencoder for sequences trained on the input samples. This initialization allows the ing model to encode the input sequences into a minimal hidden state. A linear autoencoder for sequences (LAES) is a linear model composed of a linear autoencoder, which is a linear dynamical system, and a linear decoder. The autoencoder takes as input a sequence x 1,..., x T and computes an internal state m t with the following equations: where A and B are the encoder parameters, C is the decoding matrix andx t andm t−1 are the reconstructions of the current input and the previous memory state. provides a closed-form solution for the optimal linear autoencoder. The corresponding decoder matrix C can be reconstructed from the autoencoder parameters as C = A B. By means of the linear autoencoder for sequences, we can construct a simple linear recurrent model which uses the autoencoder to encode the input sequences within a single vector, the memory state of the autoencoder. To predict the desired target, we can train a linear layer that takes as input the states of the autoencoder to predict the target. The equations for the model are as follows: where A and B are the parameters of the autoencoder trained to reconstruct the input sequences and W o are the parameters of the readout trained to the predict the target. Figure 1a shows a schematic view of the architecture. This initialization approximates the linear RNN. The quality of the approximation depends on the values of the hidden activations m t. With small values close to zero, the tanh activation function is approximately linear, and therefore we obtain a good approximation of the autoencoder. However, for large values the correspondence between the autoencoder activations and the initialized RNN degrades quickly since the tanh activation function enters into the nonlinear part and cannot represent values > 1 or < −1. 
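The displayed equations for this autoencoder-based predictor are not shown above; the sketch below follows the description, with the LAES encoder compressing the prefix x_1, ..., x_t into a memory state m_t = A x_t + B m_{t-1} and a linear readout W_o applied to each state. A and B are assumed to come from the closed-form LAES solution and are simply passed in here.

```python
# Sketch of the linear recurrent model built on a linear autoencoder for sequences.
import numpy as np

def laes_encode(xs, A, B):
    """Encode a sequence into the autoencoder memory states m_1, ..., m_T."""
    m = np.zeros(A.shape[0])
    states = []
    for x in xs:
        m = A @ x + B @ m
        states.append(m)
    return states

def laes_predict(xs, A, B, W_o):
    """Linear recurrent model: the readout is applied to each memory state."""
    return [W_o @ m for m in laes_encode(xs, A, B)]
```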
In practice, we find that the correspondence between the linear autoencoder and the hidden states of the initialized RNN with a tanh activation function tends to degrade quickly due to the accumulation of the error through time. To solve this problem, we propose to adopt a similar initialization scheme to initialize the weights of an LMN using the linear RNN defined in Eq. 1 and 2. The LMN model is initialized as follows: where the parameters of the LMN are initialized to approximate the linear RNN: The initialization is only an approximation of the linear RNN but since the nonlinearity only affects the input transformation Ax t and not the recurrent part By t, the approximation is closer to the original linear RNN. Figures 1b and 1c show a schematic view of the architectures. Experimental in Section 5 provide a quantitative comparison between the two approaches. These show that the accuracy of the initialized LMN is the same as the linear RNN. To give an intuition on the operations enabled by the use of an autoencoder-based memorization mechanism, we discuss how it is possible to use the decoder to build an explicit model that is able to reconstruct the entire sequence, and to process the unrolled sequence with a feedforward layer that combines weights for each specific time delay. Let us assume to have the optimal autoencoder with parameters A and B trained on the input sequence x 1,..., x T. We further assume that the autoencoder has enough hidden units to have exactly zero reconstruction error. The output of the entire model can be computed with a linear layer as follows: where y t is the output vector at time t and W o the parameters of the output layer. Given the hidden state m t at time t of the autoencoder, we can reconstruct the input at time t − k using the decoder In fact, after k applications of the decoder we find: which shows how to reconstruct any input vector x τ, τ = 1,..., t − 1, from the autoencoder memory m t. The output layer can reconstruct the entire input sequence using the decoder. This can be made explicit with a decomposition of W o as: where W i is the matrix applied to the input at delay i. Using this decomposition, the output y t can be expressed as: This ing model requires explicit parameters for each time delay, with a total number of parameters that grows linearly with the length T of the sequence (or the maximum dependency length k < T), and it is expensive to train. However, using the autoencoder, these two operations are combined within a single matrix multiplication. Notice that in general W o will not be decomposed with this factorization. However, any model of this form can be potentialy learned by gradient descent (given a suitable loss function). Therefore it is in principle possible for an autoencoder-based model to reconstruct the entire sequence and process each element separately. The unrolled model shows the expressiveness of the autoencoder. By comparison, an orthogonal parameterization contains the activations of the entire sequence but it is not possible, in principle, to separate them and reconstruct the original input. Furthermore, since the encoding is not explicitly optimized, memorization with an orthogonal parameterization might become inefficient and be prone to require a larger hidden state size. 
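The displayed initialisation equations are likewise not shown above; the mapping below is one reading consistent with the text (the nonlinearity affects only the input transformation A x_t, while the recurrence B m_{t-1} stays linear) and should be treated as our reconstruction rather than the authors' exact scheme.

```python
# Hedged sketch of initialising an LMN from a trained linear autoencoder (A, B).
import numpy as np

def init_lmn_from_laes(A, B):
    """A: d_m x d_x encoder input matrix, B: d_m x d_m encoder recurrence."""
    d_m = B.shape[0]
    W_xh = A.copy()               # hidden state approximates A x_t (through tanh)
    W_mh = np.zeros((d_m, d_m))   # no nonlinear path from the memory
    W_hm = np.eye(d_m)            # memory takes the hidden state as is
    W_mm = B.copy()               # plus the linear recurrence of the autoencoder
    return W_xh, W_mh, W_hm, W_mm
```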
In this section, we evaluate the performance of the orthogonal LMN and the proposed initialization scheme on synthetic tasks and real-world datasets: the copy task , digit classification with sequential and permuted MNIST , and framewise classification with TIMIT . These datasets are standard benchmarks for the assessment of orthogonal recurrent neural networks, and they offer the opportunity to evaluate the proposed approach in different settings. While for permuted MNIST orthogonal models reach state-of-the-art , and showed that orthogonality constraints can reduce the performance of the trained model on TIMIT. We also compare the LAES initialization for the RNN and the LMN, showing that the LMN is a better approximation in practical scenarios. Each model is trained with Adam , with a learning rate in {10 −3, 10 −4, 10 −5} chosen by selecting the best model on a separate validation set. A soft orthogonality constraint λ W W − I 2 is added to the cost function as in , with λ chosen {0, 10 −5, 10 −4, 10 −3, 10 −2}. The copy task is a synthetic benchmark often used to test the ability of a model to capture long-term dependencies. We use the same setup as in. The objective is the memorization of a sequence of S elements, which must be repeated sequentially after T timesteps. The input is a sequence of 2S + T timesteps where the first S elements are randomly generated from a set of K elements, followed by S − 1 blank elements, an output delimiter, and S blank elements. The target sequence contains S + T blank elements followed by the first S elements of the input sequence. To solve this task, the network must be able to memorize S elements into its hidden state and remember them for T timesteps. Each model is a single layer recurrent architectures with 100 hidden units and has been trained with 10 000 batches containing 64 samples each. The copy task can be easily solved with linear orthogonal models. showed a handcrafted solution for the problem. However, adding a nonlinearity makes the task difficult to solve even for orthogonal models . The LMN solves this limitation by separating the nonlinearity used to compute the hidden activations from the linear memory update. As shown in Table 1, the LMN can solve the task even using a saturating nonlinearity (tanh in our experiments). In more general terms, the LMN is able to combine nonlinear activations with the memorization properties of orthogonal linear models. Table 1 shows the on the copy task for T = 100 and T = 500, with S = 10, and K = 8. We compare the to a memoryless baseline that outputs the blank element for S + T timesteps and random elements for the last S elements. The LMN and the linear RNN solve the task perfectly in both cases. The RNN with a tanh nonlinearity fails to beat the baseline even with T = 100. The LSTM beats the baseline in both settings but does not solve the task optimally. These confirm our expectations that we can use the LMN architecture with a saturating nonlinearity even for tasks that require learning long-term dependencies. MNIST are two standard benchmarks to test the ability of recurrent neural networks to learn long-term dependencies. They consist of sequences extracted from the MNIST dataset by scanning each image one pixel at a time, in order (sequential MNIST) or with a random fixed permutation (permuted MNIST). We use these datasets to compare the LAES initialization to a random orthogonal initialization. 
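For concreteness, a small sketch of how the pixel sequences for sequential and permuted MNIST are typically constructed; the permutation is fixed once and shared across all images, and the seed used here is of course arbitrary.

```python
# Construct sequential / permuted MNIST input sequences from raw images.
import numpy as np

def to_pixel_sequences(images, permuted=False, seed=0):
    """images: array of shape (N, 28, 28) -> sequences of shape (N, 784, 1)."""
    seqs = images.reshape(len(images), 784, 1).astype(np.float32)
    if permuted:
        perm = np.random.default_rng(seed).permutation(784)  # fixed across samples
        seqs = seqs[:, perm, :]
    return seqs
```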
A validation set is extracted by sampling 10 000 random images from the training set and separating them from the training data to use them to perform the model selection. These datasets provide an ideal setting to test a pure memorization approach like the LAES initialization since the output layer can use the final memory state m t to reconstruct the entire sequence of hidden activations. Table 2 shows the on the two MNNIST benchmarks. We compare the RNN and LMN with the LAES initialization and a random orthogonal initialization with several orthogonal models. We also show the of the linear RNN used to initialize the models with the LAES initialization. All the models have 128 hidden units and a single hidden layer. Each model has been trained for 100 epochs with a batch size of 64. The best are achieved by the AntisymmetricRNN , while the LMN with the LAES initialization obtains slightly lower , but still improves compared to the of the other orthogonal models. 1024 --94.0 93.7 KRU 512 96.6 96.4 94.7 94.5 LSTM 128 98.1 97.8 91.7 91.3 ASRNN 128 -98.0 -95.8 ASRNN(gated) The strength of the LAES initialization is the of the ability of the LAES to efficiently compress the entire sequence and the high performance that can be obtained on MNIST using a linear model 1 and the linear RNN. TIMIT is a speech recognition corpus . In our experiments we follow the setup of for the framewise classification problem . Differently from MNIST, this dataset is more difficult for orthogonal models than for non-orthogonal models.; have shown that orthogonality constraints can be detrimental on this dataset. Therefore, it is ideal to study the performance of the LMN and the LAES initialization in a more challenging setup. All the models have been trained for 50 epochs with a batch size of 1. We compare models with a single layer and a varying number of hidden units to keep the number of parameters approximately constant across the models (about 400k). Table 3 shows the of the experiments. We found that without any regularization the norm of the memory state m t tended to grow indefinitely, which degraded the final performance. We did not find the same problem on the MNIST dataset. A penalization on the memory state norm was sufficient to solve this issue. We selected λ by cross-validation in {0, 1, 10, 100}. It is important to note that, while on this benchmark the best is achieved by the LMN, the best model does not use the LAES initialization. The best configuration for the LMN use an orthogonal initialization but does not use soft orthogonality constraints since the best configuration is obtained when the hyperparameter for the soft orthogonality constraints is λ = 0. This confirms the other in the literature and shows that the architecture is general enough to model problems that do not require explicit memorization. In Section 3 we remarked that LAES initialization can incur in large errors due to the nonlinearity that affects the hidden state of the autoencoder. To verify if these problems happen in practical scenarios, we trained a linear autoencoder with 128 hidden units on sequences from sequential and permuted MNIST and initialized an LMN and an RNN with it. Table 4 shows the . The show that the error of the RNN initialization is large enough to cause a significant drop in the accuracy, from 85.5 of the linear RNN to 11.7 of the initialized model. Conversely, the initialized LMN reaches the same accuracy of the linear RNN. 
Therefore, it seems that the linear recurrence is the model is necessary to achieve a good approximation of the autoencoder. The on TIMIT show a similar trend. Orthogonal RNNs solve the vanishing gradient problem by parameterizing the recurrent connections with an orthogonal or unitary matrix. Some orthogonal models exploit a specific parameterization or factorization of the matrix (; ;) to guarantee the orthogonality. Other approaches constrain the parameters with soft or hard orthogonality constraints (; ; ; Lezcano-Casado & Martínez-). have shown that hard orthogonality constraints can hinder training speed and final performance. Linear autoencoders for sequences can be trained optimally with a closed-form solution . They have been used to pretrain RNNs . The LMN is a recurrent neural network with a separate linear connection. The memorization properties of untrained models are studied in the field of echo state echo networks , a recurrent model with untrained recurrent parameters. showed that untrained RNNs with a random weight initialization have a Markovian bias, which in the clustering of input with similar suffixes in the hidden state space. Tiňo & and study the short-term memory properties of linear and orthogonal memories. In this work, we studied the problem of building an autoencoder-based memorization mechanism. This system has the ability to encode and decode the input sequences to compute the desired target. Using for the linear autoencoder for sequences, we showed how to initialize a recurrent neural network by approximating a linear autoencoder of the input sequences. The architecture exploits a linear recurrence to obtain a better approximation of the autoencoder. The show that an autoencoder-based initialization can be effective for learning memorization tasks. In the future, we plan to extend this work by studying the effect of the autoencoder during training, possibly by enforcing the encoding of the input sequence even during the learning phase. Another possible avenue of research is the study of better optimization algorithms for the parameters of the linear component, where the linearity could be exploited to speed up the training process through dedicated learning algorithms.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
BkgM7xHYwH
We show how to initialize recurrent architectures with the closed-form solution of a linear autoencoder for sequences. We show the advantages of this approach compared to orthogonal RNNs.
This paper improves upon the line of research that formulates named entity recognition (NER) as a sequence-labeling problem. We use so-called black-box long short-term memory (LSTM) encoders to achieve state-of-the-art while providing insightful understanding of what the auto-regressive model learns with a parallel self-attention mechanism. Specifically, we decouple the sequence-labeling problem of NER into entity chunking, e.g., Barack_B Obama_E was_O elected_O, and entity typing, e.g., Barack_PERSON Obama_PERSON was_NONE elected_NONE, and analyze how the model learns to, or has difficulties in, capturing text patterns for each of the subtasks. The insights we gain then lead us to explore a more sophisticated deep cross-Bi-LSTM encoder, which proves better at capturing global interactions given both empirical and a theoretical justification. Named entity recognition is an important task in information extraction in which we seek to locate entity chunks in text and classify their entity types. Originally a structured prediction task, NER has since been formulated as a task of sequential token labeling, much like text chunking and part-ofspeech tagging. With the ability to compute representations of past and future context respectively for each token, bidirectional LSTM (Bi-LSTM) has proved a robust building block for sequencelabeling NER BID7 BID13 BID0. However, it has been predominantly used as a black box; research directed to understanding how the model learns to tackle the task is minimal. In this work, we decouple sequence-labeling NER into the entity chunking and entity typing subtasks, and seek insight into what patterns LSTM learns to capture or has difficulties capturing. We propose the use of a fast and effective parallel self-attention mechanism alongside Bi-LSTM. Unlike traditional attention mechanisms used for tasks such as machine translation BID12 and sentence classification BID2 BID11, our self-attentive Bi-LSTM uses the hidden state of each token as its own query vector and computes context vectors for all tokens in parallel. For both subtasks, we then find important global patterns that cross past and future context, and in particular discover the way multi-chunk entities are handled. Furthermore, we discover that the theoretical limitations of traditional Bi-LSTMs harms performance on the task, and hence propose using a cross construction of deep Bi-LSTMs. As a , with these cross structures, both selfattentive Bi-LSTM and cross-Bi-LSTM achieve new state-of-the-art on sequence-labeling NER.In Section 3, the normal Bi-LSTM-CNN model is formulated. Section 4 details the computation of the parallel self-attention mechanism. Section 5 presents the empirical and detailed analyses of the models, with a particular focus on patterns captured for {B, I, E} labels. Finally in Section 6, cross-Bi-LSTM-CNN is formulated and evaluated on a theoretical basis. Our contribution is threefold:• We provide insightful understanding of how a sequence-labeling model tackles NER and the difficulties it faces;• We propose using cross-Bi-LSTM-CNN for sequence-labeling NER with theoreticallygrounded improvements. Many have attempted tackling the NER task with LSTM-based sequence encoders BID7 BID13 BID0 BID9. Among these, the most similar to the proposed Bi-LSTM-CNN is the model proposed by BID0. In contrast to previous work, BID0 stack multiple layers of LSTM cells per direction, and also use a CNN to compute character-level word vectors alongside pre-trained word vectors. 
We largely follow their work in constructing the Bi-LSTM-CNN, including the selection of raw features, the CNN, and the multi-layer Bi-LSTM. The subtle difference is that they send the output of each direction through separate affine-softmax classifiers and then sum their probabilities, effectively forming an ensemble of forward and backward LSTM-CNNs. Another difference is that they focus on proposing a new representation of external lexicon features, which we do not make use of in this work. The modeling of global context for sequential-labeling NER has been accomplished using traditional models with intensive feature engineering and conditional random fields (CRF). BID17 build the Illinois NER tagger with feature-based perceptrons. In their analysis, the usefulness of Viterbi decoding is minimal, as class transition patterns only occur in small chunks and greedy decoding can handle them comparatively well. On the other hand, recent research on LSTM or CNNbased encoders report empirical improvements brought by CRF BID7 BID13 BID9 BID18, as it discourages illegal predictions by explicitly modeling class transition probabilities. In contrast, the cross structures of self-attention and crossBi-LSTM studied in this work provide for the direct capture of global patterns and extraction of better features to improve class observation likelihoods. Various attention mechanisms have been proposed and shown success in natural language tasks. They lighten the LSTM's burden of compressing all relevant information into a single hidden state by consulting past memory. For seq2seq models, attention has been used for current decoder hidden states BID12. For models computing sentence representations, trainable weights are used for self-attention BID2 BID11. In this work, we propose using a token-level parallel self-attention mechanism for sequential token-labeling and show that it enables the model to capture cross interactions between past and future contexts.3 BI-LSTM-CNN FOR SEQUENCE LABELING All models in our experiments use the same set of raw features: word embedding, word capitalization pattern type, character embedding, and character type. For character embedding, 25d vectors are randomly initialized and trained end-to-end with the model. Appended to these are 4d one-hot character-type features indicating whether a character is uppercase, lowercase, digit, or punctuation BID0. In addition, an unknown character vector and a padding character vector are also trained. We unify the word token length to 20 by truncation and padding. The ing 20-by-(25+4) feature map of each token are applied to a character-trigram CNN with 20 kernels per length 1 to 3 and max-over-time pooling to compute a 60d character-based word vector BID8 BID0 BID13.For word embedding, pre-trained 300d GloVe word vectors BID15 are used without further tuning. In addition, 4d one-hot word capitalization features indicate whether a word is uppercase, upper-initial, lowercase, or mixed-caps BID1 BID0.Throughout this paper, we use X to denote the n-by-d x matrix of raw sequence features, with n denoting the number of word tokens in a sentence and d x = 60 + 300 + 4. Given a sequence of input feature vectors x 1, x 2,..., x T ∈ R d1, an LSTM cell computes a sequence of hidden feature vectors h 1, h 2,..., h T ∈ R d2 by DISPLAYFORM0 are trainable weight matrices and biases, tanh denotes hyperbolic tangent, σ denotes sigmoid function, and denotes element-wise multiplication. 
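The displayed cell equations are not shown above; the standard LSTM cell below is consistent with the components named (sigmoid gates, tanh, element-wise products), though the exact variant used is an assumption, and it is offered only as a reference sketch.

```python
# Reference sketch of a standard LSTM cell step.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """W, U, b hold the input, recurrent and bias parameters of the four gates."""
    i = sigmoid(W["i"] @ x + U["i"] @ h_prev + b["i"])   # input gate
    f = sigmoid(W["f"] @ x + U["f"] @ h_prev + b["f"])   # forget gate
    o = sigmoid(W["o"] @ x + U["o"] @ h_prev + b["o"])   # output gate
    g = np.tanh(W["g"] @ x + U["g"] @ h_prev + b["g"])   # candidate cell state
    c = f * c_prev + i * g
    h = o * np.tanh(c)
    return h, c
```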
Bidirectional LSTMs (Bi-LSTMs) are used to capture the future and the past for each time step. Following BID0, 4 distinct LSTM cells -two in each direction -are stacked to capture higher level representations: DISPLAYFORM1 where DISPLAYFORM2 H denote the ing feature matrices of the stacked application, and || denotes row-wise concatenation. In all our experiments, 100d LSTM cells are used, so H ∈ R n×d h and d h = 200. Finally, suppose there are d p token classes, the probability of each of which is given by the composition of affine and softmax transformations: DISPLAYFORM0 where H t is the t th row of H, W p ∈ R d h ×dp, b ∈ R dp are a trainable weight matrix and bias, and s ti and s tj are the i-th and j-th elements of s t.Following BID0, we use the 5 chunk labels O, S, B, I, and E to denote if a word token is {O}utside any entities, the {S}ole token of an entity, the {B}eginning token of a multitoken entity, {I}n the middle of a multi-token entity, or the {E}nding token of a multi-token entity. Hence when there are P types of named entities, the actual number of token classes d p = P × 4 + 1 for sequence labeling NER. We propose using a token-level self-attention mechanism (FIG0) that is computed after the autoregressive Bi-LSTM in Section 3.2. This has two benefits over traditional auto-regressive attention, which wraps stacked LSTM cells to look at past tokens at each time step for each direction of Bi-LSTM. First, it allows each token to look at both past and future sequences simultaneously with one combined hidden state of past and future, thus capturing cross interactions between the two contexts. And secondly, since all time steps run in parallel with matrix computations, it introduces little computation time overhead. Specifically, given the hidden features H of a whole sequence, we project each hidden state to different subspaces, depending on whether it is used as the {q}uery vector to consult other hidden states for each word token, the {k}ey vector to compute its dot-similarities with incoming queries, or the {v}alue vector to be weighted and actually convey information to the querying token. Moreover, as different aspects of a task can call for different attention, multiple "attentions" running in parallel are used, i.e., multi-head attention BID19.Formally, let m be the number of attention heads and d c be the subspace dimension. For each head i ∈ {1..m}, the attention weight matrix and context matrix are computed by DISPLAYFORM0 where W qi, W ki, W vi ∈ R d h ×dc are trainable projection matrices and σ performs softmax along the second dimension. Each row of the ing α 1, α 2,..., α m ∈ R n×n contains the attention weights of a token to its context, and each row of C 1, C 2,..., C m ∈ R n×dc is its context vector. DISPLAYFORM1 H, the computation of α i and C i models the cross interaction between past and future. Finally, for Bi-LSTM-CNN augmented with the attention mechanism, the hidden vector and context vectors of each token are considered together for classification: We conduct experiments on the challenging OntoNotes 5.0 English NER corpus BID6 BID16. OntoNotes is an ambitious project that collects large corpora from diverse sources and provides multi-layer annotations for joint research on constituency parsing, semantic role labeling, coreference resolution, and NER. The data sources include newswires, web, broadcast news, broadcast conversations, magazines, and telephone conversations. 
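The displayed equation after the colon above is not shown; the sketch below gives one attention head and the final concatenation of each token's hidden state with its context vectors, following the description given earlier in this section (no scaling factor is applied, matching the text).

```python
# NumPy sketch of the parallel token-level multi-head self-attention.
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention_head(H, W_q, W_k, W_v):
    """H: (n, d_h) Bi-LSTM states; W_*: (d_h, d_c) query/key/value projections."""
    Q, K, V = H @ W_q, H @ W_k, H @ W_v
    alpha = softmax(Q @ K.T, axis=1)      # (n, n) attention weights per token
    C = alpha @ V                         # (n, d_c) context vectors
    return alpha, C

def multi_head_features(H, heads):
    """heads: list of (W_q, W_k, W_v); returns [H_t || C_1t || ... || C_mt] per token."""
    contexts = [attention_head(H, *w)[1] for w in heads]
    return np.concatenate([H] + contexts, axis=1)
```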
Some are transcriptions of talk shows and some are translated from Chinese or Arabic. Such diversity and noisiness requires that models are robust and able to capture a multitude of linguistic patterns. BID16, excluding the New Testament corpus as it contains no entity annotations. Despite this million-token corpus with over 100K annotated entities, previous work has struggled to reach state-of-the-art NER on the dataset. This is due partly to the fact that there are 18 types of entities to be classified. Eleven of these are classes of general names, with NORP including nationalities such as American, FAC including facilities such as The White House, and WORK OF ART including titles of books, songs, and so on. Moreover, various forms of values of the seven numerical classes must also be identified. DISPLAYFORM0 The hyperparameters of our models were given in Sections 3 and 4. When training the models, we minimized per-token cross-entropy loss with the Nadam optimizer BID3. In addition, we randomly dropped 35% hidden features (dropout) and upscaled the same amount during training. Following previous lines of work, we evaluated NER performance with the per-entity F1 score. The tokens for an entity were all to be classified correctly to count as a correct prediction; otherwise it was counted as either a false positive prediction or a false negative non-prediction. We stopped training when the validation F1 had not improved for 20 epochs. All models were initialized and trained 5 times; we report the mean precision, recall, and F1 scores (%) of the experiments. Validation scores are also reported for future research on this task. TAB1 shows the overall of our models against notable previous work. It can be seen that simple LSTM-based sequence encoders already beat the previous best without using external lexicons BID0, document-level context BID18, or constituency parsers BID10. Furthermore, with the proposed parallel self-attention mechanism (ATT), we achieve a new state-of-the-art (88.29 F1) with a clear margin over past systems. More importantly, the attention mechanism allows us to conduct insightful analyses in the following sections, yielding important understanding of how Bi-LSTM learns or has difficulty tackling the different sequence-labeling NER subtasks: entity chunking and entity typing. We decouple the entity chunking task from sequence-labeling NER. Specifically, for a sentence such as {Barack Obama moves out of the White House .}, the task is to correctly label each token as TAB3 shows the performance of different setups on validation data. We take the pre-trained models from TAB1 without re-training for this subtask. {O, S, B, I, E} are the chunk classes. The column of HC all lists the performance of the full Bi-LSTM-CNN+ATT model on each chunk class, where C all stands for C 1,..., C 5. Other columns list the performance of other setups compared to the full model. Columns H to C 5 are when the full model is deprived of all other information by zeroing all other vectors for the affine-softmax classification layer in testing time, except for those specified by the column header. NativeH is the native Bi-LSTM-CNN trained without attention. The figures shown in the table are the per-token recalls for each chunk class, which tells if a part of the model is responsible for signaling the whole model to predict the class. 
DISPLAYFORM0 Looking at the three columns on the left, the first thing we discover is that Bi-LSTM-CNN+ATT designates the task of predicting {I} to the attention mechanism. The model performance on tokens {I}n the middle of an entity significantly degrades (-28.18) in the absence of global context C all, when token hidden state H is left alone. On the other hand, without the information on the token itself, it is clear that the model strongly favors predicting I (-3.80) given its global context C all.Taking this one step further and zeroing out all other vectors except for each attention head, the roles of context for entity chunking become even clearer. C 2 and C 3 send strong signals (-36.45,-39.19) on entity chunk {E}nding to the model, plus weak signals (-60.56,-50.19) on entity chunk {I}nside, while C 4 sends a strong signal (-12.21) on entity chunk {B}eginning plus weak signals (-57.19) on {I}nside. When all these heads fire simultaneously, the model produces a strong signal to {I}.However, NativeH -Bi-LSTM-CNN trained without attention -underperforms in chunk labels {B} (-0.63), {I} (-0.41), {E} (-0.38) in comparison to HC all, the model trained with ATT. This suggests that entity chunking is indeed a crucial aspect in sequence-labeling NER, and that it is difficult for pure LSTM encoders to compress all necessary information in each hidden state to correctly label all the tokens of a multi-token entity. Aside from knowing that entity chunking is a crucial, challenging aspect in sequence-labeling NER for Bi-LSTM, one remaining question is how exactly the encoder is attempting to properly classify the {B}egin, {I}nside, and {E}nd of a multi-token entity. To shed light on this question, we visualize samples from validation data and discover consistent patterns in the attention weight heat maps across sentences and entities. FIG2 shows one of the samples, where the attention weights α 2, α 3, α 4 of a sentence containing the B White I house E are visualized. The full Bi-LSTM-CNN+ATT (HC all) classifies the tokens correctly, but when in the absence of the context vectors (H), the predictions become the B White S house E. For Bi-LSTM-CNN trained without attention at all (NativeH), the predictions are the O White S house O. Each row of the matrix shows the attention weight distribution for the diagonal token in bold font. We observe that α 2 and especially α 3 have a tendency to focus on the previous tokens: the diagonal shifted left. In contrast, α 4 tends to look at the immediate following tokens: the diagonal shifted right. By looking for previous tokens that belong to the same entity chunk and finding some, an attention head, via its context vector, can signal to the model that the token spoken of might be the {E}nding token or {I}nside token. The same is true for an attention head looking at next tokens, but this time signaling for {B}egin and {I}nside. This also dictates that both signals need to be weaker for {I} but stronger when combined. This behavior can be observed throughout the heat maps of α 2, α 3, α 4. In particular for the White house, C all predicts the B White I house O as Saturday is wrongly focused by α 4 for house. 27.27 -9.09 From TAB3, we already know that NativeH has some difficulties in handling multi-token entities, being more inclined to predict {S}ingle-token entities, and that HC all mitigates this problem by delegating work to C all, especially by relying on the latter to signal for {I}n tokens. 
The heat maps further tell the story of how the related labels {B, I, E} are handled collectively. In addition, this also suggests that modeling interactions between future and past contexts is crucial for sequencelabeling NER and motivates the use of a deep cross-Bi-LSTM encoder in Section 6. DISPLAYFORM0 When the entity chunking task is decoupled from sequence-labeling NER, the remaining entity typing task requires a model to label {Barack Obama moves out of the White House .} as {Barack PERSON Obama PERSON moves NONE out NONE of NONE the FAC White FAC House FAC . NONE}. TAB4 shows the entity classes for which HC all yields notably different performance (> 2%) from that of NativeH. Of particular interest is C 5's strong signal (27.27) for LAN (language) in comparison to the NativeH's struggles (-9.09) on this class without attention. Qualitatively, we study the two sentences shown in Figure 3, containing Dutch LAN into NONE English LAN and Chinese LAN and NONE English LAN. HC all classifies the tokens correctly, but both H and N ativeH wrongly predict Dutch NORP into NONE English LAN and Chinese NORP and NONE English LAN. Here NORP stands for nationality, meaning that both models without attention wrongly judge that Dutch and Chinese here refer to people from these countries. With attention, in Figure 3, we see that α 1 attends to Dutch and English at the same time for the two tokens and attends to Chinese and English at the same time for the other two. On the other hand, α 5 focuses on all possible LAN tokens, including a small mis-attention to Taiwanese in the second sentence, which is actually a NORP in this case. These attention weights signify that the model learns a pattern of cross interaction between entities: when two ambiguous entities of NORP, LAN occur together in the same context, the model predicts both as LAN. In Section 4.1, we briefly mentioned that the computation of attention weights α i and context features C i models the cross interaction between past and future. Mathematically, since H = − → H || ← − H, the computation of attention scores can be rewritten as DISPLAYFORM0 The un-shifted covariance matrix of the projected (− → H || ← − H) thus computes the interaction between past context and future context for each token, capturing cross-context patterns that the deep Bi-LSTM-CNN specified in Section 3 cannot. The consequence of this inability has been empirically shown in Section 5. Here, we further consider the following four simple phrases that form an XOR: {Key and Peele} WOA; {You and I} WOA; {Key and I}; {You and Peele} where WOA stands for WORK OF ART. The first two phrases are respectively a show title and a song title. The other two are not entities, where the last one actually occurs in an interview with Keegan-Michael Key. Suppose the phrases themselves are the only available context for the classification of and. Then the Bi-LSTM-CNN cannot capture good enough features to classify and correctly simultaneously for the four cases, even if they are the training data, no matter how many LSTM cells are stacked. The key is that given the same half-context of past or future, and is sometimes {WOA : I} but sometimes {NONE : O}. It is only when patterns that cross past and future are captured that the model is able to decide the correct label. 
Motivated by the limitation of the conventional Bi-LSTM-CNN for sequence labeling, we propose the use of Cross-Bi-LSTM-CNN by changing the deep structure in Section 3.2 to Note that when computing sentence embeddings for tasks such as sentence classification, both directions of a normal Bi-LSTM look at the whole sentence. However, when computing hidden node features for sequence labeling, each direction of a normal Bi-LSTM looks at only half of the sentence. Cross-Bi-LSTM remedies this problem by interleaving the hidden features between LSTM layers. The output of the first layers of both directions are sent to the second layers of both directions, allowing higher layers to capture interactions between past and future contexts for each token. Empirically, we experiment with cross construction 5 times and find it further improves the performance of Bi-LSTM-CNN from 87.56 (±0.07) to 88.09 (±0.16). In this paper, we have decoupled named entity recognition into entity chunking and entity typing and demonstrated how sequence-labeling models can learn to handle each of these two subtasks. By using a fast parallel self-attention mechanism, we have discovered how the beginning and ending of a multi-token entity is determined and how they are jointly correlated to locate the inside tokens. Further, through our quantitative and qualitative analyses for both chunking and typing, we have shown that it is crucial to capture global patterns that cross both sides of a token. We demonstrate the theoretical limitation of the conventional deep Bi-LSTM-CNN used in sequence labeling tasks. In addition to the interpretability of the proposed parallel self-attention, it is shown that it constitutes a way to correlate past and future contexts. We have also provided deep cross-Bi-LSTM-CNN as another way to extract global context features. With their respective cross structures, both selfattentive Bi-LSTM and cross-Bi-LSTM achieve new state-of-the-art on sequence-labeling NER.
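For concreteness, the cross construction described above (whose displayed equations are not shown) can be sketched as follows; layer sizes and module structure are illustrative only, not the exact configuration used in the experiments.

```python
# PyTorch sketch of a cross-Bi-LSTM: the concatenated outputs of the first forward
# and backward layers are fed to BOTH second-layer directions, so higher layers see
# past and future context for every token.
import torch
import torch.nn as nn

class CrossBiLSTM(nn.Module):
    def __init__(self, d_in, d_hid):
        super().__init__()
        self.fwd1 = nn.LSTM(d_in, d_hid, batch_first=True)
        self.bwd1 = nn.LSTM(d_in, d_hid, batch_first=True)
        self.fwd2 = nn.LSTM(2 * d_hid, d_hid, batch_first=True)
        self.bwd2 = nn.LSTM(2 * d_hid, d_hid, batch_first=True)

    def forward(self, x):                        # x: (batch, n, d_in)
        rev = lambda t: torch.flip(t, dims=[1])  # reverse the time axis
        f1, _ = self.fwd1(x)
        b1, _ = self.bwd1(rev(x))
        h1 = torch.cat([f1, rev(b1)], dim=-1)    # both directions, aligned in time
        f2, _ = self.fwd2(h1)
        b2, _ = self.bwd2(rev(h1))
        return torch.cat([f2, rev(b2)], dim=-1)
```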
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rklNwjCcYm
We provide insightful understanding of sequence-labeling NER and propose to use two types of cross structures, both of which bring theoretical and empirical improvements.
Knowledge Graph Embedding (KGE) is the task of jointly learning entity and relation embeddings for a given knowledge graph. Existing methods for learning KGEs can be seen as a two-stage process where (a) entities and relations in the knowledge graph are represented using some linear algebraic structures (embeddings), and (b) a scoring function is defined that evaluates the strength of a relation that holds between two entities using the corresponding relation and entity embeddings. Unfortunately, prior proposals for the scoring functions in the first step have been heuristically motivated, and it is unclear as to how the scoring functions in KGEs relate to the generation process of the underlying knowledge graph. To address this issue, we propose a generative account of the KGE learning task. Specifically, given a knowledge graph represented by a set of relational triples (h, R, t), where the semantic relation R holds between the two entities h (head) and t (tail), we extend the random walk model (a) of word embeddings to KGE. We derive a theoretical relationship between the joint probability p(h, R, t) and the embeddings of h, R and t. Moreover, we show that marginal loss minimisation, a popular objective used by much prior work in KGE, follows naturally from the log-likelihood ratio maximisation under the probabilities estimated from the KGEs according to our theoretical relationship. We propose a learning objective motivated by the theoretical analysis to learn KGEs from a given knowledge graph. The KGEs learnt by our proposed method obtain state-of-the-art performance on FB15K237 and WN18RR benchmark datasets, providing empirical evidence in support of the theory. Knowledge graphs such as Freebase BID2 organise information in the form of graphs, where entities are represented by vertices in the graph and the relation between two entities is represented by the edge that connects the corresponding two vertices. By embedding entities and relations that exist in a knowledge graph in some (possibly lower-dimensional and latent) space we can infer previously unseen relations between entities, thereby expanding a given knowledge graph BID11 BID20; BID9 BID16 BID17 BID4.Existing KGE methods can be seen as involving two main steps. First, given a knowledge graph represented by a set of relational triples (h, R, t), where a semantic relation R holds between a head entity h and a tail entity t, entities and relations are represented using some mathematical structures such as vectors, matrices or tensors. Second, a scoring function is proposed that evaluates the relational strength of a triple (h, R, t) and entity and relation embeddings that optimise the defined scoring function are learnt using some optimisation method. Table 1 shows some of the scoring functions proposed in prior work in KGE learning. Despite the wide applications of entity and relation embeddings created via KGE methods, the existing scoring functions are motivated heuristically to capture some geometric requirements of the embedding space. 
For example, TransE BID4 assumes that the entity and relation embeddings co-exist in the same (possibly lower-dimensional) vector space and that translating (shifting) the head entity embedding by the relation embedding must bring it closer to the tail entity embedding, whereas ComplEx BID16 models the asymmetry in relations using the component-wise multi-linear inner-product among entity and relation embeddings. Table 1: Score functions proposed in selected prior work on KGE. Unstructured BID4: f(h, R, t) = ||h - t||_{1/2}, no relation parameters; Structured Embeddings BID4: ||R_1 h - R_2 t||_{1,2}, with R_1, R_2 ∈ R^{d×d}; TransE BID4: ||h + R - t||_{1/2}, with R ∈ R^d; DistMult BID20: ⟨h, R, t⟩, with R ∈ R^d; RESCAL BID9: h^T R t, with R ∈ R^{d×d}; ComplEx BID16: ⟨h, R, t̄⟩, with R ∈ C^d. Entity embeddings h, t ∈ R^d are vectors in all models, except in ComplEx where h, t ∈ C^d. Here, ||x||_{1/2} denotes either the ℓ1 or ℓ2 norm of the vector x. In ComplEx, t̄ is the elementwise complex conjugate, and ⟨·, ·, ·⟩ denotes the component-wise multi-linear inner-product. Relational triples extracted from a given knowledge graph are used as positive training instances, whereas pseudo-negative BID4 instances are automatically generated by randomly corrupting positive instances. Finally, KGEs are learnt such that the prediction loss computed over the positive and negative instances is minimised. Despite the good empirical performance of the existing KGE methods, theoretical understanding of KGE methods is comparatively underdeveloped. For example, it is not clear how the heuristically defined KGE objectives relate to the generative process of a knowledge graph. In this paper, we attempt to fill this void by providing a theoretical analysis of KGE. Specifically, in section 2, we propose a generative process where we explain the formation of a relation R between two entities h and t using the corresponding relation and entity embeddings. Following this generative story, we derive a relationship between the probability of R holding between h and t, p(h, t | R), and the embeddings of R, h and t. Interestingly, the derived relationship is not covered by any of the previously proposed heuristically-motivated scoring functions, providing the first-ever KGE method with a provable generative explanation. Next, in section 3, we show that the margin loss, which has been popularly used as a training objective in prior work on KGE, naturally arises as the log-likelihood ratio computed from p(h, t | R). Based on this result, we derive a training objective that we subsequently optimise for learning KGEs that satisfy our theoretical relationship. Using standard benchmark datasets proposed in prior work on KGE learning, we evaluate the learnt KGEs on a link prediction task and a triple classification task. Experimental results show that the learnt KGEs obtain state-of-the-art performance on the FB15K237 and WN18RR benchmarks, thereby providing empirical evidence to support the theoretical analysis. Let us consider a knowledge graph D where the knowledge is represented by relational triples (h, R, t) ∈ D. Here, R is a relational predicate of two arguments, with the h (head) and t (tail) entities respectively filling the first and second arguments. We assume relations to be asymmetric in general. In other words, if (h, R, t) ∈ D then it does not necessarily follow that (t, R, h) ∈ D.
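To make the score functions in Table 1 concrete, the following is a minimal NumPy sketch of each one; the function names and the convention of negating distances so that higher scores mean stronger triples are ours, not the cited papers', and the snippets are illustrative rather than reproductions of any published implementation.

```python
import numpy as np

# Illustrative NumPy versions of the score functions in Table 1.
# h, t are entity embeddings; r (or R, R1, R2) are the relation parameters.

def score_transe(h, r, t, norm=1):
    """TransE: -||h + r - t|| (distance negated so that higher is better)."""
    return -np.linalg.norm(h + r - t, ord=norm)

def score_structured(h, R1, R2, t, norm=1):
    """Structured Embeddings: -||R1 h - R2 t||."""
    return -np.linalg.norm(R1 @ h - R2 @ t, ord=norm)

def score_distmult(h, r, t):
    """DistMult: component-wise multi-linear inner product <h, r, t>."""
    return np.sum(h * r * t)

def score_rescal(h, R, t):
    """RESCAL: bilinear form h^T R t."""
    return h @ R @ t

def score_complex(h, r, t):
    """ComplEx: Re(<h, r, conj(t)>) with complex-valued embeddings."""
    return np.real(np.sum(h * r * np.conj(t)))
```

For example, score_transe(h, r, t) is largest when h + r ≈ t, which is exactly the translational intuition described above.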
The goal of KGE is to learn embeddings (representations) for the relations and entities in the knowledge graph such that the entities that participate in similar relations are embedded closely to each other in the entity embedding space, while at the same time relations that hold between similar entities are embedded closely to each other in the relational embedding space. We call the learnt entity and relation embeddings collectively as KGEs. Following prior work on KGE BID4 BID16 BID20, we assume that entities and relations are embedded in the same vector space, allowing us to perform linear algebraic operations using the embeddings in the same vector space. Let us consider a random walk characterised by a time-dependent knowledge vector c k, where k is the current time step. The knowledge vector represents the knowledge we have about a particular group of entities and relations that express some facts about the world. For example, the knowledge that we have about people that are employed by companies can be expressed using entities of classes such as people and organisation, using relations such as CEO-of, employed-at, works-for, etc. We assume that entities h and t are represented by time-independent d-dimensional vectors, respectively h, t ∈ R d.We assume the task of generating a relational triple (h, R, t) in a given knowledge graph to be a two-step process as described next. First, given the current knowledge vector at time k, c = c k and the relation R, we assume that the probability of an entity h satisfying the first argument of R to be given by. DISPLAYFORM0 Here, R 1 ∈ R d×d is a relation-specific orthogonal matrix that evaluates the appropriateness of h for the first argument of R. For example, if R is the CEO-of relation, we would require a person as the first argument and a company as the second argument of R. However, note that the role of R 1 extends beyond simply checking the types of the entities that can fill the first argument of a relation. For our example above, not all people are CEOs and R 1 evaluates the likelihood of a person to be selected as the first argument of the CEO-of relation. Z c is a normalisation coefficient such that h∈V p(h | R, c) = 1, where the vocabulary V is the set of all entities in the knowledge graph. After generating h, the state of our random walker changes to c = c k+1, and we next generate the second argument of R with the probability given by. DISPLAYFORM0 Here, R 2 ∈ R d×d is a relation-specific orthogonal matrix that evaluates the appropriateness of t as the second argument of R. Z c is a normalisation coefficient such that t∈V p(t | R, c) = 1. Following our previous example of the CEO-of relation, R 2 evaluates the likelihood of an organisation to be a company with a CEO position. Importantly, R 1 and R 2 are representations of the relation R and independent of the entities. Therefore, we consider (R 1 and R 2) to collectively represent the embedding of R. Orthogonality of R 1, R 2 is a requirement for the mathematical proof and also act as a regularisation constraint to prevent overfitting by restricting the relational embedding space. We first perform our mathematical analysis for relational embeddings represented by orthogonal matrices and discuss later how this requirement can be relaxed. We assume a slow random walk where the knowledge vectors do not change significantly between consecutive time steps (c k ≈ c k+1). More specifically, we assume that c k − c k+1 ≤ 2 for some small 2 > 0. 
This is a realistic assumption for generating the two entity arguments in the same relational triple because, if the knowledge vectors were significantly different in the two generation steps, then it is likely that the corresponding relations are also different, which would not be coherent with the above-described generative process. Moreover, we assume that the knowledge vectors are distributed uniformly in the unit sphere and denote the distribution of knowledge vectors by C.To learn KGEs, we must estimate the probability that h and t satisfy the relation R, p(h, t | R), which can be obtained by taking the expectation of p(h, t | R, c, c) w.r.t. c, c ∼ C given by. DISPLAYFORM1 Here, partition functions are given by Z c = h∈V c∈C exp h R 1 c and Z c = t∈V c ∈C exp t R 2 c. follows from our two-step generative process where the generation of h and t in each step is independent given the relation and the corresponding knowledge vectors. Computing the expectation in is generally difficult because of the two partition functions Z c and Z c. However, Lemma 1 shows that the partition functions are narrowly distributed around a constant value for all c (or c) values with high probability. Lemma 1 (Concentration Lemma). If the entity embedding vectors satisfy the Bayesian prior v = sv, wherev is from the spherical Gaussian distribution, and s is a scalar random variable, which is always bounded by a constant κ, then the entire ensemble of entity embeddings satisfies that DISPLAYFORM2 for z = O(1/ √ n), and δ = exp(−Ω(log 2 n)), where n ≥ d is the number of words and Z c is the partition function for c given by c∈V exp h R 1 c.proof: To prove the concentration lemma, we show that the mean E h [Z c] of Z c is concentrated around a constant for all knowledge vectors c and its variance is bounded. Recall that DISPLAYFORM3 If P is an orthogonal matrix and x is a vector, then P x DISPLAYFORM4 2, because P P = I. Therefore, from FORMULA4 and the orthogonality of the relational embeddings, we see that R 1 c is a simple rotation of c and does not alter the length of c. We represent h = s hĥ, where s h = h andĥ is a unit vector (i.e. ĥ 2 = 1) distributed on the spherical Gaussian with zero mean and unit covariance matrix I d ∈ R d×d. Let s be a random variable that has the same distribution as s h. Moreover, let us assume that s is upper bounded by a constant κ such that s ≤ κ. From the assumption of the knowledge vector c, it is on the unit sphere as well, which is then rotated by R 1.We can write the partition function using the inner-product between two vectors h and R 1 c, Z c = h∈V exp h (R 1 c). BID0 showed that (Lemma 2.1 in their paper) the expectation of a partition function of this form can be approximated as follows: DISPLAYFORM5 where n = |V| is the number of entities in the vocabulary. follows from the expectation of a sum and the independence of h and R 1 from c. The inequality of FORMULA6 is obtained by applying the Taylor expansion of the exponential series and the final equality is due to the symmetry of the spherical Gaussian. From the law of total expectation, we can write DISPLAYFORM6 where, x = h R 1 c. Note that conditioned on s h, h is a Gaussian random variable with variance σ 2 = s 2 h. Therefore, conditioned on s h, x is a random variable with variance σ 2 = σ 2 h. Using this distribution, we can evaluate E x|s h exp h R 1 c as follows: DISPLAYFORM7 Therefore, it follows that DISPLAYFORM8 where s is the variance of the 2 norms of the entity embeddings. 
Because the set of entities is given and fixed, both n and σ are constants, proving that E[Z c] does not depend on c. Next, we calculate the variance V c [Z c] as follows: DISPLAYFORM9 Because 2h R 1 t is a Gaussian random variable with variance 4σ 2 = 4s 2 h from a similar calculation as in FORMULA0 we obtain, DISPLAYFORM10 for Λ = exp(8κ 2) a constant bounding s ≤ κ as stated. From above, we have bounded both the mean and variance of the partition function by constants that are independent of the knowledge vector. Note that neither exp h R 1 c nor exp t R 2 c are subGaussian nor sub-exponential. Therefore, standard concentration bounds derived for sub-Gaussian or sub-exponential random variables cannot be used in our analysis. However, the argument given in Appendix A.1 in BID1 for a partition function with bounded mean and variance can be directly applied to Z c in our case, which completes the proof of the concentration lemma. From the symmetry between h and t, Lemma 1 also applies for the partition function t∈V t R 2 c. Under the conditions required to satisfy Lemma 1, the following main theorem of this paper holds: Theorem 1. Suppose that the entity embeddings satisfy. Then, we have DISPLAYFORM11 DISPLAYFORM12 where DISPLAYFORM13 The complete proof of Theorem 1 is given in Appendix A. Below we briefly sketch the main steps. Proof sketch: Let F be the event that both c and c are within (1 ± z)Z. Then, from Lemma 1 and the union bound, event F happens with probability at least 1 − 2 exp(−Ω(log 2 n)). The R.H.S. of can be split into two parts T 1 and T 2 according to whether F happens or not. DISPLAYFORM14 T 1 can be approximated as given by. DISPLAYFORM15 On the other hand, T 2 can be shown to be a constant, independent of d, given by. DISPLAYFORM16 The vocabulary size n of real-world knowledge graphs is typically over 10 5, for which T 2 becomes negligibly small. Therefore, it suffices to consider only T 1. Because of the slowness of the random walk we have c ≈ c Using the law of total expectation we can write T 1 as follows: DISPLAYFORM17 where A(c):= E c |c exp t R 2 c. Doing some further evaluations we show that The relationship given by indicates that head and tail entity embeddings are first transformed respectively by R 1 and R 2, and the squared 2 norm of the sum of the transformed vectors is proportional to the probability p(h, t | R). DISPLAYFORM18 In this section, we derive a training objective from Theorem 1 that we can then optimise to learn KGE. The goal is to empirically validate the theoretical by evaluating the learnt KGEs. Knowledge graphs represent information about relations between two entities in the form of relational triples. The joint probability p(h, R, t) given by Theorem 1 is useful for determining whether a relation R exists between two given entities h and t. For example, if we know that with a high probability that R holds between h and t, then we can append (h, R, t) to the knowledge graph. The task of expanding knowledge graphs by predicting missing links between entities or relations is known as the link prediction problem BID16. In particular, if we can automatically append such previously unknown knowledge to the knowledge graph, we can expand the knowledge graph and address the knowledge acquisition bottleneck. To derive a criteria for determining whether a link must be predicted among entities and relations, let us consider a relational triple (h, R, t) ∈ D that exists in a given knowledge graph D. 
We call such relational triples as positive triples because from the assumption it is known that R holds between h and t. On the other hand, consider a negative relational triple (h, R, t) ∈ D formed by, for example, randomly perturbing a positive triple. A popular technique for generating such (pseudo) negative triples is to replace h or t with a randomly selected different instance of the same entity type. As an alternative for random perturbation, proposed a method for generating negative instances using adversarial learning. Here, we are not concerned about the actual method used for generating the negative triples but assume a set of negative triples,D, generated using some method, to be given. Given a positive triple (h, R, t) ∈ D and a negative triple (h, R, t) ∈D, we would like to learn KGEs such that a higher probability is assigned to (h, R, t) than that assigned to (h, R, t). We can formalise this requirement using the likelihood ratio given by. DISPLAYFORM0 Here, η > 1 is a threshold that determines how higher we would like to set the probabilities for the positive triples compares to that of the negative triples. By taking the logarithm of both sides in we obtain DISPLAYFORM1 If a positive triple (h, R, t) is correctly assigned a higher probability than a negative triple p(h, R, t), then the left hand side of will be negative, indicating that there is no loss incurred during this classification task. Therefore, we can re-write to obtain the marginal loss BID5, L(D,D), a popular choice as a learning objective in prior work in KGE, as shown in. DISPLAYFORM2 We can assume 2d log η to be the margin for the constraint violation. Theorem 1 requires R 1 and R 2 to be orthogonal. To reflect this requirement, we add two 2 regularisation terms R 1 R 1 − I 2 2 and R 2 R 2 − I 2 2 respectively with regularisation coefficients λ 1 and λ 2 to the objective function given by. In our experiments, we compute the gradients w.r.t. each of the parameters h, t, R 1 and R 2 and use stochastic gradient descent (SGD) for optimisation. This approach can be easily extended to learn from multiple negative triples as shown in Appendix B. At a high-level of abstraction, KGE methods can be seen as differing in their design choices for the following two main problems: (a) how to represent entities and relations, and (b) how to model the interaction between two entities and a relation that holds between them. Next, we briefly discuss prior proposals to those two problems (refer BID17 BID10 BID6 for an extended survey on KGE).A popular choice for representing entities is to use vectors, whereas relations have been represented by vectors, matrices or tensors. For example, TransE BID4, TransH BID18, TransD , TransG BID19, TransR , lppTransD BID21, DistMult BID20, HolE BID11 and ComplEx BID16 represent relations by vectors, whereas Structured Embeddings BID4 Given entity and relation embeddings, a scoring function is defined that evaluates the strength of a relation R between two entities h and t in a triple (h, R, t). The scoring functions that encode various intuitions have been proposed such as the 1 or 2 norms of the vector formed by a translation of the head entity embedding by the relation embedding over the target embedding, or by first performing a projection from the entity embedding space to the relation embedding space BID21 As an alternative to using vector norms as scoring functions, DistMult and ComplEx use the component-wise multi-linear dot product. 
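As a concrete illustration of the training objective derived above, the following is a minimal NumPy sketch of the loss for a single positive/negative pair. It assumes the score implied by Theorem 1, ||R_1 h + R_2 t||^2 / (2d) (up to additive constants), treats the margin as the quantity 2d log η, and uses illustrative function names and default regularisation coefficients; it is a sketch of the idea, not the authors' implementation.

```python
import numpy as np

def relwalk_score(h, t, R1, R2, d):
    """Score implied by Theorem 1: ||R1 h + R2 t||^2 / (2d), proportional to log p(h, t | R)."""
    v = R1 @ h + R2 @ t
    return (v @ v) / (2.0 * d)

def orth_penalty(R):
    """||R^T R - I||_F^2, encouraging the relation matrix to remain orthogonal."""
    d = R.shape[0]
    M = R.T @ R - np.eye(d)
    return np.sum(M ** 2)

def relwalk_margin_loss(pos, neg, R1, R2, d, margin, lam1=10.0, lam2=10.0):
    """Margin loss with margin 2*d*log(eta), plus the two orthogonality regularisers."""
    (h, t), (h_neg, t_neg) = pos, neg
    s_pos = relwalk_score(h, t, R1, R2, d)
    s_neg = relwalk_score(h_neg, t_neg, R1, R2, d)
    hinge = max(0.0, s_neg + margin - s_pos)
    return hinge + lam1 * orth_penalty(R1) + lam2 * orth_penalty(R2)
```

In practice the gradients of this loss with respect to h, t, R_1 and R_2 would be computed (e.g., by automatic differentiation) and used in the SGD updates described above.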
Once a scoring function is defined, KGEs are learnt that assign better scores to relational triples in existing knowledge graphs (positive triples) over triples where the relation does not hold (negative triples) by minimising a loss function such as the logistic loss (RESCAL, DistMult, ComplEx) or marginal loss (TransE, TransH, TransD, TransD). Because knowledge graphs record only positive triples, a popular method to generate pseudo negative triples is to perturb a positive instance by replacing its head or tail entity by an entity selected uniformly at random from the vocabulary of the entities. However, uniformly sampled negative triples are likely to be obvious examples that do not provide much information to the learning process and can be detected by simply checking for the type of the entities in a triple. proposed an adversarial learning approach where a generator assigns a probability to each relation triple and negative instances are sampled according to this probability distribution to train a discriminator that discriminates between positive and negative instances. BID19 proposed TransG, a generative model based on the Chinese restaurant process, to model multiple relations that exist between a pair of entities. However, their relation embeddings are designed to satisfy vector translation similar to TransE.As an alternative to directly learning embeddings from a graph, several methods (; BID12 BID13 have considered the vertices visited during truncated random walks over the graph as pseudo sentences, and have applied popular word embedding learning algorithms such as skip-gram with negative sampling or continuous bag-of-words model to learn vertex embeddings. However, pseudo sentences generated this way are syntactically very different from sentences in natural languages. On the other hand, our work extends the random walk analysis by BID0 that derives a useful connection between the joint co-occurrence probability of two words and the 2 norm of the sum of the corresponding word embeddings. Specifically, they proposed a latent variable model where the words in a corpus are generated by a probabilistic model parametrised by a time-dependent discourse vector that performs a random walk. However, unlike in our work, they do not consider the relations between two co-occurring words in a corpus. BID3 extended the model proposed by BID0 to capture co-occurrences involving more than two words. They defined the co-occurrence of k unique words in a given context as a k-way co-occurrence, where BID0's could be seen as a special case corersponding to k = 2. Moreover, BID3 showed that it is possible to learn word embeddings that capture some types of semantic relations such as antonymy and collocation using 3-way co-occurrences more accurately than using 2-way co-occurrences. However, their model does not explicitly consider the relations between words/entities and uses only a corpus for learning the word embeddings. To empirically evaluate the theoretical stated in Theorem 1, we learn KGEs (denoted by RelWalk) by minimising the marginal loss objective derived in section 3. We use the FB15k237, FB13 (subsets of Freebase) and WN11, WN18RR (subsets of WordNet) datasets, which are standard benchmarks for KGE. We use the standard training, validation and test splits as detailed in TAB2. We generate negative triples by replacing a head or a tail entity in a positive triple by a randomly selected different entity and learn KGEs. 
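The uniform corruption scheme used to generate pseudo-negative triples is simple enough to state in a few lines; the sketch below, with hypothetical helper names, replaces the head or tail of a positive triple with a uniformly sampled different entity.

```python
import random

def corrupt_triple(triple, entities):
    """Generate a pseudo-negative triple by replacing the head or tail with a random entity."""
    h, r, t = triple
    if random.random() < 0.5:
        h = random.choice([e for e in entities if e != h])
    else:
        t = random.choice([e for e in entities if e != t])
    return (h, r, t)
```

More informative negatives (e.g., adversarially sampled ones, as discussed above) can be substituted without changing the rest of the training loop.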
We train the model until convergence or for at most 1000 epochs over the training data, where each epoch is divided into 100 mini-batches. The best model is selected by early stopping based on the performance of the learnt embeddings on the validation set (evaluated after every 20 epochs). The training details and hyperparameter settings are detailed in Appendix C. RelWalk is implemented in the open-source toolkit OpenKE. We conduct two evaluation tasks: link prediction (predict the missing head or tail entity in a given triple (h, R, ?) or (?, R, t)) BID4 and triple classification (predict whether a relation R holds between h and t in a given triple (h, R, t)) BID14. We evaluate the performance in the link prediction task using mean reciprocal rank (MRR), mean rank (MR; the average of the rank assigned to the original head or tail entity in a corrupted triple) and hits at ranks 1, 3 and 10 (H@1,3,10), whereas in the triple classification task we use accuracy (the percentage of correctly classified test triples). We only report scores under the filtered setting BID5, which removes all triples that appear in the training, validation and test sets from the candidate triples before obtaining the rank of the ground-truth triple. In link prediction, we consider all entities that appear in the corresponding argument in the entire knowledge graph as candidates. In TAB0 we compare the KGEs learnt by RelWalk against prior work using the published results. For link prediction, RelWalk reports SoTA results on both WN18RR and FB15K237 in all evaluation measures, except against ConvE on WN18RR measured by MRR. WN18RR excludes triples from WN18 that are simply inverted between the train and test partitions BID15. RelWalk's consistently good performance on both versions of this dataset shows that it is considering the global structure in the knowledge graph when learning KGEs. For triple classification, RelWalk reports the best performance on FB13, whereas TransG reports the best performance on WN11. Considering that both TransG and RelWalk are generative models, it would be interesting to further investigate generative approaches for KGE in the future. Overall, the experimental results support our theoretical claim and emphasise the importance of theoretically motivating the scoring function design process. We proposed RelWalk, a generative model of KGE, and derived a theoretical relationship between the probability of a triple and the entity and relation embeddings. We then proposed a learning objective based on the theoretical relationship we derived. Experimental results on link prediction and triple classification tasks show that RelWalk obtains strong performance on multiple benchmark datasets. A PROOF OF THEOREM 1. Let us consider the probabilistic event that DISPLAYFORM0 Then from the union bound we have, DISPLAYFORM1 Moreover, let F be the probabilistic event that both F c and F c are True. Then from DISPLAYFORM2 We can decompose the expectation in the R.H.S. of FORMULA2 into two terms T 1 and T 2 depending on whether F is True or False, as follows: DISPLAYFORM3 Here, 1 F and 1F are indicator functions given by: DISPLAYFORM4 and DISPLAYFORM5 Let us first show that T 2 is negligibly small. For two real integrable functions ψ 1 (x) and ψ 2 (x) in [a, b], the Cauchy–Schwarz inequality states that DISPLAYFORM6 The second term of FORMULA2 is upper bounded by DISPLAYFORM7 The first term can be bounded as follows: DISPLAYFORM8 where α > 1.
Therefore, it is sufficient to bound E c exp(αh R 1 c) DISPLAYFORM9 Let us denote by z the random variable 2h R 1 c. Moreover, let r(z) = E c |z [1F], which is a function of z between. We wish to upper bound E c [exp(z)r(z)]. The worst-case r(z) can be quantified using a continuous version of Abel's inequality (proved as Lemma A.4 in BID1), we can upper bound E c [exp(z)r(z)] as follows: DISPLAYFORM10 where DISPLAYFORM11 is a function that takes the value 1 when z ≥ t and zero elsewhere. Then, we claim Pr c [z ≥ t] ≤ exp(−Ω(log 2 n)) implies that t ≥ Ω(log .9 n).If c was distributed as N (0, DISPLAYFORM12, this would be a simple tail bound. However, as c is distributed uniformly on the sphere, this requires special care, and the claim follows by applying the tail bound for the spherical distribution given by Lemma A.1 in BID0 instead. Finally, applying Corollary A.3 in BID0, we have: DISPLAYFORM13 From a similar argument as above we can obtain the same bound for c as well. Therefore, T 2 in can be upper bounded as follows: DISPLAYFORM14 Because n = |V|, the size of the entity vocabulary, is large (ca. n > 10 5) in most knowledge graphs, we can ignore the T 2 term in. Combining this with we obtain an upper bound for p(h, t | R) given by. DISPLAYFORM15 where |D| is the number of relational tuples (h, R, t) in the KB D and δ 0 = |D| exp(−Ω(log 1.8 n)) ≤ exp(−Ω(log 1.8 n)) by the fact that Z ≤ exp(2κ)n = O(n), where κ is the upper bound on h R 1 c and t R 2 c, which is regarded as a constant. On the other hand, we can lower bound p(h, t | R) as given by. DISPLAYFORM16 Taking the logarithm of both sides, from and, the multiplicative error translates to an additive error given by. DISPLAYFORM17 where A(c):= E c |c exp t R 2 c.We assumed that c and c are on the unit sphere and R 1 and R 2 to be orthogonal matrices. Therefore, R 1 c and R 2 c are also on the unit sphere. Moreover, if we let the upper bound of the 2 norm of the entity embeddings to be κ DISPLAYFORM18 Then we can lower bound A(c) as follows: DISPLAYFORM19 For some 2 > 0. The last inequality holds because DISPLAYFORM20 To obtain a lower bound on A(c) from the first-order Taylor approximation of exp(x) ≥ 1 + x we observe that DISPLAYFORM21 Therefore, from our model assumptions we have DISPLAYFORM22 Hence, DISPLAYFORM23 Therefore, from FORMULA3 and FORMULA6 we have DISPLAYFORM24 Plugging A(c) back in we obtain DISPLAYFORM25 = log E c exp h R 1 c exp t R 2 c ± δ 0 − 2 log Z + 2 log(1 ± z) + log(1 ± 2)= log E c exp h R 1 c + t R 2 c ± δ 0 − 2 log Z + 2 log(1 ± z) + log(1 ± 2)= log E c exp R 1 h + R 2 t c ± δ 0 − 2 log Z + 2 log(1 ± z) + log(1 ± 2)Note that c has a uniform distribution over the unit sphere. In this case, from Lemma A.5 in BID1 ), holds approximately. DISPLAYFORM26 where 3 =Õ(1/d). Plugging FORMULA1 in FORMULA0 we have that 8 n) ). Therefore, δ 0 can be ignored. Note that 3 =Õ(1/d) and z =Õ(1/ √ n) by assumption. Therefore, we obtain that DISPLAYFORM27 DISPLAYFORM28 DISPLAYFORM29 In this section, we show how the margin loss-based learning objective derived in section 3 can be extended to learn from more than one negative triples per each positive triple. This formulation leads to rank-based loss objective used in prior work on KGE. Considering that negative triples are generated via random perturbation, it is important to consider multiple negative triples during training to better estimate the classification boundary. 
Let us consider that we are given a positive triple, (h, R, t) and a set of K negative triples {(h k, R, t k)} K k=1. We would like our model to assign a probability, p(h, t | R), to the positive triple that is higher than that assigned to any of the negative triples. This requirement can be written as. DISPLAYFORM0 We could further require the ratio between the probability of the positive triple and maximum probability over all negative triples to be greater than a threshold η ≥ 1 to make the requirement of to be tighter. DISPLAYFORM1 By taking the logarithm of we obtain log p(h, t | R) − log max k=1,...,K DISPLAYFORM2 Therefore, we can define the margin loss for a misclassification as follows:L (h, R, t), {(h k, R, t k)} K k=1 = max 0, log max k=1,...,K p(h k, t k | R) + log(η) − log p(h, t | R) FORMULA2 However, from the monotonicity of the logarithm we have ∀x 1, x 2 > 0, if log(x 1) ≥ log(x 2) then x 1 ≥ x 2. Therefore, the logarithm of the maximum can be replaced by the maximum of the logarithms in as shown in.L (h, R, t), {(h k, R, t k)} K k=1 = max 0, max k=1,...,K log p(h k, t k | R) + log(η) − log p(h, t | R)By substituting for the probabilities in we obtain the rank-based loss given by.L (h, R, t), {(h k, R, t k)} In practice, we can use p(h k, t k | R) to select the negative triple with the highest probability for training with the positive triple. C TRAINING DETAILSThe statistics of the benchmark datasets are show in TAB2.We selected the initial learning rate (α) for SGD in {0.01, 0.001}, the regularisation coefficients (λ 1, λ 2) for the orthogonality constraints of relation matrices in {0, 1, 10, 100}. The number of randomly generated negative triples n neg for each positive example is varied in {1, 10, 20, 50, 100} and d ∈ {50, 100}. Optimal hyperparameter settings were: λ 1 = λ 2 = 10, n neg = 100 for all the datasets, α = 0.001 for FB15K, FB15K237 and FB13, α = 0.01 for WN18, WN18RR and WN11. For FB15K237 and WN18RR d = 100 was the best, whereas for all other datasets d = 50 performed best.
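The multi-negative extension of Appendix B reduces to taking the hinge against the hardest of the K negatives; a minimal sketch of that idea is shown below. The score function is passed in as an argument (for instance the Theorem 1 score from the earlier snippet), and the function name is ours.

```python
def multi_negative_margin_loss(pos, negs, score_fn, margin):
    """Rank-based loss of Appendix B: hinge against the hardest of the K negative triples.

    pos      -- the positive (h, t) pair for the current relation
    negs     -- a list of K negative (h, t) pairs
    score_fn -- a callable returning the score of an (h, t) pair, e.g. the Theorem 1 score
    """
    s_pos = score_fn(*pos)
    s_hardest = max(score_fn(*neg) for neg in negs)
    return max(0.0, s_hardest + margin - s_pos)
```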
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SkxbDsR9Ym
We present a theoretically proven generative model of knowledge graph embedding.
Currently the only techniques for sharing governance of a deep learning model are homomorphic encryption and secure multiparty computation. Unfortunately, neither of these techniques is applicable to the training of large neural networks due to their large computational and communication overheads. As a scalable technique for shared model governance, we propose splitting a deep learning model between multiple parties. This paper empirically investigates the security guarantee of this technique, which is introduced as the problem of model completion: Given the entire training data set or an environment simulator, and a subset of the parameters of a trained deep learning model, how much training is required to recover the model's original performance? We define a metric for evaluating the hardness of the model completion problem and study it empirically in both supervised learning on ImageNet and reinforcement learning on Atari and DeepMind Lab. Our experiments show that the model completion problem is harder in reinforcement learning than in supervised learning because of the unavailability of the trained agent's trajectories, and that its hardness depends not primarily on the number of parameters of the missing part, but more so on their type and location. Our results suggest that model splitting might be a feasible technique for shared model governance in some settings where training is very expensive. With an increasing number of deep learning models being deployed in production, questions regarding data privacy and misuse are being raised BID4. With the trend of training larger models on more data BID28, training models becomes increasingly expensive. Especially in a continual learning setting where models get trained over many months or years, they accrue a lot of value and are thus increasingly susceptible to theft. This calls for technical solutions to monitor and enforce control over these models BID41. We are interested in the special case of shared model governance: Can two or more parties jointly train a model such that each party has to consent to every forward (inference) and backward pass (training) through the model? Two popular methods for sharing model governance are homomorphic encryption (HE; BID36) and secure multi-party computation (MPC; BID48). The major downside of both techniques is the large overhead incurred by every multiplication: computationally, >1000x for HE BID29 BID14 and >24x for MPC BID25 BID7, in addition to space (>1000x in the case of HE) and communication (>16 bytes per 16-bit floating point multiplication in the case of MPC). Unfortunately, this makes HE and MPC inapplicable to the training of large neural networks. As a scalable alternative for sharing model governance with minimal overhead, we propose the method of model splitting: distributing a deep learning model between multiple parties such that each party holds a disjoint subset of the model's parameters. Concretely, imagine the following scenario for sharing model governance between two parties, called Alice and Bob. Alice holds the model's first layer and Bob holds the model's remaining layers. In each training step, Alice does a forward pass through the first layer and sends the resulting activations to Bob; Bob completes the forward pass, computes the loss from the labels, does a backward pass to the first layer, and sends the resulting gradients to Alice; Alice then finishes the backward pass. How much security would Alice and Bob enjoy in this setting?
To answer this question, we have to consider the strongest realistic attack vector. In this work we assume that the adversary has access to everything but the missing parameters held by the other party. How easy would it be for this adversary to recover the missing part of the model? We introduce this as the problem of model completion: Given the entire training data set or an environment simulator, and a subset of the parameters of a trained model, how much training is required to recover the model's original performance? In this paper, we define the problem of model completion formally (Section 3.1), propose a metric to measure the hardness of model completion (Section 3.2), and provide empirical results (Section 4 and Section 5) in both supervised learning (SL) and reinforcement learning (RL). For our SL experiments we use the AlexNet convolutional network BID26 and the ResNet50 residual network BID17 on ImageNet BID9; for RL we use A3C and Rainbow BID19 in the Atari domain BID1 and IMPALA BID11 on DeepMind Lab BID0. After training the model, we reinitialize one of the model's layers and measure how much training is required to complete it (see FIG1). Our key findings are that residual networks are easier to complete than nonresidual networks, that lower layers are often harder to complete than higher layers, and that RL models are harder to complete than SL models. The closest well-studied phenomenon to model completion is unsupervised pretraining, first introduced by BID22. In unsupervised pretraining a subset of the model, typically the lower layers, is trained in a first pass with an unsupervised reconstruction loss BID10. The aim is to learn useful high-level representations that make a second pass with a supervised loss more computationally and sample efficient. This second pass could be thought of as model completion. In this paper we study vertical model completion, where all parameters in one layer have to be completed. Instead we could have studied horizontal model completion, where some parameters have to be completed in every layer. Horizontal model completion should be easy, as suggested by the effectiveness of dropout as a regularizer BID40, which trains a model to be resilient to horizontal parameter loss. Pruning neural networks BID27 is in a sense the reverse operation to model completion. BID6 prune individual connections and BID33 prune entire feature maps using different techniques; their findings (lower layers are more important) are compatible with ours. BID49 investigate which layers in a deep convolutional model contain general versus task-specific representations; some of their experiments follow the same setup as we do here and their results are in line with ours, but they do not measure the hardness of the model completion task. Finally, our work has some connections to distillation of deep models BID5 BID3. Distillation can be understood as a 'reverse' of model completion, where we want to find a smaller model with the same performance instead of completing a smaller, partial model.
The literature revolves around two techniques for sharing model governance: homomorphic encryption (HE; BID36 and secure multi-party computation (MPC; BID48 BID8 . Both HE and MPC have been successfully applied to machine learning on small datasets like MNIST BID14 BID32 BID7 BID46 and the Wisconsin Breast Cancer Data set BID16 .HE is an encryption scheme that allows computation on encrypted numbers without decrypting them. It thus enables a model to be trained by an untrusted third party in encrypted form. The encryption key to these parameters can be cryptographically shared between several other parties who effectively retain control over how the model is used. Using MPC numbers can be shared across several parties such that each share individually contains no information about these numbers. Nevertheless computational operations can be performed on the shared numbers if every party performs operations on their share. The of the computation can be reconstructed by pooling the shares of the . While both HE and MPC fulfill a similar purpose, they face different tradeoffs for the additional security benefits: HE incurs a large computational overhead BID29 while MPC incurs a much smaller computational overhead in exchange for a greater communication overhead BID25 . Moreover, HE provides cryptographic security (reducing attacks to break the cipher on well-studied hard problems such as the discrete logarithm) while MPC provides perfect information-theoretic guarantees as long as the parties involved (3 or more) do not collude. There are many applications where we would be happy to pay for the additional overhead because we cannot train the model any other way, for example in the health sector where privacy and security are critical. However, if we want to scale shared model governance to the training of large neural networks, both HE and MPC are ruled out because of their prohibitive overhead. In contrast to HE and MPC, sharing governance via model splitting incurs minimal computational and manageable communication overhead. However, instead of strong security guarantees provided by HE and MPC, the security guarantee is bounded from above by the hardness of the model completion problem we study in this paper. Let f θ be a model parameterized by the vector θ. We consider two settings: supervised learning and reinforcement learning. In our supervised learning experiments we evaluate the model f θ by its performance on the test loss L(θ).In reinforcement learning an agent interacts with an environment over a number of discrete time steps BID43: In time step t, the agent takes an action a t and receives an observation o t+1 and a reward r t+1 ∈ R from the environment. We consider the episodic setting in which there is a random final time step τ ≤ K for some constant K ∈ N, after which we restart with timestep t = 1. The agent's goal is to maximize the episodic return G:= τ t=1 r t. Its policy is a mapping from sequences of observations to a distribution over actions parameterized by the model f θ. To unify notation for SL and RL, we equate L(θ) = E at∼f θ (o1,...,ot−1) [−G] such that the loss function for RL is the negative expected episodic return. To quantify training costs we measure the computational cost during (re)training. To simplify, we assume that training proceeds over a number of discrete steps. A step can be computation of gradients and parameter update for one minibatch in the case of supervised learning or one environment step in the case of reinforcement learning. 
We assume that computational cost are constant for each step, which is approximately true in our experiments. This allows us to measure training cost through the number of training steps executed. Let T denote the training procedure for the model f θ and let θ 0, θ 1,... be the sequence of parameter vectors during training where θ i denotes the parameters in training step i. Furthermore, let *:= min{L(θ i) | i ≤ N } denote the best model performance during the training procedure T (not necessarily the performance of the final weights). We define the training cost as the random variable C T := arg min i∈N {L(θ i) ≤ }, the number of training steps until the loss falls below the given threshold ∈ R. After we have trained the model f θ for N steps and thus end up with a set of trained parameters θ N with loss L(θ N), we split the parameters θ N = [θ How hard is the model completion problem? To answer this question, we use the parameters DISPLAYFORM0 N are the previously trained parameters and θ 0 2 are freshly initialized parameters. We then execute a (second) retraining procedure T ∈ T from a fixed set of available retraining procedures T.1 The aim of this retraining procedure is to complete the model, and it may be different from the initial training procedure T. We assume that T ∈ T since retraining the entire model from scratch (reinitializing all parameters) is a valid way to complete the model. Let θ 0, θ 1,... be the sequence of parameter vectors obtained from running the retraining procedure T ∈ T. Analogously to before, we define C T := arg min i∈N {L(θ i) ≤ } as the retraining cost to get a model whose test loss is below the given threshold ∈ R. Note that by definition, for T = T we have that C T is equal to C T in expectation. In addition to recovering a model with the best original performance *, we also consider partial model completion by using some higher thresholds * DISPLAYFORM1. These higher thresholds * α correspond to the relative progress α from the test loss of the untrained model parameters L(θ 0) to the best test loss DISPLAYFORM2 We define the hardness of model completion as the expected cost to complete the model as a fraction of the original training cost for the fastest retraining procedure T ∈ T available: DISPLAYFORM3 where the expectation is taken over all random events in the training procedures T and T.It is important to emphasize that the hardness of model completion is a relative measure, depending on the original training cost C T (* α). This ensures that we can compare the hardness of model completion across different tasks and different domains. In particular, for different values of α we compare like with like: MC-hardness T (α) is measured relative to how long it took to get the loss below the threshold * α during training. Importantly, it is not relative to how long it took to train the model to its best performance *. This means that naively counter-intuitive such as MC-hardness T (0.8) being less than MC-hardness T (0.5) are possible. Since C T and C T are nonnegative, MC-hardness T (α) is nonnegative. Moreover, since T ∈ T by assumption, we could retrain all model parameters from scratch (formally setting T to T). Thus we have MC-hardness T (α) ≤ 1, and therefore MC-hardness is bounded between 0 and 1. Equation 1 denotes an infimum over available retraining procedures T. However, in practice there is a vast number of possible retraining procedures we could use and we cannot enumerate and run all of them. 
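To make the metric just defined concrete, the sketch below computes MC-hardness for recorded loss curves. It assumes one loss value per training step, approximates the expectations in Equation 1 by a single run (in practice one would average over seeds), and takes the infimum only over the retraining procedures that were actually run; the helper names are ours.

```python
def steps_to_threshold(losses, threshold):
    """First training step at which the loss falls to or below the threshold (None if never)."""
    for step, loss in enumerate(losses):
        if loss <= threshold:
            return step
    return None

def mc_hardness(train_losses, retrain_losses_per_procedure, alpha):
    """MC-hardness_T(alpha): cheapest observed retraining cost divided by the original training cost."""
    l0 = train_losses[0]            # loss of the untrained model, L(theta_0)
    l_best = min(train_losses)      # best loss reached during training, l*
    threshold = l0 - alpha * (l0 - l_best)   # partial-completion threshold l*_alpha
    c_train = steps_to_threshold(train_losses, threshold)
    retrain_costs = [steps_to_threshold(r, threshold) for r in retrain_losses_per_procedure]
    c_retrain = min(c for c in retrain_costs if c is not None)
    return c_retrain / c_train
```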
Instead, we take an empirical approach for estimating the hardness of model completion: we investigate the following set of retraining strategies T to complete the model. All the retraining strategies, if not noted otherwise, are built on top of the original training procedure T. Our best are only an upper bound on the hardness of model completion. It is likely that much faster retraining procedures exist. T 1 Optimizing θ 0 1 and θ 0 2 jointly. We repeat the original training procedure T on the preserved parameters θ 0 1 and reinitialized parameters θ 0 2. The objective function is optimized with respect to all the trainable variables in the model. We might vary in hyperparameters such as learning rates or loss weighting schemes compared to T, but keep hyperparameters that change the structure of the model (e.g. size and number of layers) fixed. T 2 Optimizing θ 0 2, but not θ 0 1. Similarly to T 1, in this retraining procedure we keep the previous model structure. However, we freeze the trained weights θ 0 1, and only train the reinitialized parameters θ 0 2.T 3 Overparametrizing the missing layers. This builds on retraining procedure T 1. Overparametrization is a common trick in computer vision, where a model is given a lot more parameters than required, allowing for faster learning. This idea is supported by the'lottery ticket hypothesis' BID13: a larger number of parameters increases the odds of a subpart of the network having random initialization that is more conducive to optimization. T 4 Reinitializing parameters θ 0 2 using a different initialization scheme. Previous research shows that parameter initialization schemes can have a big impact on convergence properties of deep neural networks BID15 BID42. In T 1 our parameters are initialized using a glorot uniform scheme. This retraining procedure is identical to T 1 except that we reinitialize θ 0 2 using one of the following weight initialization schemes: glorot normal BID15, msra BID18 or caffe BID24. Our main experimental establish upper bounds on the hardness of model completion in the context of several state of the art models for both supervised learning and reinforcement learning. In all the experiments, we train a model to a desired performance level (this does not have to be stateof-the-art performance), and then reinitialize a specific part of the network and start the retraining procedure. Each experiment is run with 3 seeds, except IMPALA (5 seeds) and A3C (10 seeds).Supervised learning. We train AlexNet BID26 and ResNet50 BID17 on the ImageNet dataset BID9 to minimize cross-entropy loss. The test loss is the top-1 error rate on the test set. AlexNet is an eight layer convolutional network consisting of five convolutional layers with max-pooling, followed by two fully connected layers and a softmax output layer. ResNet50 is a 50 layer convolutional residual network: The first convolutional layer with max-pooling is followed by four sections, each with a number of ResNet blocks (consisting of two convolutional layers with skip connections and batch normalization), followed by average pooling, a fully connected layer and a softmax output layer. We apply retraining procedures T 1 and T 2 and use a different learning rate schedule than in the original training procedure because it performs better during retraining. All other hyperparameters are kept the same. Reinforcement learning. We consider three different state of the art agents: A3C BID31, Rainbow BID19 and the IMPALA reinforcement learning agent BID11. 
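As an illustration of retraining procedures T 1 and T 2, the following PyTorch-style sketch reinitialises the missing layer and then either optimises all parameters jointly (T 1) or freezes the kept parameters and trains only the reinitialised layer (T 2). The model and layer objects, the glorot-uniform reinitialisation, and the SGD settings are illustrative assumptions, not the exact configurations used for the models and agents described here.

```python
import torch
import torch.nn as nn

def reinitialize_layer(layer):
    """Reset the weights of one layer (the part of the model the adversary is missing)."""
    for p in layer.parameters():
        if p.dim() > 1:
            nn.init.xavier_uniform_(p)   # glorot-uniform reinitialisation (assumed)
        else:
            nn.init.zeros_(p)            # biases and other 1-D parameters

def make_retraining_optimizer(model, missing_layer, procedure="T1", lr=1e-3):
    """T1: optimise all parameters jointly; T2: freeze kept parameters, train only the missing layer."""
    reinitialize_layer(missing_layer)
    if procedure == "T2":
        for p in model.parameters():
            p.requires_grad = False
        for p in missing_layer.parameters():
            p.requires_grad = True
        params = missing_layer.parameters()
    else:
        for p in model.parameters():
            p.requires_grad = True
        params = model.parameters()
    return torch.optim.SGD(params, lr=lr, momentum=0.9)
```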
A3C comes from a family of actor-critic methods which combine value learning and policy gradient approaches in order to reduce the variance of the gradients. Rainbow is an extension of the standard DQN agent, which combines double Q-learning (van), dueling networks BID47 ), distributional RL (and noisy nets BID12 . Moreover, it is equipped with a replay buffer that stores the previous million transitions of the form (o t, a t, r t+1, o t+1), which is then sampled using a prioritized weighting scheme based on temporal difference errors BID38. Finally, IMPALA is an extension of A3C, which uses the standard actor-critic architecture with off-policy corrections in order to scale effectively to a large scale distributed setup. We train IMPALA with population based training BID23.For A3C and Rainbow we use the Atari 2600 domain BID1 and for IMPALA DeepMind Lab BID0. In both cases, we treat the list of games/levels as a single learning problem by averaging across games in Atari and training the agent on all level in parallel in case of DeepMind Lab. In order to reduce the noise in the MC-hardness metric, caused by agents being unable to learn the task and behaving randomly, we filter out the levels in which the original trained agent performs poorly. We apply the retraining procedures T 1, T 2 on all the models, and on A3C we apply additionally T 3 and T 4. All the hyperparameters are kept the same during the training and retraining procedures. Further details of the training and retraining procedures for all models can be found in Appendix A, and the parameter counts of the layers are listed in Appendix B. Our experimental on the hardness of the model completion problem are reported in FIG4. These figures show on the x-axis different experiments with different layers being reinitialized (lower to higher layers from left to right). We plot MC-hardness T (α) as a bar plot with error bars showing the standard deviation over multiple experiment runs with different seeds; the colors indicate different values of α. The numbers are provided in Appendix C. In the following we discuss the .1. In the majority of cases, T 1 is the best of our retraining procedures. From the retraining procedures listed in Section 3.3 we use T 1 and T 2 in all experiments and find that T 1 performs substantially better in all settings except two: First, for A3C, starting from the third convolutional layer, T 2 has lower MC-hardness for all the threshold levels (FIG7 . Second, T 2 performs well on all the layers when retraining ResNet-50, for all α ≤ 0.9 ( FIG2 ; the difference is especially visible at α = 0.9.For A3C we use all four retraining procedures. The difference between T 1 and T 2 are shown in FIG7 . For T 3 we tried replacing the first convolutional layer with two convolutional layers using a different kernel size, as well as replacing a fully connected layer with two fully connected layers of varying sizes. The were worse than using the same architecture and we were often unable to retrieve 100% of the original performance. With T 4 we do not see any statistically significant difference in retraining time between the initialization schemes glorot normal, msra, and caffe. FIG4 and FIG2 for T 1, the model hardness for threshold α = 0.5 and α = 0.8 is much lower for ResNet50 than for AlexNet. However, to get the original model performance (α = 1), both models need about 40% of the original training cost. As mentioned above, T 2 works better than T 1 on ResNet50 for α ≤ 0.9. 
An intact skip connection helps retraining for α ≤ 0.9 and T 1, but not T 2, as illustrated in the experiment S4 B1 -W FIG2. A noticeable outlier is S4 B1 at α = 0.9; it is unclear what causes this effect, but it reproduced every time we ran this experiment. Residual neural networks use skip connections across two or more layers BID17. This causes the features in those layers to be additive with respect to the incoming features, rather than replacing them as in non-residual networks. Thus lower-level and higher-level representations tend to be more spread out across the network, rather than being confined to lower and higher layers, respectively. This would explain why model completion in residual networks is more independent of the location of the layer.3. For A3C lower layers are often harder to complete than upper layers. FIG7 shows that for A3C the lower layers are harder to complete than the higher layers since for each value of α the MC-hardness decreases from left to right. However, this effect is much smaller for Rainbow (Figure 5) and AlexNet FIG4 ).In nonresidual networks lower convolutional layers typically learn much simpler and more general features that are more task independent BID49. Moreover, noise perturbations of lower layers have a significantly higher impact on the performance of deep learning models since noise grows exponentially through the network layers BID35. Higher level activations are functions of the lower level ones; if a lower layer is reset, all subsequent activations will be invalidated. This could imply that the gradients on the higher layers are incorrect and thus slow down training.4. The absolute number of parameters has a minimal effect on the hardness of model completion. If information content is spread uniformly across the model, then model completion should be a linear function in the number of parameters that we remove. However, the number of parameters in deep models usually vary greatly between layers; the lower-level convolutional layers have 2-3 orders of magnitude fewer parameters than the higher level fully connected layers and LSTMs (see Appendix B).In order to test this explicitly, we performed an experiment on AlexNet both increasing and decreasing the total number of feature maps and fully connected units in every layer by 50%, ing in approximately an order of magnitude difference in terms of parameters between the two models. We found that there is no significant difference in MC-hardness across all threshold levels.5. RL models are harder to complete than SL models. Across all of our experiments, the model completion of individual layers for threshold α = 1 in SL FIG4 and FIG2 is easier than the model completion in RL FIG7, Figure 5, and Figure 6 ). In many cases the same holds from lower thresholds as well. By resetting one layer of the model we lose access to the agent's ability to generate useful experience from interaction with the environment. As we retrain the model, the agent has to re-explore the environment to gather the right experience again, which takes extra training time. While this effect is also present during the training procedure T, it is possible that resetting one layer makes the exploration problem harder than acting from a randomly initialized network.6. When completing RL models access to the right experience matters. To understand this effect better, we allow the retraining procedure access to Rainbow's replay buffer. 
At the start of retraining this replay buffer is filled with experience from the fully trained policy. Figure 5 shows that model completion becomes much easier with access to this replay buffer: the three left bar plots are lower than the three right. This is supported by the benefits of kickstarting BID39, where a newly trained agent gets access to an expert agent's policy. Moreover, this is consistent with findings by BID20, who show performance benefits from adding expert trajectories to the replay buffer. Our results shed some initial light on the model completion problem and its hardness. Our findings include: residual networks are easier to complete than non-residual networks, lower layers are often harder to complete than higher layers, and RL models are harder to complete than SL models. Nevertheless several questions remain unanswered: Why is the difference in MC-hardness less pronounced between lower and higher layers in Rainbow and AlexNet than in A3C? Why is the absolute number of parameters insubstantial? Are there retraining procedures that are faster than T 1? Furthermore, our definition of the hardness of the model completion problem creates an opportunity to modulate the hardness of model completion. For example, we could devise model architectures with the explicit objective that model completion be easy (to encourage robustness) or hard (to increase security when sharing governance through model splitting). Importantly, since Equation 1 can be evaluated automatically, we can readily combine this with architecture search BID50. Our experiments show that when we want to recover 100% of the original performance, model completion may be quite costly: ∼40% of the original training cost in many settings; lower performance levels often retrain significantly faster. In scenarios where a model gets trained over many months or years, 40% of the cost may be prohibitively expensive. However, this number also has to be taken with a grain of salt because there are many possible retraining procedures that we did not try. The security properties of model splitting as a method for shared governance require further investigation: in addition to more effective retraining procedures, an attacker may also have access to previous activations or be able to inject their own training data. Yet our experiments suggest that model splitting could be a promising method for shared governance. In contrast to MPC and HE it has a substantial advantage because it is cost-competitive with normal training and inference. Table 1: AlexNet learning schedule for training and retraining procedures (learning rate: training batches / retraining batches) — 5e−2: 0 / 0; 5e−3: 60e3 / 30e3; 5e−4: 90e3 / 45e3; 5e−5: 105e3 / 72.5e3. TAB1 (learning schedule used for ResNet50; training batches / retraining batches): 1e−1: 0 / not used; 1e−2: 30e3 / 0; 1e−3: 45e3 / 20e3. AlexNet: We train this model for 120e3 batches with a batch size of 256. We apply batch normalization on the convolutional layers and ℓ2-regularization of 1e-4. Optimization is done using Momentum SGD with momentum of 0.9 and the learning rate schedule shown in Table 1. Note that the learning schedule during retraining is 50% faster than during training (for T 1 and T 2). For both retraining procedures T 1 and T 2, we perform the reset for each of the first 5 convolutional layers and the following 2 fully connected layers. TAB3 shows the number of trainable parameters for each of the layers. ResNet50: We perform all training and retraining procedures for 60e3 batches, with a batch size of 64 and ℓ2-regularization of 1e-4.
Optimization is done using Momentum SGD with momentum of 0.9 and the learning rate schedule shown in TAB1. For our experiments, we reinitialize the very first convolutional layer, as well as the first ResNet block for each of the four subsequent network sections. In the 'S4 B1 -W' experiment, we leave out resetting the learned skip connection. Finally, we also reset the last fully connected layer containing the logits. A3C Each agent is trained on a single Atari level for 5e7 environment steps, over 10 seeds. We use the standard Atari architecture consisting of 3 convolutional layers, 1 fully connected layer and 2 fully connected 'heads' for the policy and value function. The number of parameters for each of those layers is shown in TAB5. For optimization, we use the RMSProp optimizer with ε = 0.1, decay of 0.99 and α = 6e-4 that is linearly annealed to 0. For all the other hyperparameters we refer to BID31. Finally, while calculating the reported statistics we removed the following Atari levels due to poor behaviour of the trained agent: Montezuma's Revenge, Venture, Solaris, Enduro, Battle Zone, Gravitar, Kangaroo, Skiing, Krull, Video Pinball, Freeway, Centipede, and Robotank. Rainbow Each agent is trained on a single Atari level for 20e6 environment frames, over 3 seeds. Because the trained agent behaves essentially randomly on them, we remove the following Atari games from our MC-hardness calculations: Montezuma's Revenge, Venture, and Solaris. For our experiments, we use the same network architecture and hyperparameters as reported in BID19 and target the first 3 convolutional layers. TAB6 has the total number of parameters for each of the 3 layers. IMPALA We train a single agent over a suite of 28 DeepMind Lab levels for a total of 1 billion steps over all the environments, over 5 seeds. During training we apply population based training (PBT; BID23) with a population of size 12, in order to evolve the entropy cost, the learning rate, and ε for RMSProp. For language modelling a separate LSTM channel is used. In the results we report, we removed two DeepMind Lab levels due to poor behavior of the trained agent: 'language_execute_random_task' and 'psychlab_visual_search'. All the other hyperparameters are retained from BID11. For our experiments, we reinitialize the first convolutional layer (TAB7).
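To make the layer-reset retraining and the hardness measurements described above concrete, here is a small PyTorch/NumPy sketch. The layer names, the reinitialization scheme, and the exact formalization of MC-hardness (here taken as the fraction of the original training cost needed to recover a fraction α of the original performance, matching the "∼ 40% of the original training costs" discussion) are illustrative assumptions rather than the paper's exact definitions.

```python
import numpy as np
import torch.nn as nn

def reset_layer(model: nn.Module, layer_name: str) -> None:
    """Reinitialize one named submodule of an otherwise fully trained model,
    e.g. reset_layer(alexnet, "features.0") for the first convolutional layer."""
    layer = dict(model.named_modules())[layer_name]
    for m in layer.modules():
        if isinstance(m, (nn.Conv2d, nn.Linear)):
            nn.init.kaiming_normal_(m.weight)  # illustrative choice of init scheme
            if m.bias is not None:
                nn.init.zeros_(m.bias)

def mc_hardness(train_curve, retrain_curve, alpha: float) -> float:
    """Fraction of the original training cost that the retraining procedure
    needs to recover a fraction alpha of the final trained performance
    (1.0 if the threshold is never reached). Curves are sampled at equal steps."""
    target = alpha * train_curve[-1]
    hit = np.nonzero(np.asarray(retrain_curve) >= target)[0]
    return 1.0 if len(hit) == 0 else float(hit[0] + 1) / len(train_curve)
```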
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
H1xEtoRqtQ
We study empirically how hard it is to recover missing parts of trained models
This paper proposes variational domain adaptation, a unified, scalable, simple framework for learning multiple distributions through variational inference. Unlike the existing methods on domain transfer through deep generative models, such as StarGAN and UFDN, variational domain adaptation has three advantages. Firstly, samples from the target are not required. Instead, the framework requires one known source as a prior $p(x)$ and binary discriminators, $p(\mathcal{D}_i|x)$, discriminating the target domain $\mathcal{D}_i$ from others. Consequently, the framework regards a target as a posterior that can be explicitly formulated through Bayesian inference, $p(x|\mathcal{D}_i) \propto p(\mathcal{D}_i|x)p(x)$, as exhibited by a further proposed model, the dual variational autoencoder (DualVAE). Secondly, the framework is scalable to large-scale domains. Just as a VAE encodes a sample $x$ as a mode on a latent space, $\mu(x) \in \mathcal{Z}$, DualVAE encodes a domain $\mathcal{D}_i$ as a mode on the dual latent space, $\mu^*(\mathcal{D}_i) \in \mathcal{Z}^*$, named domain embedding. It reformulates the posterior with a natural pairing $\langle \cdot, \cdot \rangle: \mathcal{Z} \times \mathcal{Z}^* \rightarrow \mathbb{R}$, which can be expanded to uncountably infinite domains, such as continuous domains, as well as to interpolation. Thirdly, DualVAE converges quickly without sophisticated automatic/manual hyperparameter search, in contrast to GANs, as it requires only one additional parameter beyond the VAE. Through numerical experiments, we demonstrate the three benefits with a multi-domain image generation task on CelebA with up to 60 domains, and show that DualVAE records state-of-the-art performance, outperforming StarGAN and UFDN. "...we hold that all the loveliness of this world comes by communion in Ideal-Form. All shapelessness whose kind admits of pattern and form, as long as it remains outside of Reason and Idea, is ugly from that very isolation from the Divine-Thought." Agents that interact in various environments have to handle multiple observation distributions. Domain adaptation BID0 is a methodology employed to exploit deep generative models, such as adversarial learning BID2 and variational inference BID8, that can handle distributions that vary with environments and other agents. Multi-task learning and domain transfer are examples of how the domain adaptation methodology is used. We focus on domain transfer, which involves transferring samples across distributions between domains. For instance, pix2pix BID5 outputs a sample from the target domain that corresponds to the input sample from the source domain. This can be achieved by learning the pair relation of samples from the source and target domains. CycleGAN BID21 transfers samples between two domains using samples obtained from both domains. Similarly, UNIT BID12, DiscoGAN, and DTN BID20 have been proposed in previous studies. However, the aforementioned methods require samples obtained from the target domains and therefore cannot be applied to domains for which direct sampling is expensive or often impossible. For example, the desired, continuous, high-dimensional action in an environment, an intrinsic reward (e.g., preference and curiosity), and the policy of interacting agents other than oneself cannot be sampled from the inside; they can only discriminate a proposed input.
Even for ourselves, the concept of beauty or interest in our conscious is subjective, complex, and difficult to be sampled from the inside, although it is easy to discriminate on the outside. The key concept of variational domain adaptation. a) Given the proposal drawn from the prior, the discriminator discriminates the target domain from the others. Each domain is posterior for the prior N (z|0, 1); further, the distribution in the latent space is observed to be a normal distribution using the conjugate likelihood. b) Domain transfer is represented by the mean shift in the latent space. c) Domain embedding: After training, all the domains can only be represented by vectors µ i.In this study, we propose variational domain adaptation, which is a framework for targets that pose challenges with respect to direct sampling. One solution is multi-domain semi-supervision, which converts the problem to semi-supervised learning, thereby making is possible to perform variational inference. In this supervision, a source domain is regarded as a prior p(x) and a target domain is considered to be a posterior p(x|D i) by referring to the label given by a supervised discriminator p(D i |x) that distinguishes the target domain from others. Our model imitates the behavior of the discriminator and models the target domain using a simple of the Bayesian theorem, p θ (x|D i) ∝ p θ (D i |x)p θ (x). The end-to-end learning framework also makes it possible to learn good prior p θ (x) with respect to all the domains. After the training was completed, the posterior p θ (x|D i) succeeded in deceiving the discriminator p(D i |x). This concept is similar to rejection sampling in the Monte Carlo methods. Further, variational domain adaptation is the first important contribution from this study. The second contribution from this study is a model of dual variational autoencoder (DualVAE), which is a simple extension of the conditional VAE BID9 ), employed to demonstrate our concept of multi-domain semi-supervision. DualVAE learns multiple domains in one network by maximizing the variational lower bound of the total negative KL-divergence between the target domain and the model. DualVAE uses VAE to model the prior p(x) and an abstract representation for the discriminator p(D i |x). The major feature of DualVAE is domain embedding that states that all the posteriors are modeled as a normal distribution N (z|µ i, σ 2) in the same latent space Z using the conjecture distribution of the prior. Here, µ i is the domain embedding that represents the domain D i. This enables us to sample from p θ (x|D i). Our major finding was that the discriminator of DualVAE was a simple inner product between the two means of domain embedding and the VAE output: DISPLAYFORM0 that acts as a natural paring between the sample and the domain. The probabilistic end-to-end model learns multiple domains in a single network, making it possible to determine the effect of transfer learning and to learn data that multi-domains cannot observe from sparse feedback. Domain embedding is a powerful tool and allows us to use VAEs instead of GANs. The third contribution of this study is that DualVAE was validated for use in a recommendation task using celebA BID13. In the experiment, using celebA and face imaging data obtained based on evaluations by 60 users, an image was generated based on the prediction of user evaluation and an ideal image that was determined to be good by multiple users. 
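Since each domain is represented in the latent space as a normal distribution around its embedding, sampling from a target posterior p_θ(x|D_i) reduces to decoding draws from N(μ_i, σ²I), i.e., a mean shift of the prior. The following is a minimal PyTorch-style sketch of that idea; the decoder interface, the fixed σ, and the tensor shapes are illustrative assumptions rather than the authors' implementation.

```python
import torch

def sample_from_domain(decoder, mu_i: torch.Tensor, sigma: float, n: int):
    """Draw n samples from the posterior p(x | D_i): decode latent codes
    drawn from N(mu_i, sigma^2 I), a mean shift of the prior N(0, I)."""
    z = mu_i + sigma * torch.randn(n, mu_i.shape[-1])
    with torch.no_grad():
        return decoder(z)
```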
We demonstrated that an image could be modified to improve its evaluation by interpolating the image, and the images were evaluated using the domain inception score (DIS), which is the score of a model that has learned the preference of each user. We present the beauty inside each evaluator by simply sampling p θ (x|D i). The DIS of DualVAE is higher than that of single-domain models, and the dataset and code are available online. The existing literature related to domain transfer is based on the assumption that samples are obtained from the target domain. For example, pix2pix BID5 can output samples from the target domain that correspond to input samples from the source domain by learning the pair relation between the samples of the source and target domains. CycleGAN BID21, which differs from pix2pix, does not require sample pairs from both domains. Similarly, UNIT BID12, DiscoGAN, and DTN BID20 also do not require sample pairs. Furthermore, because there are few cases in which samples from the source and target domains form a one-to-one pair in the real world, this line of research has been extended to the conversion of one-to-many relationships, including BicycleGAN BID22 and MUNIT BID3. Several studies were conducted to model multiple distributions in a semi-supervised manner. StarGAN BID1, UFDN, and RegCGAN BID14 are extensions of the aforementioned models and are frameworks that can convert source domain samples into samples for various target domains with a single-network structure. However, the problem with these methods is associated with hyperparameter tuning, which arises from the characteristics of adversarial learning. DualVAE is a simple extension of a conditional VAE to the multi-domain situation. Conditional VAEs utilize the VAE for semi-supervised learning. Although the model is quite simple, it is powerful and scalable, making it possible to learn multiple distributions with domain embedding. In fact, we demonstrated that DualVAE quickly converged for more than 30 domains without sophisticated hyperparameter tuning. In the experiment conducted in this study, E ω [J(θ|ω)] was evaluated instead of J(θ|ω) to demonstrate that our method requires less hyperparameter tuning. With regard to n domains D 1,..., D n, and a sample x on an observation space X, the objective of unsupervised domain adaptation is to minimize the KL-divergence between the target distribution and the model, DISPLAYFORM0, over all the domains D i. From the perspective of optimizing θ, minimizing the KL divergence is equivalent to maximizing the cross-entropy. As DISPLAYFORM1, unsupervised domain adaptation can be formulated as a maximization problem for the weighted average of the cross-entropy over the domains: DISPLAYFORM2 where the γ i are non-negative domain weights; if the weights are uniform for all i, the objective function is simply the mean, and if γ i = 0 for certain i's, the domain D i is ignored. The difficulty arises from the fact that it is not possible to directly sample x from p (i), while binary labels can be directly sampled from the likelihood p(D i |x). This challenge was the motivation for considering multi-domain semi-supervision. Multi-domain semi-supervision assumes a prior p(x) and models each domain as a posterior p (i) = p(x|D i). By Bayesian inference, we reformulate the cross-entropy E x∼p (i) [log p θ (x|D i)] in Eq. as follows: DISPLAYFORM0 where DISPLAYFORM1, the objective is identical to: DISPLAYFORM2 where [n] is a uniform distribution over {1, . .
., n} and DISPLAYFORM3 The first term is the likelihood from the discriminator; the second term is the prior learned by a generative model, including VAE; and the last term is the regularizer. Because the equation is intractable, we use Monte Carlo sampling to estimate the function. During the estimation, we initially sample x 1,..., x m from the prior p(x) and subsequently obtain the binary labels y ij ∈ {0, 1} from each discriminator y ij ∼ p(D i |x j). Since the number of labels from supervises is nm, the situation that the sparse labels: k << nm is considered. Further, some discriminators only provide parts of the labels. In the situation, the missing values are 0-padded: DISPLAYFORM4 where ≈ indicates Monte Carlo estimation andȳ = n i=1 m j=1 y ij /k. In the limit of n → ∞, the right side of the equation is identical to the left side. We extended the VAE for multi-domain transfer to demonstrate our concept of multi-domain semisupervision. Our proposed model, dual variational autoencoder (DualVAE), models each domain p i (x) as a posterior distribution p(x|D i) that is similar to that observed in a conditional VAE. FIG1 depicts the VAE and DualVAE graphical models. The major feature of DualVAE is domain embedding, where all the domains and the prior share the same latent space Z. For the prior distribution, p(z) = N (z|0, I) and p(z|D i) = N (z|µ i, σ 2 I), where µ i ∈ Z is an embedding and I is a unit matrix in Z. In the following, we denote σ 2 I = σ 2 without loss of generality. The domain D i is characterized only by its embedding µ i. Here, µ 0 is the embedding of the prior that can be assumed to be µ 0 = 0.Training DualVAE is virtually equivalent to simultaneously training (n + 1) VAEs which share a parameter, including the prior. Using conjecture distribution for the prior p(z), the posterior distribution is observed to be a normal distribution. Therefore, all the posteriors are VAEs. The joint distribution can be given as follows: DISPLAYFORM0 A VAE BID8 ) is used to model the prior p(x), a deep generative model that employs an autoencoder to model the hidden variable as random variable. The benefit of a VAE is that it can be used to model each distribution as a normal distribution in Z, achieved by maximizing the variational lower bound of log p(x) as follows: DISPLAYFORM1 where φ, w ∈ θ is a parameter of the encoder and the decoder, respectively. The objective is to learn a pair of the encoder p w (x|z) and the decoder q φ (z|x) to maximize L(x). z acts as a prior DISPLAYFORM2 The lower bound L θ (x) is derived using the reconstruction error and penalty term as the KL divergence between the model and the prior p(z). Further, the gradient of the reconstruction term can be calculated using the Monte Carlo method, and because the construction term is the KL divergence between two normal distributions, it can be analytically calculated. Right: The network structure of DualVAE. The label is structured as the inner product of latent z θ and domain embedding z i. DISPLAYFORM3 Using the definition and the Bayesian theorem, log f θ (D i |x) can be written as follows: DISPLAYFORM4 The equation above indicates log f θ (D i |x) can be written simply as the inner product between µ i and µ φ (x), and the objective can be written as follows: DISPLAYFORM5 where U = (µ 1, . . ., µ n) T, µ * U = y T U/n and α = σ −2. Interestingly, it only requires one additional parameter U except a hyperparameter α. U is named as a domain embedding matrix, representing the set of the domain prototypes. 
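A compact sketch of the DualVAE structure described above: a standard VAE encoder and decoder plus a single domain embedding matrix U, with the per-domain discriminator score given by the inner product between the encoder mean and the domain embedding. This is an illustrative PyTorch reading of the architecture figure, not the authors' exact implementation; the encoder/decoder interfaces and layer sizes are placeholders.

```python
import torch
import torch.nn as nn

class DualVAE(nn.Module):
    def __init__(self, encoder: nn.Module, decoder: nn.Module,
                 n_domains: int, latent_dim: int):
        super().__init__()
        self.encoder = encoder          # maps x -> (mu, log_var)
        self.decoder = decoder          # maps z -> reconstruction
        # Domain embedding matrix U: one prototype mu_i per domain.
        self.U = nn.Parameter(torch.zeros(n_domains, latent_dim))

    def forward(self, x):
        mu, log_var = self.encoder(x)
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
        x_rec = self.decoder(z)
        # Discriminator scores: <mu_phi(x), mu_i> for every domain i.
        scores = mu @ self.U.t()
        return x_rec, mu, log_var, scores
```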
Domain embedding makes it possible to extend our method to infinite domains such as a continuous domain. In fact, µ * U (y) ∈ Z * represents a prototype of mixed domains indicated by y in a domain latent space Z *, a dual space of Z. Note that dim Z = dim Z *. The overall parameters of DualVAE is θ = (w, φ, U), where w is the encoder's, parameterφ is the decoders's parameter, and U is the domain embedding matrix. While a typical VAE does not assume any distribution of w, φ, p(U) is set as an exponential distribution with an additional hyperparameter β ∈ (0, ∞) to obtain sparse representation: DISPLAYFORM0 As the terms except for the first are independent of θ, we ignore them later as constants. By putting together the prior, the discriminator, and the regularizer, the variational lower bound of the point-wise objective of DualVAE J(θ|x, y) can be written as a surprisingly simple form: DISPLAYFORM0 where u, v = v T u. Consequently, a DualVAE maximizes a duality paring ·, ·: Z × Z * → R between the sample latent space Z = Z φ (X) and the domain latent space DISPLAYFORM1 n. Note that the objective requires only two additional hyperparameters in addition to the VAE. If α, β → 0, it is equivalent to a single VAE. Intuitively, 1/α and 1/β control variance and bias of the domain embeddings, respectively. The training algorithm of the DualVAE is shown in Algorithm 1. Require: observations (x j) m j=1, batch size M, VAE/encoder optimisers: g, g e, hyperparameters α, β, and the label matrix Y = (y j) m j=1. Initialize encoder, decoder and domain embedding parameters: φ, w, U repeat DISPLAYFORM0 Based on an original numerical experiment in domain adaptation, we confirmed that the DualVAE learns multiple distributions both qualitatively and quantitatively. Similar to the case of the existing methods, domain adaptation was confirmed via an image-generation task in this study. First, we performed A facial image recommendation task, which is a content-based recommendation task for generating the preferences of users. Second, we performed the standard domain transfer task with 40 domains in CelebA BID13 and we showed that DualVAE outperformed two state-ofthe-art methods through GAN and VAE.The objective of the first task was to generate an image that was preferred by a specific user. We set the input space X as the raw image, the prior p(x) as faces, and the domain D i as a user. We used the dataset of CelebA and SCUT-FBP5500 as the samples from the prior. The objective of the task was to generate samples from p θ (x|D i), exhibiting the images that were preferred by a user. We used label y i ∼ p(D i |x) as the existing dataset of SCUT-FBP5500 with 5,500 faces and 60 users for the content-based recommendation. The purpose of the second task was to transfer samples from p(x) into samples from p θ (x|D i). We set the prior p(x) as face images and the posterior p θ (x|D i) as face images with certain attributes of CelebA. We used label y i ∼ p(D i |x) as the attribute of CelebA.The revealed that the DualVAE successfully learned the model of the target distribution p θ (x|D i) both quantitatively and qualitatively. Quantitatively, we confirmed that the discriminator learned the distribution by evaluating the negative log-likelihood loss, − log p θ (D i |x). We evaluated the samples using the domain inception score (DIS), which is the score for evaluating the transformation of images into multiple target domains. Notably, the DIS of the DualVAE was higher than several models. 
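One plausible reading of the training step in Algorithm 1, assuming the duality pairing enters the loss with weight α and the exponential prior on U contributes an L1 penalty with weight β (the normalization by n is absorbed into α here). The exact lower bound is not reproduced above, so treat the loss below as an assumption-laden sketch rather than the authors' precise objective; it reuses the DualVAE sketch from earlier.

```python
import torch
import torch.nn.functional as F

def dualvae_step(model, optimizer, x, y, alpha=1.0, beta=1e-3):
    """One gradient step: VAE ELBO plus the pairing between encoder means and
    (possibly sparse, 0-padded) domain labels y, minus an L1 penalty on U."""
    x_rec, mu, log_var, scores = model(x)
    rec = F.mse_loss(x_rec, x, reduction="sum") / x.shape[0]
    kl = -0.5 * torch.mean(torch.sum(1 + log_var - mu**2 - log_var.exp(), dim=1))
    pairing = (y * scores).sum(dim=1).mean()        # <mu_phi(x), mu*_U(y)>
    loss = rec + kl - alpha * pairing + beta * model.U.abs().sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```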
Qualitatively, we demonstrated that the image could be transferred to improve the evaluation by interpolating the image. We further exhibited several beautiful facial images that the users were conscious of by decoding each domain embedding µ i, which can be considered as the projection of the ideal from inside the users. In addition, 40 domain-transferred images using the dataset of CelebA by the proposed method was better than the images by other models. CelebA CelebA BID13 comprises approximately 200,000 images of faces of celebrities with 40 attributes. SCUT-FBP5500 SCUT-FBP5500 BID10 comprises 5500 face images and employs a 5-point scale evaluation by 60 people in terms of beauty preference. The face images can be categorized as Asian male, Asian female, Caucasian male, and Caucasian female, with 2000, 2000, 750, 750 images, respectively. The quantitative of the experiment can be demonstrated by evaluating the generated images by several models using a Domain Inception Score (DIS). Although the Inception Score BID18 ) is a score for measuring generated images, it can only measure the diversity of the images, and it is not for evaluating domain transfer of the images. Therefore, we proposed using a DIS, which is a score for evaluating the transformation of images into multiple target domains. The DIS is a scalar value using the output of Inceptionv3 BID19 pretrained to output the domain label, and it is evaluated by the sum of two elements. The first is whether the domain transfer of the original image has been successful (transfer score), and the second is whether the features other than the transferred domain are retained (reconstruction score). A more detailed explanation of the DIS is provided in the appendix. Comparison of a DualVAE and a single-domain VAE A DualVAE can transform the image of the source domain into images of multiple target domains with one model. However, considering a simpler method, it is also possible to transfer the image of the source domain to the images of the multiple target domains by creating multiple models. We will call each of these models a Single Domain VAE (SD-VAE). Since an SD-VAE is a model that converts the image of one source domain to the image of one target domain, models corresponding to the number of target domains are required, and thus, 60 models required training. We demonstrated that the DualVAE performance was equal to or higher than that of the SD-VAE using the DIS. With respect to the output images of these two models, the one with a higher DIS value was considered to be capable of outputting ideal images. We calculated the DIS of 200 test images transferred by these two model. The DIS of the DualVAE was -0.0185, whereas that of the SD-VAE was -0.0282. Thus, the DIS of the DualVAE was 0.01 higher than that of SD-VAE.Comparison of DualVAE and several models The DualVAE was compared with several models capable of performing image-to-image translations for multiple domains using a single model. In this experiment, only the celebA dataset and the attributes of the dataset were used as the domain. Also, the input image was resized to 128 × 128. In each model, the dimension of the latent variable and the learning rate were randomly changed, the DIS was calculated several times, and the average and the standard deviation were obtained. The DualVAE obtained a higher DIS than the other models. We transferred the images by interpolating between the original and the target domain images. 
We calculated the following vector w i: DISPLAYFORM0 Here, w i was constrained by giving it the same norm as z to retain as much of the original features as possible. By changing λ and decoding w i, five images were determined to represent unideal to ideal reconstructions for each of the three sample users (i = 14, 18, and 32), and interpolation was performed to approach the ideal image x i in FIG2. In addition, we have visualized transferred images of the 40 attributes by the proposed method and other models in FIG3.3. Although StarGAN and UFDN retained the characteristics of the original image considerably, it was qualitatively understood that domain transfer was not good especially when the number of domains was large like 40 attributes. Variational domain adaptation, which is a unified framework for learning multiple distributions in a single network, is proposed in this study. Our framework uses one known source as a prior p(x) and binary discriminator p(D i |x), thereby discriminating the target domain D i from the others; this is in contrast with the existing frameworks in which samples undergo domain transfer through deep generative models. Consequently, our framework regards the target as a posterior that is characterized through Bayesian inference, p(x|D i) ∝ p(D i |x)p(x). This was exhibited by the proposed DualVAE. The major feature of the DualVAE is domain embedding, which is a powerful tool that encodes all the domains and the samples obtained from the prior into normal distributions in the same latent space as that learned by a unified network through variational inference. In the experiment, we applied our framework and model to a multi-domain image generation task. celebA and face image data that were obtained based on evaluation by 60 users were used, and the revealed that the DualVAE method outperformed StarGAN and UFDN.Several directions should be considered for future research. First, we intend to expand DualVAEs for learning in complex domains, such as high-resolution images with several models, for example, glow BID7. Second, we will perform an experiment to consider wider domains with respect to beauty. We expect that our proposed method will contribute to society in a number of ways and will help to deal with the paradigm of multiple contexts-multimodal, multi-task, and multi-agent contexts. We visualized the latent space Z of VAE and DualVAE. VAE differs from DualVAE methodology because evaluation regression is not conducted during training. For each model, we can achieve 5500 latent vectors of 63 dimensions by encoding 5500 images from SCUT-FBP5500. We obtained a scatter plot after using UMAP BID15 to reduce the number of dimensions to two. The average score is indicated by colors ranging from red to blue. As can be observed from the UMAP of DualVAE, the gradient of the score is learned, and it represents the user vector(domain embedding vector) in FIG4. Although the Inception Score BID18 ) is a score for measuring generated images, it can only measure the diversity of the images, and it is not for evaluating domain transfer of the images. Therefore, we proposed using a DIS, which is a score for evaluating the transformation of images into multiple target domains. DIS is a scalar value, and it is evaluated by the sum of two elements. 
The first is whether the domain transfer of the original image has been successful (transfer score), and the second is whether the features other than the transferred domain are retained (reconstruction score).We calculated the DIS using Algorithm 2. First, we assumed that there were N domains and we knew which domain each image belongs to. We fine-tuned Inceptionv3 BID19 using images X as inputs and domains as outputs. To enable the model to classify the images as the domains, we replaced the last layer of the model in a new layer which had N outputs. Second, we transferred test images into N domains using Equation 10 and loaded the transferred images into the Inceptionv3 pretrained above. Through this process we got N × N matrix for every original image, because one image was transferred into N domains and each domain image was mapped to N-dim vector. We then mapped the original image into N-dim vector using Inceptionv3, and subtracted this vector from each row of the abobe N × N matrix. We named this matrix M. The key points are the diagonal elements of M should be large because we transferred the original image into the diagonal domains, and the off-diagonal elements of M should be small because the transferred images should preserve original features as possible. In a later subsection, we will directly visualize these two elements and evaluate models. Require: observation x ∈ X, Inceptionv3 f, domain transfer model m. DISPLAYFORM0 In the Algorithm, abs denotes taking the absolute value, diag denotes taking the diagonal elements of the matrix, notdiag denotes taking the non-diagonal elements, avg denotes taking the mean of multiple values. This section shows further of TAB0, the experimental for domain adaptation over 40 domains made from CelebA. In the experimental setting above, we use attributes in CelebA as a domain, the setting is used by several studies with domain adaptation BID1. The shows DualVAE only learns 40 domains in one network, which indicates DualVAE is an easy way to learn over 10 domains. Next, we show several experimental when we change the parameters of the models. Because StarGAN uses GAN, the learning rate parameter is not robust, thus the learning is not conducted well. Moreover, celebA has 40 domains which are too many for StarGAN, and this can also be considered as one of the reasons that learning is not conducted well. Because reconstruction is conducted well, rs in Algorithm 2 becomes larger than that of DualVAE. On the other hand, domain transfer is not conducted properly, ts in Algorithm 2 becomes extremely small compares to that of DualVAE. Therefore, as we can see from TAB0, DIS becomes a very small value. Next, we conduct domain transfer experiments using the MNIST dataset. In this experiment, we demonstrated that it is possible to transfer the image into another label (domain), while not compromising the style of the original image. We also plotted the relation with DIS when labels are sparse. Moreover, we showed in subsection I.1 it is possible to transfer to another domain step by step. DISPLAYFORM0 By reducing the dimensions of the 60 domain embedding vectors from 63 to 2 using UMAP BID15, the domain embedding vectors were visualized by means of a scatter plot. Furthermore, x i was visualized by decoding samples from the domain distribution. Figure 9: Scatter plot of the domain embedding vectors, and several decoded images of the samples from each domain. Six z i from the target domain distribution and output x i were decoded. 
Furthermore, z 0 from the source domain data distribution and output x 0 was also decoded. In this chapter, we show it is possible to conduct arithmetic operations among domains. For example, suppose we learned the embedding vector of a charming image domain for each single person. We can output the charming image for the group of people as an entity without learning simply by taking the average value of the domain embedding vectors. Denote Community preference as f I, personal evaluation model as DISPLAYFORM0 where,μ = (1/|I|) i∈I µ i, which is the average of domain embedding vectors. Moreover, i is the index denoting the domain (person), I is the number of domains, and z(x) is the latent vector of image x., since the domain embedding vectors are linearly functional, by taking the inner product of the average of these vectorsμ and the latent vector z, the average of personal evaluation (evaluation of the community) can be obtained. Therefore, by substituting µ i forμ in Equation 10, we can reconstruct the face images with high a high degree of community evaluation. We reconstructed for higher (and lower) evaluation using 10 face images from both genders. Each image enjoys higher evaluation to the right. We can see that gradually the caving becomes deep, the beard disappears, the eyes become bigger and the outline becomes sharp FIG0. The section tells the proposed method, DualVAE, is a natural generalization from probabilistic Matrix Factorization (PMF) BID17, proposed in ten years ago. PMF is used in several application area, mainly collaborative filtering algorithm, which are typical recommendation algorithms. PMF learns the user matrix U ∈ R K×N and the item matrix V ∈ R K×J that can restore the evaluation matrix. Here, r ij is the evaluation value of item j by user i, the evaluation matrix is denoted as R ∈ R I×J. Moreover, the column vector of the user matrix U and the item matrix V are denoted as u i,v j respectively. K is the dimension of these vectors, N is the number of users, J is the number of items. I ij is the indicator function that takes the value 1 when evaluation r ij exists and 0 otherwise. The log likelihood of PMF is DISPLAYFORM0 Our objective is to find the u i, v j that maximizes the above. Relationship to DualVAE DualVAE is an end-to-end coupling of VAE and PMF. We could see DualVAE as PMF extended to a generative model. u i in Equation 12 corresponds to the domain embedding vector in DVAE, v j corresponds to the latent vector in DVAE, r ij corresponds to the likelihood that item j belongs to domain i. We experimentally show that the DualVAE outperformed the non-end-to-end coupling. We compared two models. One is the model trained to regress evaluation of the image end-to-end by calculating inner product of hidden representation of VAE and domain embedding (DVAE). The other is the model which learns hidden representation of VAE followed by learning to regress evaluation by inner product like above (VAE-PMF). We used SCUTFBP-5500 FIG0 dataset, and validated it into 5000 images with 60 evaluators and 500 test images with 60 evaluators. We quantitatively compared these two models in terms of Root Mean Square Error (RMSE) of model prediction and reconstruction error of test images. The suggests that DualVAE achieved a much smaller RMSE. Moreover, though DualVAE constrained its hidden representation to regress evaluation, the reconstruction error was almost the same as VAE-PMF. This suggests that DualVAE can generate as clear images as vanilla VAE. 
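As a concrete reading of the DIS computation in Algorithm 2 above: transfer a test image into each of the N domains, map every transferred image (and the original) to an N-dimensional domain-probability vector with the fine-tuned Inceptionv3, subtract the original's vector from each row, and then score the diagonal (transfer) against the off-diagonal (reconstruction) entries. The exact way the two scores are combined is not fully specified above, so the final line is an assumption; sketched with NumPy.

```python
import numpy as np

def domain_inception_score(probs_transferred, probs_original):
    """probs_transferred: (N, N) array, row i holds the Inceptionv3 domain
    probabilities of the image transferred into domain i.
    probs_original: (N,) array for the untouched original image."""
    M = probs_transferred - probs_original               # subtract from each row
    ts = np.mean(np.diag(M))                             # transfer score: large is good
    off_diag = M[~np.eye(len(M), dtype=bool)]
    rs = -np.mean(np.abs(off_diag))                      # reconstruction score: small |.| is good
    return ts + rs                                       # assumed combination of the two
```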
Figure 11: RMSE and reconstruction loss. DualVAE is far superior to VAE in classification accuracy, and there is almost no difference in reconstruction error between them. In addition to generalization capability, another benefit from PMF is robustness to sparsity, as PMF is robust to a matrix with many missing values. We will experimentally demonstrate that DualVAE is also robust with respect to sparse labels. We calculate the rs and ts scores obtained when applying Algorithm 2 to 160 celebA test images, and plot the figure below while changing the missing ratio of celebA's domain labels and the λ in Equation 10. From FIG0, it is possible to conduct domain transfer while keeping the characteristics of the original images (upper-right plots). Moreover, the method is robust to sparseness of the domain labels, and the DIS does not drop even when 90% of the labels are missing. On the other hand, we show that StarGAN is not as robust as DualVAE with respect to sparseness. When 90% of the domain labels are missing, StarGAN cannot learn at all and generates identical images (sub-figure (b), s = 0.9: all generated images are identical and domain transfer is not properly conducted). We conducted a comparison experiment with the existing methods when changing α (= σ −2) in Equation 9. Here, the number of domains was set to 40. As can be seen below, it turns out that the performance of DualVAE is robust to α. This section describes the three models used in the domain adaptation tasks over three types of domains: environment, attribute and class. Environment First, we describe the experimental setting for domain transfer to the ideal image of each individual. We assumed that the beauty criterion required for evaluating the facial images depends on the gender of the person in the target image. Therefore, we added the gender information to the images. For this purpose, we applied CGAN BID16 to the VAE. We normalized the scoring to [−1, 1] to accelerate the learning. Subsequently, we considered the specific model structure of DualVAE. Both the input and output images were RGB images, x ∈ R 256×256×3. We used convolution networks for the encoder, with stride 2 for the convolutions and no pooling. Convolution, batch normalization BID4, and LeakyReLU were repeated four times and were subsequently connected to fully connected layers. Further, after batch normalization and LeakyReLU layers, a 63-dimensional latent variable was obtained. The decoder exhibited a completely symmetric shape with deconvolution layers instead of convolution layers. Furthermore, as the gender attribute, we set 0 as female and 1 as male. We added an image x ∈ R 256×256×1 comprising 0 or 1 data as an input to the encoder, and a scalar of 0 or 1 for gender to the latent variable, which was the input to the decoder. The detailed structure is in Structure A of TAB2. We optimized DualVAE on SCUT-FBP5500. Because there were no face evaluation data in celebA, we only used it to optimize the VAE. Learning alternated between these two datasets. We show image examples of SCUT-FBP5500 BID10. From FIG0, we can see that the evaluation value depends on each person. Attribute Next, in the comparative experiment with several models, domain transfer was performed with only the celebA data and with 40, 20, 10, and 5 domains. We experimented with several parameters of the models. In particular, the dimensions of the latent variable and the learning rates were randomly selected. Both the input and output images were RGB images, x ∈ R 128×128×3.
The detailed structure is in Structure B of TAB2. Class Finally, we describe the experimental setting of domain transfer on the MNIST dataset. This experimental result is presented in subsection C.2. Both the input and output images were grayscale images, x ∈ R 28×28×1. The detailed structure is in Structure C of TAB2. The figures below show results from domain adaptation performed by DualVAE on randomly sampled images from two datasets: MNIST and CelebA. Figure 18: DualVAE stably transfers samples across 10 domains while domain-irrelevant features (e.g., style) are kept.
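As a small illustration of the domain arithmetic discussed in the appendix above, the evaluation by a community I of members reduces to the inner product between an image's latent code and the average of the members' domain embeddings. A NumPy sketch, with shapes stated as assumptions:

```python
import numpy as np

def community_score(z_x: np.ndarray, U: np.ndarray, members) -> float:
    """f_I(x) = <z(x), mean_{i in I} mu_i>: the average of the personal
    evaluation models of the community members.
    z_x: latent code of the image, shape (d,); U: embeddings, shape (n, d)."""
    mu_bar = U[list(members)].mean(axis=0)   # average domain embedding
    return float(z_x @ mu_bar)
```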
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
ByeLmn0qtX
This paper proposes variational domain adaptation, a unified, scalable, simple framework for learning multiple distributions through variational inference
We propose a new method to train neural networks based on a novel combination of adversarial training and provable defenses. The key idea is to model training as a procedure which includes both the verifier and the adversary. In every iteration, the verifier aims to certify the network using convex relaxation while the adversary tries to find inputs inside that convex relaxation which cause verification to fail. We experimentally show that this training method is promising and achieves the best of both worlds – it produces a model with state-of-the-art accuracy (74.8%) and certified robustness (55.9%) on the challenging CIFAR-10 dataset with a 2/255 L-infinity perturbation. This is a significant improvement over the currently known best of 68.3% accuracy and 53.9% certified robustness, achieved using a 5 times larger network than our work. The discovery of adversarial examples in deep learning has increased the importance of creating new training methods which produce accurate and robust neural networks with provable guarantees. Existing work: adversarial and provable defenses Adversarial training provides a basic framework which augments the training procedure with adversarial inputs produced by an adversarial attack. Prior work instantiated adversarial training using a strong iterative adversary and showed that this approach can train models which are highly robust against the strongest known adversarial attacks. This method has also been used to train robust ImageNet models. While promising, the main drawback of the method is that, when instantiated in practice via an approximation of an otherwise intractable optimization problem, it provides no guarantees – it does not produce a certificate that there are no possible adversarial attacks which could potentially break the model. To address this lack of guarantees, a recent line of work on provable defenses has proposed to train neural networks which are certifiably robust under a specific attacker threat model. However, these guarantees come at the cost of a significantly lower standard accuracy than models trained using adversarial training. This setting raises a natural question – can we leverage ideas from both adversarial training techniques and provable defense methods, so as to obtain models with high accuracy and certified robustness? This work: combining adversarial and provable defenses In this work, we take a step towards addressing this challenge. We show that it is possible to train more accurate and provably robust neural networks using the same convex relaxations as those used in existing, state-of-the-art provable defense methods, but with a new, different optimization procedure inspired by adversarial training. Our optimization works as follows: (i) to certify a property (e.g., robustness) of the network, the verifier produces a convex relaxation of all possible intermediate vector outputs in the neural network, then (ii) an adversary searches over these (intermediate) convex regions in order to find what we refer to as a latent adversarial example – a concrete intermediate input contained in the relaxation that, when propagated through the network, causes a misclassification that prevents verification, and finally (iii) the resulting latent adversarial examples are incorporated into our training scheme using adversarial training. Overall, we can see this method as bridging the gap between adversarial training and provable defenses (it can conceptually be instantiated with any convex relaxation).
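For reference, the adversarial-training baseline that this method builds on uses a standard L∞ PGD attack as its inner maximization. The following is a generic PyTorch sketch of that attack, not the paper's method; the step size, number of steps, and the [0, 1] image clamp are generic choices rather than the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps, step, n_steps):
    """Standard L-infinity PGD: maximize the loss within the eps-ball around x,
    projecting back onto the ball (and the valid image range) after each step."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(n_steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + step * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
        delta.data = (x + delta.data).clamp(0.0, 1.0) - x   # keep images in [0, 1]
    return (x + delta).detach()
```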
We experimentally show that the method is promising and results in a neural network with state-of-the-art 78.8% accuracy and 58.1% certified robustness on the challenging CIFAR-10 dataset with a 2/255 L ∞ perturbation (the best known existing results are 68.3% accuracy and 53.9% certified robustness, using a 5 times larger network). • A new method we refer to as layerwise adversarial training which can train provably robust neural networks and conceptually bridges the gap between adversarial training and existing provable defense methods. • An instantiation of layerwise adversarial training using the linear convex relaxations used in prior work (accomplished by introducing a projection operator). • Experimental results showing layerwise adversarial training can train neural network models which achieve both state-of-the-art accuracy and certified robustness on CIFAR-10 with a 2/255 L ∞ perturbation. Overall, we believe the method presented in this work is a promising step towards training models that enjoy both higher accuracy and higher certification guarantees. An interesting item for future work would be to explore instantiations of the method with other convex relaxations than the one considered here. We now discuss some of the closely related work on robustness of neural networks. Heuristic adversarial defenses After the first introduction of adversarial examples, defense mechanisms to train robust neural networks were built based on the inclusion of adversarial examples into the training set. Models trained using adversarial training with projected gradient descent (PGD) were shown to be robust against the strongest known attacks. This is in contrast to other defense mechanisms which have been broken by new attack techniques. While models trained using adversarial training achieve robustness against strong adversaries, there are no guarantees that the model is robust against any kind of adversarial attack under the threat model considered. Provable adversarial defenses Another line of work proposes to learn classifiers which come with robustness guarantees. These approaches are based on linear or semidefinite relaxations, hybrid zonotope or interval bound propagation. While these approaches obtain robustness guarantees, the accuracy of these networks is relatively small, which limits the practical use of these methods. There has also been recent work on certification of general neural networks, not necessarily trained in a special way. These methods are based on SMT solvers, mixed-integer linear programs, abstract interpretation, restricted polyhedra, or combinations of those. Another line of work proposes to replace neural networks with a randomized classifier which comes with probabilistic guarantees on its robustness. While these approaches scale to larger datasets such as ImageNet (although with probabilistic instead of exact guarantees), their bounds come from the relationship between L 2 robustness and the Gaussian distribution. In this paper, we consider the general verification problem where the input region is not necessarily limited to an L p ball but can be an arbitrary convex set. In this work we consider a threat model where an adversary is allowed to transform an input x ∈ R d0 into any point from a convex set S 0 (x) ⊆ R d0. For example, for a threat model based on L ∞ perturbations with radius ε, the convex set would be defined as S 0 (x) = {x' ∈ R d0 | ||x' − x|| ∞ ≤ ε}. Figure 1: An iteration of layerwise adversarial training.
Latent adversarial example x 1 is found in the convex region C 1 (x) and propagated through the rest of the layers in a forward pass which is shown with the blue line. During backward pass, gradients are propagated through the same layers, shown with the red line. Note that the first convolutional layer does not receive any gradients. We now describe our layerwise adversarial training approach which yields a provable defense that bridges the gap between standard adversarial training and existing provable defenses. Motivation: latent adversarial examples Consider an already trained neural network model h θ which we would like to certify using convex relaxations. A fundamental issue here is that certification methods based on convex relaxations can struggle to prove the target property (e.g., robustness) due to the iterative accumulation of errors introduced by the relaxation. More precisely, assume the neural network actually satisfies the property from Equation 1 for an input x, meaning that. Naturally, this also implies that the neural network behaves correctly in the latent space of its first hidden layer in the region S 1 (x). Formally, this means that c T h 2:k θ (x 1)+d < 0, ∀x 1 ∈ S 1 (x). However, if one would use a certification method which replaces the region S 1 (x) by its convex relaxation C 1 (x), then it is possible that we would fail to certify our desired property. This is due to the fact that there may exist an input Of course, we could repeat the above thought experiment and possibly find more violating latent inputs in the set C i (x) \ S i (x) of any hidden layer i. The existence of points found in the difference between a convex relaxation and the true region is a fundamental reason for the failure of certification methods based on convex approximations. For convenience, we refer to such points as latent adversarial examples. Next, we describe a method which trains the neural network in a way that aims to minimize the number of latent adversarial examples. Layerwise provable optimization via convex relaxations Our key observation is that the two families of defense methods described earlier are in fact different ends of the same spectrum: methods based on adversarial training maximize the cross-entropy loss in the first convex region C 0 (x) while provable defenses maximize the same loss, but in the last convex region C k (x). Both methods then backpropagate the loss through the network and update the parameters using SGD. However, as explained previously, certification methods may fail even before the last layer due to the presence of latent adversarial examples in the difference of the regions C i (x) and S i (x). A natural question then is -can we leverage adversarial training so to eliminate latent adversarial examples from hidden layers and obtain a provable network? To this end, we propose adversarial training in layerwise fashion. The initial phase of training is equivalent to adversarial training as used by. In this phase in the inner loop we repeatedly find an input in C 0 (x) which maximizes the cross-entropy loss and update the parameters of the neural network so to minimize this loss using SGD. Note that the outcome of this phase is a model which is highly robust against strong multi-step adversaries. However, certification of this fact often fails due to the previously mentioned accumulation of errors in the particular convex relaxation being used. The next step of our training method is visually illustrated in Figure 1. 
Here, we propagate the initial convex region through the first layer of the network and obtain the convex relaxation C 1 (x). We then solve the optimization problem to find a concrete point x 1 inside of C 1 (x) which produces the maximum loss when this point is propagated further through the network (this forward pass is shown with the blue line).

Algorithm 1: Layerwise adversarial training via convex relaxations. Data: k-layer neural network h θ, training set (X, Y), learning rate η, step size α, inner steps n. Result: trained network parameters. (The algorithm body is abbreviated in this extraction; it performs the inner update in parallel n times and, after each stage, freezes the parameters θ l+1 of layer l + 1.)

Finally, we backpropagate the final loss (red line) and update the parameters of the network so as to minimize the loss. Critically, we do not backpropagate through the convex relaxation in the first layer as standard provable defenses do. We instead freeze the first layer and stop backpropagation after the update of the second layer. Because of this, our optimization problem is significantly easier - the neural network only has to learn to behave well on the concrete points that were found in the convex region C l (x). This can be viewed as an extension of the robust optimization method that was found to work well in practice. We then proceed with the above process for later layers. Formally, this training process amounts to (approximately) solving the following min-max optimization problem at the l-th step: Note that for l = 0 this formulation is equivalent to the standard min-max formulation in Equation 2 because C 0 (x) = S 0 (x). Our approach to solving this min-max optimization problem for every layer l is shown in Algorithm 1. We initialize every batch by random sampling from the corresponding convex region. Then, in every iteration we use projected gradient descent (PGD) to maximize the inner loss in Equation 3. We first update x j in the direction of the gradient of the loss and then project it back to C l (x j) using the projection operator Π. Note that this approach assumes the existence of an efficient projection method for the particular convex relaxation the method is instantiated with. In the next section, we show how to instantiate the training algorithm described above for a particular convex relaxation which is generally tighter than a hyperrectangle and for which we derive an efficient projection operation. So far we described the general approach of layerwise adversarial training. Now we show how to instantiate it for a particular convex relaxation based on linear approximations. If one would instead use the interval approximation as the convex relaxation, then all regions C l (x) would be hyperrectangles and projection to these sets is fast and simple. However, the interval relaxation provides a coarse approximation, which motivates the need to train with relaxations that provide tighter bounds. Thus, we consider linear relaxations which are generally tighter than those based on intervals. In particular we leverage the same relaxation which was previously proposed in prior work as an effective way to certify neural networks. Here, each convex region is represented as a set C l (x) = {a l + A l e | e ∈ [−1, 1] m l }. The vector a l represents the center of the set and the matrix A l represents the affine transformation of the hypercube [−1, 1] m l. The initial convex region C 0 (x) is represented using a 0 = x and a diagonal matrix A 0 = I d0. Propagation of these convex regions through the network is out of the scope of this paper - a full description can be found in the works referenced above.
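To make the region representation concrete, here is a small PyTorch sketch of the set C_l(x) = {a_l + A_l e | e ∈ [−1, 1]^{m_l}}, of the random initialization used at the start of each batch in Algorithm 1, and of the entry-wise bounds it induces. The dense matrix representation and the shapes are illustrative; in practice A_l is kept sparse or implicit, as discussed below.

```python
import torch

class LinearRegion:
    """C(x) = { a + A e | e in [-1, 1]^m }: center a of shape (d,) and
    error-term matrix A of shape (d, m)."""
    def __init__(self, a: torch.Tensor, A: torch.Tensor):
        self.a, self.A = a, A

    def sample(self) -> torch.Tensor:
        """Random concrete point, used to initialize the inner PGD loop."""
        e = torch.empty(self.A.shape[1]).uniform_(-1.0, 1.0)
        return self.a + self.A @ e

    def bounds(self):
        """Entry-wise lower/upper bounds: a minus/plus the row-wise L1 norm of A."""
        r = self.A.abs().sum(dim=1)
        return self.a - r, self.a + r
```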
At a high level, the convolutional and fully connected layers are handled by multiplying A l and a l by appropriate matrices. To handle the ReLU activation, for ReLU units which cross 0, we apply a convex relaxation which amounts to multiplying A l and a l by appropriately chosen scalar values, depending whether the ReLU is activated or not. Using this relaxation of ReLU, we can recursively obtain all convex regions C l (x). In practice, A l e can be computed without explicitly constructing matrix A l because A l e = W l Λ l−1 W l−2 · · · M 0 e. Then we can perform matrix-vector multiplication right to left to obtain vector A l e. We provide more detailed description of this propagation in Appendix A. Projection to linear convex regions To use our training method we now need to instantiate Algorithm 1 with a suitable projection operator Π C l (x). The key insight here is that the vector x ∈ C l (x) is uniquely determined by auxiliary vector e ∈ [−1, 1] m l where x = a l + A l e. Then instead of directly solving for x which requires projecting to C l (x), we can solve for e instead which would uniquely determine x. Crucially, the domain of e is a hyperrectangle [−1, 1] m l which is easy to project to. To visualize this further we provide an example in Figure 2. The goal is to project the red point x in the right picture to the convex region C l (x). To project, we first perform change of variables to substitute x with e and then project e to the square [−1, 1] × [−1, 1] to obtain the blue point Π(e) on the left. Then, we again perform change of variables to obtain the blue point Π(x) on the right, the projection of x we were looking for. Based on these observations, we modify Line 7 of Algorithm 1 to first update the coefficients e j using the following update rule: e j ← clip(e j + αA Here clip is function which thresholds its argument between -1 and 1, formally clip(x, −1, 1) = min(max(x, −1), 1). This is followed by an update to x j via x j ← a l + A l e j, completing the update step. Sparse representation While our representation of convex regions with matrix A l and vector a l has clean mathematical properties, in practice, a possible issue is that the matrix A l can grow to be quite large. Because of this, propagating it through the network can be memory intensive and prohibit the use of larger batches. To overcome this difficulty, we first observe that A l is quite sparse. We start with a very sparse, diagonal matrix A 0 at the input. After each convolution, an element of matrix A l+1 is non-zero only if there is a non-zero element inside of its convolutional kernel in matrix A l. We can leverage this observation to precompute positions of all non-zero elements in matrix A l+1 and compute their values using matrix multiplication. This optimization is critical to enabling training to take place altogether. An interesting item for future work is further optimizing the current relaxation (via a specialized GPU implementation) or developing more memory friendly relaxations, so to scale the training to larger networks. After training a neural network via layerwise adversarial training, our goal is to certify the target property (e.g., robustness). Here we leverage certification techniques which are not fast enough to be incorporated into the training procedure, but which can significantly boost the certification performance. The linear relaxation of ReLU that we are using is parameterized by slopes λ of the linear relaxation. 
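The update just described can be written directly in the e-coordinates, which turns the projection into a simple clipping. Below is an illustrative PyTorch sketch of one inner step of the layerwise training, assuming `region` is the LinearRegion sketched earlier, `suffix` is the part of the network after layer l, and the gradient is used unscaled; the exact gradient scaling and batching are assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def latent_pgd_step(region, suffix, e, y, step_size):
    """One projected-gradient step on the latent adversarial example:
    maximize the loss of the network suffix over x = a + A e, e in [-1, 1]^m."""
    e = e.detach().requires_grad_(True)
    x = region.a + region.A @ e                   # change of variables
    loss = F.cross_entropy(suffix(x.unsqueeze(0)), y.unsqueeze(0))
    grad_e, = torch.autograd.grad(loss, e)        # equals A^T * dL/dx by the chain rule
    with torch.no_grad():
        e = (e + step_size * grad_e).clamp_(-1.0, 1.0)   # projection = clipping
    return e

# After n such steps, x = region.a + region.A @ e is propagated through the
# suffix once more and the suffix parameters (except the frozen layer) are
# updated with SGD to minimize the loss.
```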
Prior work which employed this relaxation; ) chose these slopes in a greedy manner by minimizing the area of the relaxation. During training we also choose λ in the same way. However, during certification, we can also optimize for the values of λ that give rise to the convex region inside of which the maximum loss is minimized. This optimization problem can be written as: Solving this is computationally too expensive inside the training loop, but during certification it is feasible to approximate the solution. We solve for λ using the Adam optimizer and clipping the elements between 0 and 1 after each update. We remark that the idea of learning the slope is similar to Dvijotham et al. (2018b) who propose to optimize dual variables in a dual formulation, however here we stay in the primal formulation. Combining convex relaxations with exact bound propagation During layerwise adversarial training we essentially train the network to be certified on all regions C 0 (x),..., C k (x). While computing exact regions S l (x) ⊆ C l (x) is not feasible during training, we can afford it to some extent during certification. The idea is to first propagate the bounds using convex relaxations until one of the hidden layers l and obtain a region C l (x). If training was successful, there should not exist a concrete point x l ∈ C l (x) which, if propagated through the network, violates the correctness property in Equation 1. We can encode both, the property and the propagation of the exact bounds S l (x) using a Mixed-Integer Linear Programming (MILP) solver. Note that we can achieve this because we represent the region C l (x) using a set of linear constraints, however, for general convex shapes this may not be possible. We perform the MILP encoding using the formulation from. It is usually possible to encode only the last two layers using MILP due to the poor scalability of these solvers for realistic network sizes. One further improvement we also include is to tighten the convex regions C l (x) using refinement via linear programming as described in Singh et al. (2019a). We remark that this combination of convex relaxation and exact bound propagation does not fall under the recently introduced convex barrier to certification Salman et al. (2019b). We now present an evaluation of our training method on the challenging CIFAR-10 dataset. Experimental setup We evaluate on a desktop PC with 2 GeForce RTX 2080 Ti GPU-s and 16-core Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz. We use Gurobi as a MILP solver. Our method is implemented in PyTorch and we plan to release both, the code and the trained models. Neural network architecture All presented are on a 4-layer convolutional network with 49 402 neurons: first 3 layers are convolutional layers with filter sizes 32, 32 and 128, kernel sizes 3, 4, 4 and strides 1, 2, 2 respectively. These are followed by a fully connected layer with 250 hidden units. After each of these layers, there is a ReLU activation. Training We use batch size 50 and L1 regularization 0.00001 for training. We perform optimization using Adam with initial learning rate 0.001 which is decreased by 10× every 100 epochs. During layerwise training we start with perturbation which is 10% higher than the one we certify and we decrease it by 5% when the training progresses to the next layer. Certification After training completes, we perform certification as follows: for every image, we first try to certify it using only linear relaxations (with the improvement of learned slopes, Section 6). 
If this fails, we encode the last layer as MILP and try again. Finally, if this fails we encode the ReLU activation after the last convolution using additional 300 binary variables and the rest using the triangle formulation. We consider an image to be not certifiable if we fail to verify it using these methods. We always evaluate on the first 1 000 images from the test set. Comparison to prior work We first train and certify using our method for the L ∞ perturbation 2/255. Results are shown in Table 1. We always compare to the best reported and reproducible et al. 68.3 53.9 70.1 50.0 59.9 46.1 62.3 45.5 61.1 45.9, as this improvement is also orthogonal to the method here. Thus, we only consider their best single network architecture (inline with prior work which compares to a single architecture). We believe all methods listed in Table 1, including ours, would benefit from additional techniques such as cascades, pre-training and leveraging unlabeled data. Experimentally, we find that the neural network trained using our method substantially outperforms all existing approaches, both in terms of standard accuracy and certified robustness for 2/255. Note that here we are using the same linear relaxation as, but our optimization procedure is different and shows significant improvements over the one used in their work. We also run the same experiment for L ∞ perturbation 8/255. Here we do not include comparison with as their were found to be not reproducible (; ;). These are presented in Table 2. Here we substantially outperform all existing approaches in terms of standard accuracy. However, in terms of certified robustness we are not able to achieve similar to whose method is based on a combination of interval approximation and linear relaxation. The main issue is that our 4-layer network lacks capacity to solve this task -even if training only using standard adversarial training our empirical robustness does not go above ∼ 34%. We remark that capacity was found to be one of the key components necessary to obtain a robust classifier . Due to promising for 2/255, we believe achieving state-of-the-art for 8/255 is very likely an issue of instantiating our method with a convex relaxation that is more memory efficient, which we believe is an interesting item for future work. We presented a new method to train certified neural networks. The key concept was to combine techniques from provable defenses using convex relaxations with those of adversarial training. Our method achieves state-of-the-art 78.8% accuracy and 58.1% certified robustness on CIFAR-10 with a 2/255 L ∞ perturbation, significantly outperforming prior work when considering a single network (it also achieves competitive on 8/255 L ∞). The method is general and can be instantiated with any convex relaxation. A promising future work item is scaling to larger networks: this will require tight convex relaxations with a low memory footprint that allow for efficient projection. Here we provide additional details that were omitted in the main body of the paper. In this section, we describe how to propagate convex relaxations of the form C l (x) = {a l + A l e | e ∈ [−1, 1] m l } through the network, for a single input x. As explained before, these relaxations were previously proposed in;. For the sake of completeness we describe them here using our notation. Depending on the form of function h i θ representing operation applied at layer i we distinguish different cases. 
Here we assume we have obtained region C i−1 (x) and our goal is to compute the region C i (x) using convex relaxation g i θ of the function h i θ. Initial convex region Let be L ∞ radius that we are certifying. We then compute minimum and maximum pixel values for each pixel as x l = max(0, x −) and x u = min(1, x +). We define initial convex region as: Convolutional and fully connected layers For both convolutional and fully connected layers, the update is given by mi. We can then compute: Using this formula, we define convex region C i+1 (x) = {a i+1 + A i+1 e | e ∈ [−1, 1] mi+1 } where: We will explain the transformation of a single element x i,j = a i,j + A T i,j e. We first compute lower bound l i,j and upper bound u i,j of element x i in the set C i (x): In the other case where 0 is between l i,j and u i,j we define ReLU (x i,j) = λ i,j x i,j + µe mi+j where e mi+j ∈ [−1, 1] is a coefficient for a new error term. Formulas for λ i,j and µ i,j are the following: This computation can also be written in the matrix form as ReLU (x i) = Λ i+1 x i + M i+1 e new where Λ i+1 and M i+1 are diagonal matrices with elements computed as above. Finally, new convex region C i+1 (x) = {a i+1 + A i+1 e | e ∈ [−1, 1] mi+1 } is defined as: where [] denotes concatenation of matrices. Here we describe how we apply random projection approach from to estimate the bounds during training. While operate in dual framework, their method to statistically estimate the bounds during training can also be applied in primal framework which we use in this paper. Recall that lower and upper bound for each neuron are computed as Thus, we need to compute ||A i || 1, which is L 1 norm of each row in the matrix A i. Using the method from, based on the from , we estimate ||A i || 1 with the method of random projections. Here A i is a matrix with d i rows and m i columns, where d i is dimensionality of the output vector in the i-th layer and m i is number of unit terms in the definition of region C i (x). The method of random projections samples standard Cauchy random matrix R of dimensions m i × k (k is number of projections) and then estimates ||A i || 1 ≈ median(|A i R|). To avoid computing the entire matrix A i we substitute: In the formula above, we set M 0 = A 0. Now, Dvijotham et al. (2018a) 83.4 62.4 To calculate A i R we split R = [R 0, R 2, ..., R i−1] and compute: Crucially, each summand can now be efficiently computed due to the associativity of matrix multiplication by performing the multiplication backwards. In this section we present additional on SVHN and MNIST datasets. SVHN We evaluated on SVHN dataset. For this experiment, we used convolutional network with 2 convolutional layers of kernel size 4 and stride 1 with 32 and 128 filters respectively. These convolutional layers are followed by a fully connected layer with 200 neurons. Each of the layers is followed by a ReLU activation function. For our training, we started with perturbation 10% higher than the one we are certifying and decreased it by 5% when progressing to the next layer. We trained each layer for 50 epochs and used L 1 regularization factor 0.0001. Results of this experiment are shown in Table 3. We certified first 1 000 images in SVHN test dataset. Our network has both, higher accuracy and higher certified robustness, than networks trained using other techniques for provable defense. MNIST In the next experiment, we evaluted on MNIST dataset. 
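The random-projection estimate of the row-wise L1 norms described above can be written compactly. In this sketch, matvec is an assumed callback that computes A_i R by propagating R through the layers right to left, so A_i is never materialized; the number of projections k is a free parameter.

```python
import torch

def estimate_row_l1_norms(matvec, m, k=50):
    """Cauchy random-projection estimate of the row-wise L1 norms ||A_i||_1,
    used to form the bounds l = a_i - ||A_i||_1 and u = a_i + ||A_i||_1.
    Estimate: ||A_i||_1 ~= median(|A_i R|) over k standard-Cauchy projections.
    """
    R = torch.distributions.Cauchy(0.0, 1.0).sample((m, k))  # m x k standard Cauchy matrix
    proj = matvec(R)                                          # A_i @ R, shape (d_i, k)
    return proj.abs().median(dim=1).values
```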
For this experiment, we used convolutional network with 2 convolutional layers with kernel sizes 5 and 4, and strides 2 followed by 1 fully connected layer. Each of the layers is followed by a ReLU activation function. For our training, we started with perturbation 10% higher than the one we are certifying and decreased it by 5% when progressing to the next layer. We trained each layer for 50 epochs and used L 1 regularization factor 0.00001. We certified first 1 000 images in MNIST test dataset. For perturbation 0.1, convolutional layers have filter sizes 32 and 64, and fully connected layer has 150 neurons. Results are presented in Table 4. Here, our numbers are comparable to those of state-of-the-art approaches. For perturbation 0.3, convolutional layers have filter sizes 32 and 128, and fully connected layer has 400 neurons. Results are presented in Table 4. Here, our certified robustness is somewhat lower than state-of-the-art. We believe this is due to the imprecise estimates of lower and upper bund via random projections. This is also reflected in relatively poor performance of who also rely on the same statistical estimates. Thus, for MNIST dataset and perturbation 0.3 it is likely necessary to use exact propagation instead of the estimates. However, this also induces large cost to the runtime. During training, we use two regularization mechanisms to make our convex relaxation tighter, both previosuly proposed in. First, we use L 1 -norm regularization which is known to induce sparsity in the weights of the network. has shown that weight sparsity helps induce more stable ReLU units which in turn makes our convex relaxation tighter (as for stable ReLU units it is already precise). 98.9 97.7 99.0 94. 4 98.9 96.3 Dvijotham et al. (2018a) 98.8 95. 6 98.7 95.8 99.0 95.6 Table 5: Evaluation on MNIST dataset with L ∞ perturbation 0.3 Method Accuracy (%) Certified robustness (%) Our work 97.6 84. 6 98.3 91.9 98.5 91.5 85.1 56.9 96.6 89.3 97.3 80.7 Second, in the i-th phase of training, we explicitly introduce a loss based on the volume of convex region C i+1. To make the relaxation tighter and minimize the volume, for each neuron j in layer i + 1 we add a loss of the form max(0, −l i+1,j) max(0, u i+1,j). This loss corresponds to the area under ReLU relaxation, see e.g. for a derivation.
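A compact sketch of the two regularizers described above, the L1 weight penalty and the per-neuron relaxation-area term max(0, −l) · max(0, u), is given below. The L1 coefficient matches the value quoted earlier for CIFAR-10/MNIST; the relative weight of the area term (area_coeff) is an assumed knob, since the text does not fix it.

```python
import torch

def training_regularizers(model, lower, upper, l1_coeff=1e-5, area_coeff=1.0):
    """L1 penalty on the network weights plus the area of the ReLU relaxation,
    max(0, -l) * max(0, u), summed over the neurons of the layer currently
    being trained. `lower` / `upper` are the per-neuron bounds l_{i+1,j} and
    u_{i+1,j} obtained from the propagated convex region.
    """
    l1_term = sum(p.abs().sum() for p in model.parameters())
    area_term = (torch.clamp(-lower, min=0) * torch.clamp(upper, min=0)).sum()
    return l1_coeff * l1_term + area_coeff * area_term
```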
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SJxSDxrKDr
We propose a novel combination of adversarial training and provable defenses which produces a model with state-of-the-art accuracy and certified robustness on CIFAR-10.
Learning tasks on source code (i.e., formal languages) have been considered recently, but most work has tried to transfer natural language methods and does not capitalize on the unique opportunities offered by code's known syntax. For example, long-range dependencies induced by using the same variable or function in distant locations are often not considered. We propose to use graphs to represent both the syntactic and semantic structure of code and use graph-based deep learning methods to learn to reason over program structures. In this work, we present how to construct graphs from source code and how to scale Gated Graph Neural Networks training to such large graphs. We evaluate our method on two tasks: VarNaming, in which a network attempts to predict the name of a variable given its usage, and VarMisuse, in which the network learns to reason about selecting the correct variable that should be used at a given program location. Our comparison to methods that use less structured program representations shows the advantages of modeling known structure, and suggests that our models learn to infer meaningful names and to solve the VarMisuse task in many cases. Additionally, our testing showed that VarMisuse identifies a number of bugs in mature open-source projects. The advent of large repositories of source code as well as scalable machine learning methods naturally leads to the idea of "big code", i.e., largely unsupervised methods that support software engineers by generalizing from existing source code BID4. Currently, existing deep learning models of source code capture its shallow, textual structure, e.g. as a sequence of tokens BID15 BID22 BID3, as parse trees BID18, or as a flat dependency networks of variables BID23. Such models miss out on the opportunity to capitalize on the rich and well-defined semantics of source code. In this work, we take a step to alleviate this by including two additional signal sources in source code: data flow and type hierarchies. We do this by encoding programs as graphs, in which edges represent syntactic relationships (e.g. "token before/after") as well as semantic relationships ("variable last used/written here", "formal parameter for argument is called stream", etc.). Our key insight is that exposing these semantics explicitly as structured input to a machine learning model lessens the requirements on amounts of training data, model capacity and training regime and allows us to solve tasks that are beyond the current state of the art. We explore two tasks to illustrate the advantages of exposing more semantic structure of programs. First, we consider the VARNAMING task BID1 BID23, in which given some source code, the "correct" variable name is inferred as a sequence of subtokens. This requires some understanding of how a variable is used, i.e., requires reasoning about lines of code far var clazz=classTypes ["Root"].Single as JsonCodeGenerator. ClassType; Assert. NotNull(clazz); var first=classTypes ["RecClass"].Single as JsonCodeGenerator. ClassType; Assert. NotNull(clazz); Assert. Equal("string", first. Properties ["Name"].Name); Assert. False(clazz. Properties ["Name"].IsArray); Figure 1: A snippet of a detected bug in RavenDB an open-source C# project. The code has been slightly simplified. Our model detects correctly that the variable used in the highlighted (yellow) slot is incorrect. Instead, first should have been placed at the slot. We reported this problem which was fixed in PR 4138. apart in the source file. 
Secondly, we introduce the variable misuse prediction task (VARMISUSE), in which the network aims to infer which variable should be used in a program location. To illustrate the task, Figure 1 shows a slightly simplified snippet of a bug our model detected in a popular open-source project. Specifically, instead of the variable clazz, variable first should have been used in the yellow highlighted slot. Existing static analysis methods cannot detect such issues, even though a software engineer would easily identify this as an error from experience. To achieve high accuracy on these tasks, we need to learn representations of program semantics. For both tasks, we need to learn the semantic role of a variable (e.g., "is it a counter? ", "is it a filename? "). Additionally, for VARMISUSE, learning variable usage semantics (e.g., "a filename is needed here") is required. This "fill the blank element" task is related to methods for learning distributed representations of natural language words, such as Word2Vec BID20 and GLoVe BID21. However, we can learn from a much richer structure such as data flow information. This work is a step towards learning program representations, and we expect them to be valuable in a wide range of other tasks, such as code completion ("this is the variable you are looking for") and more advanced bug finding ("you should lock before using this object").To summarize, our contributions are: (i) We define the VARMISUSE task as a challenge for machine learning modeling of source code, that requires to learn (some) semantics of programs (cf. section 3).(ii) We present deep learning models for solving the VARNAMING and VARMISUSE tasks by modeling the code's graph structure and learning program representations over those graphs (cf. section 4). (iii) We evaluate our models on a large dataset of 2.9 million lines of real-world source code, showing that our best model achieves 32.9% accuracy on the VARNAMING task and 85.5% accuracy on the VARMISUSE task, beating simpler baselines (cf. section 5). (iv) We document practical relevance of VARMISUSE by summarizing some bugs that we found in mature open-source software projects (cf. subsection 5.3). Our implementation of graph neural networks (on a simpler task) can be found at https://github.com/Microsoft/gated-graph-neural-network-samples and the dataset can be found at https://aka.ms/iclr18-prog-graphs-dataset. Our work builds upon the recent field of using machine learning for source code artifacts BID4. For example, BID15 BID7 model the code as a sequence of tokens, while BID18; model the syntax tree structure of code. All works on language models of code find that predicting variable and method identifiers is one of biggest challenges in the task. Closest to our work is the work of BID2 who learn distributed representations of variables using all their usages to predict their names. However, they do not use data flow information and we are not aware of any model that does so. BID23 and BID8 use conditional random fields to model a variety of relationships between variables, AST elements and types to predict variable names and types (resp. to deobfuscate Android apps), but without considering the flow of data explicitly. In these works, all variable usages are deterministically known beforehand (as the code is complete and remains unmodified), as in BID1.Our work is remotely related to work on program synthesis using sketches BID27 and automated code transplantation. 
However, these approaches require a set of specifications (e.g. input-output examples, test suites) to complete the gaps, rather than statistics learned from big code. These approaches can be thought as complementary to ours, since we learn to statistically complete the gaps without any need for specifications, by learning common variable usage patterns from code. Neural networks on graphs BID13 BID17 BID11 BID16 BID12 ) adapt a variety of deep learning methods to graph-structured input. They have been used in a series of applications, such as link prediction and classification BID14 and semantic role labeling in NLP BID19. Somewhat related to source code is the work of BID28 who learn graph-based representations of mathematical formulas for premise selection in theorem proving. Detecting variable misuses in code is a task that requires understanding and reasoning about program semantics. To successfully tackle the task one needs to infer the role and function of the program elements and understand how they relate. For example, given a program such as Fig. 1, the task is to automatically detect that the marked use of clazz is a mistake and that first should be used instead. While this task resembles standard code completion, it differs significantly in its scope and purpose, by considering only variable identifiers and a mostly complete program. Task Description We view a source code file as a sequence of tokens t 0... t N = T, in which some tokens t λ0, t λ1... are variables. Furthermore, let V t ⊂ V refer to the set of all type-correct variables in scope at the location of t, i.e., those variables that can be used at t without raising a compiler error. We call a token tok λ where we want to predict the correct variable usage a slot. We define a separate task for each slot t λ: Given t 0... t λ−1 and t λ+1,..., t N, correctly select t λ from V t λ. For training and evaluation purposes, a correct solution is one that simply matches the ground truth, but note that in practice, several possible assignments could be considered correct (i.e., when several variables refer to the same value in memory). In this section, we discuss how to transform program source code into program graphs and learn representations over them. These program graphs not only encode the program text but also the semantic information that can be obtained using standard compiler tools. Gated Graph Neural Networks Our work builds on Gated Graph Neural Networks BID17 (GGNN) and we summarize them here. A graph G = (V, E, X) is composed of a set of nodes V, node features X, and a list of directed edge sets E = (E 1, . . ., E K) where K is the number of edge types. We annotate each v ∈ V with a real-valued vector x (v) ∈ R D representing the features of the node (e.g., the embedding of a string label of that node).We associate every node v with a state vector h (v), initialized from the node label x (v). The sizes of the state vector and feature vector are typically the same, but we can use larger state vectors through padding of node features. To propagate information throughout the graph, "messages" of type k are sent from each v to its neighbors, where each message is computed from its current state vector as m DISPLAYFORM0 Here, f k can be an arbitrary function; we choose a linear layer in our case. By computing messages for all graph edges at the same time, all states can be updated at the same time. 
In particular, a new state for a node v is computed by aggregating all incoming messages as DISPLAYFORM1 k | there is an edge of type k from u to v}). g is an aggregation function, which we implement as elementwise summation. Given the aggregated messagem (v) and the current state vector h (v) of node v, the state of the next time step DISPLAYFORM2, where GRU is the recurrent cell function of gated recurrent unit (GRU) BID10 (a) Simplified syntax graph for line 2 of Fig. 1, where blue rounded boxes are syntax nodes, black rectangular boxes syntax tokens, blue edges Child edges and double black edges NextToken edges. Program Graphs We represent program source code as graphs and use different edge types to model syntactic and semantic relationships between different tokens. The backbone of a program graph is the program's abstract syntax tree (AST), consisting of syntax nodes (corresponding to nonterminals in the programming language's grammar) and syntax tokens (corresponding to terminals). We label syntax nodes with the name of the nonterminal from the program's grammar, whereas syntax tokens are labeled with the string that they represent. We use Child edges to connect nodes according to the AST. As this does not induce an order on children of a syntax node, we additionally add NextToken edges connecting each syntax token to its successor. An example of this is shown in FIG1.To capture the flow of control and data through a program, we add additional edges connecting different uses and updates of syntax tokens corresponding to variables. For such a token v, let D R (v) be the set of syntax tokens at which the variable could have been used last. This set may contain several nodes (for example, when using a variable after a conditional in which it was used in both branches), and even syntax tokens that follow in the program code (in the case of loops). Similarly, let D W (v) be the set of syntax tokens at which the variable was last written to. Using these, we add LastRead (resp. LastWrite) edges connecting v to all elements of D R (v) (resp. D W (v)). Additionally, whenever we observe an assignment v = expr, we connect v to all variable tokens occurring in expr using ComputedFrom edges. An example of such semantic edges is shown in FIG1.We extend the graph to chain all uses of the same variable using LastLexicalUse edges (independent of data flow, i.e., in if (...) {... v ...} else {... v ...}, we link the two occurrences of v). We also connect return tokens to the method declaration using ReturnsTo edges (this creates a "shortcut" to its name and type). Inspired by BID25, we connect arguments in method calls to the formal parameters that they are matched to with FormalArgName edges, i.e., if we observe a call Foo(bar) and a method declaration Foo(InputStream stream), we connect the bar token to the stream token. Finally, we connect every token corresponding to a variable to enclosing guard expressions that use the variable with GuardedBy and GuardedByNegation edges. For example, in if (x > y) {... x ...} else {... y ...}, we add a GuardedBy edge from x (resp. a GuardedByNegation edge from y) to the AST node corresponding to x > y. Finally, for all types of edges we introduce their respective backwards edges (transposing the adjacency matrix), doubling the number of edges and edge types. Backwards edges help with propagating information faster across the GGNN and make the model more expressive. 
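Putting the message passing and GRU update above into code, a minimal single-graph sketch of one GGNN propagation step could look as follows. Sparse edge handling and batching (discussed later) are omitted; adj[k] is assumed to be a dense float 0/1 matrix with adj[k][u, v] = 1 iff there is an edge of type k from u to v. Running this module for 8 unrolling steps with the initial node representations as states mirrors the propagation schedule used above.

```python
import torch
import torch.nn as nn

class GGNNStep(nn.Module):
    """One GGNN propagation step: a linear message function per edge type,
    element-wise sum aggregation over incoming messages, and a GRU cell update.
    """
    def __init__(self, hidden_dim, num_edge_types):
        super().__init__()
        self.msg = nn.ModuleList(
            [nn.Linear(hidden_dim, hidden_dim) for _ in range(num_edge_types)])
        self.gru = nn.GRUCell(hidden_dim, hidden_dim)

    def forward(self, h, adj):
        # h: (num_nodes, hidden_dim); one message per edge type, summed at the target node
        m = sum(a.t() @ f(h) for f, a in zip(self.msg, adj))
        return self.gru(m, h)
```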
We assume a statically typed language and that the source code can be compiled, and thus each variable has a (known) type τ (v). To use it, we define a learnable embedding function r(τ) for known types and additionally define an "UNKTYPE" for all unknown/unrepresented types. We also leverage the rich type hierarchy that is available in many object-oriented languages. For this, we map a variable's type τ (v) to the set of its supertypes, i.e. τ * (v) = {τ : τ (v) implements type τ } ∪ {τ (v)}. We then compute the type representation r * (v) of a variable v as the element-wise maximum of {r(τ): τ ∈ τ * (v)}. We chose the maximum here, as it is a natural pooling operation for representing partial ordering relations (such as type lattices). Using all types in τ * (v) allows us to generalize to unseen types that implement common supertypes or interfaces. For example, List<K> has multiple concrete types (e.g. List<int>, List<string>). Nevertheless, these types implement a common interface (IList) and share common characteristics. During training, we randomly select a non-empty subset of τ * (v) which ensures training of all known types in the lattice. This acts both like a dropout mechanism and allows us to learn a good representation for all types in the type lattice. Initial Node Representation To compute the initial node state, we combine information from the textual representation of the token and its type. Concretely, we split the name of a node representing a token into subtokens (e.g. classTypes will be split into two subtokens class and types) on camelCase and pascal_case. We then average the embeddings of all subtokens to retrieve an embedding for the node name. Finally, we concatenate the learned type representation r * (v), computed as discussed earlier, with the node name representation, and pass it through a linear layer to obtain the initial representations for each node in the graph. Programs Graphs for VARNAMING Given a program and an existing variable v, we build a program graph as discussed above and then replace the variable name in all corresponding variable tokens by a special <SLOT> token. To predict a name, we use the initial node labels computed as the concatenation of learnable token embeddings and type embeddings as discussed above, run GGNN propagation for 8 time steps 2 and then compute a variable usage representation by averaging the representations for all <SLOT> tokens. This representation is then used as the initial state of a one-layer GRU, which predicts the target name as a sequence of subtokens (e.g., the name inputStreamBuffer is treated as the sequence [input, stream, buffer]). We train this graph2seq architecture using a maximum likelihood objective. In section 5, we report the accuracy for predicting the exact name and the F1 score for predicting its subtokens. Program Graphs for VARMISUSE To model VARMISUSE with program graphs we need to modify the graph. First, to compute a context representation c(t) for a slot t where we want to predict the used variable, we insert a new node v <SLOT> at the position of t, corresponding to a "hole" at this point, and connect it to the remaining graph using all applicable edges that do not depend on the chosen variable at the slot (i.e., everything but LastUse, LastWrite, LastLexicalUse, and GuardedBy edges). 
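For the type lattice described earlier in this section, a small sketch of the element-wise-max type representation r*(v) is given below. The type-to-index vocabulary, the UNKTYPE fallback index, and the random subsampling of supertypes during training are assumed to be handled by the caller.

```python
import torch
import torch.nn as nn

class TypeRepresentation(nn.Module):
    """Learnable embedding per known type (plus an UNKTYPE slot) combined by an
    element-wise maximum over the embeddings of all supertypes tau*(v)."""
    def __init__(self, num_types, dim, unk_index=0):
        super().__init__()
        self.embed = nn.Embedding(num_types, dim)
        self.unk_index = unk_index

    def forward(self, supertype_indices):
        # supertype_indices: LongTensor of indices for tau*(v); fall back to UNKTYPE if empty
        if supertype_indices.numel() == 0:
            supertype_indices = torch.tensor([self.unk_index])
        return self.embed(supertype_indices).max(dim=0).values
```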
Then, to compute the usage representation u(t, v) of each candidate variable v at the target slot, we insert a "candidate" node v t,v for all v in V t, and connect it to the graph by inserting the LastUse, LastWrite and LastLexicalUse edges that would be used if the variable were to be used at this slot. Each of these candidate nodes represents the speculative placement of the variable within the scope. Using the initial node representations, concatenated with an extra bit that is set to one for the candidate nodes v t,v, we run GGNN propagation for 8 time steps.2 The context and usage representation are then the final node states of the nodes, i.e., c(t) = h (v<SLOT>) and u(t, v) = h (vt,v). Finally, the correct variable usage at the location is computed as arg max v W [c(t), u(t, v)] where W is a linear layer that uses the concatenation of c(t) and u(t, v). We train using a max-margin objective. Using GGNNs for sets of large, diverse graphs requires some engineering effort, as efficient batching is hard in the presence of diverse shapes. An important observation is that large graphs are normally very sparse, and thus a representation of edges as an adjacency list would usually be advantageous to reduce memory consumption. In our case, this can be easily implemented using a sparse tensor representation, allowing large batch sizes that exploit the parallelism of modern GPUs efficiently. A second key insight is to represent a batch of graphs as one large graph with many disconnected components. This just requires appropriate pre-processing to make node identities unique. As this makes batch construction somewhat CPU-intensive, we found it useful to prepare minibatches on a separate thread. Our TensorFlow BID0 implementation scales to 55 graphs per second during training and 219 graphs per second during test-time using a single NVidia GeForce GTX Titan X with graphs having on average 2,228 (median 936) nodes and 8,350 (median 3,274) edges and 8 GGNN unrolling iterations, all 20 edge types (forward and backward edges for 10 original edge types) and the size of the hidden layer set to 64. The number of types of edges in the GGNN contributes proportionally to the running time. For example, a GGNN run for our ablation study using only the two most common edge types (NextToken, Child) achieves 105 graphs/second during training and 419 graphs/second at test time with the same hyperparameters. Our (generic) implementation of GGNNs is available at https://github.com/Microsoft/ gated-graph-neural-network-samples, using a simpler demonstration task. Dataset We collected a dataset for the VARMISUSE task from open source C # projects on GitHub. To select projects, we picked the top-starred (non-fork) projects in GitHub. We then filtered out projects that we could not (easily) compile in full using Roslyn 3, as we require a compilation to extract precise type information for the code (including those types present in external libraries). Our final dataset contains 29 projects from a diverse set of domains (compilers, databases, . . .) with about 2.9 million non-empty lines of code. A full table is shown in Appendix D.For the task of detecting variable misuses, we collect data from all projects by selecting all variable usage locations, filtering out variable declarations, where at least one other type-compatible replacement variable is in scope. The task is then to infer the correct variable that originally existed in that location. 
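The "one large graph with many disconnected components" batching trick described above amounts to offsetting node indices before merging the per-graph edge lists; a minimal sketch follows (the field names 'node_features' and 'edges' are assumptions for illustration).

```python
import numpy as np

def batch_graphs(graphs):
    """Merge a minibatch of graphs into one large disconnected graph.
    Each graph is assumed to be a dict with 'node_features' (N x D array) and
    'edges' (one list of (source, target) index pairs per edge type)."""
    node_features, batched_edges, offset = [], None, 0
    for g in graphs:
        node_features.append(g['node_features'])
        if batched_edges is None:
            batched_edges = [[] for _ in g['edges']]
        for k, edge_list in enumerate(g['edges']):
            batched_edges[k].extend((u + offset, v + offset) for u, v in edge_list)
        offset += g['node_features'].shape[0]
    return np.concatenate(node_features, axis=0), batched_edges
```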
Thus, by construction there is at least one type-correct replacement variable, i.e. picking it would not raise an error during type checking. In our test datasets, at each slot there are on average 3.8 type-correct alternative variables (median 3, σ = 2.6).From our dataset, we selected two projects as our development set. From the rest of the projects, we selected three projects for UNSEENPROJTEST to allow testing on projects with completely unknown structure and types. We split the remaining 23 projects into train/validation/test sets in the proportion 60-10-30, splitting along files (i.e., all examples from one source file are in the same set). We call the test set obtained like this SEENPROJTEST.Baselines For VARMISUSE, we consider two bidirectional RNN-based baselines. The local model (LOC) is a simple two-layer bidirectional GRU run over the tokens before and after the target location. For this baseline, c(t) is set to the slot representation computed by the RNN, and the usage context of each variable u(t, v) is the embedding of the name and type of the variable, computed in the same way as the initial node labels in the GGNN. This baseline allows us to evaluate how important the usage context information is for this task. The flat dataflow model (AVGBIRNN) is an extension to LOC, where the usage representation u(t, v) is computed using another two-layer bidirectional RNN run over the tokens before/after each usage, and then averaging over the computed representations at the variable token v. The local context, c(t), is identical to LOC. AVGBIRNN is a significantly stronger baseline that already takes some structural information into account, as the averaging over all variables usages helps with long-range dependencies. Both models pick the variable that maximizes c(t)T u(t, v).For VARNAMING, we replace LOC by AVGLBL, which uses a log-bilinear model for 4 left and 4 right context tokens of each variable usage, and then averages over these context representations (this corresponds to the model in BID2). We also test AVGBIRNN on VARNAMING, which essentially replaces the log-bilinear context model by a bidirectional RNN. TAB1 shows the evaluation of the models for both tasks. 4 As LOC captures very little information, it performs relatively badly. AVGLBL and AVGBIRNN, which capture information from many variable usage sites, but do not explicitly encode the rich structure of the problem, still lag behind the GGNN by a wide margin. The performance difference is larger for VARMISUSE, since the structure and the semantics of code are far more important within this setting. Generalization to new projects Generalizing across a diverse set of source code projects with different domains is an important challenge in machine learning. We repeat the evaluation using the UNSEENPROJTEST set stemming from projects that have no files in the training set. The right side of TAB1 shows that our models still achieve good performance, although it is slightly lower compared to SEENPROJTEST. This is expected since the type lattice is mostly unknown in UNSEENPROJTEST.We believe that the dominant problem in applying a trained model to an unknown project (i.e., domain) is the fact that its type hierarchy is unknown and the used vocabulary (e.g. in variables, method and class names, etc.) can differ substantially. Ablation Study To study the effect of some of the design choices for our models, we have run some additional experiments and show their in TAB2. First, we varied the edges used in the program graph. 
We find that restricting the model to syntactic information has a large impact on performance on both tasks, whereas restricting it to semantic edges seems to mostly impact performance on VARMISUSE. Similarly, the ComputedFrom, FormalArgName and ReturnsTo edges give a small boost on VARMISUSE, but greatly improve performance on VARNAMING. As evidenced by the experiments with the node label representation, syntax node and token names seem to matter little for VARMISUSE, but naturally have a great impact on VARNAMING. Figure 3 illustrates the predictions that GGNN makes on a sample test snippet. The snippet recursively searches for the global directives file by gradually descending into the root folder. Reasoning about the correct variable usages is hard, even for humans, but the GGNN correctly predicts the variable 3 http://roslyn.io 4 Sect. A additionally shows ROC and precision-recall curves for the GGNN model on the VARMISUSE task..TrimEnd(Path. DirectorySeparatorChar); } path 13 = null; return false; } 1: path:59%, baseDirectory:35%, fullPath:6%, GlobalDirectivesFileName:1% 2: baseDirectory:92%, fullPath:5%, GlobalDirectivesFileName:2%, path:0.4% 3: fullPath:88%, baseDirectory:9%, GlobalDirectivesFileName:2%, path:1% 4: directivesDirectory:86%, path:8%, baseDirectory:2%, GlobalDirectivesFileName:1%, fullPath:0.1% 5: directivesDirectory:46%, path:24%, baseDirectory:16%, GlobalDirectivesFileName:10%, fullPath:3% 6: baseDirectory:64%, path:26%, directivesDirectory:5%, fullPath:2%, GlobalDirectivesFileName:2% 7: path:99%, directivesDirectory:1%, GlobalDirectivesFileName:0.5%, baseDirectory:7e-5, fullPath:4e-7 8: fullPath:60%, directivesDirectory:21%, baseDirectory:18%, path:1%, GlobalDirectivesFileName:4e-4 9: GlobalDirectivesFileName:61%, baseDirectory:26%, fullPath:8%, path:4%, directivesDirectory:0.5% 10: path:70%, directivesDirectory:17%, baseDirectory:10%, GlobalDirectivesFileName:1%, fullPath:0.6% 11: directivesDirectory:93%, path:5%, GlobalDirectivesFileName:1%, baseDirectory:0.1%, fullPath:4e-5% 12: directivesDirectory:65%, path:16%, baseDirectory:12%, fullPath:5%, GlobalDirectivesFileName:3% 13: path:97%, baseDirectory:2%, directivesDirectory:0.4%, fullPath:0.3%, GlobalDirectivesFileName:4e-4 Figure 3: VARMISUSE predictions on slots within a snippet of the SEENPROJTEST set for the ServiceStack project. Additional visualizations are available in Appendix B. The underlined tokens are the correct tokens. The model has to select among a number of string variables at each slot, where all of them represent some kind of path. The GGNN accurately predicts the correct variable usage in 11 out of the 13 slots reasoning about the complex ways the variables interact among them. usages at all locations except two (slot 1 and 8). As a software engineer is writing the code, it is imaginable that she may make a mistake misusing one variable in the place of another. Since all variables are string variables, no type errors will be raised. As the probabilities in Fig. 3 suggest most potential variable misuses can be flagged by the model yielding valuable warnings to software engineers. Additional samples with comments can be found in Appendix B.Furthermore, Appendix C shows samples of pairs of code snippets that share similar representations as computed by the cosine similarity of the usage representation u(t, v) of GGNN. The reader can notice that the network learns to group variable usages that share semantic similarities together. 
For example, checking for null before the use of a variable yields similar distributed representations across code segments (Sample 1 in Appendix C). We have used our VARMISUSE model to identify likely locations of bugs in RavenDB (a document database) and Roslyn (Microsoft's C # compiler framework). For this, we manually reviewed a sample of the top 500 locations in both projects where our model was most confident about a choosing a variable differing from the ground truth, and found three bugs in each of the projects. Figs. 1,4,5 show the issues discovered in RavenDB. The bug in Fig. 1 was possibly caused by copy-pasting, and cannot be easily caught by traditional methods. A compiler will not warn about if (IsValidBackup(backupFilename) == false) {output("Error:"+ backupLocation +" doesn't look like a valid backup"); throw new InvalidOperationException(backupLocation + " doesn't look like a valid backup");Figure 5: A bug found (yellow) in the RavenDB open-source project. Although backupFilename is found to be invalid by IsValidBackup, the user is notified that backupLocation is invalid instead.unused variables (since first is used) and virtually nobody would write a test testing another test. FIG3 shows an issue that, although not critical, can lead to increased memory consumption. Fig. 5 shows another issue arising from a non-informative error message. We privately reported three additional bugs to the Roslyn developers, who have fixed the issues in the meantime (cf. https://github.com/dotnet/roslyn/pull/23437). One of the reported bugs could cause a crash in Visual Studio when using certain Roslyn features. Finding these issues in widely released and tested code suggests that our model can be useful during the software development process, complementing classic program analysis tools. For example, one usage scenario would be to guide the code reviewing process to locations a VARMISUSE model has identified as unusual, or use it as a prior to focus testing or expensive code analysis efforts. Although source code is well understood and studied within other disciplines such as programming language research, it is a relatively new domain for deep learning. It presents novel opportunities compared to textual or perceptual data, as its (local) semantics are well-defined and rich additional information can be extracted using well-known, efficient program analyses. On the other hand, integrating this wealth of structured information poses an interesting challenge. Our VARMISUSE task exposes these opportunities, going beyond simpler tasks such as code completion. We consider it as a first proxy for the core challenge of learning the meaning of source code, as it requires to probabilistically refine standard information included in type systems. A PERFORMANCE CURVES FIG4 shows the ROC and precision-recall curves for the GGNN model. As the reader may observe, setting a false positive rate to 10% we get a true positive rate 5 of 73% for the SEENPROJTEST and 69% for the unseen test. This suggests that this model can be practically used at a high precision setting with acceptable performance. Below we list a set of samples from our SEENPROJTEST projects with comments about the model performance. Code comments and formatting may have been altered for typesetting reasons. The ground truth choice is underlined. The model predicts correctly all usages except from the one in slot #3. Reasoning about this snippet requires additional semantic information about the intent of the code. 
var response = ResultsFilter(typeof(TResponse), #1, #2, request);
#1 httpMethod: 99%, absoluteUrl: 1%, UserName: 0%, UserAgent: 0%
#2 absoluteUrl: 99%, httpMethod: 1%, UserName: 0%, UserAgent: 0%
The model knows about selecting the correct string parameters because it matches them to the formal parameter names.
#1 n: 100%, MAXERROR: 0%, SYNC_MAXRETRIES: 0%
#2 MAXERROR: 62%, SYNC_MAXRETRIES: 22%, n: 16%
It is hard for the model to reason about conditionals, especially with rare constants as in slot #2. Here we show pairs of nearest neighbors based on the cosine similarity of the learned representations u(t, v). Each slot t is marked in dark blue and all usages of v are marked in yellow (i.e. variableName). This is a set of hand-picked examples showing good and bad examples. A brief description follows after each pair. HasAddress is a local function, seen only in the test set. For this work, we released a large portion of the data, with the exception of projects with a GPL license. The data can be found at https://aka.ms/iclr18-prog-graphs-dataset. Since we are excluding some projects from the data, below we report the results, averaged over three runs, on the published dataset:
SEENPROJTEST: Accuracy 84.0%, PR AUC 0.976
UNSEENPROJTEST: Accuracy 74.1%, PR AUC 0.934
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
BJOFETxR-
Programs have structure that can be represented as graphs, and graph neural networks can learn to find bugs on such graphs
Overfitting is an ubiquitous problem in neural network training and usually mitigated using a holdout data set. Here we challenge this rationale and investigate criteria for overfitting without using a holdout data set. Specifically, we train a model for a fixed number of epochs multiple times with varying fractions of randomized labels and for a range of regularization strengths. A properly trained model should not be able to attain an accuracy greater than the fraction of properly labeled data points. Otherwise the model overfits. We introduce two criteria for detecting overfitting and one to detect underfitting. We analyze early stopping, the regularization factor, and network depth. In safety critical applications we are interested in models and parameter settings which perform well and are not likely to overfit. The methods of this paper allow characterizing and identifying such models. Deep neural networks have shown superior performance for a wide range of machine learning task such as speech recognition BID4 ), image classification BID8 ), playing board games BID13 ); machine translation ); beating previous methods by orders of magnitudes. To apply neural networks to safety critical problems such as autonomous driving it is necessary to evaluate their performance on new previously unseen data. One of the major problems of neural networks is their vulnerability to adversarial attacks. It has been shown that tiny unrecognizable changes of the input can fool the network to predict any class the attacker has chosen. One way to interpret this vulnerability is that the neural network overfits to the training data, with the output varying rapidly around each training point and thus slight changes of the input can lead to big changes in the output. It is thus highly desirable to prevent the network from overfitting during training. Previously reported methods reduce the chance of overfitting by evaluating the neural network on some holdout set, or by penalizing the complexity of the model class. This has the disadvantage that a holdout set can only be used once. By using design choices proven to be successful in the past the model becomes dependent on the holdout set. Penalizing the model class is only a heuristic remedy to overfitting. In the present paper we devise a method which prevents overfitting by relying on the training data only. We motivate l 1 -regularization of the kernel weights as a preferable choice to control the network complexity. Using no holdout set requires an alternative notion of overfitting. In the paper, we say that a model overfits if it is able to learn noise. Heuristics. There exists several well known heuristics which reduce the chance of overfitting. Typically one reduces the hypothesis space, or one increases the data set. We can directly control the size of the hypothesis space by the number of parameters of the model. Typical choices are the width, the depth and the filter size of the network. Dropout introduced by BID14 at each training step, individual nodes and their incoming and outgoing edges are ignored (dropped out) with probability p. This reduces the dependency of the model on individual nodes. In early The complexity of a neural network is controlled by its hyper parameter and the hyper parameter of the training algorithm. We propose to control the model class by adding the l 1 norm of the kernel weights ||W j || 1 multiplied with a regularization factor λ to the loss function. Some additional for this section is provided in the appendix. 
Notation. This paper considers feed forward networks f: R d0 → R k which maps the input space V 0 = R d0 to some target space V L = R k. This is followed by an arg max function which picks as output the coordinate of the largest value. The margin measures the gap between the output for the correct label and the other labels, γ = f (x) y − max j =y f (x) j. A positive margin means a correct classification, whereas a negative margin means an incorrect classification. Each layer of the network consists of a linear block A k: V k−1 → W k and a non linearity φ k: W k → V k. The networks are thus written as: DISPLAYFORM0 Here of course φ • A(x) is just another way of writing φ(A(x)). The concatenation η k = φ k • A k will be called a block of the network. In this paper only the standard relu nonlinearity φ: R → R +, which is defined by φ(x) = x ∨ 0 = max{x, 0} = x +, is considered. The input data is denoted by x 0 ∈ R d0. Further, the output of layer k of the network is denoted by y k = f k (x):= φ k • A k (· · · (φ 1 • A 1 (x))) ∈ R d k and the network which outputs the k-th layer by f k: DISPLAYFORM1 Finally, the width of the network is defined by d = max{d 0, . . ., d L}. In the paper we will call the data y k, passing through the layers, signal. Finally, we arrange all data points and signals in matrices, denoted by X ∈ R d0×n and Y k ∈ R d k ×n. So we write by slightly abusing notation, DISPLAYFORM2 l 1 -regularization A typical convolutional kernel W is determined by the filter size, and the number of incoming and outgoing features. If the convolution is written as matrix operation y = Ax and zero padding is being assumed, then the matrix A can be arranged as a vertically stacked block matrix each subblock A i representing one outgoing feature. The entries of each these blocks are determined by the weights of the i-th filter. Due to zero padding and weight sharing, each weight occurs precisely once in each row and each column of A i. It follows that the filter matrix A contains in each column each weight precisely once. Lemma 3.1.1. The spectral norm of a convolution matrix A is bounded by the l 1 -norm of its kernel weights. DISPLAYFORM3 Proof. The inequality follows as the spectral norm can be bounded by the row and columns norms of A, which can be estimated by the weight matrix W. DISPLAYFORM4 Recently shown generalization bounds are dominated by BID2 ) see also Theorem A.1.1. Here γ > 0 is the margin, X denotes the training data arranged in a matrix, and R A is the spectral complexity. We will use a simplified version of R A defined by DISPLAYFORM5 DISPLAYFORM6 Lemma 3.1.2. The spectral complexity can be bounded by the l 1 -norm of the kernel weights W j. DISPLAYFORM7 Here the first inequality holds because of inequalities between the generalized 2/3-mean and 1-mean. The second inequality follows as the spectral norm can be bounded by ||W || 1 by Lemma 3.1.1.Both lemmas show that it is beneficial to l 1 -norm of the kernel weights. In fact Lemma 3.1 shows by looking at DISPLAYFORM8 that the margin scales as ||W || 1. Since we do not use a bias in our model, we may rescale each kernel matrix by ||W || −11. This is compensated by decreasing the margin accordingly. So in order to achieve better generalization bounds we can penalize the kernel weights by λ||W || 1. Here λ > 0 is a regularization parameter to be determined. Assumptions. To simplify the analysis we make three assumptions. First, we assume that the data is independent and identically distributed. 
This implies that the with an increasing level of randomness the complexity of the data also increases. In dependent data, this is not necessarily the case. As in that case correlation in the data can be destroyed by the introduction of randomness making the data faster to learn. Second, we assume that the model complexity is controlled by a regularization parameter in such a way that an increase of the regularization parameter implies a strict decrease of the complexity of the model. Third, we assume that the regularization parameter and the randomness are on a similar scale. To explain this, note that the accuracy is a function of the regularization parameter and randomness. The assumption simply means that the norm of the gradient of the accuracy taking in direction of the regularization is of the same order as the norm of the gradient taken in direction of randomness. Creating randomized data. In this paper we consider a classification problem. Each data point consists of a pair (x, y) which is typically an image and a corresponding class. For a fixed level of randomness p ∈ we split the training data Z = (X, Y) in two parts. The first split Z p contains a random sample of p-percent of each class of the training data. The second part Z 1−p contains the rest of the training data. The classes of the first set are randomly permuted to give the setZ p. The randomized data is obtained by joining D p =Z p ∪ Z p−1. With this D 0 is equal to the original data set Z and D 1 is obtained by randomly permuting all class labels Y of the data set Z.Accuracy curves. Central to our paper are accuracy plots. Here we plot the accuracy as computed on the training data over the randomness of the training data. Alternatively the plot of the accuracy as computed on the training data over the regularization parameter. In the paper we call such curves in the plots accuracy over randomness curves and accuracy over regularization curves. To generate the plots we keep everything fixed except the randomness of the training data or the regularization parameter. Monotony. Let us assume that we successfully trained our model on the unperturbed data set D 0 = Z. This means that the accuracy over randomness curve starts in the left upper corner at some point close to 1. As we increase the level of randomness the training data D p becomes more complex and it is thus more difficult to learn, which means our algorithm will take longer to achieve the same training error. Thus, we expect that the accuracy drops. In other words the accuracy curve is strictly monotonically decreasing for increasing randomness. Further, if we increase the regularization parameter the model complexity drops. Thus we also expect that accuracy drops if the regularization of the model is increased. This shows that our assumption imply that the accuracy is strictly monotonically decreasing as a function of randomness and regularization. Figure 1 shows the qualitative behavior of accuracy over randomness curves which follows the assumption we made. Real accuracy curves To compare these idealized curves with accuracy curves of real data we computed the accuracy curves for different data sets, see Figure 2. In each subfigure we trained a neural network on either mnist (a), cifar10 (b), and patched-noise (c) -a generated data set. For each curve we varied the l 1 regularization parameter. Furthermore, for each randomness value on the x-axis, the network was trained for five different randomization of the labels. 
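A small sketch of the construction of the randomized data set D_p described above: a random p-fraction of each class is sampled into the first split and the labels of that split are randomly permuted, while the remaining examples keep their labels. The per-class rounding and the seed handling are implementation choices.

```python
import numpy as np

def make_randomized_labels(labels, p, num_classes, seed=0):
    """Build the label vector of D_p from the true labels: select p percent of
    each class, permute the labels of the selected examples among themselves,
    and leave all other labels unchanged."""
    rng = np.random.RandomState(seed)
    selected = []
    for c in range(num_classes):
        idx = np.flatnonzero(labels == c)
        take = rng.choice(idx, size=int(round(p * len(idx))), replace=False)
        selected.append(take)
    selected = np.concatenate(selected)
    noisy = labels.copy()
    noisy[selected] = rng.permutation(labels[selected])  # shuffle labels within the split
    return noisy
```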
More details on the networks, the data sets and more plots can be found in the appendix. Criterion 1 -Convexity of accuracy curve. In Figure 1 (a) three types of accuracy curves can be seen: dashed concave curves, dotted convex curves, and a full straight line. If the accuracy curve of our model is above the straight line, the model is able to learn noise. In other words the model overfits. Analogously, an accuracy curve below the straight line, shows underfitting. The model class has not enough capacity to learn the data. The figures show the accuracy curves for three different data sets. Each curve represents a different l 1 -regularization. The curves start with no regularization, depicted in blue, to strong regularization depicted in red. As the regularization is increase the curve tend to pushed below the straight line, confirming our intuition. The model overfits if the accuracy computed on the true data set is close to one and the accuracy over randomness curve of the model is strictly concave. The model underfits if the accuracy curve is strictly convex. This criterion can be computed by measuring the squared distance of the accuracy curve to the straight line connecting the points and (1, 1 number of classes). So assuming the r 1,..., r n parametrize the randomization of the labels, a(r i) denotes the accuracy at r i the criterion can be computed by: DISPLAYFORM0 The criterion if met if crit1 is small. Criterion 2 -Steep decrease in accuracy. Following our criterion 1 we want to determine if the accuracy curves are convex. Let us recall the accuracy curves are both strictly monotone decreasing in randomness and regularization. And that we are assuming that randomness and regularization are on a similar scale. If we look at the point in the upper left of Figure 1 (a) we see that the curves are convex if the accuracy drops sharply as the randomness increases. As the accuracy curve is also monotone decreasing with increasing regularization we will also detect the convexity by a steep drop in accuracy as depicted by the marked point in the Figure 1 (b).The model overfits if the accuracy on the training data is close to one and the accuracy over regularization curve (plotted in log-log space) is constant. Otherwise it underfits. This criterion can be detected by approximating the derivative crit2 = ∂ ∂λ a(λ) of the accuracy a as the regularization parameter λ increases. If the derivative becomes larger than a threshold, the optimal value is found. Criterion 3 -Two modes in margin histograms. Finally we derive a criterion based on the margin histograms of the fully randomized training data. Looking again at 1 we see that the accuracy of the underfitting curves remains constant if we decrease the randomness just a tiny bit. While training our model several things happen simultaneously. At starting time the network outputs random noise. If the parameter settings leads to a successful training, the model typically has a phase in which it outputs one class for all inputs. Looking at the margins of this phase, the distribution has two modes, one negative and a positive one containing the mass of 1 number of classes examples. Once we train further the two modes combine to one and the network starts to converge. Our third criterion looks for these two mode, because the accuracy will remain constant for a tiny decrease in randomness, as the two modes have to collapse before the accuracy can increase, we are thus in the underfitting regime. 
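The first two criteria above reduce to simple computations on the measured accuracy curves; a sketch is given below (the slope threshold for Criterion 2 is left to the user, as in the text).

```python
import numpy as np

def criterion1(randomness, accuracy, num_classes):
    """Criterion 1: mean squared distance between the accuracy-over-randomness
    curve and the straight line from (0, 1) to (1, 1/num_classes). The sign of
    (accuracy - line) indicates which side of the line the curve lies on."""
    randomness = np.asarray(randomness, dtype=float)
    accuracy = np.asarray(accuracy, dtype=float)
    line = 1.0 - randomness * (1.0 - 1.0 / num_classes)
    return np.mean((accuracy - line) ** 2)

def criterion2(reg_values, accuracy):
    """Criterion 2: finite-difference estimate of d(accuracy)/d(lambda) along
    the accuracy-over-regularization curve; a steep drop marks the point where
    the model stops being able to fit noise."""
    return np.gradient(np.asarray(accuracy, dtype=float),
                       np.asarray(reg_values, dtype=float))
```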
Criterion 3 - Two modes in margin histograms. Finally, we derive a criterion based on the margin histograms of the fully randomized training data. Looking again at Figure 1, we see that the accuracy of the underfitting curves remains constant if we decrease the randomness just a tiny bit. While training our model, several things happen simultaneously. At the start of training the network outputs random noise. If the parameter setting leads to a successful training, the model typically has a phase in which it outputs one class for all inputs. Looking at the margins of this phase, the distribution has two modes: a negative one, and a positive one containing a mass of 1/(number of classes) of the examples. Once we train further, the two modes combine into one and the network starts to converge. Our third criterion looks for these two modes: because the two modes have to collapse before the accuracy can increase, the accuracy will remain constant for a tiny decrease in randomness, and we are thus in the underfitting regime. The model overfits if the margin histograms computed with the fully random data D_1 and with the true data D_0 are both positive. The model underfits if the margin histogram computed with the fully randomized training data D_1 has two modes. This criterion can also be evaluated by simply computing the quotient of positive margins to negative margins. So if i denotes the index of a training sample and h_i its margin, then the criterion is computed by crit3 = ( Σ_i χ(h_i > 0) ) / ( Σ_i χ(h_i < 0) ), where χ denotes the indicator function. The criterion is fulfilled if crit3 is close to

FIG1: The plots show the accuracy curves of the network trained on cifar10 over different degrees of randomness with increasing degree of l1-regularization, after 19999 iterations. We select λ such that the blue accuracy curve stays below the optimal green line and is convex. Following our convexity criterion, we choose λ*_1 = 0.00011 as the regularization factor.

Cifar10. We tested our criteria on cifar-10. This data set consists of 50000 images, equally divided into ten classes. As described above, we generated random samples of the training data, which we denote by cifar-10^0.0, cifar-10^0.1, ..., cifar-10^1.0. The superscript stands for the fraction of randomized samples per class. So cifar-10^0.0 stands for the original data. In cifar-10^0.5 half of the labels of each class are randomly permuted while the other half remains fixed. Finally, in cifar-10^1.0 all class labels are randomly permuted. Details on the architecture and training parameters can be found in the appendix. Mnist. The data sets were created similarly to cifar10. Noise. This data set consists of 50000 randomly generated rgb noise images, equally divided into ten classes. For each class a set of 100 random 5x5x3 patches was generated. Each image samples 36 patches from the set of patches to build a 30x30x3 rgb image. In this way 50000 training and 50000 test images were generated. The randomization was done as for cifar10. The capacity of a neural network is controlled by the number of parameters and (optionally) by a regularization term. In this section we show that the techniques of the paper can be used to tune the regularization parameter in order to avoid overfitting while still performing well. Convexity criterion. For the convexity criterion, we compute the accuracy curves over increasingly randomized data for different regularization parameters, as shown in FIG1. We expect our algorithm to achieve zero training error for the true data. Furthermore, for p-percent of randomized training data we expect the algorithm to achieve an error of p-percent. In other words, we expect the algorithm to stay below a straight line starting at 1 for data^0.0 and going down to 1/(number of classes) for data^1.0. Let us call this line the optimal line. We pick the smallest λ for which the training accuracy curve stays below the optimal line and is convex. l1-regularization. In this set of experiments we varied the regularization factor of the l1 regularization of our Alexnet-type network on cifar10. To get more reliable results, we ran the experiments for five different random samples. From the samples we computed the mean and the standard deviation, which we indicate in the plots by shading. Our experiments show that all criteria lead to a similar regularization factor λ*.
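A sketch of how criterion 3 and the λ-selection rule above can be evaluated. The margin h_i is taken here to be the score of the labelled class minus the largest score of any other class, and the tolerance is an illustrative assumption.

```python
import numpy as np

def margins(logits, labels):
    """h_i: score of the labelled class minus the largest other-class score."""
    logits = np.asarray(logits, float)
    rows = np.arange(len(labels))
    true_scores = logits[rows, labels]
    masked = logits.copy()
    masked[rows, labels] = -np.inf
    return true_scores - masked.max(axis=1)

def crit3_margin_ratio(h):
    """Criterion 3: quotient of positive to negative margins on the fully randomized data D_1."""
    h = np.asarray(h, float)
    return np.sum(h > 0) / max(np.sum(h < 0), 1)

def select_lambda(lambdas, curves, r, num_classes, tol=1e-3):
    """Pick the smallest lambda whose accuracy-over-randomness curve stays below
    the optimal line and is convex (nonnegative second differences, assuming
    equally spaced randomness values)."""
    line = 1.0 - np.asarray(r, float) * (1.0 - 1.0 / num_classes)
    for lam, acc in sorted(zip(lambdas, curves), key=lambda t: t[0]):
        acc = np.asarray(acc, float)
        if np.all(acc <= line + tol) and np.all(np.diff(acc, 2) >= -tol):
            return lam
    return None
```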
Figure 4: Accuracy over randomness curves. (a) With increasing l2-regularization the curves approach the straight line, from which the optimal regularization parameter 0.0023 can be determined. (b) The same network trained on mnist with varying dropout rate and no other regularization; with decreasing dropout rates the curves approach the straight line, and in the setting of the experiment a dropout rate of 0.1 is optimal. (c) An Alexnet-type neural network trained with l2 regularization with parameter 0.0012, with accuracy curves computed at the steps 1000, 2000, ..., 60000; for 1000 training steps the curve is the lowest, and for 60000 training steps the curve approaches the constant 1.

In FIG1 the optimal line is depicted in green. According to our criterion we choose λ1 = 0.00011. Both criteria (C2) and (C3) lead to a similar regularization parameter; details can be found in appendix B.1. l2-regularization. In this set of experiments we varied the regularization factor of the l2 regularization of a (different) Alexnet-type network on mnist. Again we ran the experiments for five different random samples and computed the mean and the standard deviation, which we indicate in the plots by shading. The resulting accuracy curves are shown in Figure 4 (a); the resulting parameter is λ2 = 0.0023. Dropout. Dropout is another way to regularize a neural network. To analyze its effect on overfitting, we trained an Alexnet-type network on mnist with dropout probabilities ranging from 0.1 to 1.0. The resulting accuracy curves are shown in Figure 4 (b); in the setting of the experiment the optimal dropout value is 0.1. Early stopping. Early stopping follows the rationale that less training time leads to better generalization properties. In Figure 4 (c) we can observe this rationale: the more we train the network, the more the network overfits. The curves are a bit more bumpy as we trained the models only once. In this section we compare the training accuracy with the test accuracy. The plots of Figure 5 show nine panels with different regularization factors. In each panel the accuracy computed on the training data is plotted in blue, and the accuracy on the test data in red. So each blue dot represents the accuracy on one of the training sets cifar10^0.0, ..., cifar10^1.0. Each red point is computed for the same test set. With no regularization the model highly overfits: the model learns randomized data rather well, as shown for λ1 = 0.0 in FIG6. We further observe that it is easier for the network to learn more random data. With no l1-regularization the accuracy curve decreases with increasing randomness in the data and then starts to increase again. We attribute this to correlations in the data set, which make the training data more complex for lower noise levels; higher noise levels destroy these correlations and the complexity of the data is reduced. Recall that the small animal classes and also the car/truck classes are correlated in cifar10. Finally, we note that the variance for learning entirely random data is very high. As the regularization parameter increases, the blue accuracy curves show that the network is less able to learn random data. For λ1 = 0.00011 the curve is convex, indicating the optimal regularization parameter. For λ1 > 0.00011 the model underfits. Looking again at λ1 = 0.00011, we see that the model is able to learn from noisy data with lots of label noise. This confirms that l1-regularization is a good parameter to adjust the model complexity. Plots similar to Figure 5 can be used to analyze early stopping and overfitting. Due to lack of space we will only describe the results verbally. Early in the training, at 19999 steps, we see that almost all curves are convex; hence the models underfit.
Once we train the networks to 59999 iterations, the model trained without any regularization begins to overfit, with the others still underfitting. Training the networks further, more and more models begin to overfit. Flipping through the plots in the appendix illustrates this nicely. We also looked at models with different filter sizes in the first convolutional layer. We trained several networks with filter sizes ranging from 2 × 2 to 9 × 9 and a regularization parameter of λ1 = 0.00011. We observed that all networks showed underfitting, revealed by the convexity of the accuracy over randomness curves. This hints that l1 regularization of the kernel weights is more important for overfitting than the number of parameters. Experiments with different network depths showed a similar behavior. In the paper we measure the capacity of a neural network by injecting different noise levels into the training data. The criteria we introduced in the paper are based on the assumption that the network should only be able to achieve a training accuracy corresponding to the injected noise level. This advances previous methods in the neural network setting, which rely on either a hold-out set, heuristics, or generalization theory, all of which are not mature enough to detect overfitting at present. In our experiments we saw that the hyperparameters fall into two classes: those which have no effect on overfitting (kernel size) and those which control overfitting (regularization factor, number of iterations). In other experiments on mnist and cifar10 we observed the dominance of l1 regularization for overfitting, while structural parameters such as network width and depth did not have an effect. The convexity criterion is the most reliable, as outliers and high variance are easily detected; on the downside, it requires the most training runs. The steep decrease criterion only requires training the model on the real data and the fully random data, and it can be used to narrow the parameter range; on the downside, correlations between the classes are not easily detected by the steep decrease criterion. The mode criterion is the easiest to use, as only the fully randomized training data is needed; on the downside, the margin plots are not always easy to interpret. Either the entire margin distribution is positive, in which case the model clearly overfits, or two modes are observed in the plots, in which case the model clearly underfits. Yet most of the time the margin distribution is somewhere in between, which makes it hard to make a judgment based on the margin histograms alone. Let us put criteria (C2) and (C3) in perspective. Criterion (C2) comes close to what has been done before: we basically train a network on true and randomly shuffled labels and analyze the attained accuracies. An analysis of the margin histograms for networks trained on true labels and random labels has also been explored before. For example, BID2 uses margin histograms to conclude that regularization only seems to bring minor benefits to the test error, and BID10 uses the margin histograms of networks trained on fully randomized labels and true labels to discuss normalization effects. Our contribution is to show that the regularization parameter can be set such that the network does train on true labels but is unable to do so for random labels. Both criteria are able to detect this effect. All criteria can be numerically evaluated and put into an automated parameter search. At present it seems that the number of parameters does not contribute to overfitting.
Thus to use the criteria of this paper one would proceed in two steps: search for an architecture which achieves zero training error, and then reducing the complexity of the model by regularizing it such that it does not overfit. So the additional burden is not that much.Analyzing neural networks with randomized training data has been done before BID15 ). In the paper the authors show that a neural network is able to train random labels, and they note that regularization... is neither necessary nor by itself sufficient for controlling generalization error. In the paper we argued that l 1 -normalization of the kernel weights is a good measure to control the capacity of a network. In the experiment we saw that adjusting l 1 -normalization leads to models which do not overfit and hence we expect them to generalize better. Using an l 1 regularization (the LASSO) is one of the popular choices for regularization. The rational is typically to enforce sparsity of the network weights. Our Lemma 3.1.1 adds another reason to the list why it might be a good choice for convolutional networks. We want to highlight another unexpected illustrative . By tuning the hyper parameter to pass our overfitting tests, we see that the test accuracy of the model is much higher than the training accuracy. This shows that our criteria can also be used to learn from noisy data and that a generalization gap does not need to be a bad thing. Although the paper focused on neural networks the methods can be applied for other machine learning algorithms as well. For example it would be interesting to apply our criteria for a systematic architecture search. Another line of research could investigate whether the criteria make adversarial attacks more difficult. Figure 5: The plots shows the accuracy of the network trained on cifar10 over different degrees of randomness with increasing degree of l 1 -regularization. The network trained for 199999 iterations. For the error curves five different samples were sampled for each data point. The network was evaluated on the training set (depicted in blue) and on the test set (depicted in red). We observe that the model does not overfit for λ = 0.00011. Furthermore, we note that with this choice of λ the model is able to learn from noise data, as the red curve is clearly above the green noise level curve. Matrix norms. A matrix A: V → W can be viewed as a linear operator between two normed spaces (V, || · || p) and (W, || · || q). We equip these normed spaces with a p-norms. So for x ∈ V we set ||x|| p = (i |x i | p) 1 p and for y ∈ W we set ||y|| q = (i |y i | q) 1 q. These vector space norms induce a matrix norm for A: DISPLAYFORM0 Special cases of this norm include the spectral norm ||A|| σ = ||A|| 2→2, ||A|| 1→1 = max 1≤j≤n m i=1 |a i j| and ||A|| ∞→∞ = max 1≤j≤m n i=1 |a i j|. In the paper we use the following fact: DISPLAYFORM1 A definition of these norms can be found in books about matrix analysis see for example §2.3.1 BID3 for a definition of the ||A|| p→q norm (in a slightly different notation). Equation FORMULA12 can be found in Corollary 2.3.2 of the same reference. Generalized mean. For a non zero real number p and positive reals x 1,..., x n we define the generalized mean by DISPLAYFORM2 We will use the following inequality which holds true for all real p < q and positive x y 1 ),..., (x n, y n) drawn iid from some probability distribution over R d0 × {1, . . 
., k}, with probability at least 1 − δ over ((x i, y i)) n i=1, every margin γ > 0 and admissible network f: In our steep descent criterion we propose to detect the first change point at which the accuracy is not constant anymore but falls linearly. In the figure this is depicted by the green curves: around 10 −4 the accuracy begins to fall. As a measure of how much the net learned about the data, we also provide the accuracy curves for random data. We conclude from the gap between the red and the blue curve that the net learned something meaningful about the data, instead of just memorizing the data. DISPLAYFORM3 DISPLAYFORM4 In this section we empirically show that criteria (C2) and (C3) lead to similar than criterion (C1). Regularization. In these set of experiments we varied the regularization factor of the l 1 regularization of our Alexnet-type network. To get more reliable we run the experiments for five different random samples. From the samples we computed the mean and the standard deviation, which we indicated in the plots by shading. The following experiments show that all criteria lead to a similar regularization factor λ *.Steep decrease criterion. To test our steep decrease criterion we computed the accuracy over regularization curve. As the regularization increases we expect the accuracy to drop. This is shown in FIG5. Following our criterion the point of interest λ * occurs, at which the curve is not constant anymore. This occurs around λ 1 = 0.0001.Mode criterion. To test our mode criterion we computed the margin histograms of cifar-10 1.0 after training. As the regularization increases we expect the distribution to split up in two modes. This can be seen in FIG6. Following our criterion the point of interest occurs around λ 1 = 0.00011. Architecture and training parameter. Figure 8 shows a sketch of the architecture used in most experiments. We start with 5 × 5 convolutional filters, followed by 3 × 3-convolutional filters and two fully connected layers. In all layers the linear part is followed by a relu nonlinearity. We did not use a bias. In addition we apply three non overlapping 2 × 2-max-pooling. Further we used drop-out between the first and second fully connected layer. Everything was coded in Tensorflow, with SGD, a fixed learning rate of 0.01, a batch size of 32 and l 1 normalization of all weights. The networks were trained for 199999 steps. The input images were normalized to zero mean and standard deviation using one of tensorflows build in function. Additionally, we used some standard data augmentation. Here we provide additional plots of our l 1 regularization experiments, showing that all criterion have their uses. Figure shows how we would detect overfitting with the margin based criterion. Let us recall that a positive margin corresponds to a correct classification and a negative margin corresponds to an incorrect classification. In (a) and (e) of Figure D.1 the model clearly overfits, as it is able to learn random data (e) and true data (a). In (c) and (g) of Figure D.1, we clearly see underfitting the model neither able to learn random data nor true data. Based on this observation we would select λ = 0.0001 as our regularization parameter. Here we report similar plot for our early stopping experiments. Flipping through the plots we see that initial the regularization factor does not matter at 19999 steps all curves are convex. At later iterations the models begin to memorize the data. 
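For concreteness, the Alexnet-type architecture described in this appendix could be written as follows in present-day TensorFlow/Keras. The filter counts, the width of the fully connected layers, and the cifar10-sized input are illustrative assumptions; the text only fixes the 5×5 and 3×3 kernels, the three non-overlapping 2×2 max-poolings, the ReLU nonlinearities, the absence of biases, the dropout between the two fully connected layers, the l1 penalty on all weights, and SGD with learning rate 0.01 and batch size 32.

```python
import tensorflow as tf
from tensorflow.keras import layers, models, regularizers

def alexnet_type(l1_factor=1e-4, dropout_rate=0.5, num_classes=10):
    """Sketch of the Alexnet-type network; filter counts and dense width are assumptions."""
    reg = regularizers.l1(l1_factor)
    model = models.Sequential([
        layers.Input(shape=(32, 32, 3)),   # cifar10-sized input (assumption)
        layers.Conv2D(64, 5, padding='same', activation='relu',
                      use_bias=False, kernel_regularizer=reg),
        layers.MaxPooling2D(2),
        layers.Conv2D(64, 3, padding='same', activation='relu',
                      use_bias=False, kernel_regularizer=reg),
        layers.MaxPooling2D(2),
        layers.Conv2D(128, 3, padding='same', activation='relu',
                      use_bias=False, kernel_regularizer=reg),
        layers.MaxPooling2D(2),
        layers.Flatten(),
        layers.Dense(256, activation='relu', use_bias=False, kernel_regularizer=reg),
        layers.Dropout(dropout_rate),
        layers.Dense(num_classes, use_bias=False, kernel_regularizer=reg),
    ])
    # The paper trains with plain SGD, learning rate 0.01, batch size 32.
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                  metrics=['accuracy'])
    return model
```

The sweep over regularization factors and randomness levels then amounts to training this model on each D_p for every (λ, p) pair and recording the final training accuracy.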
Figure 17: The plots show the accuracy of the network trained on cifar10 over different degrees of randomness with increasing degree of l1-regularization. The network was trained for 159999 iterations. For the error curves, five different samples were drawn for each data point. The network was evaluated on the training set (depicted in blue) and on the test set (depicted in red). Figure 18: The plots show the accuracy of the network trained on cifar10 over different degrees of randomness with increasing degree of l1-regularization. The network was trained for 179999 iterations. For the error curves, five different samples were drawn for each data point. The network was evaluated on the training set (depicted in blue) and on the test set (depicted in red). Figure 19: The plots show the accuracy of the network trained on cifar10 over different degrees of randomness with increasing degree of l1-regularization. The network was trained for 199999 iterations. For the error curves, five different samples were drawn for each data point. The network was evaluated on the training set (depicted in blue) and on the test set (depicted in red).
B1lKtjA9FQ
We introduce and analyze several criteria for detecting overfitting.
Partially observable Markov decision processes (POMDPs) are a widely-used framework to model decision-making with uncertainty about the environment and under stochastic outcome. In conventional POMDP models, the observations that the agent receives originate from fixed known distribution. However, in a variety of real-world scenarios the agent has an active role in its perception by selecting which observations to receive. Due to combinatorial nature of such selection process, it is computationally intractable to integrate the perception decision with the planning decision. To prevent such expansion of the action space, we propose a greedy strategy for observation selection that aims to minimize the uncertainty in state. We develop a novel point-based value iteration algorithm that incorporates the greedy strategy to achieve near-optimal uncertainty reduction for sampled belief points. This in turn enables the solver to efficiently approximate the reachable subspace of belief simplex by essentially separating computations related to perception from planning. Lastly, we implement the proposed solver and demonstrate its performance and computational advantage in a range of robotic scenarios where the robot simultaneously performs active perception and planning. In the era of information explosion it is crucial to develop decision-making platforms that are able to judiciously extract useful information to accomplish a defined task. The importance of mining useful data appears in many applications including artificial intelligence, robotics, networked systems and Internet of things. Generally in these applications, a decision-maker, called an agent, must exploit the available information to compute an optimal strategy toward a given objective. Partially observable Markov decision processes (POMDPs) provide a framework to model sequential decision-making with partial perception of the environment and under stochastic outcomes. The flexibility of POMDPs and their variants in modeling real-world problems has led to extensive research on developing efficient algorithms for finding near-optimal policies. Nevertheless, the majority of previous work on POMDPs either deal with sole perception or sole planning. While independent treatment of perception and planning deteriorates performance, an integrated approach usually becomes computationally intractable. Thereupon, one must establish a trade-off between optimality and tractability when determining how much perception and planning rely on each other. We show that by restricting the perception to the class of subset selection problems and exploiting submodular optimization techniques, it is possible to partially decouple computing perception and planning policies while considering their mutual effect on the overall policy value. In this work, we consider joint perception and planning in POMDPs. More specifically, we consider an agent that decides about two sets of actions; perception actions and planning actions. The perception actions, such as employing a sensor, only affect the belief of the agent regarding the state of the environment. The planning actions, such as choosing navigation direction, are the ones that affect the transition of the environment from one state to another. In subset selection problems, at each time step, due to power, processing capability, and cost constraints, the agent can pick a subset of available information sources along a planning action. 
The subset selection problem arise in various applications in control systems and signal processing, in wireless sensor networks, as well as machine learning BID7 and have been widely-studied (; BID11 . However, the previous work on sensor selection problems assume that the planning strategy is known, while in this work, we simultaneously learn a selection strategy and a planning strategy. Exact POMDP solvers optimize the value function over all reachable belief points. However, finding exact solution to POMDPs is PSPACE-complete BID18 which deems solving even small POMDPs computationally intractable. This has led to extensive search for nearoptimal algorithms. A common technique is to sample a finite set of belief points that approximate the reachable subspace of belief and apply value iteration over this set, e.g., (; BID3 BID14 ; ; BID19 . BID19 proved that the errors due to belief sampling is bounded where the bound depends on the density of the belief set. A well-established offline POMDP solver is SARSOP BID13 . SARSOP, similar to HSVI , aims to minimize the gap between the lower and upper bounds on the value function by guiding the sampling toward the belief points that are reachable under union of optimal policies. In this paper, we show that the proposed greedy observation selection scheme leads to belief points that are on expectation close to the ones from the optimal (with respect to uncertainty reduction) set of observations, and hence value loss is small. An instance of active perception is dynamic sensor selection. BID12 proposes a reinforcement learning approach that uses Rènyi divergence to compute utility of sensing actions. BID8 formulated a single step sensor selection problem as semi-definite programming, however, it lacks theoretical guarantee. In Kalman filtering setting, developed a greedy selection scheme with near-optimal guarantee to minimize log-determinant of the error covariance matrix of estimated state. Some prior work such as (; ; BID16 model active perception as a POMDP. However, the most relevant work to ours are that of BID0 ;). BID0 proposed ρPOMDP framework where the reward depends on entropy of the belief. introduced POMDP-IR where the reward depends on accurate prediction about the state. established an equivalence property between ρPOMDP and POMDP-IR. Furthermore, they employed the submodularity of value function, under some conditions, to use greedy scheme for sensor selection. The main difference of our work is that we consider active perception as a means to accomplishing the original task while in these work, the active perception is the task itself and hence the POMDP rewards are metrics to capture perception quality. The problem of selecting an optimal set of sensors from a ground set under cardinality constraint is NP-hard . This hardness has motivated design of greedy algorithms since they make polynomial oracle calls to the objective function. Additionally, if the objective function is monotone non-decreasing and submodular, BID17 showed that a greedy selection achieves (1 − 1/e) approximation factor. BID15 and BID7 developed randomized greedy schemes that accelerate the selection process for monotone submodular and weak-submodular objective functions, respectively. BID11; BID10 have introduced different submodular information-theoretic objectives for greedy selection and have studied the theoretical guarantees of their maximization under different constraints. 
Here, we use the entropy of belief to capture the level of uncertainty in state and aim to select a subset of sensors that leads to maximum expected reduction in entropy. We employ the monotonicity and submodularity of the proposed objective to establish near-optimal approximation factor for entropy minimization. A summary of our contributions are as follows:• Formulating the active perception problem for POMDPs: We introduce a new mathematical definition of POMDPs, called AP 2 -POMDP, that captures active perception as well as planning. The objective is to find deterministic belief-based policies for perception and planning such that the expected discounted cumulative reward is maximized.• Developing a perception-aware point-based value iteration algorithm: To solve AP 2 -POMDP, we develop a novel point-based method that approximates the value function using a finite set of belief points. Each belief point is associated with a perception action and a planning action. We use the near-optimal guarantees for greedy maximization of monotone submodular functions to compute the perception action while the planning action is the of Bellman optimality equation. We further prove that greedy perception action leads to an expected reward that is close to that of optimal perception action. This section starts by giving an overview of the related concepts and then stating the problem. The standard POMDP definition models does not capture the actions related to perception. We present a different definition which we call AP 2 -POMDP as it models active perception actions as well as original planning actions. The active perception actions determine which subset of sensors (observations) the agent should receive. We restrict the set of states, actions, and observations to be discrete and finite. We formally define an AP 2 -POMDP below. DISPLAYFORM0 • S is the finite set of states.• A = A pl × A pr is the finite set of actions with A pl being the set of planning actions and A pr being the set of perception actions. A pr = {δ ∈ {0, 1} n | |δ| 0 ≤ k} constructs an n-dimensional lattice. Each component of an action δ ∈ A pr determines whether the corresponding sensor is selected.• k is the maximum number of sensor to be selected. DISPLAYFORM1 is the probabilistic transition function.• Ω = Ω 1 × Ω 2 ×... × Ω n is the partitioned set of observations, where each Ω i corresponds to the set of measurements observable by sensor i.• O: S × A × Ω → is the probabilistic observation function.• R: S × A pl → R is the reward function, and DISPLAYFORM2 At each time step, the environment is in some state s ∈ S. The agent takes an action β ∈ A pl that causes a transition to a state s ∈ S with probability P r(s |s, β) = T (s, β, s). At the same time step, the agent also picks k sensors by δ ∈ A pr. Then it receives an observation ω ∈ Ω with probability P r(ω|s, β, δ) = O(s, β, δ, ω), and a scalar reward R(s, β). Assumption 1. We assume that the observations from sensors are mutually independent given the current state and the previous action, i.e., ∀I 1, I 2 ⊆ {1, 2, . . ., n}, I 1 ∩ I 2 = ∅: DISPLAYFORM3 Let ζ(δ) = {i|δ(i) = 1} to denote the subset of sensors that are selected by δ. If Assumption 1 holds, then: DISPLAYFORM4 DISPLAYFORM5 The belief of the agent at each time step, denoted by b t is the posterior probability distribution of states given the history of previous actions and observations, i.e., h t = (a 0, ω 1, a 1, . . 
., a t−1, ω t).A well-known fact is that due to Markovian property, a sufficient statistics to represent history of actions and observations is belief (Åström, 1965;). Given the initial belief b 0, the following update equation holds between previous belief b and the belief b a,ω b after taking action a = (β, δ) and receiving observation ω: DISPLAYFORM6 The goal is to learn a deterministic policy to maximize E[DISPLAYFORM7 . A deterministic policy is a mapping from belief to actions π : B → A, where B is the set of belief states. Note that B constructs a (|S| − 1)-dimensional probability simplex. The POMDP solvers apply value iteration , a dynamic programming technique, to find the optimal policy. Let V be a value function that maps beliefs to values in R. The following recursive expression holds for V: DISPLAYFORM8 The value iteration converges to the optimal value function V * which satisfies the Bellman's optimality equation BID2. Once the optimal value function is learned, an optimal policy can be derived. An important outcome of FORMULA8 is that at any horizon, the value function is piecewise-linear and convex (PWLC) and hence, can be represented by a finite set of hyperplanes. Each hyperplane is associated with an action. Let α's to denote the corresponding vectors of the hyperplanes and let Γ t to be the set of α vectors at horizon t. Then, DISPLAYFORM9 This fact has motivated approximate point based solvers that try to approximate the value function by updating the hyperplanes over a finite set of belief points. Since the proposed algorithm is founded upon the theoretical from the field of submodular optimization, here, we overview the necessary definitions. Let X to denote a ground set and f a set function that maps an input set to a real number. DISPLAYFORM0 is the marginal value of adding element i to set T 1.Monotonicity states that adding elements to a set increases the function value while submodularity refers to diminishing returns property. Having stated the required , next, we state the problem. Problem 1. Consider a AP 2 -POMDP P = (S, A, k, T, Ω, O, R, γ) and an initial belief b 0. We aim to learn a policy π(b) = (β, δ) such that the expected discounted cumulative reward is maximized, i.e, It is worth noting that the perception actions affect the belief and subsequently the received reward in the objective function. DISPLAYFORM0 3 ACTIVE PERCEPTION WITH GREEDY SCHEME For variety of performance metrics, finding an optimal subset of sensors poses a computationally challenging combinatorial optimization problem that is NP-hard. Augmenting POMDP planning actions with n k active perception actions in a combinatorial expansion of the action space. Thereupon, it is infeasible to directly apply existing POMDP solvers to Problem 1. Instead of concatenating both sets of actions and treating them similarly, we propose a greedy strategy for selecting perception actions that aims to pick the sensors that in minimal uncertainty about the state. The key enabling factor is that the perception actions does not affect the transition, consequently, we can decompose the single-step belief update in into two steps: DISPLAYFORM1 This in turn implies that after a transition is made, the agent should pick a subset of observations that lead to minimal uncertainty in b DISPLAYFORM2 To quantify uncertainty in state, we use Shannon entropy of the belief. For a discrete random variable x, the entropy is defined as H(x) = − i p(x i) log p(x i). 
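To make the notation concrete, the model components and the two-step belief update above can be sketched as follows; the array shapes and names are our conventions, not part of the paper.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class AP2POMDP:
    """Container for the AP^2-POMDP tuple (S, A, k, T, Omega, O, R, gamma).

    T[s, beta, s2]     transition probability to s2 under planning action beta
    O[i][s, beta, w]   probability that sensor i reports w given the current
                       state s and the previous planning action beta
    R[s, beta]         reward
    """
    T: np.ndarray      # |S| x |A_pl| x |S|
    O: list            # n arrays, one per sensor, each |S| x |A_pl| x |Omega_i|
    R: np.ndarray      # |S| x |A_pl|
    k: int             # sensor budget
    gamma: float = 0.95  # discount factor (default is an assumption)

def transition_update(b, beta, T):
    """Prediction step: tilde_b(s') = sum_s T(s, beta, s') b(s)."""
    return T[:, beta, :].T @ b

def observation_update(b_tilde, beta, omega, selected, O):
    """Correction step for the selected sensors; omega maps a sensor index to
    its observed symbol. Uses the conditional independence of the sensors
    given the state (Assumption 1)."""
    posterior = b_tilde.astype(float).copy()
    for i in selected:
        posterior *= O[i][:, beta, omega[i]]
    return posterior / posterior.sum()
```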
An important property of entropy is its strict concavity on the simplex of belief points, denoted by ∆ B (Cover & BID4 . Further, the entropy is zero at the vertices of ∆ B and achieves its maximum, log |S|, at the center of ∆ B that corresponds to uniform distribution, i.e., when the uncertainty about the state is the highest. FIG0 demonstrates the entropy and its level sets for |S| = 3. Since the observation values are unknown before selecting the sensors, we optimize conditional entropy that yields the expected value of entropy over all possible observations. For discrete random variables x and y, conditional entropy is defined as DISPLAYFORM3 . Subsequently, with some algebraic manipulation, it can be shown that the conditional entropy of state given current belief with respect to δ is: DISPLAYFORM4 where ζ(δ) = {i 1, i 2, . . ., i k}. It is worth mentioning that b is the current distribution of s and is explicitly written only for the purpose of better clarity, otherwise, H(s|b, δ) = H(s|δ).To minimize entropy, we define the objective function as the following set function: DISPLAYFORM5 and the optimization problem as: DISPLAYFORM6 We propose a greedy algorithm, outlined in Algorithm 1 to find a near-optimal, yet efficient solution to. The algorithm takes as input the agent's belief and planning action. Then it iteratively adds elements from the ground set (set of all sensors) whose marginal gain with respect to f is maximal and terminates when k observations are selected. Algorithm 1 Greedy policy for perception action DISPLAYFORM7 ζ ← ζ ∪ {j *} 7: end for 8: return δ corresponding to ζ. Next, we derive a theoretical guarantee for the performance of the proposed greedy algorithm. The following lemma states the required properties to prove the theorem. The proof of the lemma follows from monotonicity and submodularity of conditional entropy BID9. See the appendix for the complete proof. Lemma 1. Let Ω = {ω 1, ω 2, . . ., ω n} to represent a set of observations of the state s that conditioned on the state, are mutually independent (Assumption 1 holds). Then, f (ζ), defined in, realizes the following properties: DISPLAYFORM8 2. f is monotone nondecreasing, and 3. f is submodular. The above lemma enables us to establish the approximation factor using the classical analysis in BID17. Theorem 1. Let ζ * to denote the optimal subset of observations with regard to objective function f (ζ), and ζ g to denote the output of the greedy algorithm in Algorithm 1. Then, the following performance guarantee holds: DISPLAYFORM9 Remark 1. Intuitively, one can interpret the minimization of conditional entropy as pushing the agent's belief toward the boundary of the probability simplex ∆ B. Due to convexity of POMDP value function on ∆ B , this in turn implies that the agent is moving toward regions of belief space that have higher value. Although Theorem 1 proves that the entropy of the belief point achieved by the greedy algorithm is close to the entropy of the belief point from the optimal solution, the key question is whether the value of these points are close. We assess this question in the following and show that at each time step, on expectation, the value from greedy scheme is close to the value from optimal observation selection with regard to. To that end, we first show that the distance between the two belief points is upper-bounded. Thereafter, using a similar analysis as that of BID19, we conclude that the difference between value function at these two points is upper-bounded. Theorem 2. 
Let the agent's current belief to be b and its planning action to be β. Consider the optimization problem in, and let δ * and δ g to denote the optimal perception action and the perception action obtained by the greedy algorithm, respectively. It holds that: Proof. We outline the sketch of the proof and bring the complete proof in the appendix. First, we show that minimizing conditional entropy of posterior belief is equivalent to maximizing KullbackLeibler (KL-) divergence between current belief and the posterior belief, i.e., D KL (b δ,ω b b). Next, we exploit Pythagorean theorem for KL-divergence alongside its convexity to find a relation between DISPLAYFORM10 DISPLAYFORM11 ). Afterwards, using Pinkster's inequality, we prove that the total variation distance between b DISPLAYFORM12 Proof. The proof is omitted for brevity. See the appendix for the proof. In this section, we propose a novel point-based value iteration algorithm to approximate the value function for AP 2 -POMDPs. The algorithm relies on the performance guarantee of the proposed greedy observation selection in previous section. Before describing the new point-based solver, we first overview how point-based solvers operate. Algorithm 2 outlines the general procedure for a point-based solver. It starts with an initial set of belief points B 0 and their corresponding α vectors. Then it performs a Bellman backup for each point to update α vectors. Next, it prunes α vectors to remove dominated ones. Afterwards, it samples a new set of belief points and repeats these steps until convergence or other termination criteria is met. The difference between solvers is in how they apply sampling and pruning. The sampling step usually depends on the reachability tree of belief space, see FIG2. The state-of-the-art point-based methods do not traverse the whole reachability tree, but they try to have enough sample points to provide a good coverage of the reachable space. Note that the combinatorial number of actions due to observation selection highly expand the size of the reachability tree. To avoid dealing with perception actions in the reachability tree, we apply the greedy scheme to make the choice of δ deterministically dependent on β and previous belief. To that end, we modify the BackUp step of point-based value iteration. The proposed BackUp step can be combined with any sampling and pruning method in other solvers, such as the ones developed by Spaan & Vlassis FORMULA6 In point-based solver each witness belief point is associated with an α vector and an action. Nevertheless, for AP 2 -POMDPs, each witness point is associated with two actions, β and δ. We compute δ based on greedy maximization of so that given b and β, δ is uniquely determined. Henceforth, we can rewrite using to obtain: DISPLAYFORM0 whereδ = argmax δ∈A pr f (ζ(δ)) and f is computed atb β b. This way, we can partially decouple the computation of perception action from the computation necessary for learning the planning policy. Inspired by the in the previous section, we propose the BackUp step detailed in Algorithm 3 to compute the new set of α vectors from the previous ones using Bellman backup operation. What distinguishes this algorithm from conventional Bellman backup step is the inclusion of perception actions. Basically, we need to compute the greedy perception action for each belief point and each action (Line 7). This in turn affects computation of Γ b,β,ω t as it represents a different set for each belief point (Lines 9-13). 
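Algorithm 1 above admits a compact implementation: evaluate the conditional entropy of the predicted belief for each candidate sensor and add the best one until the budget k is exhausted. A sketch, reusing the conventions from the previous snippet; the enumeration over joint observations is naive and only meant to be illustrative.

```python
import numpy as np
from itertools import product

def conditional_entropy(b_tilde, beta, selected, O):
    """H(s | b_tilde, omega_selected): expected posterior entropy over all
    joint realizations of the selected sensors' observations."""
    spaces = [range(O[i].shape[2]) for i in selected]
    H = 0.0
    for omega in product(*spaces):
        joint = b_tilde.astype(float).copy()
        for i, w in zip(selected, omega):
            joint *= O[i][:, beta, w]
        p_omega = joint.sum()
        if p_omega > 0:
            post = joint / p_omega
            post = post[post > 0]
            H -= p_omega * np.sum(post * np.log(post))
    return H

def greedy_selection(b_tilde, beta, O, k):
    """Algorithm 1: greedily add the sensor whose inclusion yields the largest
    reduction of the conditional entropy (the largest marginal gain of f),
    until k sensors are selected."""
    selected = []
    for _ in range(k):
        remaining = [i for i in range(len(O)) if i not in selected]
        best = min(remaining,
                   key=lambda j: conditional_entropy(b_tilde, beta, selected + [j], O))
        selected.append(best)
    return selected
```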
However, notice that this added complexity is significantly lower than concatenating the combinatorial perception actions with the planning actions and using conventional point-based solvers. See the appendix for detailed complexity analysis. To evaluate the proposed algorithm for active perception and planning, we developed a point-based value iteration solver for AP 2 -POMDPs. We initialized the belief set by uniform sampling from ∆ B BID6. To focus on the effect of perception, we did not apply a sampling step, i.e, the belief set is fixed throughout the iterations. However, one can integrate any sampling method such as the ones proposed by; BID13. The α vectors are initialized by 1 1−γ min s,a R(s, a).Ones(|S|) . Furthermore, to speedup the solver, one can employ a randomized backup step, as suggested by. The solver terminates once the difference between value functions in two consecutive iterations falls below a predefined threshold. We also implemented a random perception policy that selects a subset of information sources, uniformly at random, at each backup step. We implemented the solver in Python 2.7 and ran the simulations on a laptop with 2.0 GHz Intel Core i7-4510U CPU and with 8.00 GB RAM. The first scenario models a robot that is moving in a 1-D discrete environment. The robot can only move to adjacent cells and its navigation actions are A pl = {lef t, right, stop}. The robot's transitions are probabilistic due to possible actuation errors. The robot does not have any sensor and it relies on a set of cameras for localization. There is one camera at each cell that outputs a probability for b ∈ B t do 7:δ = Greedy argmax δ∈A pr f (ζ(δ)) 8: DISPLAYFORM0 for α ∈ Γ t−1 do 11: DISPLAYFORM1 DISPLAYFORM2 DISPLAYFORM3 Figure 3: The robot moves in a grid while communicating with the cameras to localize itself. There is a camera at each state on the perimeter. The accuracy of measurements made by each camera depends on the distance of the camera from that state. The robot's objective is to reach the goal state, labeled by star, while avoiding the obstacles.distribution over the position of the robot. The camera's certainty is higher when the robot's position is close to it. To model the effect of robot's position on the accuracy of measurements, we use a binomial distribution with its mean at the cell that camera is on. The binomial distribution represents the state-dependent accuracy. The robot's objective is to reach an specific cell in the map. For that purpose, at each time step, the robot picks a navigation action and selects k camera from the set of n cameras. After the solver terminates, we evaluate the computed policy. To that end, we run 1000 iterations of Monte Carlo simulations. The initial state of the robot is the origin of the map and its initial belief is uniform over the map. Figure 4 -(a) demonstrates the discounted cumulative reward, averaged over 1000 Monte Carlo runs, for random selection of 1 and 2 information sources, and greedy selection of 1 and 2 information sources. It can be seen that the greedy perception policy significantly outperforms the random perception. entropy of greedy perception, compared to random perception, shows less uncertainty of the robot when taking planning actions. See the appendix for further . The second setting is a variant of first scenario where the map is 2-D. Therefore the navigation actions of robot are A pl = {up, right, down, lef t, stop}. 
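The modified backup of Algorithm 3 then combines the greedy choice with a standard point-based Bellman backup. The following is only a sketch under the same conventions, reusing transition_update and greedy_selection from the snippets above; Gamma is the current list of alpha vectors.

```python
import numpy as np
from itertools import product

def backup_point(b, model, Gamma):
    """One backup at belief b: for every planning action, choose the perception
    action greedily at the predicted belief, build the corresponding alpha
    vector, and keep the best (alpha vector, planning action) pair."""
    T, O, R, k, gamma = model.T, model.O, model.R, model.k, model.gamma
    n_states, n_pl = R.shape
    best_alpha, best_val, best_beta = None, -np.inf, None
    for beta in range(n_pl):
        b_tilde = transition_update(b, beta, T)
        selected = greedy_selection(b_tilde, beta, O, k)
        alpha = R[:, beta].astype(float).copy()
        for omega in product(*[range(O[i].shape[2]) for i in selected]):
            # P(omega | s', beta) for every successor state s'
            p_obs = np.ones(n_states)
            for i, w in zip(selected, omega):
                p_obs *= O[i][:, beta, w]
            weight = b_tilde * p_obs          # unnormalized posterior belief
            if weight.sum() == 0:
                continue
            a_star = max(Gamma, key=lambda a: weight @ a)
            alpha += gamma * (T[:, beta, :] * p_obs[None, :]) @ a_star
        val = b @ alpha
        if val > best_val:
            best_alpha, best_val, best_beta = alpha, val, beta
    return best_alpha, best_beta
```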
The rest of the setting is similar to 1-D case, except the cameras' positions, as they are now placed around the perimeter of the map. Additionally, the robot has to now avoid the obstacles in the map. The reward is 10 at the goal state. -4 at the obstacles, and -1 in other states. We applied the proposed point-based solver with both random perception and greedy perception on the 2-D example. Next, we let the robot to run for a horizon of 25 steps and we terminated the simulations once the robot reached the goal. Figure 5 illustrates the normalized frequency of visiting each state for each perception algorithm. It can be seen that the policy learned by greedy active perception leads to better obstacle avoidance. See the appendix for further . In this paper, we studied joint active perception and planning in POMDP models. To capture the structure of the problem, we introduced AP 2 -POMDPs that have to pick a cardinality-constrained subset of observations, in addition to original planning action. To tackle the computational challenge of adding combinatorial actions, we proposed a greedy scheme for observation selection. The greedy scheme aims to minimize the conditional entropy of belief which is a metric of uncertainty about the state. We provided a theoretical analysis for the greedy algorithm that led to boundedness of value function difference between optimal entropy reduction and its greedy counterpart. Furthermore, founded upon the theoretical guarantee of greedy active perception, we developed a point-based value iteration solver for AP 2 -POMDPs. The idea introduced in the solver to address active perception is general and can be applied on state-of-the-art point-based solvers. Lastly, we implemented and evaluated the proposed solver on a variety of robotic navigation scenarios. In this section, we provide the proofs to the lemmas and theorems stated in the paper. First, in the next lemma, we show that the objective function defined for uncertainty reduction has the required properties for the analysis by BID17, namely being normalized, monotone, and submodular. Lemma 1. Let Ω = {ω 1, ω 2, . . ., ω n} to represent a set of observations of the state s that conditioned on the state, are mutually independent (Assumption 1 holds). Then, f (ζ), defined in, realizes the following properties: DISPLAYFORM0 2. f is monotone nondecreasing, and 3. f is submodular. Proof. Notice thatb β b is explicitly present to determine the current distribution of s and it is not a random variable. Therefore, for simplicity, we omit that in the following proof. It is clear that DISPLAYFORM1 To prove the monotonicity, consider ζ 1 ⊂ [n] and j ∈ [n]\ζ 1. Then, DISPLAYFORM2 where (a) and (c) are due to Bayes' rule for entropy, (b) follows from the conditional independence assumption and joint entropy definition, (d) is due to the conditional independence assumption, and (e) stems from the fact that conditioning does not increase entropy. The monotonicity of the objective function means that if the number of obtained observations are higher, the conditional entropy will be lower, and hence, on expectation, the uncertainty in the state will be lower. Furthermore, from the third line of above proof, we can derive the marginal gain, i.e., the value of adding one sensor, as: DISPLAYFORM3 To prove submodularity, let ζ 1 ⊆ ζ 2 ⊂ [n] and j ∈ [n]\ζ 2. Then, DISPLAYFORM4 where (a) is based on the fact that conditioning does not increase entropy, and (b) from ζ 1 ⊆ ζ 2. 
The submodularity (diminishing returns property) of objective function indicates that as the number of obtained observations increases, the value of adding a new observation will decrease. In the next theorem, we exploit the properties of the proposed objective function to analyze the performance of the greedy scheme. Theorem 1. Let ζ * to denote the optimal subset of observations with regard to objective function f (ζ), and ζ g to denote the output of the greedy algorithm in Algorithm 1. Then, the following performance guarantee holds: DISPLAYFORM5 Proof. The properties of f stated in Lemma 1 along the theoretical analysis of greedy algorithm by BID17 yields DISPLAYFORM6 Using the definition of f (ζ) and rearranging the terms, we obtain the desired . Before stating the proof to Theorem 2, that bounds the distance of belief points from the greedy and optimal entropy minimization algorithms, we need to present a series of propositions and lemmas. Mutual information between two random variables is a positive and symmetric measure of their dependence and is defined as: DISPLAYFORM0 Mutual information, due to its monotonicity and submodularity, has inspired many subset selection algorithms BID10. In the following proposition, we express the relation between conditional entropy and mutual information. Proposition 1. Minimizing conditional entropy of the state with respect to a set of observations is equivalent to maximizing the mutual information of state and the set of observations. This equivalency is due to the definition of mutual information, i.e., I(s; DISPLAYFORM1 and the fact that H(s) is computed atb β b which amounts to a constant value that does not affect selection procedure. Additionally, notice that is the same as the definition of normalized objective function of greedy algorithm in.Another closely-related information-theoretic concept is Kullback-Leibler (KL-) divergence. The KL-divergence, also known as relative entropy, is a non-negative and non-symmetric measure of difference between two distributions. The KL-divergence from q(x) to p(x) is: DISPLAYFORM2 The following relation between mutual information and KL-divergence exists: DISPLAYFORM3 which allows us to state the next proposition. Proposition 2. The mutual information of state and a set of observations is the expected value of the KL-divergence from prior belief to posterior belief over all realizations of observations, i.e., I(s;, ω ∼ i∈ζ * Ω i to denote prior belief (after taking planning action), posterior belief after greedy perception action, and posterior belief after optimal perception action, respectively. So far, we have established a relation between minimizing the conditional entropy of posterior belief and maximizing the expected KL-divergence from prior belief to posterior belief, i.e., D KL (p g p 0) (See Proposition 1 and Proposition 2). To relate DISPLAYFORM4 DISPLAYFORM5, we state the next lemma. But first, we bring information-geometric definitions necessary for proving the lemma. Definition 4. Let p to be a probability distribution over a finite alphabet. An I-sphere with center p and radius ρ is defined as: DISPLAYFORM6 is called the I-projection of p on Λ. Lemma 2. Instate the definition of p 0, p g, and p *. The following inequality holds on expectation: DISPLAYFORM7 Proof. Consider the set Λ g = {p ∈ ∆ B |H(p) ≥ H(p g) that contain probability distributions whose entropy is lower-bounded by entropy of p g. Since entropy is concave over ∆ B, its hypographs are convex. 
Consequently Λ g, the projection of a hypograph onto ∆ B, is a convex set. Furthermore, due to monotonicity of conditional entropy, i.e., expected value of entropy over observations, we know that p 0 ∈ Λ g. Besides, Due to optimality of ζ *, it holds that DISPLAYFORM8 which in turn yields p * ∈ ∆ B \Λ g. FIG10 demonstrates these facts for an alphabet of size 3. p g is the I-projection of p * on Λ g. Therefore, by exploiting the analogue of Pythagoras' theorem for KL-divergence (Csiszár, 1975), we conclude: DISPLAYFORM9 A direct of the above lemma, after taking the expectation over i∈[n] ω i, is: DISPLAYFORM10 In the following theorem, we use the stated lemma to bound the expected KL-divergence distance between greedy and optimal selection strategies. Theorem 4. The KL-divergence between p g and p * is upper-bounded, i.e., DISPLAYFORM11 where C 3 is a constant. Proof. Notice that while KL-divergence is not symmetric, the following fact still holds: where (a) follows from the fact that α * is the gradient of the optimal value function, (b) is due toHölder's inequality, and (c) is the of Theorem 2. Taking C 2 = C 1 max{|Rmax|,|Rmin|} 1−γ yields the desired . In this section, we compare the computational complexity of a point-based value iteration method that works with the concatenated action space, with the computational complexity of the proposed point-based method that picks the perception actions based on a greedy approach. First, we compute the computations required for a single backup step in the point-based method with concatenated action space. To that end, consider a fixed set of sampled belief points B. Let Γ to denote the current set of α vectors. Further, for the simplicity of analysis, assume that the number of possible observations from each information source is |Ω i | =Ω, ∀i ∈ [n]. The cardinality of a concatenated action space is |A| = |A pr ||A pl | = n k |A pl |. Therefore, the complexity of a single backup step would be O(On the other hand, applying greedy algorithm to pick a perception action requires O(n × k) calls to an oracle that computes the objective function (or equivalently, the marginal gain). Here the objective function is the conditional entropy whose complexity with a naive approach in the k th iteration is O(Ω k × |S| 2). Therefore, applying Algorithm 3 as the backup step leads to O(|A pl | × |B| × n × k ×Ω k × |S| 2 + |A pl | × |B| ×Ω k × |Γ| × |S| 2 + |B| × |A pl | × |S| ×Ω k) operations. Hence, the proposed approach, as a of exploiting the structure of action space, would lead to significant computational gain, especially for large n. Figure 7 depicts the history of the belief entropy for the 2-D navigation when applying the proposed point-based method with random selection step and the proposed greedy selection step. As expected, the greedy selection leads to smaller entropy and hence, less uncertainty about the state. The corresponding average discounted cumulative reward after running 1000 Monte Carlo simulations is -18.8 for point-based value iteration with random selection step and -14.5 for point-based value iteration with greedy selection step, which demonstrates the superiority of the proposed method. We further analyzed the effect of number of selected cameras on the agent's performance in the 1-D navigation scenario. Figure 8 illustrates the value function for a subset of sampled belief points after the algorithm has been terminated. 
It can be seen that the diminishing returns property of entropy with respect to number of selected observations is propagated through the value function as well.
S1lTg3RcFm
We develop a point-based value iteration solver for POMDPs with active perception and planning tasks.
Deep neural networks, in particular convolutional neural networks, have become highly effective tools for compressing images and solving inverse problems including denoising, inpainting, and reconstruction from few and noisy measurements. This success can be attributed in part to their ability to represent and generate natural images well. Contrary to classical tools such as wavelets, image-generating deep neural networks have a large number of parameters---typically a multiple of their output dimension---and need to be trained on large datasets. In this paper, we propose an untrained simple image model, called the deep decoder, which is a deep neural network that can generate natural images from very few weight parameters. The deep decoder has a simple architecture with no convolutions and fewer weight parameters than the output dimensionality. This underparameterization enables the deep decoder to compress images into a concise set of network weights, which we show is on par with wavelet-based thresholding. Further, underparameterization provides a barrier to overfitting, allowing the deep decoder to have state-of-the-art performance for denoising. The deep decoder is simple in the sense that each layer has an identical structure that consists of only one upsampling unit, pixel-wise linear combination of channels, ReLU activation, and channelwise normalization. This simplicity makes the network amenable to theoretical analysis, and it sheds light on the aspects of neural networks that enable them to form effective signal representations. Data models are central for signal and image processing and play a key role in compression and inverse problems such as denoising, super-resolution, and compressive sensing. These data models impose structural assumptions on the signal or image, which are traditionally based on expert knowledge. For example, imposing the assumption that an image can be represented with few non-zero wavelet coefficients enables modern (lossy) image compression BID1 and efficient denoising BID6.In recent years, it has been demonstrated that for a wide range of imaging problems, from compression to denoising, deep neural networks trained on large datasets can often outperform methods based on traditional image models BID19 BID0 BID18 BID4 BID22. This success can largely be attributed to the ability of deep networks to represent realistic images when trained on large datasets. Examples include learned representations via autoencoders BID12 and generative adversarial models BID8. Almost exclusively, three common features of the recent success stories of using deep neural network for imaging related tasks are i) that the corresponding networks are over-parameterized (i.e., they have much more parameters than the dimension of the image that they represent or generate), ii) that the networks have a convolutional structure, and perhaps most importantly, iii) that the networks are trained on large datasets. An important exception that breaks with the latter feature is a recent work by Ulyanov et al. BID20, which provides an algorithm, called the deep image prior (DIP), based on deep neural networks, that can solve inverse problems well without any training. Specifically, Ulyanov et al. demonstrated that fitting the weights of an over-parameterized deep convolutional network to a single image, together with strong regularization by early stopping of the optimization, performs competitively on a variety of image restoration problems. 
This is surprising because it does not involve a training dataset, which means that the notion of what makes an image'natural' is contained in a combination of the network structure and the regularization. However, without regularization the proposed network has sufficient capacity to overfit to noise, preventing meaningful image denoising. These prior works demonstrating the effectiveness of deep neural networks for image generation beg the question whether there may be a deep neural network model of natural images that is underparameterized and whose architecture alone, without algorithmic assistance, forms an efficient model for natural images. In this paper, we propose a simple image model in the form of a deep neural network that can represent natural images well while using very few parameters. This model thus enables image compression, denoising, and solving a variety of inverse problems with close to or state of the art performance. We call the network the deep decoder, due to its resemblance to the decoder part of an autoencoder. The network does not require training, and contrary to previous approaches, the network itself incorporates all assumptions on the data, is under-parameterized, does not involve convolutions, and has a simplicity that makes it amenable to theoretical analysis. The key contributions of this paper are as follows:• The network is under-parameterized. Thus, the network maps a lower-dimensional space to a higher-dimensional space, similar to classical image representations such as sparse wavelet representations. This feature enables image compression by storing the coefficients of the network after its weights are optimized to fit a single image. In Section 2, we demonstrate that the compression is on-par with wavelet thresholding BID1, a strong baseline that underlies JPEG-2000. An additional benefit of underparameterization is that it provides a barrier to overfitting, which enables regularization of inverse problems.• The network itself acts as a natural data model. Not only does the network require no training (just as the DIP BID20); it also does not critically rely on regularization, for example by early stopping (in contrast to the DIP). The property of not involving learning has at least two benefits: The same network and code is usable for a number of applications, and the method is not sensitive to a potential misfit of training and test data.• The network does not use convolutions. Instead, the network does have pixelwise linear combinations of channels, and, just like in a convolutional neural network, the weights are shared among spatial positions. Nonetheless, these are not convolutions because they provide no spatial coupling between pixels, despite how pixelwise linear combinations are sometimes called'1x1 convolutions.' In contrast, the majority of the networks for image compression, restoration, and recovery have convolutional layers with filters of nontrivial spatial extent BID19; BID0; BID18; BID4 BID22. This work shows that relationships characteristic of nearby pixels of natural images can be imposed directly by upsampling layers.• The network only consists of a simple combination of few building blocks, which makes it amenable to analysis and theory. For example, we prove that the deep decoder can only fit a small proportion of noise, which, combined with the empirical observation that it can represent natural images well, explains its denoising performance. The remainder of the paper is organized as follows. 
In Section 2, we first demonstrate that the deep decoder enables concise image representations. We formally introduce the deep decoder in Section 3. In Section 4, we show the performance of the deep decoder on a number of inverse problems such as denoising. In Section 5 we discuss related work, and finally, in Section 6 we provide theory and explanations on what makes the deep decoder work. Intuitively, a model describes a class of signals well if it is able to represent or approximate a member of the class with few parameters. In this section, we demonstrate that the deep decoder, an untrained, non-convolutional neural network, defined in the next section, enables concise representation of an image, on par with state of the art wavelet thresholding. The deep decoder is a deep image model G: R^N → R^n, where N is the number of parameters of the model, and n is the output dimension, which is (much) larger than the number of parameters (n >> N). The parameters of the model, which we denote by C, are the weights of the network, and not the input of the network, which we will keep fixed. To demonstrate that the deep decoder enables concise image representations, we choose the number of parameters of the deep decoder, N, such that it is a small fraction of the output dimension of the deep decoder, i.e., the dimension of the images. We draw 100 images from the ImageNet validation set uniformly at random and crop the center to obtain a 512x512 pixel color image. For each image x*, we fit a deep decoder model G(C) by minimizing the loss ||G(C) - x*||_2^2 with respect to the network parameters C using the Adam optimizer. We then compute for each image the corresponding peak signal-to-noise ratio, defined as 10 log_10(1/MSE), where MSE is the mean squared error between G(Ĉ), the image generated by the fitted network, and the original image x*. We compare the compression performance to wavelet compression BID1 by representing each image with the N largest wavelet coefficients. Wavelets, which underlie JPEG 2000, a standard for image compression, are one of the best methods to approximate images with few coefficients. In Fig. 1 we depict the results. It can be seen that for large compression factors (3 · 512^2 / N = 32.3), the representation by the deep decoder is slightly better for most images (i.e., is above the red line), while for smaller compression factors (3 · 512^2 / N = 8), the wavelet representation is slightly better. This experiment shows that deep neural networks can represent natural images well with very few parameters and without any learning. The observation that, for small compression factors, wavelets enable more concise representations than the deep decoder is intuitive because any image can be represented exactly with sufficiently many wavelet coefficients. In contrast, there is no reason to believe a priori that the deep decoder has zero representation error because it is underparameterized. The main point of this experiment is to demonstrate that the deep decoder is a good image model, which enables applications like solving inverse problems, as in Section 4. However, it also suggests that the deep decoder can be used for lossy image compression, by quantizing the coefficients C and saving the quantized coefficients. In the appendix, we show that image representations of the deep decoder are not sensitive to perturbations of its coefficients, thus quantization does not have a detrimental effect on the image quality. Deep networks were used successfully before for the compression of images BID19 BID0 BID18.
In contrast to our work, which is capable of compressing images without any learning, the aforementioned works learn an encoder and decoder using convolutional recurrent neural networks BID19 and convolutional autoencoders BID18 based on training data. We consider a decoder architecture that transforms a randomly chosen and fixed tensor B 0 ∈ R n0×k0 consisting of k 0 many n 0 -dimensional channels to an n d × k out dimensional image, where k out = 1 for a grayscale image, and k out = 3 for an RGB image with three color channels. Throughout, n i has two dimensions; for example our default configuration has n 0 = 16 × 16 and n d = 512 × 512. The network transforms the tensor B 0 to an image by pixel-wise linearly combining the channels, upsampling operations, applying rectified linear units (ReLUs), and normalizing DISPLAYFORM0... lin. comb., upsampling, ReLU, CN lin. comb., sigmoid Figure 1: The deep decoder (depicted on the right) enables concise image representations, onpar with state-of-the-art wavelet based compression. The crosses on the left depict the PSNRs for 100 randomly chosen ImageNet-images represented with few wavelet coefficients and with a deep decoder with an equal number of parameters. A cross above the red line means the corresponding image has a smaller representation error when represented with the deep decoder. The deep decoder is particularly simple, as each layer has the same structure, consisting of a pixel-wise linear combination of channels, upsampling, ReLU nonlinearities, and channelwise normalization (CN). the channels. Specifically, the channels in the (i + 1)-th layer are given by DISPLAYFORM1 Here, the coefficient matrices C i ∈ R ki×ki+1 contain the weights of the network. Each column of the tensor B i C i ∈ R ni×ki+1 is formed by taking linear combinations of the channels of the tensor B i in a way that is consistent across all pixels. Then, cn(·) performs a channel normalization operation which is equivalent to normalizing each channel individually, and can be viewed as a special case of the popular batch normalization proposed in BID13. Specifically, let Z i = relu(U i B i C i) be the channels in the i-th layer, and let z ij be the j-th channel in the i-th layer. Then channel normalization performs the following transformation: DISPLAYFORM2, where mean and var compute the empirical mean and variance, and γ ij and β ij are parameters, learned independently for each channel, and is a fixed small constant. Learning the parameter γ and β helps the optimization but is not critical. This is a special case of batch normalization with batch size one proposed in BID13, and significantly improves the fitting of the model, just like how batch norm alleviates problems encountered when training deep neural networks. The operator U i ∈ R ni+1×ni is an upsampling tensor, which we choose throughout so that it performs bi-linear upsampling. For example, if the channels in the input have dimensions n 0 = 16×16, then the upsampling operator U 0 upsamples each channel to dimensions 32 × 32. In the last layer, we do not upsample, which is to say that we choose the corresponding upsampling operator as the identity. Finally, the output of the d-layer network is formed as DISPLAYFORM3 where Fig. 1 for an illustration. Throughout, our default architecture is a d = 6 layer network with k i = k for all i, and we focus on output images of dimensions n d = 512 × 512 and number of channels k out = 3. Recall that the parameters of the network are given by C = {C 0, C 1, . . 
., C d}, and the output of the network is only a function of C, since we choose the tensor B 0 at random and fix it. Therefore, we write x = G(C). Note that the number of parameters is given by DISPLAYFORM4 DISPLAYFORM5 where the term 2k i corresponds to the two free parameters associated with the channel normalization. Thus, the number of parameters is N = dk 2 + 2dk + 3k. In the default architectures with d = 6 and k = 64 or k = 128, we have that N = 25,536 (for k = 64) and N =100,224 (k = 128) out of an RGB image space of dimensionality 512 × 512 × 3 = 786,432 parameters. We finally note that naturally variations of the deep decoder are possible; for example in a previous version of this manuscript, we applied upsampling after applying the relu-nonlinearity, but found that applying it before yields slightly better . While the deep decoder does not use convolutions, its structure is closely related to that of a convolutional neural network. Specifically, the network does have pixelwise linear combinations of channels, and just like in a convolutional neural network, the weights are shared among spatial positions. Nonetheless, pixelwise linear combinations are not proper convolutions because they provide no spatial coupling of pixels, despite how they are sometimes called 1 × 1 convolutions. In the deep decoder, the source of spatial coupling is only from upsampling operations. In contrast, a large number of networks for image compression, restoration, and recovery have convolutional layers with filters of nontrivial spatial extent BID19; BID0; BID18; BID4; BID22. Thus, it is natural to ask whether using linear combinations as we do, instead of actual convolutions yields better . Our simulations indicate that, indeed, linear combinations yield more concise representations of natural images than p × p convolutions, albeit not by a huge factor. Recall that the number of parameters of the deep decoder with d layers, k channels at each layer, and 1 × 1 convolutions is N (d, k; 1) = dk 2 + 3k + 2dk. If we consider a deep decoder with convolutional layers with filters of size p × p, then the number of parameters is: DISPLAYFORM0 If we fix the number of channels, k, but increase p to 3, the representation error only decreases since we increase the number of parameters (by a factor of approximately 32). We consider image reconstruction as described in Section 2. For a meaningful comparison, we keep the number of parameters fixed, and compare the representation error of a deep decoder with p = 1 and k = 64 (the default architecture in our paper) to a variant of the deep decoder with p = 3 and k = 22, so that the number of parameters is essentially the same in both configurations. We find that the representation of the deep decoder with p = 1 is better (by about 1dB, depending on the image), and thus for concise image representations, linear combinations (1 × 1 convolutions) appear to be more effective than convolutions of larger spatial extent. In this section, we use the deep decoder as a structure-enforcing model or regularizers for solving standard inverse problems: denoising, super-resolution, and inpainting. In all of those inverse problems, the goal is to recover an image x from a noisy observation y = f (x) + η. Here, f is a known forward operator (possibly equal to identity), and η is structured or unstructured noise. We recover the image x with the deep decoder as follows. 
Motivated by the finding from the previous section that a natural image x can (approximately) be represented with the deep decoder as G(C), we estimate the unknown image from the noisy observation y by minimizing the loss ||f(G(C)) - y||_2^2 with respect to the model parameters C. Let Ĉ be the result of the optimization procedure. We estimate the image as x̂ = G(Ĉ). We use the Adam optimizer for minimizing the loss, but have obtained comparable results with gradient descent. Note that this optimization problem is non-convex and we might not reach a global minimum. Throughout, we consider the least-squares loss (i.e., we take || · || to be the ℓ2 norm), but the loss function can be adapted to account for structure of the noise. We remark that fitting an image model to observations in order to solve an inverse problem is a standard approach and is not specific to the deep decoder or deep-network-based models in general. Specifically, a number of classical signal recovery approaches fit into this framework; for example solving a compressive sensing problem with ℓ1-norm minimization amounts to choosing the forward operator as f(x) = Ax and minimizing over x in an ℓ1-norm ball. We start with the perhaps most basic inverse problem, denoising. The motivation to study denoising is at least threefold: First, denoising is an important problem in practice; second, many inverse problems can be solved as a chain of denoising steps BID15; and third, the problem is simple to model mathematically, and thus a common entry point for gaining intuition on a new method.
Figure 2: An application of the deep decoder for denoising the astronaut test image. The deep decoder has performance on par with state of the art untrained denoising methods, such as the DIP method BID20 and the BM3D algorithm BID5.
Given a noisy observation y = x + η, where η is additive noise, we estimate an image with the deep decoder by minimizing the least squares loss ||G(C) - y||_2^2, as described above. The results in Fig. 2 and Table 1 demonstrate that the deep decoder has denoising performance on par with state of the art untrained denoising methods, such as the related Deep Image Prior (DIP) method BID20 (discussed in more detail later) and the BM3D algorithm BID5. Since the deep decoder is an untrained method, we only compared to other state-of-the-art untrained methods (as opposed to learned methods such as BID22). Why does the deep decoder denoise well? In a nutshell, from Section 2 we know that the deep decoder can represent natural images well even when highly underparametrized. In addition, as a consequence of being under-parameterized, the deep decoder can only represent a small proportion of the noise, as we show analytically in Section 6, and as demonstrated experimentally in FIG2. Thus, the deep decoder "filters out" a significant proportion of the noise, and retains most of the signal. How to choose the parameters of the deep decoder? The larger k, the larger the number of latent parameters and thus the smaller the representation error, i.e., the error that the deep decoder makes when representing a noise-free image. On the other hand, the smaller k, the fewer parameters, and the smaller the range space of the deep decoder G(C), and thus the more noise the method will remove. The optimal k trades off those two errors; larger noise levels require smaller values of k (or some other form of regularization).
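To make the architecture of Section 3 and the fitting procedure just described concrete (including the role of k), below is a minimal PyTorch sketch of the default deep decoder (d = 6, k = 64) and the denoising fit. This is our own illustrative reconstruction from the paper's description, not the authors' released code; the learning rate, the number of iterations, and the use of BatchNorm2d as the channel normalization are assumptions.

```python
import torch
import torch.nn as nn

def build_deep_decoder(k=64, k_out=3, d=6):
    """d layers of: 1x1 linear combination -> bilinear upsampling -> ReLU ->
    channel normalization (batch norm with batch size one); the last layer
    does not upsample; a final 1x1 combination and a sigmoid produce the image."""
    layers = []
    for i in range(d):
        layers.append(nn.Conv2d(k, k, kernel_size=1, bias=False))  # pixelwise lin. comb.
        if i < d - 1:                                               # no upsampling in the last layer
            layers.append(nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False))
        layers.append(nn.ReLU(inplace=True))
        layers.append(nn.BatchNorm2d(k))                            # channel normalization
    layers.append(nn.Conv2d(k, k_out, kernel_size=1, bias=False))
    layers.append(nn.Sigmoid())
    return nn.Sequential(*layers)

k, d = 64, 6
net = build_deep_decoder(k=k, d=d)
n_params = sum(p.numel() for p in net.parameters())
print(n_params, d * k ** 2 + 2 * d * k + 3 * k)      # both 25536: matches N = dk^2 + 2dk + 3k

# Fixed random input tensor B0 with k channels of size 16x16
# (five upsamplings by a factor of two give a 512x512 output).
B0 = torch.rand(1, k, 16, 16)

# Denoising: fit G(C) to the noisy observation y and read out the estimate G(C_hat).
y = torch.rand(1, 3, 512, 512)                       # stand-in for a noisy image in [0, 1]
opt = torch.optim.Adam(net.parameters(), lr=0.01)    # lr and iteration count are guesses
for step in range(3000):
    opt.zero_grad()
    loss = ((net(B0) - y) ** 2).mean()               # least-squares loss
    loss.backward()
    opt.step()
x_hat = net(B0).detach()                             # denoised estimate
```

The same loop with a smaller k removes more noise at the cost of a larger representation error, which is exactly the trade-off discussed above.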
If the noise is significantly larger, then the method requires either choosing k smaller, or it requires another means of regularization, for example early stopping of the optimization. For example k = 64 or 128 performs best out of {32, 64, 128}, for a PSNR of around 20dB, while for a PSNR of about 14dB, k = 32 performs best. We next super-resolve images with the deep denoiser. We define a forward model f that performs downsampling with the Lanczos filter by a factor of four. We then downsample a given image by a factor of four, and then reconstruct it with the deep decoder (with k = 128, as before). We compare performance to bi-cubic interpolation and to the deep image prior, and find that the deep decoder outperforms bicubic interpolation, and is on-par with the deep image prior (see Table 1 in the appendix). Finally, we use the deep decoder for inpainting, where we are given an inpainted image y, and a forward model f mapping a clean image to an inpainted image. The forward model f is defined by a mask that describes the inpainted region, and simply maps that part of the image to zero. FIG1 and Table 1 demonstrate that the deep decoder performs well on the inpainting problems; however, the deep image prior performs slightly better on average over the examples considered. For the impainting problem we choose a significantly more expressive prior, specifically k = 320. Image compression, restoration, and recovery algorithms are either trained or untrained. Conceptually, the deep decoder image model is most related to untrained methods, such as sparse representations in overcomplete dictionaries (for example wavelets BID6 and curvelets BID17). A number of highly successful image restoration and recovery schemes are not directly based on generative image models, but rely on structural assumptions about the image, such as exploiting self-similarity in images for denoising BID5 and super-resolution BID7 ).Since the deep decoder is an image-generating deep network, it is also related to methods that rely on trained deep image models. Deep learning based methods are either trained end-to-end for tasks ranging from compression BID19 BID0 BID18 BID4 BID22 to denoising BID4 BID22, or are based on learning a generative image model (by training an autoencoder or GAN BID12 BID8) and then using the ing model to solve inverse problems such as compressed sensing BID3, denoising BID11, phase retrieval, and blind deconvolution BID2, by minimizing an associated loss. In contrast to the deep decoder, where the optimization is over the weights of the network, in all the aforementioned methods, the weights are adjusted only during training and then are fixed upon solving the inverse problem. Most related to our work is the Deep Image Prior (DIP), recently proposed by Ulyanov et al. BID20. The deep image prior is an untrained method that uses a network with an hourglass or encoder-decoder architecture, similar to the U-net and related architectures that work well as autoencoders. The key differences to the deep decoder are threefold: i) the DIP is over-parameterized, whereas the deep decoder is under-parameterized. ii) Since the DIP is highly over-parameterized, it critically relies on regularization through early stopping and adding noise to its input, whereas the deep decoder does not need to be regularized (however, regularization can enhance performance). iii) The DIP is a convolutional neural network, whereas the deep decoder is not. 
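For reference, the super-resolution and inpainting experiments above differ from denoising only in the forward operator f entering the loss ||f(G(C)) - y||_2^2. A hedged sketch with our own function names follows; average pooling stands in for the Lanczos filter of the paper, so this is an approximation of the described setup.

```python
import torch.nn.functional as F

def f_identity(x):
    """Denoising: the forward operator is the identity."""
    return x

def f_downsample(x, factor=4):
    """Super-resolution: crude stand-in for Lanczos downsampling by `factor`."""
    return F.avg_pool2d(x, kernel_size=factor)

def f_inpaint(x, mask):
    """Inpainting: the mask is zero on the missing region and one elsewhere."""
    return x * mask

def recovery_loss(decoder_output, y, forward_op):
    """The loss ||f(G(C)) - y||^2 minimized over the decoder weights C."""
    return ((forward_op(decoder_output) - y) ** 2).sum()
```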
We further illustrate point ii) comparing the DIP and deep decoder by denoising the astronaut image from Fig. 2. In FIG2 we plot the Mean Squared Error (MSE) over the number of iterations of the optimizer for fitting the noisy astronaut image x + η. Note that to fit the model, we minimize the error G(C) − (x + η) 2 2, because we are only given the noisy image, but we plot the MSE between the representation and the actual, true image G(C t) − x 2 2 at iteration t. Here, C t are the parameters of the deep decoder after t iterations of the optimizer. In FIG2 and (c), we plot the loss or MSE associated with fitting the noiseless astronaut image, x (G(C t) − x 2 2 ) and the noise itself, η, (G(C t) − η 2 2 ). Models are fitted independently for the noisy image, the noiseless image, and the noise. The plots in FIG2 show that with sufficiently many iterations, both the DIP and the DD can fit the image well. However, even with a large number of iterations, the deep decoder can not fit the noise well, whereas the DIP can. This is not surprising, given that the DIP is over-parameterized and the deep decoder is under-parameterized. In fact, in Section 6 we formally show that due to the The third panel shows the MSE of the output of DD or DIP for an image consisting purely of noise, as computed relative to that noise. Due to under-parameterization, the deep decoder can only fit a small proportion of the noise, and thus enables image denoising. Early stopping can mildly enhance the performance of DD; to see this note that in panel (a), the minimum is obtained at around 5000 iterations and not at 50,000. The deep image prior can fit noise very well, but fits an image faster than noise, thus early stopping is critical for denoising performance.underparameterization, the deep decoder can only fit a small proportion of the noise, no matter how and how long we optimize. As a consequence, it filters out much of the noise when applied to a natural image. In contrast, the DIP relies on the empirical observation that the DIP fits a structured image faster than it fits noise, and thus critically relies on early stopping. In the previous sections we empirically showed that the deep decoder can represent images well and at the same time cannot fit noise well. In this section, we formally show that the deep decoder can only fit a small proportion of the noise, relative to the degree of underparameterization. In addition, we provide insights into how the components of the deep decoder contribute to representing natural images well, and we provide empirical observations on the sensitivity of the parameters and their distribution. We start by showing that an under-parameterized deep decoder can only fit a proportion of the noise relative to the degree of underparameterization. At the heart of our argument is the intuition that a method mapping from a low-to a high-dimensional space can only fit a proportion of the noise relative to the number of free parameters. For simplicity, we consider a one-layer network, and ignore the batch normalization operation. Then, the networks output is given by DISPLAYFORM0 Here, we take C = (C 0, c 1), where C 0 is a k × k matrix and c 1 is a k-dimensional vector, assuming that the number of output channels is 1. While for the performance of the deep decoder the choice of upsampling matrix is important, it is not relevant for showing that the deep decoder cannot represent noise well. Therefore, the following statement makes no assumptions about the upsampling matrix U 0. Proposition 1. 
Consider a deep decoder with one layer and arbitrary upsampling and input matrices. That is, let B 0 ∈ R n0×k and U 0 ∈ R n×n0. Let η ∈ R n be zero-mean Gaussian noise with covariance matrix σI, σ > 0. Assume that k 2 log(n 0)/n ≤ 1/32. Then, with probability at least DISPLAYFORM1 The proposition asserts that the deep decoder can only fit a small portion of the noise energy, precisely a proportion determined by its number of parameters relative to the output dimension, n. Our The blue curves show a one-dimensional piecewise smooth signal, and the red crosses show estimates of this signal by a one-dimensional deep decoder with either linear or convex upsampling. We see that linear upsampling acts as an indirect signal prior that promotes piecewise smoothness.simulations and preliminary analytic suggest that this statement extends to multiple layers in that the lower bound becomes 1 − c DISPLAYFORM2, where c is a numerical constant. Note that the lower bound does not directly depend on the noise variance σ since both sides of the inequality scale with σ 2. Upsampling is a vital part of the deep decoder because it is the only way that the notion of locality explicitly enters the signal model. In contrast, most convolutional neural networks have spatial coupling between pixels both by unlearned upsampling, but also by learned convolutional filters of nontrivial spatial extent. The choice of the upsampling method in the deep decoder strongly affects the'character' of the ing signal estimates. We now discuss the impacts of a few choices of upsampling matrices U i, and their impact on the images the model can fit. No upsampling: If there is no upsampling, or, equivalently, if U i = I, then there is no notion of locality in the ing image. All pixels become decoupled, and there is then no notion of which pixels are near to each other. Specifically, a permutation of the input pixels (the rows of B 0) simply induces the identical permutation of the output pixels. Thus, if a deep decoder without upsampling could fit a given image, it would also be able to fit random permutations of the image equally well, which is practically equivalent to fitting random noise. Nearest neighbor upsampling: If the upsampling operations perform nearest neighbor upsampling, then the output of the deep decoder consists of piecewise constant patches. If the upsampling doubles the image dimensions at each layer, this would in patches of 2 d × 2 d pixels that are constant. While this upsampling method does induce a notion of locality, it does so too strongly in the sense that squares of nearby pixels become identical and incapable of fitting local variation within natural images. Linear and convex, non-linear upsampling: The specific choice of upsampling matrix affects the multiscale'character' of the signal estimates. To illustrate this, FIG3 shows the signal estimate from a 1-dimensional deep decoder with upsampling operations given by linear upsampling (x 0, x 1, x 2, . . .) → (x 0, 0.5x 0 + 0.5x 1, x 1, 0.5x 1 + 0.5x 2, x 2, . . .) and convex nonlinear upsampling given by (x 0, x 1, x 2, . . .) → (x 0, 0.75x 0 + 0.25x 1, x 1, 0.75x 1 + 0.25x 2, x 2, . . .). Note that while both models are able to capture the coarse signal structure, the convex upsampling in a multiscale fractal-like structure that impedes signal representation. In contrast, linear upsampling is better able to represent smoothly varying portions of the signal. 
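The difference between the two rules is easy to see numerically; the following numpy snippet (function name and test signal are ours) implements one step of each 1-D upsampling rule described above.

```python
import numpy as np

def upsample_1d(x, w):
    """Insert w*x[i] + (1-w)*x[i+1] between consecutive samples."""
    out = []
    for i in range(len(x) - 1):
        out.append(x[i])
        out.append(w * x[i] + (1 - w) * x[i + 1])
    out.append(x[-1])
    return np.array(out)

x = np.array([0.0, 1.0, 0.5, 2.0])
print(upsample_1d(x, 0.5))    # linear: 0, 0.5, 1, 0.75, 0.5, 1.25, 2
print(upsample_1d(x, 0.75))   # convex: 0, 0.25, 1, 0.875, 0.5, 0.875, 2
```

With w = 1 the rule degenerates to nearest-neighbor-style duplication, which corresponds to the overly strong form of locality discussed above.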
Linear upsampling in a deep decoder indirectly encodes the prior that natural signals are piecewise smooth and in some sense have approximately linear behavior at multiple scales 6.3 NETWORK INPUT Throughout, the network input is fixed. We choose the network input B 1 by choosing its entries uniformly at random. The particular choice of the input is not very important; it is however desirable that the rows are incoherent. To see this, as an extreme case, if any two rows of B 1 are equal and if the upsampling operation preserves the values of those pixels exactly (for example, as with the linear upsampling from the previous section), then the corresponding pixels of the output image is Figure 6: The left panel shows an image reconstruction after training a deep decoder on the MRI phantom image (PSNR is 51dB). The right panel shows how the deep decoder builds up an image starting from a random input. From top to bottom are the input to the network and the activation maps (i.e., relu(B i C i)) for eight out of the 64 channels in layers one to six. Table 1: Performance comparison of the deep decoder for denoising (DN), superresolution (SR), and inpainting (IP), in peak signal to noise ratio (PSNR). Note that identity corresponds to the PSNR of the noise and corruption in the DN and IP experiments, respectively. also exactly the same, which restricts the range space of the deep decoder unrealistically, since for any pair of pixels, the majority of natural images does not have exactly the same value at this pair of pixels. The deep decoder is tasked with coverting multiple noise channels into a structured signal primarily using pixelwise linear combinations, ReLU activation funcions, and upsampling. Using these tools, the deep decoder builds up an image through a series of successive approximations that gradually morph between random noise and signal. To illustrate that, we plot the activation maps (i.e., relu(B i C i)) of a deep decoder fitted to the phantom MRI test image (see Fig. 6). We choose a deep decoder with d = 5 layers and k = 64 channels. This image reconstruction approach is in contrast to being a semantically meaningful hierarchical representation (i.e., where edges get combined into corners, that get combined into simple sample, and then into more complicated shapes), similar to what is common in discriminative networks. RH is partially supported by NSF award IIS-1816986, an NVIDIA Academic GPU Grant, and would like to thank Ludwig Schmidt for helpful discussions on the deep decoder in general, and in particular for suggestions on the experiments in Section 2.Code to reproduce the is available at https://github.com/reinhardh/ supplement_deep_decoder APPENDIX A PROOF OF PROPOSITION 1Suppose that the network has one layer, i.e., G(C) = relu(U 0 B 0 C 0)c 1. We start by re-writing B 1 = relu(B 0 C 0) in a convenient form. For a given vector x ∈ R n, denote by diag(x > 0) the matrix that contains one on its diagonal if the respective entry of x is positive and zero otherwise. Let c jci denote the i-th column of C j, and denote by W ji ∈ {0, 1} k×k the corresponding diagonal matrix W ji = diag(U j B j c jci > 0). With this notation, we can write DISPLAYFORM0 where [c 1] i denotes the i-th entry of c 1. Thus, G(C) lies in the union of at-most-k 2 -dimensional subspaces of R n, where each subspace is determined by the matrices {W 0j} k j=1. The number of those subspaces is bounded by n k 2. 
This follows from the fact that for the matrix A:= U 0 B 0, by Lemma 1 below, the number of different matrices W 0j is bounded by n k. Since there are k matrices, the number of different sets of matrices is bounded by n k 2. Lemma 1. For any A ∈ R n×k and k ≥ 5, |{diag(Av > 0)A|v ∈ R k }| ≤ n k.Next, fix the matrixes {W 0j} j. As G(C) lies in an at-most-k 2 -dimensional subspace, let S be a k 2 -dimensional subspace that contains the range of G for these fixed {W 0j} j. It follows that Now, we make use of the following bound on the projection of the noise η onto a subspace. Lemma 2. Let S ⊂ R n be a subspace with dimension. Let η ∼ N (0, I n) and β ≥ 1. Then, P P S c η n, then P X − n ≥ 2 √ nx + 2x ≤ e −x, DISPLAYFORM1 With these, we obtain P [X ≥ 5βn] ≤ e −βn if β ≥ 1,P [X ≤ n/2] ≤ e −n/16.We have noiseless C 1 noisy C 2 noisy C 3 noisy C 4 noisy C 5 noisy C 6 noisy c 7 noisy Figure 7: Sensitivity to parameter perturbations of the weights in each layer, and images generated by perturbing the weights in different layers, and keeping the weights in the other layers constant. A.1 PROOF OF LEMMA 1Our goal is to count the number of sign patterns (Av > 0) ∈ {0, 1}. Note that this number is equal to the maximum number of partitions one can get when cutting a k-dimensional space with n many hyperplanes that all pass through the origin, and are perpendicular to the rows of A. This number if well known (see for example BID21) and is upper bounded by DISPLAYFORM2 Thus, DISPLAYFORM3 where the last inequality holds for k ≥ 5. The deep decoder is not overly sensitive to perturbations of its coefficients. To demonstrate this, fit the standard test image Barbara with a deep decoder with 6 layers and k = 128, as before. We then perturb the weights in a given layer i (i.e., the matrix C i) with Gaussian noise of a certain signal-tonoise ratio relative to C i and leave the other weights and the input untouched. We then measure the peak signal-to-noise ratio in the image domain, and plot the corresponding curve for each layer (see Fig. 7). It can be seen that the representation provided by the deep decoder is relatively stable with respect to perturbations of its coefficients, and that it is more sensitive to perturbations in higher levels. Finally, in FIG7 we depict the distribution of the weights of the network after fitted to the Barbara test image, and note that the weights are approximately Gaussian distributed. The distribution of the weighs is approximately Gaussian.
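The perturbation experiment of Fig. 7 is straightforward to reproduce; the sketch below assumes the `net` and `B0` objects from the earlier deep decoder sketch (already fitted to an image) and uses our own choice of SNR level, so it is an approximation of the reported procedure.

```python
import torch

def perturb_and_psnr(net, B0, param, snr_db):
    """Add Gaussian noise at the given SNR (in dB, relative to one weight
    tensor), measure the PSNR of the perturbed output against the original
    output, then restore the weights."""
    with torch.no_grad():
        x_ref = net(B0).clone()
        noise = torch.randn_like(param)
        noise *= param.norm() / (noise.norm() * 10 ** (snr_db / 20))
        param.add_(noise)
        mse = ((net(B0) - x_ref) ** 2).mean()
        param.sub_(noise)                       # undo the perturbation
    return 10 * torch.log10(1.0 / mse)

# Example: perturb each 1x1-combination weight tensor at 20 dB SNR.
# for name, p in net.named_parameters():
#     if p.dim() == 4:                          # the Conv2d weights
#         print(name, perturb_and_psnr(net, B0, p, snr_db=20.0).item())
```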
rylV-2C9KQ
We introduce an underparameterized, nonconvolutional, and simple deep neural network that can, without training, effectively represent natural images and solve image processing tasks like compression and denoising competitively.
In this paper we investigate the family of functions representable by deep neural networks (DNN) with rectified linear units (ReLU). We give an algorithm to train a ReLU DNN with one hidden layer to {\em global optimality} with runtime polynomial in the data size albeit exponential in the input dimension. Further, we improve on the known lower bounds on size (from exponential to super exponential) for approximating a ReLU deep net function by a shallower ReLU net. Our gap theorems hold for smoothly parametrized families of ``hard'' functions, contrary to countable, discrete families known in the literature. An example consequence of our gap theorems is the following: for every natural number $k$ there exists a function representable by a ReLU DNN with $k^2$ hidden layers and total size $k^3$, such that any ReLU DNN with at most $k$ hidden layers will require at least $\frac12k^{k+1}-1$ total nodes. Finally, for the family of $\R^n\to \R$ DNNs with ReLU activations, we show a new lowerbound on the number of affine pieces, which is larger than previous constructions in certain regimes of the network architecture and most distinctively our lowerbound is demonstrated by an explicit construction of a \emph{smoothly parameterized} family of functions attaining this scaling. Our construction utilizes the theory of zonotopes from polyhedral theory. Deep neural networks (DNNs) provide an excellent family of hypotheses for machine learning tasks such as classification. Neural networks with a single hidden layer of finite size can represent any continuous function on a compact subset of R n arbitrary well. The universal approximation was first given by Cybenko in 1989 for sigmoidal activation function BID4, and later generalized by Hornik to an arbitrary bounded and nonconstant activation function BID15. Furthermore, neural networks have finite VC dimension (depending polynomially on the number of edges in the network), and therefore, are PAC (probably approximately correct) learnable using a sample of size that is polynomial in the size of the networks BID1. However, neural networks based methods were shown to be computationally hard to learn BID1 and had mixed empirical success. Consequently, DNNs fell out of favor by late 90s.to address the issue of efficiently training DNNs. These include heuristics such as dropouts BID39, but also considering alternate deep architectures such as convolutional neural networks BID33, deep belief networks BID14, and deep Boltzmann machines BID31. In addition, deep architectures based on new non-saturating activation functions have been suggested to be more effectively trainable -the most successful and widely popular of these is the rectified linear unit (ReLU) activation, i.e., σ(x) = max{0, x}, which is the focus of study in this paper. In this paper, we formally study deep neural networks with rectified linear units; we refer to these deep architectures as ReLU DNNs. Our work is inspired by these recent attempts to understand the reason behind the successes of deep learning, both in terms of the structure of the functions represented by DNNs, Telgarsky (2015; ; BID17 ; BID36, as well as efforts which have tried to understand the non-convex nature of the training problem of DNNs better BID18 ; BID10 . Our investigation of the function space represented by ReLU DNNs also takes inspiration from the classical theory of circuit complexity; we refer the reader to BID2 ; BID37 ; BID16 ; BID32 ; BID0 for various surveys of this deep and fascinating field. 
In particular, our gap are inspired by like the ones by BID12, BID27 and BID38 which show a strict separation of complexity classes. We make progress towards similar statements with deep neural nets with ReLU activation. We extend the ReLU activation function to vectors x ∈ R n through entry-wise operation: σ(x) = (max{0, x 1}, max{0, x 2},..., max{0, x n}). For any (m, n) ∈ N, let A n m and L n m denote the class of affine and linear transformations from R m → R n, respectively. [ReLU DNNs, depth, width, size] For any number of hidden layers k ∈ N, input and output dimensions w 0, w k+1 ∈ N, a R w0 → R w k+1 ReLU DNN is given by specifying a sequence of k natural numbers w 1, w 2,..., w k representing widths of the hidden layers, a set of k affine transformations T i: R wi−1 → R wi for i = 1,..., k and a linear transformation T k+1: R w k → R w k+1 corresponding to weights of the hidden layers. Such a ReLU DNN is called a (k + 1)-layer ReLU DNN, and is said to have k hidden layers. The function f: R n1 → R n2 computed or represented by this ReLU DNN is DISPLAYFORM0 where • denotes function composition. The depth of a ReLU DNN is defined as k + 1. The width of a ReLU DNN is max{w 1, . . ., w k}. The size of the ReLU DNN is w 1 + w 2 +... + w k. Definition 2. We denote the class of R w0 → R w k+1 ReLU DNNs with k hidden layers of widths DISPLAYFORM1 Definition 3. [Piecewise linear functions] We say a function f: R n → R is continuous piecewise linear (PWL) if there exists a finite set of polyhedra whose union is R n, and f is affine linear over each polyhedron (note that the definition automatically implies continuity of the function because the affine regions are closed and cover R n, and affine functions are continuous). The number of pieces of f is the number of maximal connected subsets of R n over which f is affine linear (which is finite).Many of our important statements will be phrased in terms of the following simplex. Definition 4. Let M > 0 be any positive real number and p ≥ 1 be any natural number. Define the following set: DISPLAYFORM2 One of the main advantages of DNNs is that they can represent a large family of functions with a relatively small number of parameters. In this section, we give an exact characterization of the functions representable by ReLU DNNs. Moreover, we show how structural properties of ReLU DNNs, specifically their depth and width, affects their expressive power. It is clear from definition that any function from R n → R represented by a ReLU DNN is a continuous piecewise linear (PWL) function. In what follows, we show that the converse is also true, that is any PWL function is representable by a ReLU DNN. In particular, the following theorem establishes a one-to-one correspondence between the class of ReLU DNNs and PWL functions. Theorem 2.1. Every R n → R ReLU DNN represents a piecewise linear function, and every piecewise linear function R n → R can be represented by a ReLU DNN with at most log 2 (n + 1) + 1 depth. Proof Sketch: It is clear that any function represented by a ReLU DNN is a PWL function. To see the converse, we first note that any PWL function can be represented as a linear combination of piecewise linear convex functions. More formally, by Theorem 1 in , for every piecewise linear function f: R n → R, there exists a finite set of affine linear functions 1,..., k and subsets S 1,..., S p ⊆ {1, . . 
., k} (not necessarily disjoint) where each S i is of cardinality at most n + 1, such that DISPLAYFORM0 where s j ∈ {−1, +1} for all j = 1,..., p. Since a function of the form max i∈Sj i is a piecewise linear convex function with at most n + 1 pieces (because |S j | ≤ n + 1), Equation (is implementable by a two layer ReLU network and use this construction in an inductive manner to show that maximum of n + 1 numbers can be computed using a ReLU DNN with depth at most log 2 (n + 1).While Theorem 2.1 gives an upper bound on the depth of the networks needed to represent all continuous piecewise linear functions on R n, it does not give any tight bounds on the size of the networks that are needed to represent a given piecewise linear function. For n = 1, we give tight bounds on size as follows: Theorem 2.2. Given any piecewise linear function R → R with p pieces there exists a 2-layer DNN with at most p nodes that can represent f. Moreover, any 2-layer DNN that represents f has size at least p − 1.Finally, the main of this section follows from Theorem 2.1, and well-known facts that the piecewise linear functions are dense in the family of compactly supported continuous functions and the family of compactly supported continuous functions are dense in DISPLAYFORM1 is the space of Lebesgue integrable functions f such that |f | q dµ < ∞, where µ is the Lebesgue measure on R n (see BID29). DISPLAYFORM2 can be arbitrarily well-approximated in the L q norm (which for a function f is given by ||f || q = ( |f | q) 1/q ) by a ReLU DNN function with at most log 2 (n + 1) hidden layers. Moreover, for n = 1, any such L q function can be arbitrarily well-approximated by a 2-layer DNN, with tight bounds on the size of such a DNN in terms of the approximation. Proofs of Theorems 2.2 and 2.3 are provided in Appendix A. We would like to remark that a weaker version of Theorem 2.1 was observed in (, Proposition 4 .1) (with no bound on the depth), along with a universal approximation theorem (, Theorem 4. 3) similar to Theorem 2.3. The authors of BID9 also used a previous of Wang for obtaining their . In a subsequent work Boris Hanin BID11 has, among other things, found a width and depth upper bound for ReLU net representation of positive PWL functions on n. The width upperbound is n+3 for general positive PWL functions and n + 1 for convex positive PWL functions. For convex positive PWL functions his depth upper bound is sharp if we disallow dead ReLUs. Success of deep learning has been largely attributed to the depth of the networks, i.e. number of successive affine transformations followed by nonlinearities, which is shown to be extracting hierarchical features from the data. In contrast, traditional machine learning frameworks including support vector machines, generalized linear models, and kernel machines can be seen as instances of shallow networks, where a linear transformation acts on a single layer of nonlinear feature extraction. In this section, we explore the importance of depth in ReLU DNNs. In particular, in Section 3.1, we provide a smoothly parametrized family of R → R "hard" functions representable by ReLU DNNs, which requires exponentially larger size for a shallower network. Furthermore, in Section 3.2, we construct a continuum of R n → R "hard" functions representable by ReLU DNNs, which to the best of our knowledge is the first explicit construction of ReLU DNN functions whose number of affine pieces grows exponentially with input dimension. 
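As a concrete, simplified illustration of this depth-size phenomenon, the numpy sketch below composes the width-2 "sawtooth" map, itself a one-hidden-layer ReLU network with two units, k times and counts the resulting affine pieces. The piece count doubles with every composed layer, while Theorem 2.2 implies that any 2-layer ReLU DNN representing the k-fold composition needs at least 2^k - 1 nodes. This is a Telgarsky-style construction rather than the exact smoothly parametrized family of Section 3.1, and the piece counting on a grid is our own device.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def sawtooth(x):
    # One-hidden-layer ReLU net with two units: equals 2x on [0, 1/2]
    # and 2 - 2x on [1/2, 1].
    return 2 * relu(x) - 4 * relu(x - 0.5)

def compose(x, k):
    for _ in range(k):
        x = sawtooth(x)
    return x

def count_pieces(g, n=2 ** 16 + 1):
    """Count maximal affine pieces of g on [0, 1] by counting slope changes
    on a dyadic grid (every breakpoint of the composition lies on the grid)."""
    x = np.linspace(0.0, 1.0, n)
    slopes = np.diff(g(x)) / np.diff(x)
    return 1 + int(np.sum(np.abs(np.diff(slopes)) > 1e-6))

for k in range(1, 7):
    print(k, count_pieces(lambda x, k=k: compose(x, k)))
    # prints 2, 4, 8, 16, 32, 64: pieces grow as 2^k with depth, while each
    # additional layer of composition adds only two hidden units.
```

The family constructed in the paper makes the gap quantitative: instantiating the statement from the abstract at k = 5, there is a function computed exactly by a ReLU DNN with 5^2 = 25 hidden layers and 5^3 = 125 total nodes, while any ReLU DNN with at most 5 hidden layers representing it needs at least (1/2)·5^6 - 1, i.e., more than 7,800, total nodes.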
The proofs of the theorems in this section are provided in Appendix B. In this section, we are only concerned about R → R ReLU DNNs, i.e. both input and output dimensions are equal to one. The following theorem shows the depth-size trade-off in this setting. Theorem 3.1. For every pair of natural numbers k ≥ 1, w ≥ 2, there exists a family of hard functions representable by a R → R (k + 1)-layer ReLU DNN of width w such that if it is also representable by a (k + 1)-layer ReLU DNN for any k ≤ k, then this (k + 1)-layer ReLU DNN has size at least DISPLAYFORM0 In fact our family of hard functions described above has a very intricate structure as stated below. Theorem 3.2. For every k ≥ 1, w ≥ 2, every member of the family of hard functions in Theorem 3.1 has w k pieces and this family can be parametrized by DISPLAYFORM1 i.e., for every point in the set above, there exists a distinct function with the stated properties. The following is an immediate corollary of Theorem 3.1 by choosing the parameters carefully. Corollary 3.3. For every k ∈ N and > 0, there is a family of functions defined on the real line such that every function f from this family can be represented by a (k 1+) + 1-layer DNN with size k 2+ and if f is represented by a k +1-layer DNN, then this DNN must have size at least DISPLAYFORM2 Moreover, this family can be parametrized as, DISPLAYFORM3 A particularly illuminative special case is obtained by setting = 1 in Corollary 3.3: Corollary 3.4. For every natural number k ∈ N, there is a family of functions parameterized by the set DISPLAYFORM4 such that any f from this family can be represented by a k 2 + 1-layer DNN with k 3 nodes, and every k + 1-layer DNN that represents f needs at least DISPLAYFORM5 We can also get hardness of approximation versions of Theorem 3.1 and Corollaries 3.3 and 3.4, with the same gaps (upto constant terms), using the following theorem. Theorem 3.5. For every k ≥ 1, w ≥ 2, there exists a function f k,w that can be represented by a (k + 1)-layer ReLU DNN with w nodes in each layer, such that for all δ > 0 and k ≤ k the following holds: DISPLAYFORM6 where G k,δ is the family of functions representable by ReLU DNNs with depth at most k + 1, and size at most k DISPLAYFORM7 The depth-size trade-off in Theorems 3.1, and 3.5 extend and improve Telgarsky's theorems from (; in the following three ways:(i) If we use our Theorem 3.5 to the pair of neural nets considered by Telgarsky in Theorem 1.1 in which are at depths k 3 (of size also scaling as k 3) and k then for this purpose of approximation in the 1 −norm we would get a size lower bound for the shallower net which scales as Ω(2 k 2) which is exponentially (in depth) larger than the lower bound of Ω(2 k) that Telgarsky can get for this scenario.(ii) Telgarsky's family of hard functions is parameterized by a single natural number k. In contrast, we show that for every pair of natural numbers w and k, and a point from the set in equation 3.1, there exists a "hard" function which to be represented by a depth k network would need a size of at least w k k k. With the extra flexibility of choosing the parameter w, for the purpose of showing gaps in representation ability of deep nets we can shows size lower bounds which are super-exponential in depth as explained in Corollaries 3.3 and 3.4.(iii) A characteristic feature of the "hard" functions in Boolean circuit complexity is that they are usually a countable family of functions and not a "smooth" family of hard functions. 
In fact, in the last section of , Telgarsky states this as a "weakness" of the state-of-the-art on "hard" functions for both Boolean circuit complexity and neural nets research. In contrast, we provide a smoothly parameterized family of "hard" functions in Section 3.1 (parametrized by the set in equation 3.1). Such a continuum of hard functions wasn't demonstrated before this work. We point out that Telgarsky's in apply to deep neural nets with a host of different activation functions, whereas, our are specifically for neural nets with rectified linear units. In this sense, Telgarsky's from are more general than our in this paper, but with weaker gap guarantees. Eldan-Shamir BID36 BID7 show that there exists an R n → R function that can be represented by a 3-layer DNN, that takes exponential in n number of nodes to be approximated to within some constant by a 2-layer DNN. While their are not immediately comparable with Telgarsky's or our , it is an interesting open question to extend their to a constant depth hierarchy statement analogous to the recent of Rossman et al BID28. We also note that in last few years, there has been much effort in the community to show size lowerbounds on ReLU DNNs trying to approximate various classes of functions which are themselves not necessarily exactly representable by ReLU DNNs (; BID22 BID30 . One measure of complexity of a family of R n → R "hard" functions represented by ReLU DNNs is the asymptotics of the number of pieces as a function of dimension n, depth k + 1 and size s of the ReLU DNNs. More precisely, suppose one has a family H of functions such that for every n, k, w ∈ N the family contains at least one R n → R function representable by a ReLU DNN with depth at most k + 1 and maximum width at most w. The following definition formalizes a notion of complexity for such a H.Definition 5 (comp H (n, k, w)). The measure comp H (n, k, w) is defined as the maximum number of pieces (see Definition 3) of a R n → R function from H that can be represented by a ReLU DNN with depth at most k + 1 and maximum width at most w. Similar measures have been studied in previous works BID24; BID25; BID26. The best known families H are the ones from Theorem 4 of BID24 ) and a mild generalization of Theorem 1.1 of to k layers of ReLU activations with width w; these constructions achieve (DISPLAYFORM0 At the end of this section we would explain the precise sense in which we improve on these numbers. An analysis of this complexity measure is done using integer programming techniques in BID34 . DISPLAYFORM1 Figure 1: We fix the a vectors for a two hidden layer R → R hard function as DISPLAYFORM2 Left: A specific hard function induced by 1 norm: 0). Note that in this case the function can be seen as a composition of H a 1,a 2 with 1 -norm The set of vertices of DISPLAYFORM3 DISPLAYFORM4 DISPLAYFORM5 The following are well-known in the theory of zonotopes . Theorem 3.6. The following are all true. DISPLAYFORM6. The set of (b 1, . . ., b m) ∈ R n ×... × R n such that this does not hold at equality is a 0 measure set. DISPLAYFORM7 Definition 7 (extremal zonotope set). The set S(n, m) will denote the set of DISPLAYFORM8. S(n, m) is the so-called "extremal zonotope set", which is a subset of R nm, whose complement has zero Lebesgue measure in R nm.Lemma 3.7. Given any b 1,..., b m ∈ R n, there exists a 2-layer ReLU DNN with size 2m which represents the function γ Z(b 1,...,b m) (r).Definition 8. 
For p ∈ N and a ∈ ∆ p M, we define a function h a: R → R which is piecewise linear over the segments DISPLAYFORM9 DISPLAYFORM10 Proposition 3.8. Given any tuple (b 1, . . ., b m) ∈ S(n, m) and any point DISPLAYFORM11 n−1 w k pieces and it can be represented by a k + 2 layer ReLU DNN with size 2m + wk. Finally, we are ready to state the main of this section. Theorem 3.9. For every tuple of natural numbers n, k, m ≥ 1 and w ≥ 2, there exists a family of R n → R functions, which we call ZONOTOPE n k,w,m with the following properties:(i) Every f ∈ ZONOTOPE n k,w,m is representable by a ReLU DNN of depth k + 2 and size 2m + wk, and has n−1 i=0 m−1 i w k pieces.(ii) Consider any f ∈ ZONOTOPE n k,w,m. If f is represented by a (k + 1)-layer DNN for any k ≤ k, then this (k + 1)-layer DNN has size at least DISPLAYFORM12 The family ZONOTOPE n k,w,m is in one-to-one correspondence with DISPLAYFORM13 Comparison to the in BID24 Firstly we note that the construction in BID24 requires all the hidden layers to have width at least as big as the input dimensionality n. In contrast, we do not impose such restrictions and the network size in our construction is independent of the input dimensionality. Thus our probes networks with bottleneck architectures whose complexity cant be seen from their . Secondly, in terms of our complexity measure, there seem to be regimes where our bound does better. One such regime, for example, is when n ≤ w < 2n and k ∈ Ω(n log(n) ), by setting in our construction m < n. Thirdly, it is not clear to us whether the construction in BID24 gives a smoothly parameterized family of functions other than by introducing small perturbations of the construction in their paper. In contrast, we have a smoothly parameterized family which is in one-to-one correspondence with a well-understood manifold like the higher-dimensional torus. In this section we consider the following empirical risk minimization problem. Given D data points (x i, y i) ∈ R n × R, i = 1,..., D, find the function f represented by 2-layer R n → R ReLU DNNs of width w, that minimizes the following optimization problem DISPLAYFORM0 where: R × R → R is a convex loss function (common loss functions are the squared loss, (y, y) = (y − y) 2, and the hinge loss function given by (y, y) = max{0, 1 − yy}). Our main of this section gives an algorithm to solve the above empirical risk minimization problem to global optimality. Proof Sketch: A full proof of Theorem 4.1 is included in Appendix C. Here we provide a sketch of the proof. When the empirical risk minimization problem is viewed as an optimization problem in the space of weights of the ReLU DNN, it is a nonconvex, quadratic problem. However, one can instead search over the space of functions representable by 2-layer DNNs by writing them in the form similar to (2.1). This breaks the problem into two parts: a combinatorial search and then a convex problem that is essentially linear regression with linear inequality constraints. This enables us to guarantee global optimality. Where DISPLAYFORM0 All possible instantiations of top layer weights 3: DISPLAYFORM1 All possible partitions of data into two parts 4: DISPLAYFORM2 for s ∈ S do 7: DISPLAYFORM3 end for 11: OPT = argmin loss(count) 12: end for 13:return {ã}, {b}, s corresponding to OPT's iterate 14: end function Let T 1 (x) = Ax + b and T 2 (y) = a · y for A ∈ R w×n and b, a ∈ R w. 
If we denote the i-th row of the matrix A by a i, and write b i, a i to denote the i-th coordinates of the vectors b, a respectively, due to homogeneity of ReLU gates, the network output can be represented as DISPLAYFORM4 whereã i ∈ R n,b i ∈ R and s i ∈ {−1, +1} for all i = 1,..., w. For any hidden node i ∈ {1 . . ., w}, the pair (ã i,b i) induces a partition P i:= (P DISPLAYFORM5 − and a i · x j +b i ≥ 0 ∀j ∈ P i + which are imposed for all i = 1, . . ., w, which is a convex program. Algorithm 1 implements the empirical risk minimization (ERM) rule for training ReLU DNN with one hidden layer. To the best of our knowledge there is no other known algorithm that solves the ERM problem to global optimality. We note that due to known hardness exponential dependence on the input dimension is unavoidable;; Algorithm 1 runs in time polynomial in the number of data points. To the best of our knowledge there is no hardness known which rules out empirical risk minimization of deep nets in time polynomial in circuit size or data size. Thus our training is a step towards resolving this gap in the complexity literature. A related for improperly learning ReLUs has been recently obtained by Goel et al BID8. In contrast, our algorithm returns a ReLU DNN from the class being learned. Another difference is that their considers the notion of reliable learning as opposed to the empirical risk minimization objective considered in (4.1). The running time of the algorithm that we give in this work to find the exact global minima of a two layer ReLU-DNN is exponential in the input dimension n and the number of hidden nodes w. The exponential dependence on n can not be removed unless P = N P; see BID35; BID3; BID6. However, we are not aware of any complexity which would rule out the possibility of an algorithm which trains to global optimality in time that is polynomial in the data size and/or the number of hidden nodes, assuming that the input dimension is a fixed constant. Resolving this dependence on network size would be another step towards clarifying the theoretical complexity of training ReLU DNNs and is a good open question for future research, in our opinion. Perhaps an even better breakthrough would be to get optimal training algorithms for DNNs with two or more hidden layers and this seems like a substantially harder nut to crack. It would also be a significant breakthrough to get gap between consecutive constant depths or between logarithmic and constant depths. We would like to thank Christian Tjandraatmadja for pointing out a subtle error in a previous version of the paper, which affected the complexity for the number of linear regions in our constructions in Section 3.2. Anirbit would like to thank Ramprasad Saptharishi, Piyush Srivastava and Rohit Gurjar for extensive discussions on Boolean and arithmetic circuit complexity. This paper has been immensely influenced by the perspectives gained during those extremely helpful discussions. Amitabh Basu gratefully acknowledges support from the NSF grant CMMI1452820. Raman Arora was supported in part by NSF BIGDATA grant IIS-1546482.Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pp. Proof of Theorem 2.2. 
Any continuous piecewise linear function R → R which has m pieces can be specified by three pieces of information, s L the slope of the left most piece, the coordinates of the non-differentiable points specified by a (m − 1)−tuple {(a i, b i)} One notes that for any a, r ∈ R, the function DISPLAYFORM0 is equal to sgn(r) max{|r|(x − a), 0}, which can be implemented by a 2-layer ReLU DNN with size 1. Similarly, any function of the form, DISPLAYFORM1 is equal to − sgn(t) max{−|t|(x − a), 0}, which can be implemented by a 2-layer ReLU DNN with size 1. The parameters r, t will be called the slopes of the function, and a will be called the breakpoint of the function. If we can write the given piecewise linear function as a sum of m functions of the form (A.1) and (A.2), then by Lemma D.2 we would be done. It turns out that such a decomposition of any p piece PWL function h: R → R as a sum of p flaps can always be arranged where the breakpoints of the p flaps all are all contained in the p − 1 breakpoints of h. First, observe that adding a constant to a function does not change the complexity of the ReLU DNN expressing it, since this corresponds to a bias on the output node. Thus, we will assume that the value of h at the last break point a m−1 is b m−1 = 0. We now use a single function f of the form (A.1) with slope r and breakpoint a = a m−1, and m − 1 functions g 1,..., g m−1 of the form (A.2) with slopes t 1,..., t m−1 and breakpoints a 1,..., a m−1, respectively. Thus, we wish to express h = f + g 1 +... + g m−1. Such a decomposition of h would be valid if we can find values for r, t 1,..., t m−1 such that the slope of the above sum is = s L for x < a 1, the slope of the above sum is = s R for x > a m−1, and for each i ∈ {1, 2, 3, .., m − 1} we have DISPLAYFORM2 The above corresponds to asking for the existence of a solution to the following set of simultaneous linear equations in r, t 1,..., t m−1: DISPLAYFORM3 It is easy to verify that the above set of simultaneous linear equations has a unique solution. Indeed, r must equal s R, and then one can solve for t 1,..., t m−1 starting from the last equation b m−2 = t m−1 (a m−2 − a m−1) and then back substitute to compute t m−2, t m−3,..., t 1. The lower bound of p − 1 on the size for any 2-layer ReLU DNN that expresses a p piece function follows from Lemma D.6.One can do better in terms of size when the rightmost piece of the given function is flat, i.e., s R = 0. In this case r = 0, which means that f = 0; thus, the decomposition of h above is of size p − 1. A similar construction can be done when s L = 0. This gives the following statement which will be useful for constructing our forthcoming hard functions. Corollary A.1. If the rightmost or leftmost piece of a R → R piecewise linear function has 0 slope, then we can compute such a p piece function using a 2-layer DNN with size p − 1.Proof of theorem 2.3. Since any piecewise linear function R n → R is representable by a ReLU DNN by Corollary 2.1, the proof simply follows from the fact that the family of continuous piecewise linear functions is dense in any L p (R n) space, for 1 ≤ p ≤ ∞. Lemma B.1. For any M > 0, p ∈ N, k ∈ N and a 1,..., a k ∈ ∆ p M, if we compose the functions h a 1, h a 2,..., h a k the ing function is a piecewise linear function with at most (p + 1) k + 2 pieces, i.e., DISPLAYFORM0 is piecewise linear with at most (p + 1) k + 2 pieces, with (p + 1) k of these pieces in the range [0, M] (see Figure 2). 
Moreover, in each piece in the range [0, M], the function is affine with minimum value 0 and maximum value M.Proof. Simple induction on k. Proof of Theorem 3.2. Given k ≥ 1 and w ≥ 2, choose any point DISPLAYFORM1 By Definition 8, each h a i, i = 1,..., k is a piecewise linear function with w + 1 pieces and the leftmost piece having slope 0. Thus, by Corollary A.1, each h a i, i = 1,..., k can be represented by a 2-layer ReLU DNN with size w. Using Lemma D.1, H a 1,...,a k can be represented by a k + 1 layer DNN with size wk; in fact, each hidden layer has exactly w nodes. inside any triangle of s q, any affine function will incur an 1 error of at least DISPLAYFORM2 Proof of Theorem 4.1. Let: R → R be any convex loss function, and let (x 1, y 1),..., (x D, y D) ∈ R n × R be the given D data points. As stated in (4.1), the problem requires us to find an affine transformation T 1: R n → R w and a linear transformation T 2: R w → R, so as to minimize the empirical loss as stated in (4.1). Note that T 1 is given by a matrix A ∈ R w×n and a vector b ∈ R w so that T (x) = Ax + b for all x ∈ R n. Similarly, T 2 can be represented by a vector a ∈ R w such that T 2 (y) = a · y for all y ∈ R w. If we denote the i-th row of the matrix A by a i, and write b i, a i to denote the i-th coordinates of the vectors b, a respectively, we can write the function represented by this network as DISPLAYFORM0 In other words, the family of functions over which we are searching is of the form DISPLAYFORM1 whereã i ∈ R n, b i ∈ R and s i ∈ {−1, +1} for all i = 1,..., w. We now make the following observation. For a given data point (x j, y j) ifã i · x j +b i ≤ 0, then the i-th term of (C.1) does not contribute to the loss function for this data point (x j, y j). Thus, for every data point (x j, y j), there exists a set S j ⊆ {1, . . ., w} such that f (x j) = i∈Sj s i (ã i · x j +b i). In particular, if we are given the set S j for (x j, y j), then the expression on the right hand side of (C.1) reduces to a linear function ofã i,b i. For any fixed i ∈ {1, . . ., w}, these sets S j induce a partition of the data set into two parts. In particular, we define P i +:= {j : i ∈ S j} and P i −:= {1, . . ., D} \ P i +. Observe now that this partition is also induced by the hyperplane given byã i,b i: DISPLAYFORM2 and DISPLAYFORM3. Our strategy will be to guess the partitions P For a fixed selection of partitions (P i +, P i −), i = 1,..., w and a vector s in {+1, −1} w, the algorithm solves the following convex optimization problem with decision variablesã i ∈ R n,b i ∈ R for i = 1,..., w (thus, we have a total of (n + 1) · w decision variables). The feasible region of the optimization is given by the constraints DISPLAYFORM4 which are imposed for all i = 1,..., w. Thus, we have a total of D · w constraints. Subject to these constraints we minimize the objective DISPLAYFORM5 Assuming the loss function is a convex function in the first argument, the above objective is a convex function. Thus, we have to minize a convex objective subject to the linear inequality constraints from (C.2).We finally have to count how many possible partitions (P n which only holds for n ≥ 2. For n = 1, a similar algorithm can be designed, but one which uses the characterization achieved in Theorem 2.2. Let : R → R be any convex loss function, and let (x 1, y 1),..., (x D, y D) ∈ R 2 be the given D data points. 
Using Theorem 2.2, to solve problem (4.1) it suffices to find a R → R piecewise linear function f with w pieces that minimizes the total loss. In other words, the optimization problem (4.1) is equivalent to the problem DISPLAYFORM0 f is piecewise linear with w pieces.(C.3)We now use the observation that fitting piecewise linear functions to minimize loss is just a step away from linear regression, which is a special case where the function is contrained to have exactly one affine linear piece. Our algorithm will first guess the optimal partition of the data points such that all points in the same class of the partition correspond to the same affine piece of f, and then do linear regression in each class of the partition. Altenatively, one can think of this as guessing the interval (x i, x i+1) of data points where the w − 1 breakpoints of the piecewise linear function will lie, and then doing linear regression between the breakpoints. More formally, we parametrize piecewise linear functions with w pieces by the w slope-intercept values (a 1, b 1),..., (a 2, b 2),..., (a w, b w) of the w different pieces. This means that between breakpoints j and j + 1, 1 ≤ j ≤ w − 2, the function is given by f (x) = a j+1 x + b j+1, and the first and last pieces are a 1 x + b 1 and a w x + b w, respectively. Define I to be the set of all (w − 1)-tuples (i 1, . . ., i w−1) of natural numbers such that DISPLAYFORM1 Given a fixed tuple I = (i 1, . . ., i w−1) ∈ I, we wish to search through all piecewise linear functions whose breakpoints, in order, appear in the intervals (x i1, x i1+1), (x i2, x i2+1),..., (x iw−1, x iw−1+1). Define also S = {−1, 1} w−1. Any S ∈ S will have the following interpretation: if S j = 1 then a j ≤ a j+1, and if S j = −1 then a j ≥ a j+1. Now for every I ∈ I and S ∈ S, requiring a piecewise linear function that respects the conditions imposed by I and S is easily seen to be equivalent to imposing the following linear inequalities on the parameters (a 1, b 1),..., (a 2, b 2),..., (a w, b w): DISPLAYFORM2 Let the set of piecewise linear functions whose breakpoints satisfy the above be denoted by PWL 1 I,S for I ∈ I, S ∈ S.Given a particular I ∈ I, we define DISPLAYFORM3 The right hand side of the above equation is the problem of minimizing a convex objective subject to linear constraints. Now, to solve (C.3), we need to simply solve the problem (C.5) for all I ∈ I, S ∈ S and pick the minimum. Since |I| = Now we will collect some straightforward observations that will be used often. The following operations preserve the property of being representable by a ReLU DNN. Proof. Follows from (1.1) and the fact that a composition of affine transformations is another affine transformation. Proof. We prove this by induction on k. The base case is k = 1, i.e, we have a 2-layer ReLU DNN. Since every activation node can produce at most one breakpoint in the piecewise linear function, we can get at most w 1 breakpoints, i.e., w 1 + 1 pieces. Now for the induction step, assume that for some k ≥ 1, any R → R ReLU DNN with depth k + 1 and widths w 1,..., w k of the k hidden layers produces at most 2 k−1 · (w 1 + 1) · w 2 ·... · w k pieces. Consider any R → R ReLU DNN with depth k + 2 and widths w 1,..., w k+1 of the k + 1 hidden layers. Observe that the input to any node in the last layer is the output of a R → R ReLU DNN with depth k + 1 and widths w 1,..., w k. 
By the induction hypothesis, the input to this node in the last layer is a piecewise linear function f with at most 2^{k-1} · (w_1 + 1) · w_2 · · · w_k pieces. When we apply the activation, the new function g(x) = max{0, f(x)}, which is the output of this node, may have at most twice as many pieces as f, because each original piece may be intersected by the x-axis; see Figure 4. Thus, after going through the layer, we take an affine combination of w_{k+1} functions, each with at most 2 · (2^{k-1} · (w_1 + 1) · w_2 · · · w_k) pieces. In all, we can therefore get at most 2 · (2^{k-1} · (w_1 + 1) · w_2 · · · w_k) · w_{k+1} pieces, which equals 2^k · (w_1 + 1) · w_2 · · · w_k · w_{k+1}, and the induction step is complete. Lemma D.5 has the following consequence about the depth-size tradeoffs for expressing functions with a given number of pieces. Lemma D.6. Let f: R → R be a piecewise linear function with p pieces. If f is represented by a ReLU DNN with depth k + 1, then it must have size at least (1/2) k p^{1/k} − 1. Conversely, any piecewise linear function f that is represented by a ReLU DNN of depth k + 1 and size at most s can have at most (2s/k)^k pieces. Proof. Let the widths of the k hidden layers be w_1,..., w_k. By Lemma D.5, we must have 2^{k-1} · (w_1 + 1) · w_2 · · · w_k ≥ p. (D.1) By the AM-GM inequality, minimizing the size w_1 + w_2 +... + w_k subject to (D.1) means setting w_1 + 1 = w_2 =... = w_k. This implies that w_1 + 1 = w_2 =... = w_k ≥ (1/2) p^{1/k}, and the first statement follows. The second statement follows by using the AM-GM inequality again, this time with a restriction on w_1 + w_2 +... + w_k.
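To make the piece-counting constructions above concrete, here is a small numpy sketch (an illustration added to this write-up, not code from the paper). It composes k copies of a w-piece sawtooth map on [0, 1], a simple assumed stand-in for the h_a family of Definition 8, counts the linear pieces of the composition on a dyadic grid, and compares the count (w^k here) with the 2^{k-1}(w_1 + 1) w_2 · · · w_k upper bound of Lemma D.5 for the corresponding (k + 1)-layer network with hidden widths w.

```python
import numpy as np

def zigzag(x, w):
    # A w-piece sawtooth on [0, 1]: each subinterval [i/w, (i+1)/w] is mapped
    # linearly onto [0, 1], alternating up/down pieces. This is one concrete
    # (assumed) stand-in for the h_a family; by Theorem 2.2 it is computable
    # by a 2-layer ReLU network of size w.
    i = np.minimum(np.floor(x * w).astype(int), w - 1)
    t = x * w - i                                   # position within the i-th piece
    return np.where(i % 2 == 0, t, 1.0 - t)

def count_pieces(y, x):
    # Number of maximal intervals on which the sampled function is affine.
    slopes = np.round(np.diff(y) / np.diff(x), 6)
    return 1 + int(np.sum(slopes[1:] != slopes[:-1]))

w, k = 4, 3                                         # w a power of two keeps the grid exact
x = np.linspace(0.0, 1.0, w**k * 512 + 1)           # dyadic grid containing all breakpoints
y = x.copy()
for _ in range(k):                                  # H = h o h o ... o h (k-fold composition)
    y = zigzag(y, w)

print("empirical number of pieces:", count_pieces(y, x))   # w**k = 64
# Stacking the k two-layer blocks (Lemma D.1) gives a (k+1)-layer net with all
# hidden widths equal to w, for which Lemma D.5 allows at most
# 2^(k-1) * (w + 1) * w^(k-1) pieces.
print("Lemma D.5 upper bound:", 2**(k - 1) * (w + 1) * w**(k - 1))  # 320
```

The number of pieces grows exponentially in the depth k while each added sawtooth block costs only w extra nodes, which is exactly the depth-size tradeoff the lower bound of Lemma D.6 quantifies.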
B1J_rgWRW
This paper 1) characterizes functions representable by ReLU DNNs, 2) formally studies the benefit of depth in such architectures, 3) gives an algorithm to implement empirical risk minimization to global optimality for two layer ReLU nets.
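Since the summary above highlights the global-optimality ERM algorithm for two-layer ReLU networks, a toy sketch of its structure may be useful here. The code below is my own illustration, not the authors' implementation: it enumerates all top-layer sign vectors and, for each hidden unit, all assignments of the data points to the unit's active side (a superset of the hyperplane-realizable partitions that the actual algorithm enumerates), and solves each resulting linearly constrained least-squares subproblem with cvxpy (assumed to be available). Every feasible point of a subproblem corresponds to a genuine network with exactly that training loss, and the optimal network appears among the candidates, so the minimum over all subproblems is the global ERM value; the price is an even worse exponential running time, so this is only for very small examples.

```python
import itertools
import numpy as np
import cvxpy as cp

def relu_erm_bruteforce(X, y, width):
    """Globally minimize sum_j (f(x_j) - y_j)^2 over two-layer ReLU networks
    f(x) = sum_i s_i * max(0, a_i . x + b_i) with `width` hidden units, by
    enumerating sign vectors and active/inactive assignments of the data to
    each hidden unit and solving a constrained least-squares problem for each.
    Exponential in (number of points) * width: toy-sized inputs only."""
    D, n = X.shape
    best_val, best_params = np.inf, None
    assignments = list(itertools.product([0, 1], repeat=D))
    for s in itertools.product([-1.0, 1.0], repeat=width):
        for active in itertools.product(assignments, repeat=width):
            A = cp.Variable((width, n))
            b = cp.Variable(width)
            pred, constraints = 0, []
            for i in range(width):
                pre = X @ A[i] + b[i]                # pre-activations of unit i on the data
                mask = np.array(active[i], dtype=float)
                for j in range(D):
                    # constraints pinning unit i's sign pattern on each data point
                    constraints.append(pre[j] >= 0 if mask[j] else pre[j] <= 0)
                # on the active side the ReLU agrees with its linear part
                pred = pred + s[i] * cp.multiply(mask, pre)
            prob = cp.Problem(cp.Minimize(cp.sum_squares(pred - y)), constraints)
            prob.solve()
            if prob.value is not None and prob.value < best_val:
                best_val, best_params = prob.value, (np.array(s), A.value, b.value)
    return best_val, best_params

# Tiny usage example: three points in R^2 fitted with a single hidden unit.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
y = np.array([0.0, 1.0, 0.0])
loss, (s, A, b) = relu_erm_bruteforce(X, y, width=1)
print("global training loss:", loss)
```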
The backpropagation of error algorithm (BP) is often said to be impossible to implement in a real brain. The recent success of deep networks in machine learning and AI, however, has inspired a number of proposals for understanding how the brain might learn across multiple layers, and hence how it might implement or approximate BP. As of yet, none of these proposals have been rigorously evaluated on tasks where BP-guided deep learning has proved critical, or in architectures more structured than simple fully-connected networks. Here we present the first on scaling up a biologically motivated model of deep learning to datasets which need deep networks with appropriate architectures to achieve good performance. We present on CIFAR-10 and ImageNet. For CIFAR-10 we show that our algorithm, a straightforward, weight-transport-free variant of difference target-propagation (DTP) modified to remove backpropagation from the penultimate layer, is competitive with BP in training deep networks with locally defined receptive fields that have untied weights. For ImageNet we find that both DTP and our algorithm perform significantly worse than BP, opening questions about whether different architectures or algorithms are required to scale these approaches. Our and implementation details help establish baselines for biologically motivated deep learning schemes going forward. The suitability of the backpropagation of error (BP) algorithm BID27 for explaining learning in the brain was questioned soon after its popularization BID8 BID5. Weaker objections included undesirable characteristics of artificial networks in general, such as their violation of Dale's Law, their lack of cell-type variability, and the need for the gradient signals to be both positive and negative. Much more serious objections were: The need for the feedback connections carrying the gradient to have the same weights as the corresponding feedforward connections and The need for a distinct form of information propagation (error propagation) that does not influence neural activity, and hence does not conform to known biological feedback mechanisms underlying neural communication. Researchers have long sought biologically plausible and empirically powerful learning algorithms that avoid some of these flaws BID1 BID25 BID0 BID23; BID12 BID14 BID10 BID21. A common theme of some of the most promising approaches -such as Contrastive Hebbian Learning BID22, and Generalized Recirculation BID23 -is to use feedback connections to influence neural activity, and to use differences in feedfoward-driven and feedback-driven activities or products of activities to locally approximate gradients BID0 BID26 BID23; BID30 ). Since these activity propagation methods don't require explicit propagation of gradients through the network, they go a long way towards answering the second serious objection noted above. However, many of these methods require long "positive" and "negative" settling phases for computing the activities or activity products whose differences provide the learning signal. Proposals for shortening the phases BID11 BID4 are not entirely satisfactory as they still fundamentally depend on a settling process, and, in general, any settling process will likely be too slow for a brain that needs to quickly compute hidden activities. Indeed, for the same reason, only a handful of the algorithms that require settling have ever been used on large scale problems in machine learning. 
Perhaps the most practical among this family of "activity propagation" algorithms is target propagation (TP) and its variants BID17 BID11 BID2. The intuition for TP is as follows: Suppose you have a feedforward neural network and have the capacity to compute perfect inverses backwards through the network (i.e., given the activities in layer h l+1, we can compute h l = f −1 (h l+1 ; θ l+1)). If we impose an output target (for a given input) on the output layer, then we can propagate activity backwards through the network to infer what the activities should be to produce the output target. These backwards propagated activities are denoted the layer targets, or simply targets. Then, when computing a feedfoward propagation through the network given some input, we can layer-wise compare the feedforward activations to what they should have been (i.e., the targets), and use the differences to compute weight changes. TP algorithms do not require settling dynamics, and thus can compute forward passes and updates quickly. As well, for one TP variant, it has been shown that weight changes that cause future feedforward activity to be nudged towards their targets approximate the weight changes computed by BP.While TP and its variants are promising as biologically-plausible algorithms, there are some lingering questions about their applicability to the brain. First, the only variant explored empirically -difference target propagation (DTP) -still depends on explicit gradient computation via backpropagation for learning the penultimate layer's outgoing synaptic weights (see Algorithm Box 1 in). Second, they have not been tested on datasets more difficult than MNIST. And third, they have not been incorporated into architectures more complicated than simple multi-layer perceptrons (MLPs).In this work we address each of these issues. Our contribution is threefold: We examine the learning and performance of a biologically-motivated algorithm, Difference Target-propagation (DTP), on MNIST, CIFAR, and ImageNet, We develop a variant of DTP called Simplified Difference Target Propagation (SDTP), which eliminates significant lingering biologically implausible features from DTP, and We investigate the role of weight-sharing convolutions, which are key to performance on difficult datasets in artificial neural networks, by testing the effectiveness of locally connected architectures trained with BP, DTP, and SDTP.Sharing the weights of locally connected units greatly reduces the number of free parameters and this has several very beneficial effects on computer simulations of large neural nets. It improves generalization and it drastically reduces both the amount of memory needed to store the parameters and the amount of communication required between replicas of the same model running on different subsets of the data on different processors. From a biological perspective we are interested in how STDP compares with BP without using weight sharing, so both our BP and our SDTP are considerably worse than convolutional neural nets and take far longer to produce. Consider the case of a feed-forward neural network with L layers {h l} L l=1, whose activations h l are computed by elementwise-applying a non-linear function σ l to an affine transformation of previous layer activations h l−1: DISPLAYFORM0 with input to the network denoted as h 0 = x and the last layer h L used as output. 
For example, in classification problems the output layer h L parametrizes a predicted distribution over possible labels p(y|h L), usually using the softmax function. The learning signal is then provided as a loss L(h L) incurred by making a prediction for an input x, which in the classification case can be cross-entropy between the ground-truth label distribution q(y|x) and the predicted one: DISPLAYFORM1 The goal of training is then to adjust the parameters Θ = {θ l} L l=1 in order to minimize a given loss over the training set of inputs. In BP and DTP, the final layer target is used to compute a loss, and the gradients from this loss are shuttled backwards (through all layers, in BP, or just one layer, in DTP) in error propagation steps that do not influence actual neural activity. SDTP never transports gradients using error propagation steps, unlike DTP and BP. Backpropagation BID27 ) was popularized as a method for learning in neural networks by computing gradients with respect to layer parameters using the chain rule: DISPLAYFORM0 Thus, gradients are obtained by first propagating activations forward to the output layer and then recursively applying these equations. These equations imply that gradients are propagated backwards through the network using weights symmetric to their feedforward counterparts. This is biologically problematic because it implies a mode of information propagation (error propagation) that does not influence neural activity, and that depends on an implausible network architecture (symmetric weight connectivity for feedforward and feedback directions, which is called the weight transport problem). In target propagation BID17 BID2 backwards communication induces neural activity, unlike in BP where backwards communication passes on gradients without inducing neural activity. The induced activities are those that layers should strive to match so as to produce the target output. After feedforward propagation given some input, the final output layer h L is trained directly to minimize the loss L, while all other layers are trained so as to match their associated targets. In general, good targets are those that minimize the loss computed in the output layer if they were actually realized in feedforward propagation. In networks with invertible layers one could generate such targets by first finding a loss-optimal output activationĥ L (e.g. the correct label distribution) and then propagating it back using inverse transformationsĥ l = f −1 (ĥ l+1 ; θ l+1). Since it is hard to maintain invertibility in a network, approximate inverse transformations (or decoders) can be learned DISPLAYFORM0 Note that this learning obviates the need for symmetric weight connectivity. The generic form of target propagation algorithms we consider in this paper can be summarized as a scheduled minimization of two kinds of losses for each layer: DISPLAYFORM1 2 2 used to train the approximate inverse that is parametrized similarly to the forward computation g(DISPLAYFORM2 where activations h l−1 are assumed to be propagated from the input. One can imagine other learning rules for the inverse, for example, the original DTP algorithm trained inverses on noise-corrupted versions of activations with the purpose of improved generalization. In our implementation we instead used the denoising criterion which we find more biologically plausible, see the appendix for details. The loss is applied for every layer except the first, since the first layer does not need to propagate target inverses backwards. 
2 2 penalizes the layer parameters for producing activations different from their targets. Parameters of the last layer are trained to minimize the task's loss L directly. Under this framework both losses are local and involve only single layer's parameters, and implicit dependencies on other layer's parameters are ignored. Variants differ in the way targetsĥ l are computed. Target propagation "Vanilla" target propagation (TP) computes targets by propagating the higher layers' targets backwards through layer-wise inverses; i.e.ĥ l = g(ĥ l+1 ; λ l+1). For traditional categorization tasks the same 1-hot vector in the output will always map back to precisely the same hidden unit activities in a given layer. Thus, this kind of naive TP may have difficulties when different instances of the same class have very different appearances since it will be trying to make their representations identical even in the early layers. Also, there are no guarantees about how TP will behave when the inverses are imperfect. Difference target propagation Difference target propagation updates the output weights and biases using the standard gradient rule, but this is biologically unproblematic because it does not require weight transport BID23 BID21. For most other layers in the network, difference target propagation (DTP) computes targets asĥ l = h l + g(ĥ l+1 ; λ l+1) − g(h l+1 ; λ l+1). The extra terms provide a stabilizing linear correction for imprecise inverse functions. However, in the original work by the penultimate layer target, h L−1, was computed using gradients from the network's loss, rather than by target propagation. That is, DISPLAYFORM0 Though not stated explicitly, this approach was presumably taken to insure that the penultimate layer received reasonable and diverse targets despite the low-dimensional 1-hot targets at the output layer. When there are a small number of 1-hot targets (e.g. 10 classes), learning a good inverse mapping from these vectors back to the hidden activity of the penultimate hidden layer (e.g. 1000 units) might be problematic, since the inverse mapping cannot provide information that is both useful and unique to a particular sample. Using BP in the penultimate layer sidesteps this concern, but deviates from the intent of using these algorithms to avoid gradient computation and delivery. Simplified difference target propagation We introduce SDTP as a simple modification to DTP. In SDTP we compute the target for the penultimate layer DISPLAYFORM1. This completely removes biologically infeasible gradient communication (and hence weight-transport) from the algorithm. However, it is not clear whether targets for the penultimate layer will be diverse enough (given low entropy classification targets) or precise enough (given the inevitable poor performance of the learned inverse for this layer). This is a non-trivial change that requires empirical validation. Parallel and alternating training of inverses In the original implementation of DTP 1, the authors trained forward and inverse model parameters by alternating between their optimizations; in practice they trained one loss for one full epoch of the training set before switching to training the other loss. We considered a variant that simply optimizes both losses in parallel, which seems nominally more plausible in the brain since both forward and feedback connections are thought to undergo plasticity changes simultaneously. 
Though, it is possible that a kind of alternating learning schedule for forward and backward connections could be tied to wake/sleep cycles. Noise-preserving versus de-noising autoencoder training In the original DTP algorithm, autoencoder training is done via a noise-preserving loss, which may be a principled choice for the algorithm on a computer. But in the brain, autoencoder training is de-noising, since uncontrolled noise is necessarily added downstream of a given layer (e.g. by subsequent spiking activity and stochastic vesicle release). Therefore, in our experiments with TP we use de-noising autoencoder training. We also compared noise-preserving and de-noising losses in the context of DTP and SDTP and found that they performed roughly equivalently (see Appendix 4). Propagate activity forward: DISPLAYFORM0 Compute targets for lower layers: DISPLAYFORM1 Convolution-based architectures are critical for achieving state of the art in image recognition BID16. These architectures are biologically implausible, however, because of their extensive weight sharing. To implement convolutions in biology, many neurons would need to share the values of their weights precisely, which is unlikely. In the absence of weight sharing, the "locally connected" receptive field structure of convolutional neural networks is in fact very biologically realistic and may still offer a useful prior. Under this prior, neurons in the brain could sample from small areas of visual space, then pool together to create spatial maps of feature detectors. We assess the the degree to which BP-guided learning is enhanced by convolutions, and not BP per se, by evaluating learning methods (including BP) on networks with locally connected layers. Since the purpose of our study was not to establish state of the art , but rather to assess the limitations of biologically-motivated learning methods, we focused on evaluating architectures that were considered reasonable for a particular task or dataset. Thus, we did not perform an exhaustive architecture search beyond adjusting total number of training parameters to prevent overfitting. All experiments share the same straightforward methodology: a hyperparameter search was performed for a fixed architecture, for each learning algorithm. We then selected the best run from each hyperparameter search based on validation set accuracy across 5 consecutive training epochs (i.e. passes over training set) at the end of which we also measured accuracy on the test set. All locally-connected architectures consist of a stack of locally-connected layers specified as (receptive field size, number of output channels, stride, padding) followed by an output softmax layer. For padding, SAME denotes padding with zeros to ensure unchanged shape of the output with stride = 1 and VALID padding denotes no padding. For optimization we use Adam BID13, with different hyper-parameters for forward and inverse models in the case of target propagation. All layers are initialized using the method of BID7. In all networks we used the hyperbolic tangent as a nonlinearity between layers as it was previously found to work better with DTP than ReLUs. To compare to previously reported we began with the MNIST dataset, consisting of 60000 train and 10000 test 28 × 28 gray-scale images of hand-drawn digits, with 10000 images from the train test reserved for validation. 
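Before turning to the experiments, the following minimal numpy sketch (my own illustration, not the paper's code) spells out the target computations summarized in the algorithm box above: one forward pass through a small tanh MLP with separate feedback weights (so nothing is weight-transported), a nudged output target, the SDTP assignment of the penultimate target through the learned inverse, and the difference-corrected targets for the remaining layers. The linear-plus-tanh inverses and the MSE-style output nudge are simplifications; in the paper the inverses are trained with the denoising loss and the output layer is trained on the task loss directly.

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [4, 8, 8, 3]                       # input, two hidden layers, output
L = len(sizes) - 1

# Forward weights (W, b) and separate feedback/inverse weights (V, c):
# the feedback weights are independent of the forward ones (no weight transport).
W = [rng.normal(0, 0.5, (sizes[l + 1], sizes[l])) for l in range(L)]
b = [np.zeros(sizes[l + 1]) for l in range(L)]
V = [rng.normal(0, 0.5, (sizes[l], sizes[l + 1])) for l in range(L)]
c = [np.zeros(sizes[l]) for l in range(L)]

def f(l, h):      # forward map into layer l + 1
    return np.tanh(W[l] @ h + b[l])

def g(l, h):      # approximate inverse back into layer l (learned in the paper)
    return np.tanh(V[l] @ h + c[l])

x, y_onehot = rng.normal(size=4), np.array([0.0, 1.0, 0.0])

# 1) Forward pass.
h = [x]
for l in range(L):
    h.append(f(l, h[l]))

# 2) Output target: nudge the output toward lower loss (MSE-style for simplicity).
h_hat = [None] * (L + 1)
h_hat[L] = h[L] - 0.1 * (h[L] - y_onehot)

# 3) SDTP penultimate target: pure inverse of the output target, no gradients.
#    (DTP would instead take a gradient step of the output loss w.r.t. h[L-1].)
h_hat[L - 1] = g(L - 1, h_hat[L])

# 4) Remaining targets via the difference correction of DTP/SDTP.
for l in range(L - 2, 0, -1):
    h_hat[l] = h[l] + g(l, h_hat[l + 1]) - g(l, h[l + 1])

# Local layer losses on which each layer's forward weights would be trained.
local_losses = [np.sum((f(l, h[l]) - h_hat[l + 1]) ** 2) for l in range(L)]
print(local_losses)
```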
For the evaluation of fully-connected architectures we chose a network from the original DTP paper, consisting of 7 hidden layers with 240 units per layer. While 7 hidden layers provide arguably excessive capacity for this task, this setup is well-suited for understanding how suitable the considered methods are for learning in relatively deep networks which are known to be prone to exploding or vanishing learning signals. The locally-connected architecture consisted of 4 hidden layers and has the following structure: (3 × 3, 32, 2, SAME), (3 × 3, 16, 1, SAME), (3 × 3, 16, 1, SAME), (3 × 3, 10, 1, VALID).Results are reported in table 1 and the learning dynamics is plotted on figure 4. Quite surprisingly, SDTP performed competitively with respect to both DTP and BP, even though it didn't use gradient propagation to assign targets for the penultimate hidden layer. This suggests that, at least for relatively simple learning tasks, the problem of finite number of targets may not be as serious as one might expect. Locally connected architectures performed well with all variants of target propagation, and about as well as with BP. Still, the ing test accuracy did not match previous known obtained with convolutional networks, which can produce less than 1% test error, see, e.g. BID19. However, the observed improvement in generalization in our experiments must have been solely caused by locally-connected layers, as none of the fully-connected networks with smaller number of hidden layers (and hence with less excessive numbers of parameters) performed similarly. We noticed that target propagation showed noisier and slower learning comparing to BP (see FIG2). Yet, with early stopping and hyper-parameter tuning it performed competitively. One can also see that with a fully-connected architecture BP achieved worse test error selected by our methodology. This is likely explained by the fact that BP overfits to the training set faster (in contrast, none of target propagation variants achieved 0% train error). These same phenomena were also observed in the locally-connected network. CIFAR-10 is a more challenging dataset introduced by BID15. It consists of 32 × 32 RGB images of 10 categories of objects in natural scenes, split into 50000 train and 10000 test images, where we also reserve 10000 train images for validation. In contrast to MNIST, classes in CIFAR-10 do not have a "canonical appearance" such as a "prototypical bird" or "prototypical truck" as opposed to "prototypical 7" or "prototypical 9". This makes them harder to classify with simple template matching, making depth imperative for achieving good performance. To our best knowledge, this is the first empirical study of biologically-motivated learning methods without weight transport on this dataset. We considered a fully-connected network with 3 hidden layers of 1024 units and a 5-layer network with locally-connected layers having the following structure: (3 × 3, 32, 2, SAME), (3 × 3, 32, 2, SAME), (3 × 3, 16, 1, SAME), (3 × 3, 16, 2, SAME), (1 × 1, 10, 1, SAME).Final can be found in table 2. One can see that with even on a more complex dataset different TP variants, including the most biologically-feasible SDTP performed similarly to BP. Clearly, the data augmentation employed (random crops and left-right flips) has been necessary for the locallyconnected network to demonstrate a significant improvement over the fully-connected network, otherwise LC models begin to overfit (see FIG2). 
At the same time, convolutional analog of the LC network has achieved 31.23% and 34.37% of train and test error correspondingly, without use of data augmentation. This quantitatively demonstrates the need of further advances in biologically-plausible architectures in order to match performance of modern convolutional networks. Table 3: Top-1 test error on ImageNet after 18 epochs. Finally, we assessed performance of the methods on the ImageNet dataset BID28, a large-scale benchmark that has propelled recent progress in deep learning. Again, to the best of our knowledge, this is the first empirical study of biologically-motivated methods and architectures conducted on a dataset of such scale and difficulty. ImageNet consists of 1271167 training examples from which 10000 were reserved for validation and 50000 for testing. It has 1000 object classes appearing in a variety of natural scenes and captured in high-resolution images (resized to 224 × 224).The locally-connected architecture we considered for this experiment was inspired by the ImageNet architecture used in BID31. Unfortunately, the naive replacement of convolutional layers with locally-connected layers would into a computationally-prohibitive architecture, so we decreased number of output channels in the layers and also removed layers with 1 × 1 filters. We also slightly decreased filters in the first layer, from 11 × 11 to 9 × 9. The ing network had the following architecture: (9 × 9, 48, 4, SAME), pooling, (5 × 5, 64, 1, SAME), pooling, (3 × 3, 96, 1, SAME), pooling, (3 × 3, 128, 1, SAME), spatial 6 × 6 average. Here every pooling layer is an average pooling with 3 × 3 receptive field. See the appendix for details of implementing locally-connected networks. To further reduce the amount of required computation, we included only parallel variants of DTP and SDTP in the evaluation, as these methods are more representative of the biological constraints, and are more straightforward to implement given the size of this dataset's epochs. Models were trained for 5 days, ing in 18 passes over training set. The final can be found in table 3. Unlike on MNIST and CIFAR, on ImageNet all variants performed quite poorly. Additionally, it is on this dataset where we first observed a striking difference between BP and the TP variants. A number of factors could contribute to this . One factor may be that deeper networks might require more careful hyperparameter tuning when using TP; for example, different learning rates or amount of noise injected for each layer. A second factor may be the difficulty with learning in the output layer, where a 1000-dimensional vector is predicted from just a 128-dimensional output from the final spatial average layer. Moreover, the inverse computation involves non-compressing learning, which has not been well studied in the context of TP. Unfortunately, preserving the original 1920 channels in the layer certainly presents a computational challenge. Addressing both of these factors could help improve performance, so it would be untimely to conclude on any principal inefficiencies of TP. Therefore, we leave the challenge of matching performance of BP on ImageNet to the future work. Historically, there has been significant disagreement about whether BP can tell us anything interesting about learning in the brain BID5 BID8. Indeed, from the mid 1990s to 2010, work on applying BP to the brain all but disappeared. 
Recent progress in machine learning has prompted a revival of this debate; where other approaches have failed, deep networks trained via BP have been key to achieving impressive performance on difficult datasets such as ImageNet. It is once again natural to wonder whether some approximation of BP might underlie learning in the brain. However, none of the algorithms proposed as approximations of BP have been tested on the datasets that were instrumental in convincing the machine learning and neuroscience communities to revisit these questions. Here we introduced a straightforward variant of difference target-propogation that completely removed gradient propagation and weight transport and tested it on the challenging task of classifying CIFAR and ImageNet images. We also investigated and reported on the use of local connectivity. We demonstrated that networks trained with SDTP without any weight sharing (i.e. weight transport in the backward pass or weight tying in convolutions) are generally able to compete with those trained with BP on difficult tasks such as CIFAR. However, BP significantly outperforms both DTP and SDTP on ImageNet, and more work is required to understand why this issue arises at scale. We note that although activity-propagation-based algorithms go a long way towards biological plausibility, there are still many biological constraints that we did not address here. For example, we've set aside the question of spiking neurons entirely to focus on asking whether variants of TP can scale up to solve difficult problems at all. The question of spiking networks is an important one BID29 BID9 ), but it is nevertheless possible to gain algorithmic insight to the brain without tackling all of the elements of biological complexity simultaneously. Similarly, we also ignore Dale's law in all of our experiments BID24. In general, we've aimed at the simplest models that allow us to address questions around weight sharing, and the form and function of feedback communication. Algorithms that contend for a role in helping us understand learning in cortex should be able to perform well on difficult domains without relying on weight transport or tying. Thus, our offer a new benchmark for future work looking to evaluate the effectiveness of potential biologically plausible algorithms in more powerful architectures and on more difficult datasets. Although locally-connected layers can be seen as a simple generalization of convolution layers, their implementation is not entirely straightforward. First, a locally-connected layer has many more trainable parameters than a convolutional layer with an equivalent specification (i.e. receptive field size, stride and number of output channels). This means that a simple replacement of every convolutional layer with a locally-connected layer can be computationally prohibitive for larger networks. Thus, one has to decrease the number of parameters in some way to run experiments using a reasonable amount of memory and compute. In our experiments we opted to decrease the number of output channels in each layer by a given factor. Obviously, this can have a negative effect on the ing performance and more work needs to be done to scale locally-connected architectures. Inverse operations When training locally-connected layers with target propagation, one also needs to implement the inverse computation in order to train the feedback weights. 
As in fully-connected layers, the forward computation implemented by both locally-connected and convolutional layers can be seen as a linear transformation y = W x + b, where the matrix W has a special, sparse structure (i.e., has a block of non-zero elements, and zero-elements elsewhere), and the dimensionality of y is not more than x. The inverse operation requires computation of the form x = V y + c, where matrix V has a similar sparse structure as W T. However, given this sparsity of V, computing the inverse of y using V would be highly inefficient BID6. We instead use an implementation trick often used in deconvolutional architectures. First, we define a forward computation z = Ax, where z and A are dummy activities and weights. We then define a transpose matrix as the gradient of this feedforward operation: DISPLAYFORM0 and thus DISPLAYFORM1 The gradient dz dx (and its multiplication with y) can be very quickly computed by the means of automatic differentiation in many popular deep learning frameworks. Note that this is strictly an implementation detail and does not introduce any additional use of gradients or weight sharing in learning. For DTP and SDTP we optimized over parameters of the model and inverse Adam optimizers, learning rate α used to compute targets for h L−1 in DTP, and the Gaussian noise magnitude σ used to train inverses. For backprop we optimized only the model Adam optimizer parameters. For all experiments the best hyperparameters were found by random searches over 60 random configurations drawn from the relevant ranges specified in table 4. As we mention in section 2.1.1, in our implementation of TP algorithms we use denoising training of model inverses which we find more biologically motivated than noise-preserving training used by. In particular, because downstream activity will always have noise applied to it (e.g., given that downstream neurons spike stochastically), one is always fundamentally in the denoising case in the brain. We did not observe a significant empirical difference between these two methods in practice for either DTP and SDTP. FIG6 shows the learning dynamics for parallel versions of DTP and SDTP with noise-preserving inverse losses to compare with figure 2c with denoising inverse loss. One can see that the considered methods converge to roughly same train and test errors with similar speed.
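As a concrete version of the implementation trick described above for applying the transpose of a locally connected operation, the following JAX sketch (my own; the paper's implementation is in TensorFlow) computes the action of the transposed sparse weight matrix as a vector-Jacobian product of the forward op, never building the matrix explicitly. The 1-D locally connected layer and its index layout are simplifying assumptions for the example, and the result is checked against an explicit dense transpose; for a linear map the vector-Jacobian product is exactly multiplication by the transpose, which is the relation the appendix relies on.

```python
import jax
import jax.numpy as jnp
import numpy as np

def lc_forward(x, weights, idx):
    # Simplified 1-D locally connected layer: output unit o applies its own
    # filter weights[o] to the input entries x[idx[o]] (no weight sharing).
    return jnp.sum(weights * x[idx], axis=1)

def lc_transpose_apply(y, weights, idx, in_dim):
    # Action of the transposed (sparse) weight matrix on y, obtained as a
    # vector-Jacobian product of the forward op; the sparse matrix is never
    # materialized.
    _, vjp_fn = jax.vjp(lambda x: lc_forward(x, weights, idx), jnp.zeros(in_dim))
    (xt,) = vjp_fn(y)
    return xt

# Quick check against an explicit dense transpose on a tiny example.
in_dim, out_dim, k = 6, 4, 3
rng = np.random.default_rng(0)
idx = jnp.array([[i, i + 1, i + 2] for i in range(out_dim)])   # receptive fields
weights = jnp.array(rng.normal(size=(out_dim, k)))
y = jnp.array(rng.normal(size=out_dim))

dense_W = np.zeros((out_dim, in_dim))
for o in range(out_dim):
    dense_W[o, np.array(idx[o])] = np.array(weights[o])

print(np.allclose(lc_transpose_apply(y, weights, idx, in_dim), dense_W.T @ np.array(y)))
```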
BypdvewVM
Benchmarks for biologically plausible learning algorithms on complex datasets and architectures
Deep neural networks (DNNs) usually contain millions, maybe billions, of parameters/weights, making both storage and computation very expensive. This has motivated a large body of work to reduce the complexity of the neural network by using sparsity-inducing regularizers. Another well-known approach for controlling the complexity of DNNs is parameter sharing/tying, where certain sets of weights are forced to share a common value. Some forms of weight sharing are hard-wired to express certain in- variances, with a notable example being the shift-invariance of convolutional layers. However, there may be other groups of weights that may be tied together during the learning process, thus further re- ducing the complexity of the network. In this paper, we adopt a recently proposed sparsity-inducing regularizer, named GrOWL (group ordered weighted l1), which encourages sparsity and, simulta- neously, learns which groups of parameters should share a common value. GrOWL has been proven effective in linear regression, being able to identify and cope with strongly correlated covariates. Unlike standard sparsity-inducing regularizers (e.g., l1 a.k.a. Lasso), GrOWL not only eliminates unimportant neurons by setting all the corresponding weights to zero, but also explicitly identifies strongly correlated neurons by tying the corresponding weights to a common value. This ability of GrOWL motivates the following two-stage procedure: (i) use GrOWL regularization in the training process to simultaneously identify significant neurons and groups of parameter that should be tied together; (ii) retrain the network, enforcing the structure that was unveiled in the previous phase, i.e., keeping only the significant neurons and enforcing the learned tying structure. We evaluate the proposed approach on several benchmark datasets, showing that it can dramatically compress the network with slight or even no loss on generalization performance. Deep neural networks (DNNs) have recently revolutionized machine learning by dramatically advancing the state-of-the-art in several applications, ranging from speech and image recognition to playing video games BID20. A typical DNN consists of a sequence of concatenated layers, potentially involving millions or billions of parameters; by using very large training sets, DNNs are able to learn extremely complex non-linear mappings, features, and dependencies. A large amount of research has focused on the use of regularization in DNN learning BID20, as a means of reducing the generalization error. It has been shown that the parametrization of many DNNs is very redundant, with a large fraction of the parameters being predictable from the remaining ones, with no accuracy loss BID14. Several regularization methods have been proposed to tackle the potential over-fitting due to this redundancy. Arguably, the earliest and simplest choice is the classical 2 norm, known as weight decay in the early neural networks literature, and as ridge regression in statistics. In the past two decades, sparsity-inducing regularization based on the 1 norm (often known as Lasso) BID35, and variants thereof, became standard tools in statistics and machine learning, including in deep learning BID20. Recently, BID32 used group-Lasso (a variant of Lasso that assumes that parameters are organized in groups and encourages sparsity at the group level BID37) in deep learning. 
One of the effects of Lasso or group-Lasso regularization in learning a DNN is that many of the parameters may become exactly zero, thus reducing the amount of memory needed to store the model, and lowering the computational cost of applying it. Figure 1: A DNN is first trained with GrOWL regularization to simultaneously identify the sparse but significant connectivities and the correlated cluster information of the selected features. We then retrain the neural network only in terms of the selected connectivities while enforcing parameter sharing within each cluster. It has been pointed out by several authors that a major drawback of Lasso (or group-Lasso) regularization is that in the presence of groups of highly correlated covariates/features, it tends to select only one or an arbitrary convex combination of features from each group BID6 BID7 BID17 BID28 BID42. Moreover, the learning process tends to be unstable, in the sense that subsets of parameters that end up being selected may change dramatically with minor changes in the data or algorithmic procedure. In DNNs, it is almost unavoidable to encounter correlated features, not only due to the high dimensionality of the input to each layer, but also because neurons tend to co-adapt, yielding strongly correlated features that are passed as input to the subsequent layer BID34.In this work, we propose using, as a regularizer for learning DNNs, the group version of the ordered weighted 1 (OWL) norm BID17, termed group-OWL (GrOWL), which was recently proposed by BID28. In a linear regression context, GrOWL regularization has been shown to avoid the above mentioned deficiency of group-Lasso regularization. In addition to being a sparsity-inducing regularizer, GrOWL is able to explicitly identify groups of correlated features and set the corresponding parameters/weights to be very close or exactly equal to each other, thus taking advantage of correlated features, rather than being negatively affected by them. In deep learning parlance, this corresponds to adaptive parameter sharing/tying, where instead of having to define a priori which sets of parameters are forced to share a common value, these sets are learned during the training process. We exploit this ability of GrOWL regularization to encourage parameter sparsity and group-clustering in a two-stage procedure depicted in Fig. 1: we first use GrOWL to identify the significant parameters/weights of the network and, simultaneously, the correlated cluster information of the selected features; then, we retrain the network only in terms of the selected features, while enforcing the weights within the same cluster to share a common value. The experiments reported below confirm that using GrOWL regularization in learning DNNs encourages sparsity and also yields parameter sharing, by forcing groups of weights to share a common absolute value. We test the proposed approach on two benchmark datasets, MNIST and CIFAR-10, comparing it with weight decay and group-Lasso regularization, and exploring the accuracy-memory trade-off. Our indicate that GrOWL is able to reduce the number of free parameters in the network without degrading the accuracy, as compared to other approaches. In order to relieve the burden on both required memory and data for training and storing DNNs, a substantial amount of work has focused on reducing the number of free parameters to be estimated, namely by enforcing weight sharing. The classical instance of sharing is found in the convolutional layers of DNNs BID20. 
In fact, weight-sharing as a simplifying technique for NNs can be traced back to more than 30 years ago BID24 BID30.Recently, there has been a surge of interest in compressing the description of DNNs, with the aim of reducing their storage and communication costs. Various methods have been proposed to approximate or quantize the learned weights after the training process. BID15 have shown that, in some cases, it is possible to replace the original weight matrix with a low-rank approximation. Alternatively, BID1 propose retraining the network layer by layer, keeping the layer inputs and outputs close to the originally trained model, while seeking a sparse transform matrix, whereas BID19 propose using vector quantization to compress the parameters of DNNs. Network pruning is another relevant line of work. In early work, BID25 and BID22 use the information provided by the Hessian of the loss function to remove less important weights; however, this requires expensive computation of second order derivatives. Recently, BID21 reduce the number of parameters by up to an order of magnitude by alternating between learning the parameters and removing those below a certain threshold. propose to prune filters, which seeks sparsity with respect to neurons, rather than connections; that approach relieves the burden on requiring sparse libraries or special hardware to deploy the network. All those methods either require multiple training/retraining iterations or a careful choice of thresholds. There is a large body of work on sparsity-inducing regularization in deep learning. For example, BID12 exploit 1 and 0 regularization to encourage weight sparsity; however, the sparsity level achieved is typically modest, making that approach not competitive for DNN compression. Group-Lasso has also been used in training DNNs; it allows seeking sparsity in terms of neurons BID32 BID2 BID41 BID27 or other structures, e.g., filters, channels, filter shapes, and layer depth BID36. However, as mentioned above, both Lasso and group-Lasso can fail in the presence of strongly correlated features (as illustrated in Section 4, with both synthetic data and real data. A recent stream of work has focused on using further parameter sharing in convolutional DNNs. By tying weights in an appropriate way, BID16 obtain a convolutional DNN with rotation invariance. On the task of analyzing positions in the game Go, BID10 showed improved performance by constraining features to be invariant to reflections along the x-axis, y-axis, and diagonal-axis. Finally, BID9 used a hash function to randomly group the weights such that those in a hash bucket share the same value. In contrast, with GrOWL regularization, we aim to learn weight sharing from the data itself, rather than specifying it a priori. Dropout-type methods have been proposed to fight over-fitting and are very popular, arguably due to their simplicity of implementation BID34 . Dropout has been shown to effectively reduce over-fitting and prevent different neurons from co-adapting. Decorrelation is another popular technique in deep learning pipelines BID4 BID11 BID29 ; unlike sparsity-inducing regularizers, these methods try to make full use of the model's capacity by decorrelating the neurons. Although dropout and decorrelation can reduce over-fitting, they do not compress the network, hence do not address the issue of high memory cost. 
It should also be mentioned that our proposal can be seen as complementary to dropout and decorrelation: whereas dropout and decorrelation can reduce co-adaption of nodes during training, GrOWL regularization copes with co-adaptation by tying together the weights associated to co-adapted nodes. We start by recalling the definition of the group-OWL (GrOWL) regularizer and very briefly reviewing some of its relevant properties BID28. Definition 1. Given a matrix W ∈ R n×m, let w [i]· denote the row of W with the i-th largest 2 norm. Let λ ∈ R n +, with 0 < λ 1 ≥ λ 2 ≥ · · · ≥ λ n ≥ 0. The GrOWL regularizer (which is a norm) DISPLAYFORM0 This is a group version of the OWL regularizer BID17, also known as WSL1 (weighted sorted 1 BID40) and SLOPE BID5, where the groups are the rows of its matrix argument. It is clear that GrOWL includes group-Lasso as a special case when λ 1 = λ n. As a regularizer for multiple/multi-task linear regression, each row of W contains the regression coefficients of a given feature, for the m tasks. It has been shown that by adding the GrOWL regularizer to a standard squared-error loss function, the ing estimate of W has the following property: rows associated with highly correlated covariates are very close or even exactly equal to each other BID28. In the linear case, GrOWL encourages correlated features to form predictive clusters corresponding to the groups of rows that are nearly or exactly equal. The rationale underlying this paper is that when used as a regularizer for DNN learning, GrOWL will induce both sparsity and parameters tying, as illustrated in Fig. 2 and explained below in detail. A typical feed-forward DNN with L layers can be treated as a function f of the following form: DISPLAYFORM0 L denotes the set of parameters of the network, and each f i is a componentwise nonlinear activation function, with the rectified linear unit (ReLU), the sigmoid, and the hyperbolic tangent being common choices for this function BID20. DISPLAYFORM1, DNN learning may be formalized as an optimization problem, DISPLAYFORM2 where L y,ŷ is the loss incurred when the DNN predictsŷ for y, and R is a regularizer. Here, we adopt as regularizer a sum of GrOWL penalties, each for each layer of the neural network, i.e., DISPLAYFORM3 where N l denotes the number of neurons in the l-th layer and 0 < λ DISPLAYFORM4.., b L, the biases are not regularized, as is common practice. As indicated in Eq., the number of groups in each GrOWL regularizer is the number of neurons in the previous layer, i.e., λ (l) ∈ R N l−1. In other words, we treat the weights associated with each input feature as a group. For fully connected layers, where W l ∈ R N l−1 ×N l, each group is a row of the weight matrix. In convolutional layers, where W l ∈ R Fw×F h ×N l−1 ×N l, with F w and F h denoting the width and height, respectively, of each filter, we first reshape W l to a 2-dimensional array, i.e., DISPLAYFORM5, and then apply GrOWL on the reshaped matrix. That is, if the l-th layer is convolutional, then DISPLAYFORM6 Each row of W 2D l represents the operation on an input channel. The rationale to apply the GrOWL regularizer to each row of the reshaped weight matrix is that GrOWL can select the relevant features of the network, while encouraging the coefficient rows of each layer associated with strongly correlated features from the previous layer to be nearly or exactly equal, as depicted in Fig. 2. 
The goal is to significantly reduce the complexity by: (i) pruning unimportant neurons of the previous layer that correspond to zero rows of the (reshaped) weight matrix of the current layer; (ii) grouping the rows associated with highly correlated features of the previous layer, thus encouraging the coefficient rows in each of these groups to be very close to each other. As a consequence, in the retraining process, we can further compress the neural network by enforcing the parameters within each neuron that belong to the same cluster to share same values. In the work of BID2, each group is predefined as the set of parameters associated to a neuron, and group-Lasso regularization is applied to seek group sparsity, which corresponds to zeroing out redundant neurons of each layer. In contrast, we treat the filters corresponding Figure 2: GrOWL's regularization effect on DNNs. Fully connected layers (Left): for layer l, GrOWL clusters the input features from the previous layer, l − 1, into different groups, e.g., blue and green. Within each neuron of layer l, the weights associated with the input features from the same cluster (input arrows marked with the same color) share the same parameter value. The neurons in layer l − 1 corresponding to zero-valued rows of W l have zero input to layer l, hence get removed automatically. Convolutional layers (right): each group (row) is predefined as the filters associated with the same input channel; parameter sharing is enforced among the filters within each neuron that corresponds with the same cluster (marked as blue with different effects) of input channels.to the same input channel as a group, and GrOWL is applied to prune the redundant groups and thus remove the associated unimportant neurons of the previous layer, while grouping associated parameters of the current layer that correspond with highly correlated input features to different clusters. Moreover, as shown in Section 4, group-Lasso can fail at selecting all relevant features of previous layers, and for the selected ones the corresponding coefficient groups are quite dissimilar from each other, making it impossible to further compress the DNN by enforcing parameter tying. To solve, we use a proximal gradient algorithm BID3, which has the following general form: at the t-th iteration, the parameter estimates are updated according to DISPLAYFORM0 where, for some convex function Q, prox Q denotes its proximity operator (or simply "prox") BID3, defined as prox DISPLAYFORM1 2 denotes the sum of the squares of the differences between the corresponding components of ν and ξ, regardless of their organization (here, a collection of matrices and vectors).Since R(θ), as defined in FORMULA4, is separable across the weight matrices of different layers and zero for b 1,..., b L, the corresponding prox is also separable, thus DISPLAYFORM2 DISPLAYFORM3 It was shown by BID28 that the prox of GrOWL can be computed as follows. For some matrix V ∈ R N ×M, let U = prox Ω λ (V), and v i and u i denote the corresponding i-th rows. Then, DISPLAYFORM4 DISPLAYFORM5 For vectors in R N (in which case GrOWL coincides with OWL), prox Ω λ (l) can be computed with O(n log n) cost, where the core computation is the socalled pool adjacent violators algorithm (PAVA BID13)) for isotonic regression. We provide one of the existing algorithms in Appendix A; for details, the reader is referred to the work of BID5 and BID39. In this paper, we apply the proximal gradient algorithm per epoch, which generally performs better. 
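The prox computation just described — an OWL prox applied to the vector of row ℓ2 norms, computed with a pool-adjacent-violators (PAVA) step, followed by rescaling of the rows — can be sketched as follows. This is an illustrative implementation of our own (function names are ours), using scikit-learn's isotonic regression for the PAVA step.

```python
import numpy as np
from sklearn.isotonic import isotonic_regression

def prox_owl(v, lam):
    """Prox of the OWL norm for a non-negative vector v: sort in decreasing order,
    subtract lam, project onto the non-increasing cone (PAVA), clip at zero, unsort."""
    order = np.argsort(v)[::-1]
    z = v[order] - lam                                # lam assumed non-increasing
    z = isotonic_regression(z, increasing=False)      # pool-adjacent-violators step
    z = np.clip(z, 0.0, None)
    out = np.empty_like(v)
    out[order] = z
    return out

def prox_growl(V, lam):
    """Prox of GrOWL: apply the OWL prox to the row l2 norms of V, then rescale each
    nonzero row accordingly (rows whose new norm is zero are pruned)."""
    norms = np.linalg.norm(V, axis=1)
    new_norms = prox_owl(norms, lam)
    scale = np.divide(new_norms, norms, out=np.zeros_like(norms), where=norms > 0)
    return V * scale[:, None]

def proximal_gradient_step(W, grad_W, lr, lam):
    """One proximal gradient update: gradient step on the loss, then the GrOWL prox
    with the regularization weights scaled by the learning rate."""
    return prox_growl(W - lr * grad_W, lr * lam)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    V = rng.normal(size=(10, 4))
    lam = np.linspace(2.0, 1.0, 10)                   # non-increasing weights
    print(prox_growl(V, lam))
```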
The training method is summarized in Algorithm 1. GrOWL is a family of regularizers, with different variants obtained by choosing different weight sequences λ 1,..., λ n. In this paper, we propose the following choice: DISPLAYFORM6 where p ∈ {1, ...n} is a parameter. The first p weights follow a linear decay, while the remaining ones are all equal to Λ 1. Notice that, if p = n, the above setting is equivalent to OSCAR BID6. Roughly speaking, Λ 1 controls the sparsifying strength of the regularizer, while Λ 2 controls the clustering property (correlation identification ability) of GrOWL BID28. Moreover, by setting the weights to a common constant beyond index p means that clustering is only encouraged among the p largest coefficients, i.e., only among relevant coefficient groups. Finding adequate choices for p, Λ 1, and Λ 2 is crucial for jointly selecting the relevant features and identifying the underlying correlations. In practice, we find that with properly chosen p, GrOWL is able to find more correlations than OSCAR. We explore different choices of p in Section 4.1. After the initial training phase, at each layer l, rows of W l that corresponds to highly correlated outputs of layer l − 1 have been made similar or even exactly equal. To further compress the DNN, we force rows that are close to each other to be identical. We first group the rows into different clusters 1 according to the pairwise similarity metric DISPLAYFORM0 where W l,i and W l,j denote the i-th and j-th rows of W l, respectively. With the cluster information obtained by using GrOWL, we enforce parameter sharing for the rows that belong to a same cluster by replacing their values with the averages (centroid) of the rows in that cluster. In the subsequent retraining process, let Gk denote the k-th cluster of the l-th layer, then centroid g DISPLAYFORM1 We assess the performance of the proposed method on two benchmark datasets: MNIST and CIFAR-10. We consider two different networks and compare GrOWL with group-Lasso and weight decay, in terms of the compression vs accuracy trade-off. For fair comparison, the training-retraining pipeline is used with the different regularizers. After the initial training phase, the rows that are close to each other are clustered together and forced to share common values in the retraining phase. We implement all models using Tensorflow BID0. We evaluate the effect of the different regularizers using the following quantities: sparsity = (#zero params)/(# total params), compression rate = (# total params)/(# unique params), and parameter sharing = (# nonzero params)/(# unique params). First, we consider a synthetic data matrix X with block-diagonal covariance matrix Σ, where each block corresponds to a cluster of correlated features, and there is a gap g between two blocks. Within each cluster, the covariance between two features X i and X j is cov(X i, X j) = 0.96 |i−j|, while features from different clusters are generated independently of each other. We set n = 784, K = 10, block size 50, and gap g = 28. We generate 10000 training and 1000 testing examples. FORMULA14 ).We train a NN with a single fully-connected layer of 300 hidden units. FIG0 the first 25000 entries of the sorted pairwise similarity matrices (Eq 10) obtained by applying GrOWL with different p (Eq 9) values. 
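Before turning to those results, the synthetic design just described can be made concrete with the following sketch (our own illustration, not the authors' generator; it only draws the input matrix X, since the construction of the synthetic targets is not spelled out here): within each block of 50 features the covariance is 0.96^|i−j|, blocks are separated by gaps of 28 independent features, and all other features are independent.

```python
import numpy as np

def block_correlated_data(n_samples, n_features=784, n_blocks=10, block_size=50,
                          gap=28, rho=0.96, seed=0):
    """Draw samples with a block-diagonal covariance: within each block,
    cov(X_i, X_j) = rho**|i-j|; features in the gaps and across blocks are
    independent standard normals."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n_samples, n_features))
    idx = np.arange(block_size)
    chol = np.linalg.cholesky(rho ** np.abs(idx[:, None] - idx[None, :]))
    start = 0
    for _ in range(n_blocks):
        z = rng.standard_normal((n_samples, block_size))
        X[:, start:start + block_size] = z @ chol.T    # correlated block
        start += block_size + gap                      # leave a gap of independent features
    return X

X_train = block_correlated_data(10000)
X_test = block_correlated_data(1000, seed=1)
print(X_train.shape, X_test.shape)
```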
By setting the weights beyond index p to a common constant implies that clustering is only encouraged among the p largest coefficients, i.e., relevant coefficient groups; however, FIG0 shows that, with properly chosen p, GrOWL yields more parameter tying than OSCAR (p = n). On the other hand, smaller p values allow using large Λ 2, encouraging parameter tying among relatively loose correlations. In practice, we find that for p around the target fraction of nonzero parameters leads to good performance in general. The intuition is that we only need to identify correlations among the selected important features. FIG0 shows that weight decay (denoted as 2) also pushes parameters together, though the parameter-tying effect is not as clear as that of GrOWL. As has been observed in the literature BID6, weight decay often achieves better generalization than sparsity-inducing regularizers. It achieves this via parameter shrinkage, especially in the highly correlated region, but it does not yield sparse models. In the following section, we explore the compression performance of GrOWL by comparing it with both group-Lasso and weight decay. We also explore how to further improve the accuracy vs compression trade-off by using sparsity-inducing regularization together with weight decay. For each case, the baseline performance is provided as the best performance obtained by running the original neural network (without compression) after sweeping the hyper-parameter on the weight decay regularizer over a range of values. The MNIST dataset contains centered images of handwritten digits, of size 28×28 pixels FIG2 shows the (784 × 784) correlation matrix of the dataset (the margins are zero due to the redundant of the images). We use a network with a single fully connected layer of 300 hidden units. The network is trained for 300 epochs and then retrained for an additional 100 epochs, both with momentum. The initial learning rate is set to 0.001, for both training and retraining, and is reduced by a factor of 0.96 every 10 epochs. We set p = 0.5, and Λ 1, Λ 2 are selected by grid search. Pairwise similarities (see Eq. FORMULA15) between the rows of the weight matrices learned with different regularizers are shown in FIG2. As we can see, GrOWL (+ 2) identifies more correlations than group-Lasso (+ 2), and the similarity patterns in FIG2 are very close to that of the data FIG2 ). On the other hand, weight decay also identifies correlations between parameter rows, but it does not induce sparsity. Moreover, as shown in Table 1 FORMULA15 ) of the parameter rows obtained by training the neural network with GrOWL, GrOWL+ 2, group-Lasso, group-Lasso+ 2 and weight decay. Table 1: Sparsity, parameter sharing, and compression rate on MNIST. Baseline model is trained with weight decay and we do not enforce parameter sharing for baseline model. We train each model for 5 times and report the average values together with their standard deviations. 
                Sparsity        Parameter Sharing   Compression ratio   Accuracy
none            0.0 ± 0%        1.0 ± 0             1.0 ± 0             98.3 ± 0.1%
weight decay    0.0 ± 0%        1.6 ± 0             1.6 ± 0             98.4 ± 0.0%
group-Lasso     87.6 ± 0.1%     1.9 ± 0.1           15.8 ± 1.0          98.1 ± 0.1%
group-Lasso+ℓ2  93.2 ± 0.4%     1.6 ± 0.1           23.7 ± 2.1          98.0 ± 0.1%
GrOWL           80.4 ± 1.0%     3.2 ± 0.1           16.7 ± 1.3          98.1 ± 0.1%
GrOWL+ℓ2        83.6 ± 0.5%     3.9 ± 0.1           24.1 ± 0.8          98.1 ± 0.1%
The compression vs accuracy trade-off of the different regularizers is summarized in Table 1, where we see that applying ℓ2 regularization together with group-Lasso or GrOWL leads to a higher compression ratio, with negligible effect on the accuracy. Table 1 also shows that, even with lower sparsity after the initial training phase, GrOWL (+ℓ2) compresses the network more than group-Lasso (+ℓ2), due to the significant amount of correlation it identifies; this also implies that group-Lasso only selects a subset of the correlated features, while GrOWL selects all of them. Moreover, group-Lasso suffers from randomly selecting a subset of correlated features; this effect is illustrated in FIG4, which plots the indices of nonzero rows, showing that GrOWL (+ℓ2) stably selects relevant features while group-Lasso (+ℓ2) does not. The mean ratios of changed indices² are 11.09%, 0.59%, 32.07%, and 0.62% for group-Lasso, GrOWL, group-Lasso+ℓ2, and GrOWL+ℓ2, respectively.
To evaluate the proposed method on large DNNs, we consider a VGG-like BID33 architecture proposed by BID38 on the CIFAR-10 dataset. The network architecture is summarized in Appendix C; compared with the original VGG of BID33, the fully connected layers are replaced with two much smaller ones. A batch normalization layer is added after each convolutional layer and after the first fully connected layer. Unlike BID38, we do not use dropout. We first train the network under the different regularizers for 150 epochs, then retrain it for another 50 epochs, using the learning rate decay scheme described by He et al.: the initial rates for the training and retraining phases are set to 0.01 and 0.001, respectively; the learning rate is multiplied by 0.1 every 60 epochs of the training phase, and every 20 epochs of the retraining phase. For GrOWL (+ℓ2), we set p = 0.1·n (see Eq. 9) for all layers, where n denotes the number of rows of the (reshaped) weight matrix of each layer. The results are summarized in Table 2. For all of the regularizers, we use the affinity propagation algorithm (with preference value set to 0.8) to cluster the rows at the end of the initial training process. Our experiments showed that it is hard to encourage parameter tying in the first 7 convolutional layers; this may be because the filters of these first 7 convolutional layers have comparatively large feature maps (from 32 × 32 to 8 × 8), which are only loosely correlated. We illustrate this reasoning in Fig. 6, showing the cosine similarity between the vectorized output channels of layers 1, 6, 10, and 11, at the end of the training phase; it can be seen that the outputs of layers 10 and 11 have many more significant similarities than those of layer 6.
² The mean ratio of changed indices is defined as:
Table 2: Sparsity (S1) and Parameter Sharing (S2) of VGG-16 on CIFAR-10. Layers marked by * are regularized. We report the results averaged over 5 runs. Columns: Weight Decay, group-Lasso, group-Lasso+ℓ2, GrOWL, GrOWL+ℓ2, each reported as (S1, S2). conv1: 0%, 1.
Figure 6: Output channel cosine similarity histogram obtained with different regularizers.
Labels: GO: GrOWL, GOL: GrOWL+ℓ2, GL: group-Lasso, GLL: group-Lasso+ℓ2, WD: weight decay.
Although the output channels of layer 1 also have certain similarities, as seen in Table 2, neither GrOWL (+ℓ2) nor weight decay tends to tie the associated weights. This may mean that the network is maintaining the diversity of the inputs in the first few convolutional layers. Although GrOWL and weight decay both encourage parameter tying in layers 9-13, weight decay does so with less intensity and does not yield a sparse model, thus it cannot significantly compress the network. Prior work proposes to prune small weights after an initial training phase with weight decay and then retrain the reduced network; however, this type of method only achieves compression ratios around 3. As has been noted in that line of work, layers 3-7 can be very sensitive to pruning; however, both GrOWL (+ℓ2) and group-Lasso (+ℓ2) effectively compress them, with minor accuracy loss. On the other hand, similar to what we observed when running the simple fully connected network on MNIST, the accuracy-memory trade-off improves significantly by applying GrOWL or group-Lasso together with ℓ2. However, Table 2 also shows that the trade-offs achieved by GrOWL (+ℓ2) and group-Lasso (+ℓ2) are almost the same. We suspect that this is caused by the fact that CIFAR-10 is simple enough that one can still expect good performance after strong network compression. We believe this gap in the compression vs accuracy trade-off can be further increased in larger networks on more complex datasets. We leave this question for future research.
We have proposed using the recent GrOWL regularizer for simultaneous parameter sparsity and tying in DNN learning. By leveraging GrOWL's capability of simultaneously pruning redundant parameters and tying parameters associated with highly correlated features, we achieve a significant reduction of model complexity, with slight or even no loss in generalization accuracy. We evaluate the proposed method on both a fully connected neural network and a deep convolutional neural network. The results show that GrOWL can compress large DNNs by factors ranging from 11.4 to 14.5, with negligible loss of accuracy. The correlation patterns identified by GrOWL are close to those of the input features to each layer. This may be important for revealing the structure of the features, contributing to the interpretability of deep learning models. On the other hand, by automatically tying together the parameters corresponding to highly correlated features, GrOWL alleviates the negative effect of strong correlations that might be induced by noisy inputs or the co-adaptation tendency of DNNs. The gap in the accuracy vs memory trade-off obtained by applying GrOWL and group-Lasso decreases as we move to large DNNs. Although we suspect this can be caused by running a much larger network on a simple dataset, it motivates us to explore different ways of applying GrOWL to compress neural networks. One possible approach is to apply GrOWL within each neuron, by predefining each 2D convolutional filter as a group (instead of all 2D convolutional filters corresponding to the same input feature). By doing so, we would encourage parameter sharing among much smaller units, which in turn would further improve the diversity vs parameter sharing trade-off. We leave this for future work.
Various methods have been proposed to compute the proximal mapping of OWL (ProxOWL). It has been proven that the computational complexity of these methods is O(n log n), which is only slightly worse than the soft-thresholding method used for ℓ1-norm regularization.
In this paper, we use Algorithm 2, which was originally proposed in BID5. Unlike k-means or agglomerative clustering, Affinity Propagation does not require the number of clusters as an input. We consider this a desirable property for enforcing parameter sharing in neural network compression, because the exact number of clusters is not available as prior information. In practice, the input preference of Affinity Propagation determines how likely each sample is to be chosen as an exemplar, and its value influences the number of clusters created.
APPENDIX C VGG-16 ON CIFAR-10
Table 4: VGG: Clustering rows over different preference values for running the affinity propagation algorithm (Algorithm 3). For each experiment, we report clustering accuracy (A), compression rate (C), and parameter sharing (S) of layers 9-14. For each regularizer, we use different preference values to run Algorithm 3 to cluster the rows at the end of the initial training process, and then retrain the neural network accordingly. The results are reported as averages over 5 training and retraining runs.
Preference Value   0.6               0.7               0.8               0.9
                   (A, C, S)         (A, C, S)         (A, C, S)         (A, C, S)
GrOWL              92.2%, 13.6, 3.5  92.2%, 12.5, 2.6  92.2%, 11.4, 2.1  92.2%, 10.9, 1.
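As a sketch of the clustering-and-tying step used throughout the experiments (affinity propagation on row similarities, followed by replacing each cluster by its centroid before retraining), the following illustration uses scikit-learn's AffinityPropagation with a precomputed similarity. It is our own sketch: the normalized-correlation similarity is a stand-in for the pairwise metric of Eq. (10), and the function name and tolerance are ours.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

def tie_rows(W, preference=0.8, zero_tol=1e-8):
    """Cluster the nonzero rows of W with affinity propagation and replace every
    row in a cluster by the cluster centroid; zero rows (pruned inputs) are kept."""
    norms = np.linalg.norm(W, axis=1)
    keep = np.where(norms > zero_tol)[0]
    R = W[keep]
    Rn = R / np.linalg.norm(R, axis=1, keepdims=True)
    sim = Rn @ Rn.T                                   # stand-in for the Eq. (10) similarity
    ap = AffinityPropagation(affinity="precomputed", preference=preference,
                             random_state=0).fit(sim)
    W_tied = W.copy()
    for k in np.unique(ap.labels_):
        members = keep[ap.labels_ == k]
        W_tied[members] = R[ap.labels_ == k].mean(axis=0)   # shared centroid value
    return W_tied, ap.labels_

# During retraining, rows in the same cluster can be kept identical, e.g. by averaging
# their gradients and applying the same update to every member row.
```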
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rypT3fb0b
We have proposed using the recent GrOWL regularizer for simultaneous parameter sparsity and tying in DNN learning.
Weight-sharing—the simultaneous optimization of multiple neural networks using the same parameters—has emerged as a key component of state-of-the-art neural architecture search. However, its success is poorly understood and often found to be surprising. We argue that, rather than just being an optimization trick, the weight-sharing approach is induced by the relaxation of a structured hypothesis space, and introduces new algorithmic and theoretical challenges as well as applications beyond neural architecture search. Algorithmically, we show how the geometry of ERM for weight-sharing requires greater care when designing gradient- based minimization methods and apply tools from non-convex non-Euclidean optimization to give general-purpose algorithms that adapt to the underlying structure. We further analyze the learning-theoretic behavior of the bilevel optimization solved by practical weight-sharing methods. Next, using kernel configuration and NLP feature selection as case studies, we demonstrate how weight-sharing applies to the architecture search generalization of NAS and effectively optimizes the ing bilevel objective. Finally, we use our optimization analysis to develop a simple exponentiated gradient method for NAS that aligns with the underlying optimization geometry and matches state-of-the-art approaches on CIFAR-10. Weight-sharing neural architecture search (NAS) methods have achieved state-of-the-art performance while requiring computation training of just a single shared-weights network (; ; . However, weight-sharing remains poorly understood. In this work, we present a novel perspective on weight-sharing NAS motivated by the key observation that these methods subsume the architecture hyperparameters as another set of learned parameters of the shared-weights network, in effect extending the hypothesis class. An important ramification of this insight is that weight-sharing is not NAS-specific and can be used to tune hyperparameters corresponding to parameterized feature maps of the input data. We refer this larger subset of hyperparameter optimization problems as architecture search, and we study the following two questions associated with weight-sharing applied to the architecture search problem: 1. How can we efficiently optimize the objective induced by applying weight sharing to architecture search, namely minimizing empirical risk in the joint space of model and architecture parameters? For large structured search spaces that preclude brute force search, a natural approach to architecture search with weight-sharing is to use gradient-based methods to minimize the empirical risk over a continuous relaxation of the discrete space . Although this has allowed NAS researchers to apply their preferred optimizers to determine architecture weights, it is far from clear that the success of established methods for unconstrained optimization in training neural networks will naturally extend to these constrained and often non-Euclidean environments. As we foresee that architecture search spaces will continue to become more complex and multi-faceted, we argue for and develop a more principled, geometry-aware formulation of the optimization problem. 
Drawing upon the mirror descent meta-algorithm and successive convex approximation, we give non-asymptotic stationary-point convergence guarantees for the empirical risk minimization (ERM) objective associated with weight-sharing via algorithms that simultaneously connect to the underlying problem structure and handle the alternating-block nature of the architecture search. Our guarantees inform the design of gradient-based weight-sharing methods by explicitly quantifying the impact of optimizing in the right geometry on convergence rates. 2. What are the generalization benefits of solving a bilevel optimization for the architecture search problem commonly considered in practice? At its core, the goal of architecture search is to find a configuration that achieves good generalization performance. Consequently, a bilevel objective that optimizes the architecture weights using a separate validation loss is commonly used in practice in lieu of the ERM objective naturally induced by weight sharing . The learning aspects of this approach have generally been studied in settings with much stronger control over the model complexity . We provide generalization guarantees for this objective over structured hypothesis spaces associated with a finite set of architectures; this leads to meaningful bounds for simple feature map selection problems as well as insightful for the NAS problem that depend on the size of the space of global optima. To validate our theoretical , we conduct empirical studies of weight-sharing in two settings: shallow feature map selection, i.e., tuning the hyperparameters of kernel classification and NLP featurization pipelines, and CNN neural architecture search. In we demonstrate that weightsharing efficiently optimizes the bilevel objective and achieves low generalization error with respect to the best architecture setting. For, motivated by insights from our convergence analysis, we develop a simple exponentiated gradient version of DARTS called EDARTS that better exploits the geometry of the optimization problem. We evaluate EDARTS on the design of CNN architectures for CIFAR-10 and demonstrate that EDARTS finds better architectures than DARTS in less than half the time. We also achieve very competitive relative to state-of-the-art architectures when using an extended evaluation routine. Related Work: Our work on optimization for weight-sharing benefits from the literature on firstorder stochastic optimization and in particular the mirror descent framework . Specifically, we use successive convex approximation to show convergence of alternating minimization and derive geometry-dependent rates comparable to existing work on non-convex stochastic mirror descent . Our generalizes to the constrained, nonEuclidean, and multi-block setting an approach of for obtaining non-convex convergence from strongly convex minimization, which may be of independent interest. Previous optimization for NAS have generally only shown bounds on auxiliary quantities such as regret that are not well-connected to the learning objective ) or have only given monotonic improvement or asymptotic guarantees . However, due to the generality of mirror descent, the approaches in the middle three papers can be seen as special cases of our analysis. Finally, our analysis of the properties of the bilevel optimization is related to work on model selection , but does not consider the configuration parameters as explicit controls on the model complexity. 
Our learning are broadly related to hyperparameter optimization, although most work focuses on algorithmic and not statistical questions . In this section, we formalize the weight-sharing learning problem, relate it to traditional ERM, and provide examples for the case of NAS and feature map selection that we use for the rest of the paper. Our main observation is that weight-sharing for architecture search extends the hypothesis space to be further parameterized by a finite set of configurations C. Formally, we have a structured hypothesis space H(C, W) = {h w ∈ H(C, W) with low population error w (x), y) for loss: Y × Y → R. Hence, we can apply ERM as usual, with optional regularization, to select a hypothesis from the extended hypothesis space; in fact this is done by some NAS methods (e.g.,). The learning algorithm is then min w∈W,c∈C for block specific regularizers R W and R C. Note that in the absence of weight-sharing, we would need to learn a separate hypothesis h (c) wc for each hypothesis subclass H c. Although a brute force approach to selecting a hypothesis from H(C, W) via ERM would in effect require this as well, our subsequent examples demonstrate how the weight-sharing construct allows us to apply more efficient gradient-based optimization approaches, which we study in Section 3. Feature Map Selection: In this setting, the structure is induced by a set of feature transformations C = {φ i : X → R n for i = 1, . . ., k}, so the hypothesis space is {f w (φ i (·)): w ∈ W, φ i ∈ C} for some W ⊂ R d. Examples of feature map selection problems include tuning kernel hyperparameters for kernel ridge classification and tuning NLP featurization pipelines for text classification. In these cases f w is a linear mapping f w (·) = w, · and W ⊂ R n. Neural Architecture Search: Weight-sharing methods almost exclusively use micro cell-based search spaces for their tractability and additional structure (; . These search spaces can be represented as directed acyclic graphs (DAGs) with a set of ordered nodes N and edges E. Each node x (i) in the DAG is a feature representation and each edge o (i,j) is an operation on the feature of node j passed to node i and aggregated with other inputs to form x (j), with the restriction that a given node j can only receive edges from prior nodes as input. Hence, the feature at a given node i is. Search spaces are then specified by the number of nodes, the number of edges per node, and the set of operations O that can be applied at each edge. In this case the structure C ⊂ {0, 1} |E||O| of the hypothesis space is the set of all valid architectures for this DAG encoded by edge and operation decisions. Treating both weights w ∈ W and architecture decision c ∈ C as parameters, weight-sharing methods train a single shared-weights network h (c) w: X → Y encompassing all possible functions within the search space. Therefore, the sharedweights network includes all possible edges between nodes and all possible operations per edges. In addition to the weights w ∈ W corresponding to all the operations, the shared-weights network also takes architecture weights c ∈ C as input, where c indicates the weight given to operation o on edge (i, j) so that the feature of a given node i is Gradient-based weight-sharing methods apply continuous relaxations to the architecture search space in order to compute gradients. 
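To make the shared-weights cell concrete, the toy sketch below (our own stand-in, not the search space used in the experiments) computes each node feature as a sum over incoming edges of architecture-weighted operation outputs. With one-hot architecture rows it encodes a discrete architecture c ∈ {0, 1}^{|E||O|}; with rows on the simplex it is the continuous mixture relaxation discussed next.

```python
import numpy as np

def edge_ops(x):
    """Toy operation set O for one edge (stand-ins for conv / pooling / identity / zero)."""
    return np.stack([x, np.tanh(x), np.maximum(x, 0.0), np.zeros_like(x)])

def cell_forward(x0, x1, arch, n_nodes=4):
    """Node j aggregates sum_{i<j} sum_o arch[(i,j), o] * o(x_i); `arch` has one row
    per edge, either one-hot (discrete architecture) or on the simplex (relaxation)."""
    states = [x0, x1]
    edge = 0
    for _ in range(n_nodes):
        acc = np.zeros_like(x0)
        for i in range(len(states)):
            ops_out = edge_ops(states[i])                    # (|O|, ...) candidate outputs
            acc = acc + np.tensordot(arch[edge], ops_out, axes=1)
            edge += 1
        states.append(acc)
    return np.concatenate(states[2:], axis=0)                # concatenate the new nodes

n_edges = sum(2 + j for j in range(4))                       # 2 inputs, then 3, 4, 5 predecessors
arch = np.full((n_edges, 4), 0.25)                           # uniform mixture over 4 operations
print(cell_forward(np.ones(8), np.zeros(8), arch).shape)
```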
Some methods like DARTS and its variants (; ; ; ; relax the search space by considering a mixture of operations per edge and then discretize to a valid architecture in the search space. With the mixture relaxation, we replace all c ∈ {0, 1} |E||O| in the above expressions by continuous counterparts θ ∈ |E||O|, with the constraint that o∈O θ (i,j) o = 1, i.e., the architecture weights for operations on each edge sum to 1. Other methods like SNAS, ASNG-NAS , and ProxylessNAS assume a parameterized distribution p θ from which architectures are sampled. By substituting continuous parameters θ ∈ Θ in for discrete parameters c ∈ C, we are able to use gradient-based methods to optimize. We address the question of how to effectively use gradient optimization for weight-sharing in the next section. While continuous relaxation enables state-of-the-art , architecture search remains expensive and noisy, with state-of-the-art mixture methods requiring second-order computations and probabilistic methods suffering from high variance policy gradients. Moreover, while the use of SGD to optimize network weights is a well-tested approach, architecture weights typically lie in constrained, non-Euclidean geometries in which other algorithms may be more appropriate. Recognizing this, several efforts have attempted to derive a better optimization schemes; however, the associated guarantees for most of them hold for auxiliary objectives, such as regret of local configuration decisions, that are not connected to the optimization objective and ignore the two-block nature of the problem ). do consider an alternating descent method for the training objective, their are asymptotic and certainly do not indicate any finite-time convergence. In this section we address these difficulties by showing that the mirror descent (MD) framework is the right tool for designing algorithms in the block optimization problems that occur in architecture search. We describe how such geometry-aware gradient algorithms lead to faster stationary-point convergence; as we will show in Section 5, this yields simple, principled, and effective algorithms for large-scale multi-geometry problems such as NAS. Algorithm 1: Two geometry-aware optimization algorithms for multi-block optimization for a β-strongly-smooth function over if SBMD Problem Geometry: We relax optimization problems of the form to the problem of minimizing a function f: X → R over a convex product space X i consisting of blocks i, each with an associated norm · (i). For example, in typical NAS we can set X 1 to be the set-product of |E| simplices over the operations O and the associated norm to be · 1 while X 2 = W ⊂ R d is the space of network weights and the associated norm is · 2. To each block i we further associate a distance-generating function (DGF) ω i: X → R that is 1-strongly-convex w.r.t. . For example, in the Euclidean case using ω 2 (·) = yields the usual squared Euclidean distance; over the probability simplex we often use the entropy ω 1 (·) = ·, log(·), which is 1-strongly-convex w.r.t. · 1 and for which D ω1 is the KL-divergence. Given an unbiased gradient estimate g(x t, ζ) = E ζ ∇f (x t), the (single-block) stochastic MD step is for some learning rate η > 0. In the Euclidean setting this reduces to SGD, while with the entropic regularizer the iteration becomes equivalent to exponentiated gradient. 
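The single-block mirror descent step can be written out explicitly for the two geometries used here. The sketch below is our own illustration: with the Euclidean distance-generating function the step reduces to plain SGD, while with the entropic DGF over the simplex it becomes the multiplicative exponentiated-gradient update followed by renormalization.

```python
import numpy as np

def md_step(x, g, eta, geometry="euclidean"):
    """One stochastic mirror descent step: x_{t+1} = argmin_u eta*<g, u> + D_omega(u || x_t)."""
    if geometry == "euclidean":          # omega = 0.5 ||.||_2^2  ->  plain SGD
        return x - eta * g
    if geometry == "simplex":            # omega = entropy, domain = simplex -> exponentiated gradient
        x_new = x * np.exp(-eta * g)
        return x_new / x_new.sum()
    raise ValueError(geometry)

theta = np.full(8, 1 / 8)                # architecture weights on one edge (simplex block)
w = np.zeros(5)                          # ordinary model weights (Euclidean block)
rng = np.random.default_rng(0)
for _ in range(100):
    theta = md_step(theta, rng.normal(size=8), eta=0.1, geometry="simplex")
    w = md_step(w, rng.normal(size=5), eta=0.01, geometry="euclidean")
print(theta.sum(), theta.min() > 0)      # stays on the simplex with strictly positive entries
```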
Key to the guarantees for MD is the fact that the dual norm of the problem geometry is used to measure the second moment of the gradient; if every coordinate of g is bounded a.s. by σ, then under Euclidean geometry the dependence is on E g(x) 2 2, * = E g(x) 2 2 ≤ σ 2 d while using entropic regularization it is on. Thus in such constrained 1 geometries mirror descent can yield dimension-free convergence guarantees. While in this paper we focus on the benefit in this simple case, the MD meta-algorithm can be used for many other geometries of interest in architecture search, such as for optimization over positive-definite matrices . Algorithms and Guarantees: We propose two methods for the above multi-block optimization problems: stochastic block mirror descent (SBMD) and alternating successive convex approximation (ASCA). At each step, both schemes pick a random coordinate i to update; SBMD then performs a mirror descent update similar to but with a batched gradient, while ASCA optimizes a strongly-convex surrogate function using a user-specified solver. Note that both methods require that f is β-strongly-smooth w.r.t. each block's norm · (i) to achieve convergence guarantees, a standard assumption in stochastic non-convex optimization . This condition holds for the architecture search under certain limitations, such as a restriction to smooth activations. In the supplement we also show that in the single-block case ASCA converges under the more general relative-weak-convexity criterion . We first discuss SBMD, for which non-convex convergence guarantees were shown by; this algorithm is the one we implement in Section 5 for NAS. A first issue is how to measure stationarity in constrained, non-Euclidean geometries. In the single-block setting we can set a smoothness-dependent constant λ > 0 and measure how far the proximal gradient operator prox∇ λ (x) = arg min u∈X λ ∇f (x), u + D ω (u||x) is from a fixed point, which yields the projected gradient measure. Notably, in the unconstrained Euclidean case this yields the standard stationarity measure ∇f (x) 2 2. For the multi-block case we replace an ε-stationary point of f w.r.t. the projected gradient if G λ (x) ≤ ε. In the non-Euclidean case the norm of the projected gradient measures how far we are from satisfying a first-order optimality condition, namely how far the negative gradient is from being in the normal cone of f to the set X (, Proposition 4.1). For this condition, Algorithm 1 has the following guarantee: Theorem 3.1 . If f is β-strongly-smooth, F = f (x ) − min u∈X f (u), and oracle calls to reach an ε-stationary-point x ∈ X as measured by G 1 We next provide a guarantee for ASCA in the form of a reduction to strongly-convex optimization algorithms. This is accomplished by the construction of the solving of a surrogate function at each iteration, which in the Euclidean case is effectively adding weight-decay. ASCA is useful in the case when efficient strongly-convex solvers are available for some or all of the blocks; for example, this is frequently the case for feature map selection problems such as our kernel approximation examples, which employ 2 -regularized ERM in the inner optimization. Taking inspiration from , for ASCA we analyze a stronger notion of stationarity that upper-bounds G λ (x) 2, which we term the projected stationarity measure:. Note that this is not the same measure used by , although in the appendix we show that in the single-block case our holds for their notion also. Theorem 3.2. 
If f is β-strongly-smooth and where ε > 0 is the solver tolerance and the expectation is over the randomness of the algorithm and associated oracles. Proof Summary. The proof generalizes a of and is in Appendix A. Thus if we have solvers that return approximate optima of strongly-convex functions on each block then we can converge to a stationary point of the original function. The convergence rate will depend on the solver used; for concreteness we give a specification for stochastic mirror descent. Corollary 3.1. Under the conditions of Theorem 3.1, if on i ASCA uses the Epoch-GD method of ωi oracle calls suffice to reach an ε-stationary-point x ∈ X as measured by E ∆ 1 This oracle complexity matches that of Theorem 3.1 apart from the extra β 2 L 2 ωi term due to the surrogate function, which we show below is in practice a minor term that does not obviate the benefit of geometry-awareness. On the other hand, ASCA is much more general, allowing for many different algorithms on individual blocks; for example, many popular neural optimizers such as Adam have variants with strongly-convex guarantees . The Benefit of Geometry-Aware Optimization: We conclude this section with a formalization of how convergence of gradient-based architecture-search algorithms can benefit from this optimization strategy. Recalling from our example, we have the architecture parameter space X 1 consisting of |E| simplices over |O| variables equipped with the 1-norm and the shared weight space X 2 = W equipped with the Euclidean norm. We suppose that the stochastic gradients along each block i have coordinates bounded a.s. by σ i > 0. Then if we run SBMD using SGD to optimize the shared weights and exponentiated gradient to update the architecture parameters, Theorem 3.1 implies that we reach an ε-stationary point in O stochastic gradient computations. The main benefit here is that the first term in the numerator is not σ 2 1 |E||O|, which would be the case if we used SGD; this improvement is critical as the noise σ 2 1 of the architecture gradient can be very high, especially if a policy gradient is used to estimate probabilistic derivatives. In the case of ASCA, we can get similar guarantees assuming the probabilities for each operation are lower-bounded by some small δ > 0 and that the space of shared weights is bounded by B; then the guarantee will be as above except with an additional O(log 2) term (independent of σ 1, σ 2). While for both SBMD and ASCA the σ 2 2 d term from training the architecture remains, this will be incurred even in single-architecture training using SGD. Furthermore, in the case of ASCA it may be improved using adaptive algorithms . In Section 2, we described the weight-sharing hypothesis class H(C, W) as a set of functions nondisjointly partitioned by a set of configurations C sharing weights in W and posed the ERM problem associated with selecting a hypothesis from H(C, W). However, as mentioned in Section 1, the objective solved in practice is a bilevel problem where a separate validation set is used for architecture parameter updates. Formally, the bilevel optimization problem considered is min w∈W,c∈C where T, V ⊂ Z is a pair of training/validation sets sampled i.i.d. from D, the upper objective w (x), y) is the empirical risk over V, and L T (w, c) is some objective induced by T. We intentionally differentiate the two losses since training is often regularized. This setup is closely related to the well-studied problems of model selection and cross-validation. 
However, a key difference is that the choice of configuration c ∈ C does not necessarily provide any control over the complexity of the hypothesis space; for example, in NAS as it is often unclear how the hypothesis space changes due to the change in one decision. By contrast, the theory of model selection is often directly concerned with control of model complexity. Indeed, in possibly the most common setting the hypothesis classes are nested according to some total order of increasing complexity, forming a structure . This is for example the case in most norm-based regularization schemes. Even in the non-nested case, there is often an explicit tradeoff between parsimony and accuracy . With the configuration parameters in architecture search behaving more like regular model parameters rather than as controls on the model complexity, it becomes reasonable to wonder why most NAS practitioners have used the bilevel formulation. Does the training-validation split exploit the partitioning of the hypothesis space H(C, W) induced by the configurations C? To see when this might be true, we first note that a key aspect of the optima of the bilevel weight-sharing problem is the restriction on the model weights -that they must be in the set arg min w∈W L T (h (c) w ) of the inner objective L T. As we will see, under certain assumptions this can reduce the complexity of the hypothesis space without harming performance. First, for any sample w )} be the version space (, Equation 6) induced by some configuration c and the objective function. Second, let N (F, ε) be the L ∞ -covering-number of a set of functions F at scale ε > 0, i.e. the number of L ∞ balls required to construct an ε-cover of F (, Equation 3 .60). These two quantities let us define a complexity measure over the shared weight hypothesis space: The version entropy is a data-dependent quantification of how much the hypothesis class is restricted by the inner optimization. For finite C, a naive bound shows that Λ(H, ε, T) is bounded by log |C| + max c∈C log N (H c (T), ε), so that the second term measures the worst-case complexity of the global minimizers of L T. In the feature selection problem, L T is usually a strongly-convex loss due to regularization and so all version spaces are singleton sets, making the version entropy log |C|. In the other extreme case of nested model selection the version entropy reduces to the complexity of the version space of the largest model and so may not be informative. However, in practical problems such as NAS an inductive bias is often imposed via constraints on the number of input edges. To bound the excess risk in terms of the version entropy, we first discuss an important assumption that describes cases when we expect the shared weights approach to perform well: Assumption 4.1. There exists a good c * ∈ C, i.e. one satisfying (w *, c w) for some w * ∈ W, such that w.h.p. over the drawing of training set T ∼ D m T at least one of the minima of the optimization induced by c * and T has low excess risk, i.e. w.p. 1 − δ there exists This assumption requires that w.h.p. the inner optimization objective does not exclude all low-risk classifiers for the optimal configuration. Note that it asks nothing of either the other configurations in C, which may be arbitrarily bad, nor of the hypotheses found by the procedure. It does however prevent the case where one knows the optimal configuration but minimizing the provided objective L T does not provide a set of good weights. 
Note that if the inner optimization is simply ERM over the training set T, i.e. L T = T, then standard learning-theoretic guarantees will give ε * exc (m T, δ) decreasing in m T and increasing at most poly-logarithmically in 1 δ. With this assumption, we can show the following guarantee on solutions to the bilevel optimization. Theorem 4.1. Letĥ be a hypothesis corresponding to the solution of the bilevel optimization. Then under Assumption 4.1 if is B-bounded we have w.p. 1 − 3δ that The first difference is bounded by the version entropy usng the constraint onĥ ∈ H c, the second by optimality ofĥ on V, the third by Hoeffding's inequality, and the last by Assumption 4.1. As shown in the applications below, the significance of this theorem is that a bound on the version entropy guarantees excess risk almost as good as that of the (unknown) optimal configuration without assuming anything about the complexity or behavior of sub-optimal configurations. Feature Map and Kernel Selection: In the feature map selection problem introduced in Section 2, } is a set of feature maps and the inner problem L T is 2 -regularized ERM for linear classification over the ing feature vectors. The bilevel problem is then Due to strong-convexity of L T, each map φ i induces a unique minimizing weight w ∈ W and thus a singleton version space, therefore upper bounding the version entropy by log |C| = log N. Furthermore, for Lipschitz losses and appropriate choice of regularization coefficient, standard for 2 -regularized ERM for linear classification (e.g. In the special case of kernel selection using random Fourier approximation, we can apply associated generalization guarantees (, Theorem 1) to show that we can compete with the optimal RKHS from among those associated with one of the configurations: Corollary 4.2. In feature map selection suppose each map φ ∈ C is associated with a random Fourier feature approximation of a continuous shift-invariant kernel that approximates an RKHS H φ and is the square loss. If the number of features In both cases we are able to get risk bounds almost identical to the excess risk achievable if we knew the optimal configuration beforehand, up to an additional capacity term depending weakly on the number of configurations. This would not be possible with solving the regular ERM objective instead of the bilevel optimization as we would then have to contend with the possibly high complexity of the hypothesis space induced by the worst configuration. Neural Architecture Search: In the case of NAS we do not have a bound on the version entropy, which now depends on all of C. Whether the version space, and thus the complexity, of deep networks is small compared to the number of samples is unclear, although we gather some evidence. number of critical points is exponential only in the number of layers, which would yield a small version entropy. It is conceivable that the quantity may be further bounded by the complexity of solutions explored by the algorithm when optimizing L T ; indeed, we find that shared-weight optimization leads to models with smaller 2 -norm and distance from initialization than from-scratch SGD on a single network (see Appendix D.4). On the other hand, argue, with evidence in restricted settings, that even the most stringent implicit regularization cannot lead to a non-vacuous uniform convergence bound; if true more generally this would imply that the NAS version entropy is quite large. 
Here we demonstrate how weight-sharing can be used as a tool to speed up general architecture search problems by applying it to two feature map selection problems. We then validate our optimization analysis with a geometry-aware weight-sharing method to design CNN cells for CIFAR-10. Feature Map Selection: Recall that here our configuration space has k feature maps φ i: X → R n with outputs passed to a linear classifier w ∈ R n, which will be the shared weights. We will approximate the bilevel optimization with the inner minimization over 2 -regularized ERM λ w 2 2 + (x,y)∈T (w, φ i (x), y). Our weight-sharing procedure starts with a vector θ ∈ ∆ N encoding a probability distribution p θ over [N] and proceeds as follows: according to (an estimate of) its validation loss (x,y)∈V (w (t), φ i (x), y). Observe the equivalence to probabilistic NAS: at each step the classifier (shared parameter) is updated using random feature maps (architectures) on the training samples. The distribution over them is then updated using estimated validation performance. We consider two schemes for this update of θ (t): exponentiated gradient using the score-function estimate and successive elimination, where we remove a fraction of the feature maps that perform poorly on validation and reassign their probability among the remainder. may be viewed as a softer version of, with halving also having only one hyperparameter (elimination rate) and not two (learning rate, stopping criterion). The first problem we consider is kernel ridge regression over random Fourier features on CIFAR-10. We consider three configuration decisions: data preprocessing, choice of kernel, and bandwidth parameter. This problem was considered by , except they fixed the Gaussian kernel whereas we also consider Laplacian; however, they also select the regularization parameter λ, which weight-sharing does not handle. We also study logistic regression for IMDB sentiment analysis of Bag-of-n-Gram (BonG) featurizations, a standard NLP baseline . Here there are eight configuration decisions: tokenization method, whether to remove stopwords, whether to lowercase, choice of n, whether to binarize features, type of feature weighting, smoothing parameter, and post-processing. As some choices affect the feature dimension we hash the BonGs into a fixed number of bins . To test the performance of weight-sharing for feature map selection, we randomly sample 64 configurations each for CIFAR-10 and IMDB and examine whether the above schemes converge to the optimal choice. The main comparison method here is thus random search, which runs a full sweep over these samples; by contrast successive halving will need to solve 6 = log 2 64 regression problems, while for exponentiated gradient we perform early stopping after five iterations. Note that weight-sharing can do no better than random search in terms of accuracy because they are picking a configuration from a space that random search sweeps over. The goal is to see if it consistently returns a good configuration much faster. As our in Figures 1 and 2 show, successive halving indeed does almost as well as random search in much less time. While exponentiated gradient usually does not recover a near-optimal solution, it does on average return a configuration in the top 10%. We also note the strong benefit of over-parameterization for IMDB -the n-gram vocabulary has size 4 million so the number of bins on the right is much larger than needed to learn in a singleconfiguration setting. 
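The following toy sketch (our own schematic, not the experimental code; names and the stand-in feature maps are ours) shows the shape of the weight-sharing loop for feature map selection: a single shared linear classifier is updated through randomly sampled feature maps, and the distribution over maps is updated by exponentiated gradient with a score-function estimate of the validation loss. The successive-elimination variant would instead drop a fraction of the maps at each round and renormalize.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

def weight_share_select(feature_maps, X_tr, y_tr, X_val, y_val, steps=50, eta=0.1, seed=0):
    """Shared classifier over k feature maps; theta is the distribution over maps."""
    rng = np.random.default_rng(seed)
    k = len(feature_maps)
    theta = np.full(k, 1.0 / k)
    clf = SGDClassifier(learning_rate="constant", eta0=0.01, random_state=seed)
    classes = np.unique(y_tr)
    for _ in range(steps):
        i = rng.choice(k, p=theta)                             # sample a configuration
        batch = rng.choice(len(X_tr), size=64, replace=False)
        clf.partial_fit(feature_maps[i](X_tr[batch]), y_tr[batch], classes=classes)
        val_loss = 1.0 - clf.score(feature_maps[i](X_val), y_val)
        grad = np.zeros(k)
        grad[i] = val_loss / theta[i]                          # score-function gradient estimate
        theta = theta * np.exp(-eta * grad)                    # exponentiated-gradient update
        theta /= theta.sum()
    return theta, clf

# Toy usage: two feature maps with the same output dimension share one classifier.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 20)); y = (X[:, 0] > 0).astype(int)
maps = [lambda Z: Z, lambda Z: np.sign(Z)]
theta, _ = weight_share_select(maps, X[:300], y[:300], X[300:], y[300:])
print(theta)
```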
Overall, these experiments show that weight-sharing can also be used as a fast way to obtain signal in regular learning algorithm configuration and not just NAS. NAS on CIFAR-10: Recall from Section 3 that when the architecture space consists of |E| simplices of dimension |O|, the convergence rate of exponentiated gradient descent to a stationary point of the objective function is independent of the dimension of the space, while SGD has linear dependence. This motivates our geometry-aware method called Exponentiated-DARTS (EDARTS). EDARTS modifies first-order DARTS in two ways. First, in lieu of the softmax operation used by DARTS on the architecture weights, we use standard normalization so that the weight of operation o on edge (i, j) is u. Second, in lieu of Adam, we use exponentiated gradient to update the architecture weights: c t = c t−1 exp(−η∇ c V (h wt−1 (c t−1)). While EDARTS resembles XNAS, our justification for using exponentiated gradient comes directly from aligning with the optimization geometry of ERM. Additionally, EDARTS only requires two straightforward modifications of first-order DARTS, while XNAS relies on a wipeout subroutine and granular gradient-clipping for each edge operation on the cell and data instance level. 1 1 Our own XNAS implementation informed by correspondence with the authors did not produce competitive . We still compare to the architecture XNAS reported evaluated by the DARTS training routine in Table 1. We evaluate EDARTS on the task of designing a CNN cell for CIFAR-10. We use the standard search space as introduced in DARTS ) for evaluation we use the same three stage process used by DARTS and random search with weight-sharing , with stage 3 considered the'final' . We provide additional experimental details in Appendix D. Table 1 shows the performance of EDARTS relative to both manually designed and NAS-discovered architectures. EDARTS finds an architecture that achieves competitive performance with manually designed architectures which have nearly an order-of-magnitude more parameters. Additionally, not only does EDARTS achieve significantly lower test error than first-order DARTS, it also outperforms second order DARTS while requiring less compute time, showcasing the benefit of geometry-aware optimization. Finally, EDARTS achieve comparable performance to the reported architecture for state-of-the-art method XNAS when evaluated using the stage 3 training routine of DARTS. Following XNAS, we also perform an extended evaluation of the best architecture found by EDARTS with AutoAugment, cosine power annealing , cross-entropy with label smoothing, and trains for 1500 epochs. We evaluated the XNAS architecture using our implementation for a direct comparison and also to serve as a reproducibility check. EDARTS achieved a test error of 2.18% in the extended evaluation compared to 2.15% for XNAS in our reproduced evaluation; note the published test error for XNAS is 1.81%. To meet a higher bar for reproducibility we report'broad reproducibility' by repeating the entire pipeline from stage 1 to stage 3 for two additional sets of seeds. Our in Table 2 (see Appendix) show that EDARTS has lower variance across experiments than random search with weight sharing . However, we do observe non-negligible variance in the performance of the architecture found by different random seed initializations of the shared-weights network, necessitating running multiple searches before selecting an architecture. 
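For concreteness, the two EDARTS modifications described above — plain per-edge normalization of nonnegative architecture weights in place of the softmax, and an exponentiated-gradient update in place of Adam — can be sketched as follows. This is our own illustration: the gradient is a placeholder for the validation-loss gradient with respect to the architecture weights, and the loop only indicates where the SGD step on the shared weights would alternate with the architecture step.

```python
import numpy as np

def mixing_weights(c):
    """Per-edge operation weights theta_o = c_o / sum_o' c_o' (normalization, not softmax)."""
    return c / c.sum(axis=1, keepdims=True)

def edarts_arch_update(c, grad_val, eta=0.05):
    """Exponentiated-gradient step on the architecture weights, then per-edge
    normalization so each edge's operation weights stay on the simplex."""
    c = c * np.exp(-eta * grad_val)
    return c / c.sum(axis=1, keepdims=True)

E, O = 14, 8                                   # edges and operations in the cell
c = np.full((E, O), 1.0 / O)
for step in range(3):
    # w <- w - lr * grad_train(w, mixing_weights(c))   # shared-weights SGD step (omitted)
    fake_grad = np.random.default_rng(step).normal(size=(E, O))   # placeholder gradient
    c = edarts_arch_update(c, fake_grad)
print(c.sum(axis=1))                           # each edge remains a probability vector
```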
A OPTIMIZATION This section contains proofs and generalizations of the non-convex optimization in Section 3. We first gather some necessary definitions and from convex analysis. Definition A.1. Let X be a convex subset of a finite-dimensional real vector space and f: X → R be everywhere sub-differentiable. 1. For α > 0, f is α-strongly-convex w.r.t. norm · if ∀ x, y ∈ X we have Definition A.2. Let X be a convex subset of a finite-dimensional real vector space. The Bregman divergence induced by a strictly convex distance-generating function (DGF) ω: X → R is D ω (x||y) = ω(x) − ω(y) − ∇ω(y), x − y ∀ x, y ∈ X By definition, the Bregman divergence satisfies the following properties: 3. If ω is β-strongly-smooth w.r.t. norm · then so is D ω (·||y) ∀ y ∈ X. Furthermore, D ω (x||y) ≤ β 2 x − y 2 ∀ x, y ∈ X. Lemma A.1 (Three-Points Lemma). (, Lemma 9.11) For any DGF ω: X → R and all x, y, z ∈ X we have ∇ω(y) − ∇ω(x), z − x = D ω (z||x) + D ω (x||y) − D ω (z||y) Definition A.3. Let ω: X → R be a 1-strongly-convex DGF. Then for constant λ > 0 and an everywhere sub-differentiable function f: X → R the proximal operator is defined over x ∈ X as prox λ (x) = arg min Note that the prox operator is well-defined whenever f is β-strongly-smooth for some β < λ. We will also use the following notation for the proximal gradient operator: Note that the prox grad operator is always well-defined. Theorem A.1. (, Theorem 9.12) For any λ > 0, 1-strongly-convex DGF ω: X → R, and x ∈ X let f: X → R be an everywhere sub-differentiable function s.t. λf (·) + D ω (·||x) is convex over X. Then for x + = prox λ (x) and all u ∈ X we have Lemma A.2. For any λ > 0, 1-strongly-convex DGF ω: X → R, and x ∈ X let f: X → R be an everywhere sub-differentiable function s.t. Proof. Applying Theorem A.1 followed by Lemma A.1 yields Corollary A.1. For any λ > 0, 1-strongly-convex DGF ω: X → R, x ∈ X, and everywhere sub-differentiable function f: X → R we have for Because we consider constrained non-convex optimization, we cannot measure convergence to a stationary point by the norm of the gradient. Instead, we analyze the convergence proximal-mappingbased stationarity measures. The most well-known measure is the norm of the projected gradient , which in the unconstrained Euclidean case reduces to the norm of the gradient and in the general case measure the distance between the current point and a cone satisfying first-order optimality conditions (, Proposition 4.1). Our convergence hold for a stronger measure that we call the projected stationarity and which is inspired by the Bregman stationarity measure of but using the prox grad operator instead of the prox operator. Definition A.4. Let ω: X → R be a 1-strongly-convex DGF and f: X → R be an everywhere sub-differentiable function. Then for any λ > 0 we define the following two quantities: The following properties follow: We can also consider the Bregman stationarity measure of directly. As this measure depends on the prox operator, which is not always defined, we first state the notion of non-convexity that consider. Definition A.5. An everywhere sub-differentiable function f: X → R is (γ, ω)-relatively-weaklyconvex ((γ, ω)-RWC) for γ > 0 and ω: X → R a 1-strongly-convex DGF if f (·) + γω(·) is convex over X. Note that all γ-strongly-smooth functions are (γ, ω)-RWC. Note that (γ, ω)-RWC is a generalization of γ-strong-smoothness w.r.t. the norm w.r.t. which ω is strongly-convex. 
Furthermore, for such functions we can always define the prox operator for λ > γ, allowing us to also define the Bregman gradient below. Similarly to before, bounding the Bregman stationarity measure yields a stronger notion of convergence than the squared norm of the Bregman gradient. For the relationship between the Bregman stationarity measure and first-order optimality conditions see Zhang and He (2018, Equation 2.11). Definition A.6. Let ω: X → R be a 1-strongly-convex DGF and f: X → R be a (γ, ω)-RWC everywhere sub-differentiable function for some γ > 0. Then for any λ > γ we define the following two quantities: Here we prove our main optimization . We begin with a descent lemma guaranteeing improvement of a non-convex function due to approximately optimizing a strongly convex surrogate. Lemma A.3. Let ω: X → R be a 1-strongly-convex DGF and f: X → R be everywhere subdifferentiable. For some x ∈ X and ρ > 0 definef x (·) = f (·) + ρD ω (·||x) and letx ∈ X be a point s.t. Ef x (x) − min u∈Xfx (u) ≤ ε. Then 1. If f is β-strongly-smooth, ρ > β, and λ = Proof. Generalizing an argument in Agarwal et al. (2019, Theorem A.2), for x + ∈ X we have by strong-convexity off x that If f is β-strongly-smooth set x + = prox∇ λ (x), so that by Corollary A.1 we have In the other case of f being (γ, ω)-RWC set x + = prox λ (x), so that by Lemma A.2 we have We now turn to formalizing our multi-block setting and assumptions. Setting A.1. For i = 1,..., b let X i be a convex subset of a real vector space with an associated DGF ω i: X i → R that is 1-strongly-convex w.r.t. some norm · (i) over X i. We have an everywhere sub-differentiable function f: X → R over the product space Our main will hold for the case when the following general assumption is satisfied. We will later show how this assumption can follow from strong smoothness or relative weak convexity and existing algorithmic . Assumption A.1. In Setting A.1, for any given ε > 0 and each i ∈ [b] there exists a constant ρ i > 0 and an algorithm A i: X → X that takes a point x ∈ X and returns a pointx ∈ X satisfyinĝ x −i = x −i and where the subscript i selects block i, the subscript −i selects all blocks other than block i, and E Ai denotes expectation w.r.t. the randomness of algorithm A i and any associated stochastic oracles. Algorithm 2: Generic successive convex approximation algorithm for reaching a stationary point of the non-convex function in Setting A.1. Input: Point x ∈ X in the product space of Setting A.1. Algorithms A 1,..., Our main relies on the following simple lemma guaranteeing non-convex convergence for a generic measure satisfying guaranteed expected descent: Lemma A.4. In Setting A.1, for some ε > 0 and each λi (x) be any measure s.t. for some λ i and some algorithm A i: X → X we have Then the output x of Algorithm 2 satisfies where F = f (x ) − arg min u∈X f (x) and the expectation is taken over the sampling at each iteration, the sampling of the output, and the randomness of the algorithms and any associated stochastic oracles. Proof. Define Ξ t = {(ξ s, A ξs)} t s=1 and note that x (t+1) = A ξt (x (t) ). We then have In the single-block setting, Lemmas A.3 and A.4 directly imply the following guarantee: Theorem A.2. In Setting A.1 and under Assumption A.1, let b = 1 and ρ satisfy one of the following: 1. f: X → R is β-strongly-smooth and ρ = 2β. 2. f: X → R is (γ, ω)-RWC for some DGF ω: X → R and ρ = 2γ. Then Algorithm 2 returns a point x ∈ X satisfying one of the following (respectively, w.r.t. 
the above settings) for Here the expectation is taken over the randomness of the algorithm and oracle. We can apply a known for the strongly convex case to recover the rate of for non-convex stochastic mirror descent, up to an additional depending on ω: Corollary A.2. In Setting A.1 for b = 1 and (γ, ω)-RWC f, suppose we have access to f through a stochastic gradient oracle g(x) = E∇f (x) such that E g 2 * ≤ G 2. Let A: X → X be an algorithm that for any x ∈ X runs the Epoch-GD method of with total number of steps N, initial epoch length T 1 = 4 and initial learning rate η 1 = 1 γ onf x (·) = f (·) + 2γD ω (·||x). Then with N T calls to the stochastic gradient oracle Algorithm 2 returns a point x ∈ X satisfying and L ω the Lipschitz constant of ω w.r.t. · over X. So an expected ε-stationary-point, as measured by Proof. Apply Theorem 5 of together with the fact thatf x is γ-stronglyconvex w.r.t. · and its stochastic gradient is bounded by For the multi-block case our hold only the projected stationarity measure: Theorem A.3. In Setting A.1 and under Assumption A.1 assume f (·, x −i) is β-strongly-smooth w.r.t. Here the expectation is taken over the randomness of the algorithm and oracle and the projected stationarity measure ∆ λ is defined w.r.t. the Bregman divergence of the DGF ω(where prox∇ To apply Lemma A.4 in the multiblock setting it suffices to show that the sum of the projected stationarity measures on each block is equal to the projected stationarity measure induced by the sum of the DGFs. For some λ > 0 and any i ∈ [b] we have that and so Thus applying Lemma A.2 with λ = 1 4ρ yields the . In the following corollary we recover the rate of for non-convex blockstochastic mirror descent, up to an additional term depending on ω i: Corollary A.3. In Setting A.1 for β-strongly-smooth f, suppose we have access to f through a stochastic gradient oracle g(x) = E∇f (x) such that E g i i. For i ∈ [b] let A i: X → X be an algorithm that for any x ∈ X runs the Epoch-GD method of with total number of steps N, initial epoch length T 1 = 4 and initial learning rate η 1 = 1 γ on surrogate functionf x (·) = f (·, x −i) + 2βD ωi (·||x i). Then with N T calls to the stochastic gradient oracle Algorithm 2 returns a point x ∈ X satisfying We can specialize this to the architecture search setting where we have a configuration search space contained in the product of simplices induced by having n decisions with c choices each together with a parameter space bounded in Euclidean norm. Corollary A.4. Under the assumptions of Corollary A.3, suppose b = 2 and we have the following two geometries: Suppose the stochastic gradient oracle of f has bounded ∞ -norm σ 1 over X 1 and σ 2 over X 2. Then Algorithm 2 will return an expected ε-stationary point of f under the projected stationarity measure in a number of stochastic oracle calls bounded by This section contains proofs of the generalization in Section 4. We first describe the setting for which we prove our general . Setting B.1. Let C be a set of possible architecture/configurations of finite size such that each c ∈ C is associated with a parameterized hypothesis class H c ={h (c) w: X → Y: w ∈ W} for input space Z = X × Y and fixed set of possible weights W. We will measure the performance of a hypothesis h (c) w on an input z = (x, y) ∈ Z using z (w, c) = (h Finally, we will consider solutions of optimization problems that depend on the training data and architecture. 
Specifically, for any configuration c ∈ C and finite subset S ⊂ Z let W c (S) ⊂ W be the set of global minima of some optimization problem induced by S and c and let the associated version space be H c (S) = {h w : X → Y is determined by a choice of architecture c ∈ C and a set of network weights w ∈ W and the loss : Y × Y → {0, 1} is the zero-one loss. In the simplest case W c (S) is the set of global minima of the ERM problem min We now state the main assumption we require. Assumption B.1. In Setting B.1 there exists a good architecture c * ∈ C, i.e. one satisfying (w *, c *) ∈ arg min W×C D (w, c) for some weights w * ∈ W, such that w.p. 1 − δ over the drawing of training set T ∼ D m T at least one of the minima of the optimization problem induced by c * and T has low excess risk, i.e. ∃ w ∈ W c * (T) s.t. for some error function ε c *. Clearly, we prefer error functions ε c * that are decreasing in the number of training samples m T and increasing at most poly-logarithmically in 1 δ. This assumption requires that if we knew the optimal configuration a priori, then the provided optimization problem will find a good set of weights for it. We will show how, under reasonable assumptions, Assumption B.1 can be formally shown to hold in Settings B.2 and B.3. Our general will be stated in terms of covering numbers of certain function classes. Definition B.1. Let H be a class of functions from X to Y. For any ε > 0 the associated L ∞ covering number N (H, ε) of H is the minimal positive integer k such that H can be covered by k balls of L ∞ -radius ε. The following is then a standard in statistical learning theory (see e.g. Lafferty et al. (2010, Theorem 7.82 where we use the loss notation from Setting B.1. Before stating our theorem, we define a final quantity, which measures the log covering number of the version spaces induced by the optimization procedure over a given training set. Definition B.2. In Setting B.1, for any sample S ⊂ X × Y define the version entropy to be Λ(H, ε, S) = log N c∈C H c (S), ε. Theorem B.2. In Setting B.1 let (ŵ,ĉ) ∈ W × C be obtained as a solution to the following optimization problem: arg min Then under Assumption B.1 we have w.p. 1 − 3δ that each term of which can be bounded as follows: 1. Sinceŵ ∈ Wĉ(T) for someĉ ∈ C the hypothesis space can be covered by the union of the coverings of H c (T) over c ∈ C, so by Theorem B.1 we have that w. 2. By optimality of the pair (ŵ,ĉ) and the fact that w ∈ W c * (T) we have 3. Hoeffding's inequality yields V (w, c We can then directly apply Theorem B.2 and the fact that the version entropy is bounded by log |C| because the minimizer over the training set is always unique to get the following: Corollary B.2. In Setting B.2 let (ŵ,ĉ) ∈ W × C be obtained as a solution to the following optimization problem: arg min In the special case of kernel selection we can apply generalization for learning with random features to show that we can compete with the optimal RKHS from among those associated with one of the configurations (, Theorem 1): Corollary B.3. In Setting B.2, suppose each configuration c ∈ C is associated with a random Fourier feature approximation of a continuous shift-invariant kernel that approximates an RKHS H c. Suppose is the squared loss so that (ŵ,ĉ) ∈ W × C is obtained as a solution to the following optimization problem: In the case of neural architecture search we are often solving (unregularized) ERM in the inner optimization problem. 
In this case we can make an assumption weaker than Assumption B.1, namely that the set of empirical risk minimizers contains a solution that, rather than having low excess risk, simply has low generalization error; then applying Hoeffding's inequality yields the following: Corollary B.4. In Setting B.1 let (ŵ,ĉ) ∈ W × C be obtained as a solution to the following optimization problem: arg min Suppose there exists c * ∈ C satisfying (w *, c *) ∈ arg min W×C D (w, c) for some weights w * ∈ W such that w.p. 1 − δ over the drawing of training set T ∼ D m T at least one of the minima of the optimization problem induced by c * and T has low generalization error, i.e. ∃ w ∈ arg min w ∈W T (w, c *) s.t. Solvers for Ridge regression and logistic regression were from scikit-learn . For CIFAR-10 we use the kernel configuration setting from but replacing the regularization parameter by the option to use the Laplace kernel instead of Gaussian. The regularization was fixed to λ = 1 2 The split is 40K/10K/10K. For IMDB we consider the following configuration choices: For stages 2 and 3, we train each architecture for 600 epochs with the same hyperparameter settings as DARTS. For completeness, we describe the convolutional neural network search space considered. The set of operations O considered at each node include: 3 × 3 separable convolution, 5 × 5 separable convolution, 3×3 dilated convolution, 5×5 dilated convolution, max pooling, average pooling, identity. We use the same search space to design a "normal" cell and a "reduction" cell; the normal cells have stride 1 operations that do not change the dimension of the input, while the reduction cells have stride 2 operations that half the length and width dimensions of the input. In the experiments, for both cell types, we set N = 6 with 2 input nodes and 4 intermediate nodes, after which the output of all intermediate nodes are concatenated to form the output of the cell. We use EDARTS to train a smaller shared-weights network in the search phase with 8 layers and 24 initial channels instead of the 16 used by DARTS. Additionally, to more closely mirror the architecture used for evaluation in stage 2 and 3, we use an auxiliary head with weight 0.4 and scheduled path dropout of 0.2. For the EDARTS architecture updates, we use a learning rate of 0.2 for the normal cell and 0.6 for the reduction cell. All other hyperparameters are the same as DARTS: 50 training epochs, batch size of 64, gradient clipping of 5 for network weights, SGD with momentum set to 0.9 and learning rate annealed from 0.025 to 0.001 with cosine annealing , and weight decay of 0.0003. We use the same evaluation scheme as DARTS when retraining architectures from scratch. The larger evaluation network has 20 layers and 36 initial channels and is trained for 600 epochs using SGD with momentum set to 0.9, a batch size of 96, and a learning rate of 0.025 annealed down to 0; the gradient clipping scheduled drop path rate and weight decay are identical to the search phase. We also use an auxiliary head with a weight of 0.4 and cutout . We investigate whether weight-sharing implicitly regularizes the hypothesis space by examining the 2 norms and distance from initialization of the shared-weights network relative to that observed when training the best EDARTS architecture from scratch. We use the same network depth and hyperparameters as those used for the shared-weights network to train the fixed architecture. 
Figure 4 shows the percent difference in the norms between the fixed architecture and the shared-weights network pruned to just the operations kept for the fixed architecture. From the chart, we can see that both the ℓ2 distance from initialization and the ℓ2 norm of the fixed network are higher than those of the shared-weights network by over 40%, suggesting that weight-sharing acts as a form of implicit regularization. The results in Table 2 show that EDARTS has lower variance across experiments than random search with weight sharing. However, we do observe non-negligible variance in the performance of the architectures found by different random seed initializations of the shared-weights network, necessitating running multiple searches before selecting an architecture.
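The comparison in Figure 4 can be reproduced with a few lines of framework code; the sketch below (PyTorch-style, with a stand-in module and illustrative names) computes the two quantities tracked there, the ℓ2 norm of the weights and the ℓ2 distance from initialization.

import copy
import torch
import torch.nn as nn

def l2_norm(params):
    # Euclidean norm over all parameter tensors, flattened together.
    return torch.sqrt(sum(p.detach().float().pow(2).sum() for p in params))

def l2_dist_from_init(params, init_params):
    return torch.sqrt(sum((p.detach() - q.detach()).float().pow(2).sum()
                          for p, q in zip(params, init_params)))

# Toy usage; in the actual comparison, `net` would be either the stand-alone
# architecture or the shared-weights network pruned to the kept operations.
net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
init_params = copy.deepcopy(list(net.parameters()))
# ... training happens here ...
print(l2_norm(net.parameters()), l2_dist_from_init(net.parameters(), init_params))
# Percent difference reported in Figure 4: 100 * (fixed - shared) / shared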
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HJgRCyHFDr
An analysis of the learning and optimization structures of architecture search in neural networks and beyond.
Deep latent variable models have seen recent success in many data domains. Lossless compression is an application of these models which, despite having the potential to be highly useful, has yet to be implemented in a practical manner. We present'`Bits Back with ANS' (BB-ANS), a scheme to perform lossless compression with latent variable models at a near optimal rate. We demonstrate this scheme by using it to compress the MNIST dataset with a variational auto-encoder model (VAE), achieving compression rates superior to standard methods with only a simple VAE. Given that the scheme is highly amenable to parallelization, we conclude that with a sufficiently high quality generative model this scheme could be used to achieve substantial improvements in compression rate with acceptable running time. We make our implementation available open source at https://github.com/bits-back/bits-back. The connections between information theory and machine learning have long been known to be deep, and indeed the two fields are so closely related that they have been described as'two sides of the same coin' BID18. One particularly elegant connection is the essential equivalence between probabilistic models of data and lossless compression methods. The source coding theorem BID22 can be thought of as the fundamental theorem describing this idea, and Huffman coding BID13, arithmetic coding BID28 and the more recently developed asymmetric numeral systems BID3 ) are actual algorithms for implementing lossless compression, given some kind of probabilistic model. The field of machine learning has experienced an explosion of activity in recent years, and we have seen a number of papers looking at applications of modern deep learning methods to lossy compression. BID7 discusses applications of a deep latent Gaussian model to compression, with an emphasis on lossy compression. BID0, BID23, BID1, and BID19 all implement lossy compression using (variational) auto-encoder style models, and BID24 train a model for lossy compression using a GAN-like objective. Applications to lossless compression have been less well covered in recent works. We seek to advance in this direction, and we focus on lossless compression using latent variable models. The lossless compression algorithms mentioned above do not naturally cater for latent variables. However there is a method, known as'bits back coding' BID27 BID10, first introduced as a thought experiment, but later implemented in BID5 and BID4, which can be used to extend those algorithms to cope with latent variables. Although bits back coding has been implemented in restricted cases by BID4, there is no known efficient implementation for modern neural net-based models or larger datasets. There is, in fact, a fundamental incompatibility between bits back and the arithmetic coding scheme with which it has previously been implemented. We resolve this issue, describing a scheme that instead implements bits back using asymmetric numeral systems. We term this new coding scheme'Bits Back with ANS' (BB-ANS). Our scheme improves on existing implementations of bits back coding in terms of compression rate and code complexity, allowing for efficient lossless compression of arbitrarily large datasets with deep latent variable models. We demonstrate the efficiency of BB-ANS by losslessly compressing the MNIST dataset with a variational auto-encoder (VAE), a deep latent variable model with continuous latent variables BID15 BID20. 
As far as we are aware, this is the first time bits back coding has been implemented with continuous latent variables. We find that BB-ANS with a VAE outperforms generic compression algorithms for both binarized and raw MNIST, even with a very simple model architecture. We extrapolate these to predict that the performance of BB-ANS with larger, state of the art models would be significantly better than generic compression algorithms. In this section we describe bits back coding, a method for lossless compression of data using a latent variable model. Before we describe bits back itself, we briefly discuss methods for encoding a stream of data given a fully observed model, a task sometimes referred to as'range coding' or'entropy coding'. We do not go into detail about the algorithms or their implementation, but describe the high level characteristics necessary for understanding bits back. For brevity, in the following sections we use simply log to refer to the base 2 logarithm, usually denoted log 2. Message lengths are measured in bits. Suppose that someone ('the sender') has a sequence of randomly distributed symbols, s = (s 1, ..., s N), with each s n drawn from a finite alphabet A n, which they would like to communicate to someone else ('the receiver') in as few bits as possible. Suppose that sender and receiver have access to a probabilistic model p for each symbol in the sequence given the previous, and can compute the mass p(s n = k | s 1, . . ., s n−1) for each k ∈ A n, n ∈ {1, . . ., N}.Arithmetic coding (AC) and asymmetric numeral systems (ANS) are algorithms which solve this problem, providing an encoding from the sequence s to a sequence of bits (referred to as the 'message'), and a decoding to recover the original data s. Both AC and ANS codes have message length equal to the'information content' h(s) − log p(s) of the sequence plus a small constant overhead of around 2 bits. By Shannon's Source Coding Theorem, the expected message length can be no shorter than the entropy of the sequence s, defined by H [s] E[h(s)], and thus AC and ANS are both close to optimal BID22 BID18. For long sequences the small constant overhead is amortized and has a negligible contribution to the compression rate. Critically for bits back coding, AC and ANS differ in the order in which messages are decoded. In AC the message is FIFO, or queue-like. That is, symbols are decoded in the same order to that in which they were encoded. ANS is LIFO, or stack-like. Symbols are decoded in the opposite order to that in which they were encoded. Note that the decoder in these algorithms can be thought of as a mapping from i.i.d. bits with p(b i = 0) = p(b i = 1) = 1 2 to a sample from the distribution p. Since we get to choose p, we can also think of ANS/AC as invertible samplers, mapping from random bits to samples via the decoder and back to the same random bits via the encoder. For a far more detailed introduction to arithmetic coding, see BID28, for asymmetric numeral systems, see BID3. We now give a short description of bits back coding, similar to those that have appeared in previous works. For a more involved derivation see Appendix A. We assume access to a coding scheme such as AC or ANS which can be used to encode and decode symbols according to any distribution. We will return to the question of which is the correct coding scheme to use in Section 2.4. 
Suppose now a sender wishes to communicate a symbol s 0 to a receiver, and that both sender and receiver have access to a generative model with a latent variable, y. For now we take y to be discrete, we address continuous latents in Section 2.5.1. Suppose both sender and receiver can compute the forward probabilities p(y) and p(s | y), and also have access to an approximate posterior q(y | s). Bits back coding allows the sender and receiver to efficiently encode and decode the symbol s 0.We must assume that, as well as the sample s 0, the sender has some extra bits to communicate. The sender can decode these extra bits to generate a sample y 0 ∼ q(y | s 0). Then they can encode the symbol s 0 according to p(s | y 0) and the latent sample according to p(y). The receiver then does the inverse to recover the latent sample and the symbol. The extra bits can also be recovered by the receiver by encoding the latent sample according to q(y | s 0). We can write down the expected increase in message length (over the extra bits): DISPLAYFORM0 This quantity is equal to the negative of the evidence lower bound (ELBO), sometimes referred to as the'free energy' of the model. A great deal of recent research has focused on inference and learning with approximate posteriors, using the ELBO as an objective function. Because of the above equivalence, methods which maximize the ELBO for a model are implicitly minimizing the message length achievable by bits back coding with that model. Thus we can draw on this plethora of existing methods when learning a model for use with bits back, safe in the knowledge that the objective function they are maximizing is the negative expected message length. If we wish to encode a sequence of data points, we can sample the extra bits for the first data point at random. Then we may use the encoded first data point as the extra information for the second data point, the encoded second data point as the extra information for the third, and so on. This daisychain-like scheme was first described by BID4, and was called'bits-back with feedback'. We refer to it simply as'chaining'.As BID4 notes, chaining cannot be implemented directly using AC, because of the order in which data must be decoded. Frey gets around this by implementing what amounts to a stack-like wrapper around AC, which incurs a cost both in code complexity and, importantly, in compression rate. The cost in compression rate is a of the fact that AC has to be'flushed' in between each iteration of bits back, and each flush incurs a cost which is implementation dependent but typically between 2 and 32 bits. The central insight of this work is to notice that the chaining described in the previous section can be implemented straightforwardly with ANS with zero compression rate overhead per iteration. This is because of the fact that ANS is stack-like by nature, which resolves the problems that occur if one tries to implement bits back chaining with AC, which is queue-like. We now describe this novel method, which we refer to as'Bits Back with ANS' (BB-ANS).We can visualize the stack-like state of an ANS coder as where the dashed line symbolizes the encoding/decoding end or'top' of the stack. When we encode a symbol s onto the stack we effectively add it to the end, ing in a'longer' state and when we decode (or equivalently, sample) a symbol t from the stack we remove it from the same end, ing in a'shorter' state, plus the symbol that we decoded. 
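Written out in the notation above, the expected increase in message length (over the extra bits) is

E_{q(y \mid s_0)}\left[ -\log p(s_0 \mid y) - \log p(y) + \log q(y \mid s_0) \right] \;=\; -\,E_{q(y \mid s_0)}\left[ \log p(s_0, y) - \log q(y \mid s_0) \right],

i.e. exactly the negative ELBO referred to above; the three terms are, respectively, the cost of encoding s_0 under the likelihood, the cost of encoding y under the prior, and the bits recovered when y is re-encoded under q.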
Table 1 shows the states of the sender as they encode a sample, using our bits back with ANS algorithm, starting with some'extra information' as well as the sample s 0 to be encoded. Table 1: Sender encodes a symbol s 0 using Bits Back with ANS. This process is clearly invertible, by reversing the order of operation and replacing encodes with decodes and sampling with encoding. Furthermore it can be repeated; the ANS stack at the end of encoding is still an ANS stack, and therefore can be readily used as the extra information for encoding the next symbol. The algorithm is compatible with any model whose prior, likelihood and (approximate) posterior can be encoded and decoded with ANS. A simple Python implementation of both the encoder and decoder of BB-ANS is given in Appendix C. A number of factors can affect the efficiency of compression with BB-ANS, and mean that in practice, the coding rate will never be exactly equal to the ELBO. For any algorithm based on AC/ANS, the fact that all probabilities have to be approximated at finite precision has some detrimental effect. When encoding a batch of only a small number of i.i.d. samples, with no'extra information' to communicate, the inefficiency of encoding the first datapoint may be significant. In the worst case, that of a batch with only one datapoint, the message length will be equal to the log joint, log p(s 0, y 0). Note that optimization of this is equivalent to maximum a posteriori (MAP) estimation. However, for a batch containing more than one image, this effect is amortized. FIG0 shows an example with 30 samples, where BB-ANS appears to perform well. Below we discuss two other issues which are specific to BB-ANS. We investigate the magnitude of these effects experimentally in Section 3.2. We find that when compressing the MNIST test set, they do not significantly affect the compression rate, which is typically close to the negative ELBO in our experiments. Bits back coding has previously been implemented only for models with discrete latent variables, in BID4. However, many successful latent variable models utilize continuous latents, including the VAE which we use in our experiments. We present here a derivation, based on BID18, of the surprising fact that continuous latents can be coded with bits back, up to arbitrary precision, without affecting the coding rate. We also briefly discuss our implementation, which as far as we are aware is the first implementation of bits back to support continuous latents. Further discussion can be found in Appendix B.We can crudely approximate a continuous probability distribution, with density function p, with a discrete distribution by partitioning the real line into'buckets' of equal width δy. Indexing the buckets with i ∈ I, we assign a probability mass to each bucket of P (i) ≈ p(y i)δy, where y i is some point in the i th bucket (say its centre).During bits back coding, we discretize both the prior and the approximate posterior using the same set of buckets. We use capital P and Q to denote discrete approximations. Sampling from the discrete approximation Q(i | s) uses approximately log(q(y i | s)δy) bits, and then encoding according to the discrete approximation to the prior P costs approximately log(p(y i)δy) bits. The expected message length for bits back with a discretized latent is therefore DISPLAYFORM0 The δy terms cancel, and thus the only cost to discretization from the discrepancy between our approximation and the true, continuous, distribution. 
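To make the stack picture concrete, the following toy (non-streaming) rANS coder over a fixed three-symbol distribution illustrates the LIFO behaviour; Python's arbitrary-precision integers stand in for the renormalized state of a practical coder, and the construction is a standard one rather than the paper's implementation.

freqs = {'a': 6, 'b': 1, 'c': 1}          # symbol probabilities in units of 1/8
M = sum(freqs.values())
starts, acc = {}, 0
for sym, f in freqs.items():
    starts[sym] = acc
    acc += f

def push(x, sym):                          # encode sym onto state x
    f, c = freqs[sym], starts[sym]
    return (x // f) * M + c + (x % f)

def pop(x):                                # decode the most recently encoded symbol
    slot = x % M
    sym = next(s for s in freqs if starts[s] <= slot < starts[s] + freqs[s])
    f, c = freqs[sym], starts[sym]
    return f * (x // M) + slot - c, sym

x = 1                                      # initial state (the 'extra information')
for sym in 'abac':
    x = push(x, sym)
decoded = []
for _ in range(4):
    x, sym = pop(x)
    decoded.append(sym)
print(''.join(decoded), x)                 # prints 'caba' 1: symbols return in reverse order

Decoding returns the symbols in reverse order and restores the initial state exactly, which is the property that the chaining construction above relies on.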
However, if the density functions are sufficiently smooth (as they are in a VAE), then for small enough δy the effect of discretization will be negligible. Note that the number of bits required to generate the latent sample scales with the precision − log δy, meaning reasonably small precisions should be preferred in practice. Furthermore, the benefit from increasing latent precision past a certain point is negligible for most machine learning model implementations, since they operate at 32 bit precision. In our experiments we found that increases in performance were negligible past 16 bits per latent dimension. In our implementation, we divide the latent space into buckets which have equal mass under the prior (as opposed to equal width). This discretization is simple to implement and computationally efficient, and appears empirically to perform well. However, further work is required to establish whether it is optimal in terms of the trade-off between compression rate and computation. In our description of bits back coding in Section 2, we noted that the'extra information' needed to seed bits back should take the form of'random bits'. More precisely, we need the of mapping these bits through our decoder to produce a true sample from the distribution q(y | s). A sufficient condition for this is that the bits are i.i.d. Bernoulli distributed with probability 1 2 of being in each of the states 0 and 1. We refer to such bits as'clean'.During chaining, we effectively use each compressed data point as the seed for the next. Specifically, we use the bits at the top of the ANS stack, which are the of coding the previous latent y 0 according to the prior p(y). Will these bits be clean? The latent y 0 is originally generated as a sample from q(y | s 0). This distribution is clearly not equal to the prior, except in degenerate cases, so naively we wouldn't expect encoding y 0 according to the prior to produce clean bits. However, the true sampling distribution of y 0 is in fact the average of q(y | s 0) over the data distribution. That is, q(y) q(y | s)p(s)ds. This is referred to in BID11 as the'average encoding distribution'.If q is equal to the true posterior, then evidently q(y) ≡ p(y), however in general this is not the case. BID11 measure the discrepancy empirically using what they call the'marginal KL divergence' KL[q(z) p(z)], showing that this quantity contributes significantly to the ELBO for three different VAE like models learned on MNIST. This difference implies that the bits at the top the ANS stack after encoding a sample with BB-ANS will not be perfectly clean, which could adversely impact the coding rate. We demonstrate the BB-ANS coding scheme using a VAE. This model has a multidimensional latent with standard Gaussian prior and diagonal Gaussian approximate posterior: DISPLAYFORM0 We choose an output distribution (likelihood) p(s | y) suited to the domain of the data we are modelling (see below). The usual VAE training objective is the ELBO, which, as we noted in Section 2.2, is the negative of the expected message length with bits back coding. We can therefore train a VAE as usual and plug it into the BB-ANS framework. We consider the task of compressing the MNIST dataset BID17. We first train a VAE on the training set and then compress the test using BB-ANS with the trained VAE.The MNIST dataset has pixel values in the range of integers 0,..., 255. As well as compressing the raw MNIST data, we also present for stochastically binarized MNIST BID21. 
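A minimal sketch of the equal-mass discretization described above, for a single standard-Gaussian latent dimension, is given below; scipy's norm supplies the prior CDF and inverse CDF, and the names (num_buckets, bucket_index, bucket_centre) are illustrative rather than the repository's.

import numpy as np
from scipy.stats import norm

def bucket_index(y, num_buckets):
    # Buckets have equal mass under the standard Gaussian prior, so the bucket
    # containing a latent value y is found by pushing y through the prior CDF.
    return int(np.clip(np.floor(norm.cdf(y) * num_buckets), 0, num_buckets - 1))

def bucket_centre(i, num_buckets):
    # Representative point of bucket i: the prior quantile at the bucket's mid-mass.
    return norm.ppf((i + 0.5) / num_buckets)

precision = 16                  # bits per latent dimension
num_buckets = 2 ** precision
y = 0.734                       # a latent value, e.g. drawn from q(y | s)
i = bucket_index(y, num_buckets)
print(i, bucket_centre(i, num_buckets))  # the bucket index is what gets coded with ANS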
For both tasks we use VAEs with fully connected generative and recognition networks, with ReLU activations. For binarized MNIST the generative and recognition networks each have a single deterministic hidden layer of dimension 100, with a stochastic latent of dimension 40. The generative network outputs logits parameterizing a Bernoulli distribution on each pixel. For the full (non-binarized) MNIST dataset each network has one deterministic hidden layer of dimension 200 with a stochastic latent of dimension 50. The output distributions on pixels are modelled by a beta-binomial distribution, which is a two parameter discrete distribution. The generative network outputs the two beta-binomial parameters for each pixel. Instead of directly sampling the first latents at random, to simplify our implementation we instead initialize the BB-ANS chain with a supply of'clean' bits. We find that around 400 bits are required for this in our experiments. The precise number of bits required to start the chain depends on the entropy of the discretized approximate posterior (from which we are initially sampling).We report the achieved compression against a number of benchmarks in Table 2. Despite the relatively small network sizes and simple architectures we have used, the BB-ANS scheme outperforms benchmark compression schemes. While it is encouraging that even a relatively small latent variable model can outperform standard compression techniques when used with BB-ANS, the more Table 2: Compression rates on the binarized MNIST and full MNIST test sets, using BB-ANS and other benchmark compression schemes, measured in bits per dimension. We also give the negative ELBO value for each trained VAE on the test set.important observation to make from Table 2 is that the achieved compression rate is very close to the value of the negative test ELBO seen at the end of VAE training. In particular, the detrimental effects of finite precision, discretizing the latent (Section 2.5.1) and of less'clean' bits (Section 2.5.2) do not appear to be significant. Their effects can be seen in FIG2, accounting for the small discrepancy of around 1% between the negative ELBO and the achieved compression. Implementing a state-of-the-art latent variable model is not the focus of this work. However, as shown in our experiments, BB-ANS can compress data to sizes very close to the negative ELBO. This means that we can predict the best currently achievable compression using BB-ANS from the reported values of the negative ELBO for state-of-the-art latent variable models. We consider PixelVAE BID8, a latent variable model with close to state-of-the-art . We use their reported ELBO on binarized MNIST and the 64 × 64 ImageNet dataset introduced in van den BID26.The predictions are displayed in Table 3, and show that BB-ANS with PixelVAE may have a significantly better compression rate than existing schemes. These predictions are based on the assumption that the discrepancy between compression rate and ELBO will remain small for larger models. We believe this assumption is reasonable, since from the point of view of BB-ANS there are no fundamental differences, apart from dimensionality, between a complex, hierarchical VAE such as PixelVAE and the simple VAEs which we used for our experiments. We leave the experimental verification of these predictions to future work. 
Another potential extension of BB-ANS is to time series latent variable models such as hidden Markov models, or latent Gaussian state space models such as those studied in BID14. Such models could, in principal, be coded with BB-ANS, but the number of'extra bits' needed in a naive implementation scales with the length of the chain (the total time for a time series model), which could lead to a highly sub-optimal compression rate in practice. It would be useful to have a method for'interleaving' bits back with the time steps of the model, however it is unclear whether this is possible, and we leave deeper exploration of this problem to future work. Table 3: Predicted compression of BB-ANS with PixelVAE against other schemes, measured in bits per dimension. Modern machine learning models are optimized to exploit batch-parallelism and model-parallelism and run fastest on GPU hardware. Our current implementation of BB-ANS is written in pure Python, is not parallelized and executes entirely on CPU. During encoding/decoding the compression/decompression code is a computational bottleneck, running orders of magnitude slower than the computations of the model probabilities. However, we believe that almost all of the computation in the algorithm could be executed in parallel, on GPU hardware, potentially relieving this bottleneck. Firstly, our encoder requires computation of the CDF and inverse CDF of the distributions in the model. In the case of a VAE model of binarized MNIST, these are Gaussian and Bernoulli distributions. CDFs and inverse CDFs are already implemented to run on GPU, for many standard distributions, including Gaussian and Bernoulli, in various widely used machine learning toolboxes. Less trivial is the ANS algorithm. However, ANS is known to be amenable to parallelization. Techniques for parallel implementation are discussed in BID6, and BID16 presents an open source GPU implementation. We leave the performance optimization of BB-ANS, including adapting the algorithm to run on parallel architectures, to future work, but we are optimistic that the marriage of models which are optimized for parallel execution on large datasets with a parallelized and optimized BB-ANS implementation could yield an extremely high performance system. A neural net based model such as a VAE may have many thousands of parameters. Although not the focus of this work, the cost of communicating and storing a model's parameters may need to be considered when developing a system which uses BB-ANS with a large scale model. However, we can amortize the one-time cost of communicating the parameters over the size of the data we wish to compress. If a latent variable model could be trained such that it could model a wide class of images well, then BB-ANS could be used in conjunction with such a model to compress a large number of images. This would make the cost of communicating the model weights worthwhile to reap the subsequent gains in compression. Efforts to train latent variable models to be able to model such a wide range of images are currently of significant interest to the machine learning community, for example on expansive datasets such as ImageNet BID2 ). We therefore anticipate that this is the most fruitful direction for practical applications of BB-ANS.We also note that there have been many recent developments in methods to decrease the space required for neural network weights, without hampering performance. 
For example, methods involving quantizing the weights to low precision BID9 BID25, sometimes even down to single bit precision BID12, are promising avenues of research that could significantly reduce the cost of communicating and storing model weights. Probabilistic modelling of data is a highly active research area within machine learning. Given the progress within this area, it is of interest to study the application of probabilistic models to lossless compression. Indeed, if practical lossless compression schemes using these models can be developed then there is the possibility of significant improvement in compression rate over existing methods. We have shown the existence of a scheme, BB-ANS, which can be used for lossless compression using latent variable models. We demonstrated BB-ANS by compressing the MNIST dataset, achieving compression rates superior to generic algorithms. We have shown how to handle the issue of latent discretization. Crucially, we were able to compress to sizes very close to the negative ELBO for a large dataset. This is the first time this has been achieved with a latent variable model, and implies that state-of-the-art latent variable models could be used in conjunction with BB-ANS to achieve significantly better lossless compression rates than current methods. Given that all components of BB-ANS are readily parallelizable, we believe that BB-ANS can be implemented to run on GPU hardware, yielding a fast and powerful lossless compression system. We present here a more detailed derivation of bits back coding. As before, suppose that a sender and receiver wish to communicate a symbol s 0, and they both have access to a generative model with a latent variable, y. Suppose both sender and receiver can compute the forward probabilities p(y) and p(s | y). How might they communicate a sample s 0 from this model?Naively, the sender may draw a sample y 0 from p(y), and encode both y 0 and s 0 according to the forward model, p(y) and p(s | y 0) respectively. This would in a message length of − log p(y 0) + log p(s 0 | y 0) bits. The receiver could then decode according to the forward model by first decoding y 0 according to p(y) and then decoding s 0 according to p(s | y 0). However, they can do better, and decrease the encoded message length significantly. Firstly, if there is some other information which the sender would like to communicate to the receiver, then we may use this to our advantage. We assume the other information takes the form of some random bits. As long as there are sufficiently many bits, the sender can use them to generate a sample y 0 by decoding some of the bits to generate a sample from p(y), as described in Section 2.1. Generating this sample uses − log p(y 0) bits. The sender can then encode y 0 and s 0 with the forward model, and the message length will be − log p(y 0) + log p(s 0 | y 0) as before. But now the receiver is able to recover the other information, by first decoding s 0 and y 0, and then encoding y 0, reversing the decoding procedure from which the sample y 0 was generated, to get the'bits back'. This means that the net cost of communicating s 0, over the other information is DISPLAYFORM0 Secondly, note that we can choose any distribution for the sender to sample y 0 from, it does not have to be p(y), and it may vary as a function of s 0. 
If we generalize and let q(· | s 0) denote the distribution that we use, possibly depending functionally on s 0, we can write down the expected message length: DISPLAYFORM1 This quantity is equal to the negative of the evidence lower bound (ELBO), sometimes referred to as the'free energy' of the model. Having recognized this equivalence, it is straightforward to show using Gibbs' inequality that the optimal setting of q is the posterior p(y | s 0), and that with this setting the message length is DISPLAYFORM2 This is the information content of the sample s 0, which by the source coding theorem is the optimal message length. Thus bits back can achieve an optimal compression rate, if sender and receiver have access to the posterior. In the absence of such a posterior (as is usually the case), then an approximate posterior must be used. We note that BID1 and BID19 approach lossless compression with latent variables by generating a latent from an approximate posterior, and encoding according to the prior and likelihood as described above, but not recovering the bits back. BID1 mention that the cost of coding the hierarchical distribution is only a small fraction of the total coding cost in their setting. This small fraction upper bounds the potential gains from using bits back coding. However, their approach is sub-optimal, even if only slightly, and in the common case where more than one data-point is being encoded they would gain a better compression rate by using BB-ANS. As we discussed in Section 2.1, the coding scheme we wish to use, ANS, is defined for symbols in a finite alphabet. If we wish to encode a continuous variable we must restrict it to such a finite alphabet. This amounts to discretizing the continuous latent space. In choosing our discretization, it is important to note the following:• The discretization must be appropriate for the densities that will use it for coding. For example, imagine we were to discretize such that all but one of our buckets were in areas of very low density, with just one bucket covering the area with almost all of the density. This would in almost all of the latent variables being coded as the same symbol (corresponding to the one bucket with the majority of the density). Clearly this cannot be an efficient discretization.• The prior p(y) and the approximate posterior q(y | s) must share the same discretization.• The discretization must be known by the receiver before seeing data, since the first step of decoding is to decode y 0 according the prior. We propose to satisfy these considerations, by using the maximum entropy discretization of the prior, p(y), to code our latent variable. This amounts to allocating buckets of equal mass under the prior. We visualize this for a standard Gaussian prior in FIG3. Having the discretization be a function of the prior (which is fixed) allows the receiver to know the discretization up front, which we have noted is necessary. This would not be true for a discretization that depended on the posterior. This discretization is appropriate for coding according to the prior, since we are maximizing the entropy for this density. However, it is not obvious that it will be appropriate for coding according to the posterior, which it must also be used for. 
Note that we can write the expected message length (negative ELBO) for a single data point as: L(q) = −E_{q(y | s_0)}[log p(s_0 | y)] + KL[q(y | s_0) || p(y)]. We can see that minimizing this objective encourages the minimization of the KL divergence between the posterior and the prior. Therefore a trained model will generally have a posterior 'close' (in a sense defined by the KL divergence) to the prior. This indicates that the maximum entropy discretization of the prior may also be appropriate for coding according to the posterior. C BB-ANS PYTHON IMPLEMENTATION FIG4 shows code implementing BB-ANS encoding (as described in Table 1) and decoding in Python. Since the message is stack-like, we use the Pythonic names 'append' and 'pop' for encoding and decoding respectively. Notice that each line in the decoding 'pop' method precisely inverts an operation in the encoding 'append' method. The functions to append and pop from the prior, likelihood and posterior could in principle use any LIFO encoding/decoding algorithm. They may, for example, do ANS coding according to a sophisticated autoregressive model, which would be necessary for coding using PixelVAE. The only strict requirement is that each pop function must precisely invert the corresponding append function. For more detail, including an example implementation with a variational auto-encoder model (VAE), see the repository https://github.com/bits-back/bits-back.
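As a rough sketch of the structure just described (the working code is in the linked repository), the encoder and decoder pair can be written as below, where prior, likelihood(y) and posterior(s) stand for hypothetical LIFO coder objects exposing push(message, symbol) and pop(message); any ANS-based implementation of these three coders fits the interface.

def bbans_append(message, s, prior, likelihood, posterior):
    # 1. Use existing message bits to sample a latent y ~ q(y | s).
    message, y = posterior(s).pop(message)
    # 2. Encode the observation under the likelihood p(s | y).
    message = likelihood(y).push(message, s)
    # 3. Encode the latent under the prior p(y).
    message = prior.push(message, y)
    return message

def bbans_pop(message, prior, likelihood, posterior):
    # Inverts bbans_append line by line, in reverse order.
    message, y = prior.pop(message)
    message, s = likelihood(y).pop(message)
    message = posterior(s).push(message, y)   # returns the 'bits back'
    return message, s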
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
ryE98iR5tm
We do lossless compression of large image datasets using a VAE, beating existing compression algorithms.
Hyperparameter tuning is arguably the most important ingredient for obtaining state of art performance in deep networks. We focus on hyperparameters that are related to the optimization algorithm, e.g. learning rates, which have a large impact on the training speed and the ing accuracy. Typically, fixed learning rate schedules are employed during training. We propose Hyperdyn a dynamic hyperparameter optimization method that selects new learning rates on the fly at the end of each epoch. Our explore-exploit framework combines Bayesian optimization (BO) with a rejection strategy, based on a simple probabilistic wait and watch test. We obtain state of art accuracy on CIFAR and Imagenet datasets, but with significantly faster training, when compared with the best manually tuned networks. Hyperparameter tuning is arguably the most important ingredient for obtaining state of art performance in deep neural networks. Currently, most networks are manually tuned after extensive trial and error, and this is derisively referred to as graduate student descent. Hyperparameter optimization (HPO), on the other hand, attempts to automate the entire process and remove the need for human intervention. Previous works on HPO propose various strategies to either adaptively select good configurations BID12 or to speed up configuration evaluations BID7.One drawback behind existing HPO frameworks is that they do not distinguish between different kinds of hyperparameters and treat them all uniformly. Broadly there are two categories of hyperparameters: those that are fixed throughout the training process and those that need to be varied during training. The former class is mostly structural (e.g. network depth and width), while the latter is mostly related to the optimization algorithm (e.g. learning rates, regularization). The two classes of hyperparameters have very different behaviors. For structural hyperparameters, online evaluation is not possible, and we need to wait for the entire training process to be completed, which is very expensive. On the other hand, for time-varying hyperparameters, it is possible to select parameters on the fly without waiting for training to finish. In this work we keep the structural hyperparameters fixed and focus on optimizing the time-varying hyperparameters to improve efficiency, stability, and accuracy of the training process. Our main contributions are as follows: We propose Hyperdyn an automated approach for dynamically tuning hyperparameters during the training process, based on the past observations. It selects new hyperparameters at the end of each epoch by combining Bayesian optimization (BO) machinery with a simple rejection strategy. It is computationally efficient since it uses Gaussian processes (GP) and simple probabilistic rejection tests. We show state of art performance on image classification benchmarks that match the accuracy of manually tuned networks while significantly improving the training speed. We demonstrate that Hyperdyn is able to automatically decide regions of start, acceleration and slow down for the training process, and can also adapt to different conditions such as batch sizes, network architecture, datasets etc. Although our framework is broadly applicable for any time-varying hyperparameter, we limit ourselves to selecting learning rates in the experiments. We now describe Hyperdyn in this context. A set of learning-rate proposals is randomly initialized along with the weights of the neural network. 
We choose the best learning rate based on the validation accuracy at the end of the first epoch. For Method 85% 90% 95% CIFAR 10 SGD 1.5x 2x 2x ADAM 1x 1x 2x Imagenet SGD 1.5x 4x 4x Table 1: Speed up in training (in terms of no. of iterations) over manually tuned to reach x% of the best reported top-1 validation accuracy subsequent epochs, we employ standard Bayesian optimization (BO) to obtain new proposals based on the Gaussian process framework BID11. However, we do not always accept the outcomes of BO. We design a simple probabilistic wait and watch test to decide whether to accept the BO outcome or to stick to the previously chosen learning rate, based on the improvement of the validation accuracy over the past few epochs. This rejection test is very crucial for obtaining good performance. Our experiments show that if we naively switch the learning rate to the BO output at the end of each epoch, we have training instability and bad generalization performance. This rejection framework is philosophically similar to the hyperband framework BID7 where more time is spent exploring the more promising choices. Here we require a more sophisticated framework that utilizes the temporal history to assess whether the current choice of learning rate is promising or if one should switch to a new learning rate, as proposed by BO.We investigate performance of Hyperdyn for tuning the learning rates of two most popular optimization algorithms, viz., stochastic gradient descent (SGD) and Adam BID5, on CIFAR-10 and Imagenet datasets. The are summarized in Table. 1. Our method uniformly trains faster and can quickly reach to a significant % of the best validation accuracy, which was previously obtained BID4 ) after extensive manual tuning. In Section 4, we also show that Hyperdyn outperforms other strong baselines such as epoch-based BO (i.e. no rejection) and random 5x, i.e., at every epoch we invest 5x resources more in random search than Hyperdyn. Furthermore, we find that our method is stable and trains quickly even under larger batch sizes. We used a batch size of 1000 for our Imagenet experiments, while the manually tuned baseline was on a much smaller batch size of 256. Larger batches are preferred for distributed training since they reduce the relative communication overhead. However, training on larger batches is generally challenging, and can suffer from poor generalization BID8. The adaptivity of Hyperdyn allows it to overcome this challenge. We conduct detailed empirical analysis of Hyperdyn under a variety of conditions. We find that the learning rates suggested by Hyperdyn for SGD eventually decay, but are not always monotonic in the beginning. This agrees with the previous that using more adaptive algorithms such as Adam is more beneficial in the beginning than at a later stage BID16. We find that learning rates chosen by Hyperdyn generally increase with the batch size, and this rule of thumb has been used for manual tuning BID8. We also verify that the learning rates suggested by Hyperdyn for Adam eventually converge to values that guarantee theoretical convergence. Further, we observe that SGD tuned with Hyperdyn outperforms more sophisticated algorithms (e.g. ones with momentum) that are manually tuned. This suggests the importance of tuning for good learning rates, compared to having more sophisticated optimization algorithms. Bayesian Optimization has been widely used to optimize blackbox functions. 
The most common frameworks use Gaussian processes (GP) for efficiently selecting good configurations BID11. Recently, BID7 introduced Hyperband, which instead focused on speeding up configuration evaluations based on a simple random search strategy. A key feature of Hyperband is that it adaptively allocates resources using a principled early stopping mechanism. In the context of tuning for learning rates, these methods have been previously employed to tune the schedule of learning rate decay, while keeping the initial learning rate fixed, BID12 ). However, this does not provide full flexibility in finding the best learning rates in each epoch. Moreover, these frameworks require for training to be completed in order to carry out their evaluations, which makes them expensive. On the other hand, Hyperdyn selects new configurations at the end of each epoch, without requiring for training to finish. Also, most previous works only compare across different HPO frameworks, but not with the state of art manually tuned networks. Previous works report around 80% validation accuracy for the hyperband algorithm on CIFAR-10 and worse for other methods such as SMAC and TPE (tree-based BO) and Spearmint (Gaussian process) BID7. None of these previous works report on Imagenet-like large datasets. For the task of finding good learning rates, other methods have been proposed that do not rely on hyperparameter optimization. One strategy is to incorporate more adaptivity into the optimization algorithm, as seen in Adam BID5, which has better performance in certain cases over SGD, but requires even more hyperparameters to be tuned. Another algorithm that automatically controls the learning rate of SGD, known as SALERA, was introduced in BID10. SALERA updates the learning rate by using an exponential moving average and deals with catastrophic events that occur in SGD (that lead to poor training accuracy) by allowing for backtracking. However, the learning rate update rule is fixed unless there is a catastrophic event, so the extent of adaptivity is limited. Additionally, the experiments were only conducted on smaller datasets such as MNIST and CIFAR10, so it is not clear how the algorithm behaves with larger datasets and larger batch sizes. An RL-based approach to learning on the fly was introduced in BID6. However, the RL framework in general requires a large amount of training data and the experiments in that work were not conducted on standard benchmark datasets such as CIFAR10 or MNIST. Recently, BID2, BID17 proposed methods for large batch training on Imagenet dataset that achieve state of art Top-1% accuracy. The work there, however, was restricted to designing a learning rate scheduler for SGD for large batch sizes. Hyperdyn on the other hand is a general framework which can be applied to a wide range of hyperparameter optimization problems and different kinds of training methods. The idea of using information learned in one task to improve on a future task, or meta-learning BID15 ), is also related to the framework of using past information to improve hyperparameter choice, considered here. LSTMs were employed to learn the gradient descent updates in BID0; BID9. However, LSTMs require a large amount training data and it is not clear if the methods can scale to standard benchmark datasets. Currently, these techniques have been shown to work for only small scale problems. 
The rest of paper is structured as follows -in section 2 we give a brief review of the bayesian optimization algorithm and gaussian processes that are at the center of Hyperdyn. In section 3 we describe its in detail. In section 4 we present the details of the experiments. Bayesian optimization has been widely used to find the minimize a black box function. In general, the black box functions are unknown or difficult to compute and the bayesian optimization algorithm suggests query points based on the optimization of an "easier" acquisition function. So if the blackbox function was f (·) and we had to minimize f (·). Then we would use the bayesian algorithm as described in Algorithm 1. This version of Bayesian Optimization in Algorithm 1 is a one-step sim- DISPLAYFORM0 3: Query objective function to obtain y 0 4: return η 0 plification of the one described in BID11. We next describe the acquisition function α(·; ·) and other statistical details of Algorithm 1. A Gaussian Process is a nonparamteric statistical model that is characterized by µ 0, σ 0, K(·, ·) -initial mean, initial variance and kernel function respectively. Consider the sequence of points, η 1:n (inputs) and y 1:n (noisy observations). We introduce auxillary variables f 1:n, such that f 1:n |η 1:n ∼ N (m, K) where DISPLAYFORM0 Then we have that y 1:n |f 1:n, σ 2 0 ∼ N (f 1:n, σ 2 0 I). Given this GP, we "update" µ n (·), σ n (·) given some observation {(η i, y i)} n i=1, by Algorithm 2 DISPLAYFORM1 A plethora of acquisition functions are discussed in BID11, we specifically use the expected improvement function that we describe now. Consider the improvement function I(η, v, θ) = (v − τ)I(v > τ); this captures the amount of improvement over a given threshold τ. Then expected improvement is just E[I(η, v, θ)] assuming that v is normally distributed with some mean and variance parametrized by η. Then we have DISPLAYFORM0 Typically τ = min n y n, i.e., the minimum of noisy observations. It is not necessary that the kernel function in Algorithm 1 be stationary. In fact there is vast literature on non-stationary kernels for Bayesian Optimization (See Gramacy & Lee FORMULA4). These methods are, in general, very complicated. In the following section we propose a simple compositional nonstationary kernel that works well in practice. The simplicity of Bayesian optimization makes it amenable to blackbox optimizations. However, there are a few severe limitations of Bayesian optimization that prevent it from being applied directly to neural network training. First, as stated before, the framework of Bayesian optimization is such that only one setting of learning rate parameter can be used for the entire duration of training. Second, even if we were somehow able to use multiple learning rate parameter suggestions while training, there is no stationarity, i.e., the same learning rates may produce very different at different points of training. In Hyperdyn we use different temporal estimates of the loss function change and use non stationary kernels to alleviate the two problems mentioned above. By using a simple compositional kernel we avoid the computational burden associated with general non stationary kernels. Any training algorithm can be roughly summarized as follows w t+1 = T (w t, B, Hyperparameters) where B is a batch of data, T denotes the training algorithm and w t are the parameters of some neural network at epoch t of the algorithm. T is run for some predefined number of iterations over the data. 
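To make the one-step proposal mechanism of Algorithm 1 concrete, the following is a minimal sketch (not the authors' implementation) of fitting a Gaussian process to past (learning rate, validation accuracy) observations and proposing the next candidate by maximizing expected improvement over a random candidate set. It assumes scikit-learn's GaussianProcessRegressor and SciPy's normal distribution; names such as `propose_next` and `expected_improvement`, the log-scale search space, and the candidate count are illustrative assumptions, and EI is written here for maximization of validation accuracy.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(mu, sigma, best_y):
    # E[(v - tau) * 1(v > tau)] with v ~ N(mu, sigma^2) and tau = best observation so far.
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - best_y) / sigma
    return (mu - best_y) * norm.cdf(z) + sigma * norm.pdf(z)

def propose_next(history_lr, history_acc, n_candidates=1000, lr_bounds=(1e-4, 1.0)):
    """One-step BO proposal: fit a GP to past (log learning rate, validation accuracy)
    pairs and return the candidate learning rate that maximizes expected improvement."""
    X = np.log10(np.asarray(history_lr)).reshape(-1, 1)   # search in log space
    y = np.asarray(history_acc)
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-4,
                                  normalize_y=True).fit(X, y)
    candidates = np.random.uniform(np.log10(lr_bounds[0]),
                                   np.log10(lr_bounds[1]),
                                   size=(n_candidates, 1))
    mu, sigma = gp.predict(candidates, return_std=True)
    ei = expected_improvement(mu, sigma, y.max())
    return 10 ** candidates[np.argmax(ei), 0]

# example usage:
# lr_next = propose_next([0.1, 0.05, 0.2], [0.62, 0.68, 0.55])
```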
For Hyperdyn we define a new set of hyperparameters that includes the epoch number of the algorithm New Hyperparameters = (True Hyperparameters, t) Considering epoch number of the training process as a hyperparameter allows to build a composite kernel over t and the true hyperparameters; we now get kernels that are non-stationary without too much overhead from existing stationary ones. Let (η 1, t 1), (η 2, t 2) be the new hyperparameters where t i is the epoch number, then our kernel is of the form DISPLAYFORM0 Now, the K 1 (·, ·) kernel can be one of RBF kernel or Matern-x kernels over the true hyperparameters, as in standard literature, we will describe K 2 (·, ·), now referred to as the time kernel, in the following section. Unlike the true hyperparameters, we do not need to optimize over the epoch number, i.e., t i s behave in a specific way {1, 2, . . .,} upto end of training process. Such a formulation only helps in introducing non-stationarity in the kernels we use for Bayesian optimization. This is realized by changing the search space for epoch number in the Bayesian optimization to [t, t] for every epoch number t. We use a similar approach to BID14 Section 3 for training curves. The kernel K 2 (·, ·) is of the form DISPLAYFORM0 where the last equality is obtained by choosing ψ(λ) = β α Γ(α) λ α−1 exp (−λβ). This construction is motivated from the observation that SGD decays as O(1/N 1/2) in N iterations, which in our case reduced to α = 1, β = 1/2, and from work in BID14. Hyperdyn is comprised of some crucial moving parts. At the beginning we have no information and employ a purely exploratory approach as described in Algorithm 3. The function Valida- Generate p i ∈ S uniformly at random 6: DISPLAYFORM0 tion Accuracy(·, ·) takes in as input weights (first argument) and hyperparameter values (second argument) and updates weights. The output is top-1 accuracy on the validation set for the updated weights. Function Random Start gives our main algorithm, which we describe in Algorithm 4, some initial information for a more exploitive approach. In Algorithm 4, Update Weights simply updates DISPLAYFORM1 if Check Accuracy Trend(w 1:t−1, ∆) then 10: The function Update Statistical Model has been described in Section 2.1. We might be tempted to switch to the best performing hyperparameter value at every epoch (Lines 9 − 14). As we will show, that such a myopic switching (between hyperparameters) strategy is a hindrance when it comes to good generalization performance. The function Check Accuracy Trend, described in Algorithm 5, prevents us from changing our hyperparameters unnecessarily. The idea is that if a hyperparameter choice has been improving on the validation accuracy, then there is no incentive to change. DISPLAYFORM2 Algorithm 5 Check Accuracy Trend DISPLAYFORM3 return True 6: else 7:return False 8: end if An initial offset is provided so that there is a small probability of changing hyperparameter values even when current values are performing well. The experiments were conducted on Amazon AWS p2x16 EC2 machines. We only use data parallelism for gradient computation in SGD across different GPUs, i.e., if there are k GPUs and a batch size of α then each GPU calculates the gradient on a batch size of α/k and this is summed across GPUs before weight update. The datasets used were CIFAR10 and Imagenet 1K. We employed simple augmentations as described in BID4. 
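Before turning to the experiments, here is a compact sketch of two ingredients described above: the time component of the composite kernel, whose closed form β^α / (t1 + t2 + β)^α follows from integrating the exponential decays against a Gamma(α, β) prior (with α = 1, β = 1/2 as motivated by the SGD decay rate), and a hypothetical instantiation of the probabilistic wait-and-watch test. The paper specifies a window, a temperature, and an offset but not the exact functional form of the test, so the sigmoid switching rule below is an assumption; the default values mirror the settings reported in the experiments section.

```python
import numpy as np

def time_kernel(t1, t2, alpha=1.0, beta=0.5):
    # K2(t1, t2) = beta^alpha / (t1 + t2 + beta)^alpha
    return beta ** alpha / (t1 + t2 + beta) ** alpha

def composite_kernel(eta1, t1, eta2, t2, k_hyper):
    # K((eta1, t1), (eta2, t2)) = K1(eta1, eta2) * K2(t1, t2),
    # where K1 is any stationary kernel (e.g. RBF or Matern) over the true hyperparameters.
    return k_hyper(eta1, eta2) * time_kernel(t1, t2)

def check_accuracy_trend(val_acc_history, window=4, temperature=1.0, offset=0.01,
                         rng=np.random.default_rng()):
    """Return True if the current learning rate should be KEPT (hypothetical rule).

    Wait-and-watch idea: if validation accuracy has been improving over the last
    `window` epochs, keep the current choice with high probability; the offset
    leaves a small probability of switching even when the current choice does well."""
    if len(val_acc_history) < window + 1:
        return False                      # not enough history: accept the BO proposal
    improvement = val_acc_history[-1] - val_acc_history[-1 - window]
    p_keep = (1.0 - offset) / (1.0 + np.exp(-improvement / temperature))
    return rng.random() < p_keep
```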
We also consider their as the state of art since the structural setting (neural network architecture, augmentation settings etc.) there is closest to the one here. Further for batch sizes not included there we use scaling rules described in BID2. The number of initial search points for Random Start was 5. In our experiments on Hyperdyn we only use learning rate and momentum for hyperparameter optimization. The window size used is 4, temperature is 1 and offset is 0.01. The window size should not be too small ∼ 1 as it leads to very high variance while training, and for the experiments that we run older observations become obsolete fast and a large window size is unnecessary. Comparison to Manually Tuned Networks: In this section we presents when the weight update mechanism is momentum-SGD. We compare the performance of Hyperdyn -tuned momentum-SGD versus the manually tuned momentum-SGD as described in BID4 on a ResNet-20. Each epoch indicates (num points/batch size) number of iterations. As shown in FIG1, Hyperdyn tuning is always faster (at reaching superior generalization performance) than the manually tuned one. Table 1 shows the number of iterations after which a certain top-1 accuracy is reached for a ResNet-20 with batch size 128. Hyperdyn tuned SGD is substantially faster than the fixed schedule manually tuned SGD. There is no additional tuning required for Hyperdyn and as a we circumvent hours of trial and error to get the hyperparameters right. Dynamics of Hyperdyn: For any stable recursive stochastic algorithm (for details see BID1) it must be true that η k → 0 as k → ∞, where η k is the learning rate at iteration k. In FIG0 we observe that Hyperdyn tuning automatically tends to 0 as the learning process progresses. Empirically it has been found that larger learning rates are needed for larger batches in the manual setting BID8 and Hyperdyn is automatically able to achieve this. As can be seen in FIG0 the average learning rate increases as we increase the batch size from 128 to 2k. However, the same trend does not translate, when the batch size is doubled further. A possible explanation to this maybe that we are already in a very large batch regime ∼ 10% of dataset size and the learning rate dynamics behave very differently. Further, we study Hyperdyn in FIG0 when the momentum term is set to 0. We observe that it outperforms manually tuned SGD with momentum, which is a more sophisticated algorithm. This suggests that it is more important to find good learning rates for a simple algorithm, rather than attempt to design a more sophisticated algorithm. Moreover, the wait and watch strategy of Hyperdyn is incorporating past information for stability, similar in principle to momentum. This could also explain why SGD tuned with Hyperdyn can outperform manually tuned SGD with momentum. We compare Hyperdyn to a greedy version of it, where we switch to the best performing hyperparameter in every epoch. This greedy method, or epoch based Bayesian optimization, is obtained by setting the offset i.e., b = ∞ in Check Accuracy Trend sub-routine; which is nothing but the standard Bayesian optimization method with compositional kernel incorporating the temporal dependence. However, as FIG1 shows, the epoch based BO has poor generalization. We set the batch size of 1000 on ResNet-20. This observation necessitates the need for a "wait-and-watch" strategy, as realized by the Check Accuracy Trend module. 
Although the epoch based BO version outperforms in the initial few epochs, it plateaus quickly and is overtaken by Hyperdyn. The choice of batch size is arbitrary for FIG1 (d) as this observation is consistent across different batch sizes. We also compare Hyperdyn with a version of random search 5x, i.e., at every epoch we invest 5x resources more in random search than in Hyperdyn. We find that random search is susceptible to higher variance than Hyperdyn, especially at larger batch sizes. FIG1 shows a typical comparison of random search 5x and Hyperdyn. We employed a variant of Hyperband where we reduced the search space of learning rate by 0.1 instead of successive halving and did not find any improvement in performance over random search 5x. Using Hyperdyn with Adam: In addition to SGD, Adam is commonly used for optimization. It is adaptive, computationally and memory efficient, but also works for non-stationary objectives and noisy environments BID5. Typically, Adam is employed for the initial part of the training process while SGD is used near the end. We use Hyperdyn to optimize the hyperparameters of Adam, i.e. learning rates β 1, β 2 in BID5. Hyperdyn tuned Adam is much faster than manually tuned Adam (in terms of number of iterations) and also generalizes better on ResNet-20 with batch size 128. As our HPO algorithm proceeds, we observe convergence of learning rates DISPLAYFORM0 Figure 3: Hyperdyn tuned Adam as β 1 → 0 and β 2 is around 1, i.e. the momentum coefficient β 1 becomes very small (Fig. 3(b) ). It turns out that these limits are also needed to establish good theoretical properties for Adam; see Corollary 4.2 in BID5. This also matches the previous observation in BID13 that the momentum coefficient needs to be reduced to obtain good convergence. For Imagenet we used ResNet-50 and a batch size of 1000. This batch size is considerably larger than typical Imagenet experiments, where the batch sizes are ∼ 256. We compare in FIG3 a Hyperdyn tuned Imagenet training process to a manually tuned one. Note that the batch size for the two are different because a manual Imagenet tuning is hard for a batch size of 1000. As a we achieve the same accuracy much faster than a manually tuned training process. Additionally, in BID17 suggest that (upto 40 epochs) Hyperdyn gives the best performance. In this work we introduced a general framework for a certain class of hyperparameter optimization. The algorithm here has the advantage that it is fast, flexible -can be used on any objective function and training method, and stable on larger batch sizes. We demonstrated that our proposed optimizer is faster and at least as accurate as SGD (with momentum) or Adam. We also show that Hyperdyn is stable for larger batch sizes on Imagenet (achieves acceptable accuracy with 4x speed). We demonstrate how too much exploration can be detrimental to generalization accuracy of a training process, and propose a probabilistic "wait-and-watch" strategy. Currently we do not parallelize Hyperdyn; however, computing Validation Accuracy on the suggestion from One Step BO can be easily parallelized. At each epoch we make only suggestion and the validation accuracy on this suggestion can be computed independently of the current hyperparameter setting. In the general case, when we make multiple suggestions we can parallelize in a similar fashion to BID12. We also observe that epoch-based BO in FIG1 outperforms Hyperdyn in the initial epochs. 
One future direction may be to use a time-varying temperature in Check Accuracy Trend, based on the epoch. We could also exploit the temporal gains obtained so far by using a backtracking algorithm at the later stages of training, when accuracy is already sufficiently high but more careful tuning is required to reach the best error rates.
HJtPtdqQG
Bayesian optimization based online hyperparameter optimization.
Multi-hop text-based question-answering is a current challenge in machine comprehension. This task requires to sequentially integrate facts from multiple passages to answer complex natural language questions. In this paper, we propose a novel architecture, called the Latent Question Reformulation Network (LQR-net), a multi-hop and parallel attentive network designed for question-answering tasks that require reasoning capabilities. LQR-net is composed of an association of \textbf{reading modules} and \textbf{reformulation modules}. The purpose of the reading module is to produce a question-aware representation of the document. From this document representation, the reformulation module extracts essential elements to calculate an updated representation of the question. This updated question is then passed to the following hop. We evaluate our architecture on the \hotpotqa question-answering dataset designed to assess multi-hop reasoning capabilities. Our model achieves competitive on the public leaderboard and outperforms the best current \textit{published} models in terms of Exact Match (EM) and $F_1$ score. Finally, we show that an analysis of the sequential reformulations can provide interpretable reasoning paths. The ability to automatically extract relevant information from large text corpora remains a major challenge. Recently, the task of question-answering has been largely used as a proxy to evaluate the reading capabilities of neural architectures. Most of the current datasets for question-answering focus on the ability to read and extract information from a single piece of text, often composed of few sentences . This has strengthened the emergence of easy questions in the sense of and influenced the recent state-of-the-art models to be good at detecting patterns and named entities (; ;). However they still lack actual reasoning capabilities. The problem of reasoning requires machine comprehension models to gather and compose over different pieces of evidence spread across multiple paragraphs. In this work, we propose an original neural architecture that repeatedly reads from a set of paragraphs to aggregate and reformulate information. In addition to the sequential reading, our model is designed to collect pieces of information in parallel and to aggregate them in its last layer. Throughout the model, the important pieces of the document are highlighted by what we call a reading module and integrated into a representation of the question via our reformulation module. Our contributions can be summarised as follows: • We propose a machine reading architecture, composed of multiple token-level attention modules, that collect information sequentially and in parallel across a document to answer a question, • We propose to use an input-length invariant question representation updated via a dynamic max-pooling layer that compacts information form a variable-length text sequence into a fixed size matrix, • We introduce an extractive reading-based attention mechanism that computes the attention vector from the output layer of a generic extractive machine reading model, • We illustrate the advantages of our model on the HOTPOTQA dataset. The remainder of the paper is organized as follows: Section 2 presents the multi-hop machine reading task, and analyses the required reasoning competencies. In Section 3, we detail our novel reading architecture and present its different building blocks. Section 4 presents the conducted experiments, several ablation studies, and qualitative analysis of the . 
Finally, Section 5 discusses related work. Our code to reproduce the is publicly available at (removed for review). 2 TEXT-BASED QUESTION-ANSWERING AND MACHINE REASONING Figure 1: Examples of reasoning paths to answer two questions of the HOTPOTQA dataset. In this picture, we do not display the full paragraphs, but only the supporting facts. The task of extractive machine reading can be summarized as follows: given a document D and a question Q, the goal is to extract the span of the document that answers the question. In this work, we consider the explainable multi-hop reasoning task described in and its associated dataset: HOTPOTQA. We focus our experiments on the "distractor" configuration of the dataset. In this task, the input document D is not a single paragraph but a set of ten paragraphs coming from different English Wikipedia articles. Answering each question requires gathering and integrating information from exactly two paragraphs; the eight others are distractors selected among the of a tf-idf retriever . These required paragraphs are called the gold paragraphs. There are two types of questions proposed in this dataset: extractive ones where the answer is a span of text extracted from the document and binary yes/no questions. In addition to the answer, it is required to predict the sentences, also called supporting facts, that are necessary to produce the correct answer. This task can be decomposed in three subtasks: categorize the answer among the three following classes: yes, no, text span, if it is a span, predict the start and end positions of this span in the document, and predict the supporting sentences required to answer the question. In addition to the "distractor" experiments, we show how our proposed approach can be used for opendomain question answering and evaluate the entire reading pipeline on the "fullwiki" configuration of the HotpotQA dataset. In this configuration, no supporting documents are provided, and it is required to answer the question from the entire Wikipedia corpus. Among the competencies that multi-hop machine reading requires, we identify two major reasoning capabilities that human readers naturally exploit to answer these questions: sequential reasoning and parallel reasoning. Sequential reasoning requires reading a document, seeking a piece of information, then reformulating the question and finally extracting the correct answer. This is called multi-hop question-answering and refers to the bridge questions in HOTPOTQA. Another reasoning pattern is parallel reasoning, required to collect pieces of evidence for comparisons or question that required checking multiple properties in the documents. Figure 1 presents two examples from HOTPOTQA that illustrate such required competencies. We hypothesize that these two major reasoning patterns should condition the design of the proposed neural architectures to avoid restricting the model to one or the other reasoning skill. In this section, we describe the Latent Question Reformulation Network (LQR-net), shown in Figure 2. This multi-hop model is designed as an association of four modules: an encoding module, a reading module, a question reformulation module, and an answering module. and are input and output modules, whereas and constitute a hop, and are repeated respectively T and T − 1 times: the answering module does not require a last reformulation step. Figure 2: Overview of LQR-net with K parallel heads and T sequential reading modules. 
In this architecture, a latent representation of the question is sequentially updated to perform multi-hop reasoning. K independent reading heads collect pieces of information before feeding them to the answering module. Sections 3 present the different building blocks of this end-to-end trainable model. Given a document and a question, the reading module is in charge of computing a question-aware representation of the document. Then, the reformulation module extracts essential elements from this document representation and uses them to update a representation of the question in a latent space. This reformulated question is then passed to the following hop. The model can have multiple heads, as in the Transformer architecture . In this case, the iterative mechanism is performed several times in parallel in order to compute a set of independent reformulations. The final representations of the document produced by the different heads are eventually aggregated before being fed to the answering module. This module predicts the answer and the supporting facts from the document. The following parts of this section describe each module that composes this model. Note: The model is composed of K independent reading heads that process the document and question in parallel. To not overload the notations of the next parts, we do not subscript all the matrices by the index of the head and focus on the description of one. The aggregation process of the multi-head outputs is explained in Section 3.5. We adopt a standard representation of each token by using the pre-trained parametric language model BERT . Let a document D = {p 1, p 2, . . ., p 10} be the set of input paragraphs, of respective lengths {n 1, . . ., n 10}, associated to a question Q of length L. These paragraphs are independently encoded through the pre-trained BERT model. Each token is represented by its associated BERT hidden state from the last layer of the model. The tokens representations are then concatenated to produce a global representation of the set of 10 paragraphs of total length N = 10 i=1 n i. The representations are further passed through a Bidirectional Gated Recurrent Unit (BiGRU) to produce the final representation of the document E D ∈ R N ×2h and question E Q ∈ R L×2h, where h is the hidden state dimension of the BiGRUs. where [;] is the concatenation operation. To compute the first representation of the question U, we use an interpolation layer to map where M is an hyperparameter of the model. Intuitively, R M ×2h corresponds to the space allocated to store the representation of the question and its further reformulations. It does not depend on the length of the original question L. Our model is composed of T hops of reading that sequentially extract relevant information from a document regarding the current reformulation of the question. At step t, given a representation of the reformulated question U (t) ∈ R M ×2h and a representation of the document E D ∈ R N ×2h, this module computes a question-aware representation of the document. This module is a combination of two layers: a document-question attention followed by a document self-attention. We first construct the interaction matrix between the document and the current reformulation of the question S ∈ R N ×M as: where w 1, w 2, w 3 are trainable vectors of R 2h and the element-wise multiplication. 
Then, we compute the document-to-question attention C q ∈ R N ×2h: And the question-to-document attention q c ∈ R 2h: Finally, we compute the question-aware representation of the document X (t) ∈ R N ×8h: where [;] concatenation operation. Finally, we use a last BiGRU that reduces the dimension of X (t) to N × 2h. This specific attention mechanism was first introduced in the Bidirectional Attention Flow model of. We hypothesize that such token-level attention will produce a finer-grained representation of the document compared to sentence-level attention used in state-of-the-art Memory Network architectures. Document Self-Attention: So far, the contextualization between the ten paragraphs has only be done by the BiGRUs of equation 1. One limitation of the current representation of the document is that each token has very limited knowledge of the other elements of the context. To deal with long-range dependencies, we apply this same attention mechanism between the question-aware representation of the document, X (t), and itself to produce the reading module output V ∈ R N ×2h. This self-contextualization of the document has been found useful in our experiments as presented in the ablation analysis of Section 4.3. A reformulation module t takes as input the output of the previous attention module V (t), the previous representation of the reformulated question U (t), and an encoding of the document E D. It produces an updated reformulation of the question U (t+1). Reading-based Attention: Given V (t) we compute p (t)s ∈ R N and p (t)e ∈ R N using two BiGRUs followed by a linear layer and a softmax operator. They are computed from: where w e and w s are trainable vectors of R h. The two probability vectors p (t)s and p (t)e are not used to predict an answer but to compute a reading-based attention vector a (t) over the document. Intuitively, these probabilities represent the belief of the model at step t of the probability for each word to be the beginning and the end of the answer span. We define the reading-based attention of a token as the probability that the predicted span has started before this token and will end after. It can be computed as follows: Finally, we use these attention values to re-weight each token of the document representation. We Dynamic Max-Pooling: This layer aims at collecting the relevant elements ofẼ (t)D to add to the current representation of dimension M × 2h. We partition the row of the initial sequence into M approximately equal parts. It produces a grid of M × 2h in which we apply a max-pooling operator in each individual window. As a , a matrix of fixed dimension adequately represents the input, preserving the global structure of the document, and focusing on the important elements of each region. This can be seen as an adaptation of the dynamic pooling layer proposed by. (t)D be the input matrix representation, we dynamically compute the kernel size, w, of the max-pooling according to the length of the input sequence and the required output shape: w = N M, · being the ceiling function. Then the output representation of this pooling layer will be Finally, to compute the updated representation of the question U (t+1) ∈ R M ×2h, we sum U (t) and O (t). The answering module is a sequence of four BiGRUs, each of them followed by a fully connected layer. Their respective goal is to supervise the supporting facts p sf, the answer starting and ending probabilities, p e, p s, of each word of the document. 
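Before completing the description of the answering module, here is a minimal PyTorch sketch of the reformulation step described above: the reading-based attention, computed as the probability that the predicted span has started at or before a token and has not yet ended there (a cumulative sum over p_s times a reversed cumulative sum over p_e), and the dynamic max-pooling that compresses the re-weighted N×2h document encoding into the fixed M×2h question space. Function and tensor names are illustrative, not taken from the authors' code, and adaptive pooling stands in for the ceil(N/M) windows described in the text.

```python
import torch
import torch.nn.functional as F

def reading_based_attention(p_start, p_end):
    # p_start, p_end: (N,) span-boundary probabilities over document tokens.
    # a_t = P(span has started by t) * P(span has not yet ended at t)
    started = torch.cumsum(p_start, dim=0)
    not_ended = torch.flip(torch.cumsum(torch.flip(p_end, dims=[0]), dim=0), dims=[0])
    return started * not_ended                             # (N,)

def dynamic_max_pool(E, M):
    # E: (N, 2h) re-weighted document encoding; returns a fixed (M, 2h) matrix by
    # max-pooling over M roughly equal windows along the token dimension.
    return F.adaptive_max_pool1d(E.t().unsqueeze(0), M).squeeze(0).t()

def reformulate(U, E_D, p_start, p_end):
    # U: (M, 2h) current question representation; E_D: (N, 2h) document encoding.
    a = reading_based_attention(p_start, p_end)            # (N,)
    E_tilde = a.unsqueeze(-1) * E_D                        # (N, 2h)
    return U + dynamic_max_pool(E_tilde, U.shape[0])       # (M, 2h)
```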
The last layer is used as a three-way classifier to predict p c the probability of the answer be classified as yes, no or a span of text. where w s ∈ R h, w e ∈ R h, W c ∈ R h×3 are trainable parameters. To predict the supporting facts, we construct a sentence based representation of the document. Each sentence is represented by the concatenation of its starting and ending supporting fact tokens from Y sf. We compute p sf i,j the probability of sentence j of example i of being a supporting fact with a linear layer followed by a sigmoid function. We define a multi-head version of the model. In this configuration, we use a set of independent parallel heads. All heads are composed of the same number of reading and reformulation modules. Each head produces a representation V (T) k of the document. We finally sum these K matrices to compute the input of the answering block. We jointly optimize the model on the three subtasks (supporting facts, span position, classifier yes/no/span) by minimising a linear combination of the supporting facts loss L sf, the span loss L span and the class loss L class. Let N d be the number of examples in the training dataset. L sf (θ) is defined by: where nbs i corresponds to the number of sentences in the document i. y i,j being 1 if the sentence j of the document i is a supporting fact otherwise 0. Selecting the answer in multi-hop reading datasets is a weakly supervised task. Indeed, similarly to the observations of Min et al. (2019a) for open-domain question-answering and discrete reasoning tasks, it is frequent for a given answer of HOTPOTQA to appear multiple times in its associated document. In our case, we assume that all the mentions of the answer in the supporting facts are related to the question. We tag as a valid solution, the start and end positions of all occurrences of the answer in the given supporting facts. where y ∈ R N are vectors containing the value 1/n i at the start, end positions of all the occurrences of the answer, 0 otherwise; n i being the number of occurrences of the answer in the context. where y i corresponds to the index of the label of the question type {yes, no, span}. We finally define the training loss as follows: where α and β are hyperparameters tuned by cross-validation. In the original HOTPOTQA dataset, the two gold paragraphs required to answer a given question come with eight distractor paragraphs. These eight distractor paragraphs, collected from Wikipedia, are selected among the of a bigram tf-idf retriever using the question as the query. As an augmentation strategy, we created additional "easier" examples by combining the two gold paragraphs with eight other paragraphs randomly selected in the dataset. For each example of the original training set, we generate an additional "easier" example. These examples are shuffled in the dataset. Our model is composed of 3 parallel heads (K = 3) each of them composed of two reading modules and one reformulation module (T = 2). We set the hidden dimension of all the GRUs to d = 80. We use M = 100 to allocate a space of R 100×160 to store the question and its reformulations. We use (b) 55 pre-trained BERT-base-cased model and adapt the implementation of Hugging Face 1 to compute embedding representations of documents and questions. We optimize the network using the Adam optimizer with an initial learning rate of 1e −4. We set α to 1 and β to 10. All these parameters have been defined through cross-validation. 
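Before presenting the results, here is a minimal PyTorch sketch of the weakly supervised span loss described above, in which every occurrence of the answer inside the supporting facts is treated as a valid target weighted by 1/(number of occurrences). This is a sketch of our reading of the description rather than the authors' code; the standard single-occurrence cross-entropy used in the ablation is recovered when only one position is marked.

```python
import torch
import torch.nn.functional as F

def soft_span_loss(start_logits, end_logits, start_positions, end_positions, seq_len):
    """Weakly supervised span loss with multiple valid answer occurrences.

    start_logits, end_logits        : (seq_len,) unnormalized scores over document tokens
    start_positions, end_positions  : lists of token indices of all answer occurrences
                                      (assumed non-empty, i.e. the answer is a span)
    """
    n_occ = len(start_positions)
    y_start = torch.zeros(seq_len)
    y_end = torch.zeros(seq_len)
    y_start[torch.as_tensor(start_positions)] = 1.0 / n_occ
    y_end[torch.as_tensor(end_positions)] = 1.0 / n_occ
    loss_start = -(y_start * F.log_softmax(start_logits, dim=0)).sum()
    loss_end = -(y_end * F.log_softmax(end_logits, dim=0)).sum()
    return loss_start + loss_end
```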
Table 1 presents the performance of our LQR-net on the distractor setting of the HOTPOTQA dataset. We compare our model against the published approaches evaluated on the HOTPOTQA dataset. We can see from this table that our model achieves strong performance on the answer prediction task. It outperforms the current best model by 3.9 points of EM and 4.1 points of F 1 score. Our model also achieves competitive performance for the evidence extraction task. The LQR-net achieves state-ofthe-art performance on the joint task improving the best published approaches by 2.9 points on EM and 3.9 points of F 1. To evaluate the impact of the different components of our model, we perform an ablation analysis. Table 2 presents the of this analysis. Impact of sequential and parallel reading: We study the contributions of the sequentiality in the model and of the multiple parallel heads. We compare our model to a similar architecture without the sequential reformulation (T = 1). We find that this sequential association of reading modules and reformulation modules is a critical component. F 1 score decreases by 6.9 points for the answer prediction task and 5.7 points for the evidence extraction task when the model does not have the capability to reformulate the question. The impact of the parallel heads is more limited than the sequentiality but still remains significant. Indeed, the configuration that uses only a single head (K = 1) stands 1 F 1 points below the best model on the joint metric. Weak supervision of the answer: In this work, we propose to label as positive all occurrences of the answer in the supporting facts. We compare this configuration to the standard approach, where only the first occurrence of the answer is labeled as positive and the others as negative. In this last configuration, the span loss corresponds to a cross-entropy loss (CE loss) between the predicted start and end probabilities and the target positions. This decreases the joint F 1 score by 0.8 points. Impact of the self-attention layer: We study the impact of the self-attention layer in the reading module. We found that this self-attention layer is an essential component in the reading process. Indeed, when we omit this layer, the F 1 score decreases by 8.3 points on the joint metric. This outlines the necessity to be able to propagate long-range information between the different paragraphs and not only in the local neighborhood of a token. Compared to previously proposed approaches, this layer does not rely on any handcrafted relationship across words. Question as a single vector: Finally, we study the case where the question representation is reduced to a vector of R 2h (M = 1). This configuration achieves the worst of our analysis, dropping the joint F 1 score by 13.3 points and highlights the importance of preserving a representation of the question as a matrix to maintain its meaning. In this part, we describe how we integrated our model into an entire reading pipeline for opendomain question answering. In this setting, no supporting documents are associated to each question, and it is required to retrieve relevant context from large text corpora such as Wikipedia. We adopt a two-stage process, similar to; , to answer multihop complex questions based on the 5 million documents of Wikipedia. First, we use a paragraph retriever to select a limited amount of relevant paragraphs from a Wikipedia dump, regarding a natural language question. 
Second, we fed our LQR model with the retrieved paragraphs to extract the predicted answer. We evaluate this approach on the open-domain configuration of the HotpotQA dataset called fullwiki. We use a standard TF-IDF based paragraph retriever to retrieve the paragraphs the most related to the question. In addition to these paragraphs, we consider as relevant their neighbors in the Wikipedia graph, i.e. the documents linked to them by hyperlinks. In our experiments, we considered as relevant, the top 10 paragraphs and their associated neighbors. Table 3 shows the of our approach compared to other published models. Although we are using a very simple retriever, only based on TF-IDF, we report strong on the open-domain question answering task of HotpotQA. The only published approach that outperforms us being a combination of sentence/paragraph retrieval based on BERT encodings. Question Reformulation and Reasoning Chains: Because our model reformulates the question in a latent space, we cannot directly visualize the text of the reformulated question. However, one way to assess the effectiveness of this reformulation is to analyze the evolution of p s and p e across the two hops of the model. We present in Figure 3 an analysis of the evolution of these probabilities on two bridge samples of the development dataset. We display the reading-based attention, that corresponds to the probabilities for each word to be part of the predicted span, computed from p s and p e in Equation 7. These examples show this attention before the first reformulation of the question and in the answering module. From these observations, we can see that the model tends to follow a natural reasoning path to answer bridge questions. Indeed, before the first reformulation module, the attentions tend to focus on the first step of reasoning. For the question "What award did the writer of Never Let Me Go novel win in 1989?", the model tends to focus on the name of the writer at the first step, before jumping the award description in the second step. Similarly, for the question "What is the population according to the 2007 population census of the city in which the National Archives and Library of Ethiopia is SemanticRetrievalMRS 37.60 49.40 23.10 58.5 12.2 35.3 MUPPET 31.07 40.42 17.00 47.71 11.76 27.62 QFE † 28.70 38.10 14.20 44.40 8.69 23.1 Baseline Model 24.68 34.36 5.28 40.98 2.54 17.73 DecompRC (b) N/A 43.26 N/A N/A N/A N/A Table 3: Performance comparison on the development set of HOTPOTQA in the fullwiki setting. We compare our model in terms of Exact Match and F 1 scores against the published models at the time of submission (November 15th). † indicates that the paper does not report the on the development set of the dataset; we display their on the test set. Figure 3: Distribution of the probabilities for each word to be part of the predicted span, before the first reformulation module and in the answering module. We display the reading-based attention computed in Equation 7 and the reading-based attention computed from p s and p e from Equation 10. In these examples, we show only the supporting facts. located? " we can see the model focusing on Addis Ababa at the first step, i.e the name of the city where the National Archives and Library of Ethiopia are located and then jumping to the population of this city in the next hop. We display more visualizations of the sequential evolution of the answer probabilities in Appendix A. 
Limitations: We manually examine one hundred errors produced by our multi-step reading architecture on the development set of HOTPOTQA. We identify three recurrent cases of model failure: the model stops at the first hop of required reasoning, the model fails at comparing two properties, and the answer does not match all the requirements of the question. We illustrate these three recurrent types of error with examples from the dataset in Appendix B. During this analysis of errors, we found that in only 3% of the cases, the answer is selected among one of the distractor paragraphs instead of a gold one. Our architecture successfully detects the relevant paragraphs regarding a question even among similar documents coming from a tf-idf retriever. Moreover, there are no errors where the model produces a binary yes/no answer instead of extracting a text span and vice versa. Identifying the type of question is not challenging for the model. This might be explained by the question's "patterns" that are generally different between binary yes/no and extractive questions. Multi-hop Machine Comprehension: The question-answering task has recently increased its popularity as a way to assess machine reading comprehension capabilities. The emergence of large scale datasets such as CNN/Daily Mail, , SQuAD or MSMARCO have encouraged the development of multiple machine reading models . These models are mainly composed of multiple attention layers that update the representation of the document conditioned by a representation of the question. However, most of this work focuses on the ability to answer questions from a single paragraph, often limited to a few sentences. Weston et al. (2015a); were the first attempts to introduce the task of multi-documents question-answering. QAngaroo is another dataset designed to evaluate multi-hop reading architectures. However, state-of-the-art architectures on this task ) tend to exploit the structure of the dataset by using the proposed candidate spans as an input of the model. Recently, different approaches have been developed for HOTPOTQA focusing on the multiple challenges of the dataset. focuses on the evidence extraction task and highlight its similarity with the extractive summarization task. Related works also focus on the interpretation of the reasoning chain with an explicit decomposition of the question (b) or a decomposition of the reasoning steps . Other models like aim at integrating a graph reasoning type of attention where the nodes are recognized by a BERT NER model over the document. Moreover, this model leverages on handcrafted relationships between tokens. Related to our approach, different papers have investigated the idea of question reformulation to build multi-hop open-domain question answering models. proposes a framework composed of iterative interaction between a document retriever and a reading model. The question reformulation is performed by a multi-step-reasoner module trained via reinforcement learning. introduces a multi-hop paragraph retriever. They propose a reformulation component integrated into a retrieving pipeline to iteratively retrieve relevant documents. These works are complementary to ours by focusing mostly on the document retrieving part of the problem while we focus on the answer extraction task, and could be combined together. Memory Networks: Memory networks are a generic type of architecture Weston et al. (2015b);; designed to iteratively collect information from memory cells using attention mechanism. 
They have been used to read from sentences, paragraphs, and knowledge bases. In these models, the answer layer uses the last value of the controller to predict the answer. Two main differences with our architecture are the representation of the controller and the associated attention mechanism. Indeed, in these models, the controller is reduced to a single vector, and the attention mechanism is based on a simple dot-product between each token of the document and the representation of the controller. We utilize a token-level attention mechanism compared to the sentence-level one, classically used in Memory Networks. Transformer Networks: The transformer architecture has been introduced by in the context of machine translation. It is mainly composed of attention layers in both the encoder and the decoder module. The transformer networks introduced the so-called multi-head attention, consisting of several attention layers running in parallel. This multi-head attention allows the model to concurrently access information from different representations of the input vector. Inspired by this work, we designed our multi-head module to read in parallel into different representations of the document while solely accumulate information into the representation of the question. In this paper, we propose a novel multi-hop reading model designed for question-answering tasks that explicitly require reasoning capabilities. We have designed our model to gather information sequentially and in parallel from a given set of paragraphs to answer a natural language question. Our neural architecture, uses a sequence of token-level attention mechanisms to extract relevant information from the paragraphs and update a latent representation of the question. Our proposed model achieves competitive on the HOTPOTQA reasoning task and performs better than the current best published approach in terms of both Exact Match and F 1 score. In addition, we show that an analysis of the sequential attentions can possibly provide human-interpretable reasoning chains. This section includes examples from the HOTPOTQA development set that illustrate the evolution of the probabilities for each word to be part of the predicted span, before the first reformulation module and in the answering module presented in Section 4.5. For each example, we show only the text of the two gold paragraphs. identifies the supporting facts in these visualizations. This section includes examples from the HOTPOTQA development set that illustrate the categories of errors presented in Section 4.5. For each example, we show only the text of the two gold paragraphs. identifies the supporting facts in these visualizations. The model stops at the first hop of required reasoning: The model fails at comparing two properties:
S1x63TEYvr
In this paper, we propose the Latent Question Reformulation Network (LQR-net), a multi-hop and parallel attentive network designed for question-answering tasks that require reasoning capabilities.
We propose a method to automatically compute the importance of features at every observation in time series, by simulating counterfactual trajectories given previous observations. We define the importance of each observation as the change in the model output caused by replacing the observation with a generated one. Our method can be applied to arbitrarily complex time series models. We compare the generated feature importance to existing methods like sensitivity analyses, feature occlusion, and other explanation baselines to show that our approach generates more precise explanations and is less sensitive to noise in the input signals. Multi-variate time series data are ubiquitous in application domains such as healthcare, finance, and others. In such high stakes applications, explaining the model outcome is crucial to build trust among end-users. Finding the features that drive the output of time series models is a challenging task due to complex non-linear temporal dependencies and cross-correlations in the data. The explainability problem is significantly exacerbated when more complex models are used. Most of the current work in time series settings focus on evaluating globally relevant features ). However, often global feature importance represents relevant features for the entire population, that may not characterize local explanations for individual samples. Therefore we focus our work on individualized feature importance in time series settings. In addition, besides identifying relevant features, we also identify the relevant time instances for those specific features, i.e., we identify the most relevant observations. To the best of our knowledge this is the first sample-specific feature importance explanation benchmark at observation level for time series models. In this work, we propose a counterfactual based method to learn the importance of every observation in a multivariate time series model. We assign importance by evaluating the expected change in model prediction had an observation been different. We generate plausible counterfactual observations based on signal history, to asses temporal changes in the underlying dynamics. The choice of the counterfactual distribution affects the quality of the explanation. By generating counterfactuals based on signal history, we ensure samples are realistic under individual dynamics, giving explanations that are more reliable compared to other ad-hoc counterfactual methods. In this section we describe our method, Feed Forward Counterfactual (FFC), for generating explanation for time series models. A feature is considered important if it affects the model output the most. In time series, the dynamics of the features also change over time, which may impact model outcome. As such it is critical to also identify the precise time points of such changes. In order to find important observations for highdimensional time series models, we propose to use a counterfactual based method. Specifically, (a) (b) FFC procedure: CounterfactualXt is generated using signal history. We look into the difference of the original output yt andŷ (i,t) where observation x (i,t) is replaced with a counterfactual. we assign importance to an observation x i,t (feature i at time t) based on its effect on the model output at time T (> t). We replace observation x i,t with a counterfactualx i,t, to evaluate this effect. Figure 1 demonstrates how importance of a feature is characterized by replacing an observation with a counterfactual. 
Multi-variate time series data is available in the form of X (n) ∈ R d×T (where d is the number of features with T observations over time) for n ∈ [N] samples. We are interested in black-box estimators that receive observations up to time t, x t ∈ R d, and generate output y t at every time point t ∈ [T]. F denotes the target black-box model we wish to explain through the proposed approach called FFC. For exposition, throughout the paper, the index n over samples has been dropped for notational clarity. We index features with subscript i. x −i,t indicates features at time t excluding feature i. The notation used for exposition work is briefly summarized in Table 1. Observed outcome of the black-box model F, at time t Generative Model and Estimator Gi: Latent encoding of history up to t Table 1: Notation used in the paper. We assign importance score to each observation x i,t for t ∈ [T] and i ∈ [d], following the definition: Definition 1. Feature Importance: The importance of the observation i at time t, denoted by Imp(i, t) is defined as E p(xi,t|X0:t−1) [|F(X 0:t) − F(X 0:t−1, x −i,t,x i,t)|], where | · | denotes the absolute value andx i,t is the counterfactual sample. That is, the importance of an observation for feature i at time t is defined as the change in model output when the observation is replaced by a generated counterfactual. The counterfactual observation can come from any distribution, however the quality of the counterfactual random variable directly affects the quality of the explanation. We generate the counterfactual sample conditioned on signal history up to time t by sampling from the distribution p(x i,t |X 0:t−1). Using a conditional generator guarantees that our counterfactuals are sampled not only within domain but also specifically likely under the individual sample X (n), as opposed to having a generator that models data across population. Conditioning on the history also allows us to learn the dynamics of the signal and therefore generate a plausible counterfactual given the past observations. p(x t |X 0:t−1) represents the distribution at time t, if there were no change in the dynamics of the signals. The counterfactualx i,t is sampled from the marginal distribution p(bx i,t |X 0:t−1), obtained from p(x t |X 0:t−1). Let F(X 0:t−1, x −i,t,x i,t) be the output of the model at time T, when x i,t is replaced by the generated counterfactualx i,t. We estimate feature importance Imp(i, t) as E p(xi,t|X0:t−1) [|F(X 0:t) − F(X 0:t−1, x −i,t,x i,t)|], summarized in figure 2(b). Our proposed method has the following compelling properties in explaining the estimator F: Time Importance (TI) For every time series, highlighting relevant time events for different features is important for actionability of the explanations. For instance in a clinical setting, just knowing a feature like heart rate is relevant, is not sufficient to intervene -it is also important to know when a deterioration had happened. With FFC, the most eventful time instances can be obtained as: We can thus rank time instances in order of importance. That is, time t 1 t 2, if Feature Importance (FI) At any time instance t, our method assigns importance to every feature of x i,t. The magnitude of our importance function reflects relative importance. Comparing the importance values across features gives the flexibility to report a subset of important features at each time point t and also reflects the correlation between various features of the time series. 
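As a concrete illustration of Definition 1, the following is a minimal sketch of the Monte Carlo importance computation. It assumes a black-box model `F` that maps a d×t history to a scalar output at the last time step, and a conditional generator `G` with a `sample` method that draws x_t given X_{0:t-1}; both interfaces are assumptions made for illustration, and the actual generator is the recurrent latent-variable model described in the next section.

```python
import numpy as np

def ffc_importance(F, G, X, n_samples=10):
    """Feed Forward Counterfactual importance for every observation x_{i,t}.

    F : black-box model, F(X[:, :t+1]) -> model output at time t
    G : conditional generator, G.sample(X[:, :t]) -> d-dimensional draw from p(x_t | X_{0:t-1})
    X : (d, T) multivariate time series for one sample
    Returns a (d, T) matrix of Monte Carlo estimates of Imp(i, t).
    """
    d, T = X.shape
    importance = np.zeros((d, T))
    for t in range(1, T):                           # start at t=1 so the generator has history
        y_t = F(X[:, :t + 1])                       # original model output at time t
        for _ in range(n_samples):
            x_hat = G.sample(X[:, :t])              # counterfactual observation for time t
            for i in range(d):
                X_cf = X[:, :t + 1].copy()
                X_cf[i, t] = x_hat[i]               # replace only feature i at time t
                importance[i, t] += abs(y_t - F(X_cf)) / n_samples
    return importance
```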
We approximate the conditional distribution of p(x t |X 0:t−1) using a recurrent latent variable generator model G, introduced in. The architecture we use is provided in Figure 2(a). The conditional generator G models p(x t |z t−1) where z t−1 ∈ R k is the latent representation of history of the time series up to time t. The latent representation is a continuous random variable, modeling the distribution parameters. We only use past information in the time series to reflect temporal dependencies. Using the recurrent structure allows us to model a non-stationary generative model that can also handle varying length of observations. Implementation details of the generator are in the Appendix. Our counterfactuals are not derived by looking at future values which could be done for reliable imputation. Counterfactuals should represent the past dynamics. Note that our derived feature importance is limited by the quality of imputation that may have been utilized by the black-box risk predictor. For experimental evaluation on the effect of generator specifications on counterfactuals and the quality of explanations, see Section 4.1. The proposed procedure is summarized in Algorithm 1. We assume that we are given a trained block box model F, and the data (without labels) it had been trained on. Using the training data, we first train the generator that generates the conditional distribution, (denoted by G). In our implementation we model x as a multivariate Gaussian with full covariance to model all existing correlation between features. The counterfactualx i,t is then sampled from G and passed to the black-box model to evaluate the effect on the black-box outcome. A common method of explaining model performance, in time-series deep learning, is via visualization of activations of latent layers (; ;) Return Importance_M atrix sensitivity analysis . Understanding latent representations, sensitivity and its relationship to overall model behavior is useful for model debugging. However, these but are too refined to be useful to the end users like clinicians. Attention models (; ;) are the most commonly known explanation mechanisms for sequential data. However, because of the complex mappings to latent space in recurrent models, attention weights cannot be directly attributed to individual observations of the time series . To resolve this issue to some extent, propose an attention model for mortality prediction of ICU patients based on clinical visits. However attention weights may not be consistent as explanations . In vision, prior works tackle explainability from the counterfactual perspective, finding regions of the image that affect model prediction the most. assumes higher importance for inputs that when replaced by an uninformative reference value, maximally change the classifier output. A criticism to such methods is that they may generate out-of-distribution counterfactuals, leading to unreliable explanations. address this issue for images using conditional generative models for inpainting regions with realistic counterfactuals. Evaluating sample based feature importance remains largely unstudied for time series models. While more widely studied for image classification, these methods cannot be directly extended to time series models due to complex time-series dynamics. Most efforts in this domain focus on population level feature importance . is one of the few methods addressing sample based feature importance and use a method similar to , called "feature occlusion". 
They replace each time series observation by a sample from uniform noise to evaluate its effect on model outcome to attribute feature importance. We argue that carefully choosing the counterfactual selection policy is necessary for derive reliable importances. Specifically, replacing observations with noisy out-of-domain samples can lead to arbitrary changes to model output that are not reflective of systematic behavior in the domain. Even if an observation is sampled from the domain distribution, it does not characterize temporal dynamics and dependencies well enough, potentially highlighting features that only reflect global model behavior, as opposed to sample specific feature importance. We therefore model the data-distribution in order to generate reliable counterfactuals. We demonstrate the implications of the choice of the generator (and hence the counterfactuals) on the quality of explanation. We evaluate our explainability method for finding important features in time series on 2 simulated datasets and 2 real datasets. Our goal is two-fold a) comparison to existing feature importance baselines in time series and b) evaluating the choice of generators on the quality of counterfactuals and explanations. We compare to existing feature importance baselines described below: 1. Feature Occlusion (FO) : Method introduced in. This method is an ad-hoc approach for generating counterfactuals. When replacing x i,t with a random sample from the uniform distribution, the change in model output defines the importance for x i,t. We augment the method introduced in by sampling counterfactuals from the bootstrapped distribution over each feature. This avoids generating out-of-distribution samples. 3. Sensitivity Analysis (SA): This method evaluates the sensitivity of the output to every observation, by taking the derivative of y t with respect to x i,t, at every time point. 4. LIME : One of the most commonly used explainabilty methods that assigns local importance to features. Although LIME does not assign temporal importance, for this baseline, we use LIME at every time point to generate feature importances. Evaluating the quality of explanations is challenging due to the lack of a gold standard/ground truth for the explanations. Additionally, explanations are reflective of model behavior, therefore such evaluations are tightly linked to the reliability of the model itself. Therefore we created the simulated environment in order to test our method. In this experiment, we simulate a time series data such that only one feature determines the outcome. Specifically, the outcome (label) changes to 1 as soon as a spike is observed in the relevant feature. We keep the task fairly simple for two main reasons: 1) to ensure that the black-box classifier can indeed learn the right relation between the important feature and the outcome, which allows us to focus on evaluating the quality of the explanations without worrying about the quality of the classifier. 2) to have a gold standard for the explanations since the exact important event predictive of the outcome are known. We expect the explanations to assign importance only to the one relevant feature, at the exact time of spike, even in the presence of spikes in other non-relevant features. To simulate these data, we generate d = 3 (independent) sequences as a standard non-linear auto-regressive moving average (NARMA) time series of the form:, where the order is 2 and u ∼ Normal(0, 0.01). 
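The baseline methods above can be sketched as follows; the uniform-noise range, the tensor shapes, and the output indexing are placeholders rather than the exact implementations used in the experiments.

```python
import numpy as np
import torch

def feature_occlusion(F, X, i, t, low=-3.0, high=3.0):
    """FO: replace x_{i,t} with uniform noise (the range here is a placeholder)."""
    x_cf = X[:, :t + 1].copy()
    x_cf[i, t] = np.random.uniform(low, high)
    return abs(F(x_cf) - F(X[:, :t + 1]))

def augmented_feature_occlusion(F, X_train, X, i, t):
    """AFO: replace x_{i,t} with a draw from the bootstrapped distribution of
    feature i over the training data (X_train has shape (N, d, T))."""
    x_cf = X[:, :t + 1].copy()
    x_cf[i, t] = np.random.choice(X_train[:, i, :].ravel())
    return abs(F(x_cf) - F(X[:, :t + 1]))

def sensitivity_analysis(model, X, i, t):
    """SA: |d y_t / d x_{i,t}| for a differentiable model (torch.nn.Module);
    the output indexing is illustrative."""
    x = torch.tensor(X[:, :t + 1], dtype=torch.float32, requires_grad=True)
    y = model(x.unsqueeze(0))[..., -1].sum()
    y.backward()
    return x.grad[i, t].abs().item()
```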
We add linear trends to the features and introduce random spikes over time for every feature. Note that since spikes are not correlated over time, no of the generators (used in FFC, AFO, FO) will learn to predict it. The important feature in this setting is feature 1. The complete procedure is described in Appendix A.2.1. We train an RNN-based black-box model on this data, ing in AUC= 0.99 on the test set. Figure 7 demonstrates explanations of each of the compared approaches on simulated data for 2 test samples. As shown in Figure 7 (a), Sensitivity analysis does not pick up on the importance of the spike. Feature occlusion gives false positive importance to spikes that happen in non-important signals as well as the important one. Augmented feature occlusion resolves this problem since it samples the counterfactuals from the data distribution, however, it generates noisier as it samples from the bootstrap distribution. The proposed method (FFC) only assigns importance to the first feature at the time of spike. Hence, FFC generates fewer false relevance scores. Note that all baseline methods provide an importance for evry sample at every time point. The true explanation should highlight feature 1 at time points of spike. Using this ground truth, we evaluate the AUROC and AUPRC of the generated explanations denoted by (exp). II, we also show in the third column that the log-probabilities of our counterfactuals are higher under the true distribution. The first simulation does not necessarily evaluate feature importance under complex state dynamics as is common in applications. In this simulation, we create a dataset with complex dynamics with known ground truth explanations. The dataset consists of multivariate time series signals with 3 features. A Hidden Markov Model with 2 latent states, with linear transition matrix and multivariate normal emission probabilities is used to simulate observations. The the outcome y is a random variable, which, in state 1, is only affected by feature 1 and in state 2, only affected by feature 2. Also, we add non-stationarity to the time series by modeling the state transition probabilities as a function of time. The ground truth explanation for output at time T is the observation x i,t where i is the feature that drives the output in the current state and t indicates when feature i became important. In a time series setting, a full explanation for outcome at t = T should include the most important feature variable as well as the time point of importance (here state change). Figure 4 demonstrates assigned importance for a time series sample. The shaded regions indicate the top 5 important observations (x i,t) for each method, the color indicating the corresponding feature i. AFO, FO and FFC are able to learn the state dynamics and are able to find the important feature of each state. However, the top importance values in AFO and FO do not correspond to the important time points. Only in FFC, the top important observations are indicative of state changes. Table 2 shows the performance compared to ground-truth explanations for this data. As mentioned earlier, the quality of explanations rely on the quality of the counterfactuals. The counterfactuals should reflect the underlying latent dynamics for an individual sample. Counterfactuals under the marginal (as used by AFO) need not be likely for a specific sample. 
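The HMM-based simulation above (Simulated Data II) can be generated with a sketch like the following. The structure follows the description (two latent states, three features, multivariate normal emissions, time-dependent transitions, and a label driven by one feature per state), but the specific means, covariance, and transition schedule are illustrative values, not the ones used in the experiments.

```python
import numpy as np

def simulate_hmm_data(T=100, seed=0):
    """Two hidden states, three features; in state 0 only feature 0 drives the
    label, in state 1 only feature 1 does. Transition probabilities drift with
    time to make the series non-stationary."""
    rng = np.random.default_rng(seed)
    means = np.array([[0.1, 1.6, 0.5], [-0.1, -0.4, -1.5]])   # per-state emission means
    cov = 0.8 * np.eye(3)
    X, y, states = np.zeros((3, T)), np.zeros(T), np.zeros(T, dtype=int)
    s = rng.integers(2)
    for t in range(T):
        p_switch = 0.1 + 0.4 * t / T              # time-dependent transition probability
        if rng.random() < p_switch:
            s = 1 - s
        states[t] = s
        X[:, t] = rng.multivariate_normal(means[s], cov)
        logit = X[0, t] if s == 0 else X[1, t]    # only one feature matters in each state
        y[t] = rng.random() < 1.0 / (1.0 + np.exp(-logit))
    return X, y, states
```

With data of this form, the quality of a generated counterfactual can be scored by its log-probability under the true conditional distribution p*(x t |X 0:t−1), which is the comparison reported in Table 2.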
The conditional distribution we use, on the other hand, models the underlying characteristic of an individual sample, while the marginal is an average over the population. Counterfactuals unlikely under an individual patient's dynamics can in inaccurate importance assignments since they can potentially overestimate the change in model outcome significantly. We demonstrate this by evaluating the log probability of the counterfactual under the true generator distribution p * (x t |X 0:t−1). Results are summarized in Table 2, Column 3. Since we simulate data using an HMM, we can analytically derive the distribution p * (x i,t |X 0:t−1). Details of the derivation are included in Appendix A.2.1. Following the procedure in Algorithm 1, we train a conditional generators for non-static time series features. We compare across all four existing methods by visualizing importance scores over time. Figure 5 shows an example trajectory of a patient and the predicted outcome. We plot the importance score over time for top 3 signals, selected by each method. Shaded regions in bottom four rows indicate the most important observations, color representing the feature. As shown in Figure 5, counterfactual based methods mostly agree and pick the same signals as important features. We further evaluate this by looking into accordance scores among methods, indicating the percentage of agreement. This analysis is provided in the Appendix A.3, and the heat map in Figure 10 demonstrates the average score across test samples. However, the methods don't agree on the exact time importance. As we observe in Figure 5 and other patient trajectories, FFC assigns importance to observations at the precise times of signal change. This is exactly as expected from the method. The FFC counterfactual is conditioned on patient history and thus the counterfactual represents an observation assuming a patient had not change state. Since evaluation of explanations can be subjective, we also use intervention information present in patient records to evaluate clinical applicability across baselines. Clinicians intervene with a medication or procedure when there is a notable, adverse change in patient trajectory. Therefore, picking up the most relevant features before an intervention onset is indicative of clinical validity of the method. While we cannot directly associate an intervention with a specific cause (observation), we look at the overall distribution of signals considered important by each of the methods, prior to intervention onset. Figure 6 shows these histograms for a number of interventions. We see consistent This experiment evaluates the utility of our method for attributing importance to GHG tracers across different locations in California. The GHG data consists of 15 time series signals from 2921 grid cells. A target time series is a synthetic signal of GHG concentrations. We use an RNN model to estimate GHG concentrations using all tracers. Evaluating which tracers are most useful in reconstructing this synthetic signal can be posed as a feature importance problem for weather modeling over time. In order to quantitatively evaluate the proposed method on real data, we evaluate how well the method performs at selecting globally relevant methods as a proxy. We aggregate the importance of all features over time (and training samples) and retrain the black-box by i) removing top 10 relevant features as indicated by each method ii) using top 3 relevant features only. 
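The global-relevance ablation just described can be sketched as follows. `train_fn` and `evaluate_fn` stand for hypothetical wrappers around black-box training and AUROC evaluation, and the (N, d, T) importance layout is an assumption about how per-observation scores are stored.

```python
import numpy as np

def global_relevance_ablation(train_fn, evaluate_fn, X_train, y_train, X_test, y_test,
                              importance, drop_k=10, keep_k=3):
    """Aggregate per-observation importances into a global feature ranking, then
    retrain the black-box (i) without the drop_k most relevant features and
    (ii) with only the keep_k most relevant features."""
    order = np.argsort(importance.sum(axis=(0, 2)))[::-1]        # features, most relevant first
    auc_without_top = evaluate_fn(train_fn(X_train[:, order[drop_k:]], y_train),
                                  X_test[:, order[drop_k:]], y_test)
    auc_top_only = evaluate_fn(train_fn(X_train[:, order[:keep_k]], y_train),
                               X_test[:, order[:keep_k]], y_test)
    return auc_without_top, auc_top_only
```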
The performance summary is provided in Table 3 suggesting that among methods that derive instance wise feature importance over time, FFC also generates reasonable global relevance of features. Results for both MIMIC-III and GHG datasets are summarized in We additionally evaluate the quality of the proposed FFC method using the randomization tests proposed as'Sanity Checks' in. Two randomization tests are designed to test for sensitivity of the explanations to i) the black-box model parameters using a model parameter randomization test, and ii) sensitivity to data labels using a using a data randomization test. We conduct this evaluation for Simulation Data II. This test evaluates how different explanations are when the black-box model is trained on permuted labels (breaking the correlation between features and output label). If explanations truly rely on the output labels, as suggested in our definition, then the explanation quality should differ significantly when a model trains on permuted labels. We evaluate the drop in the AUROC and AUPRC of the generated explanations compared to the ground truth. This test evaluates how different explanation quality is when the parameters of the model are arbitrarily shuffled. Significant differences in generated explanations suggests the proposed method is sensitive to black-box model parameters. , these tests are conducted for saliency map methods for images by evaluating the distance between saliency maps for different randomizations. The are included for Simulated Data II, measured with AUROC and AUPRC as ground-truth explanations are available. The of both tests are included in Table 4. They indicate the drops in explanation performance for both randomization tests. The performance of the model used for model randomization test drops to 0.52 AUROC as opposed to 0.95 for the original trained model on this simulated task (Simulation Data II). For data randomization, performance of the model drops to 0.62 from 0.95 in terms of AUROC. AUROCs and AUPRCs for FFC drop the most, suggesting the FFC explanation method is sensitive to perturbations in output labels (as tested by the data randomization test) and to randomization in model parameters. Significant deterioration compared to explanation performance in Table 2 (for Simulation Data II) indicates that the proposed method passes the sanity checks. We propose a new definition for obtaining sample-based feature importance for high-dimensional time series data. We evaluate the importance of each feature at every time point, to locate highly important observations. We define important observations as those that cause the biggest change in model output had they been different from the actual observation. This counterfactual observation is generated by modeling the conditional distribution of the underlying data dynamics. We propose a generative model to sample such counterfactuals. We evaluate and compare the proposed definition and algorithm to several existing approaches. We show that our method is better at localizing important observations over time. This is one of the first methods that provides individual feature importance over time. Future extension to this work will include analysis on real datasets annotated with feature importance explanations. The method will also be extended to evaluate change in risk based on most relevant subsets of observations. 
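The two randomization tests described above can be sketched as follows. `train_fn`, `explain_fn`, and `score_fn` are hypothetical wrappers around black-box training, explanation generation, and AUROC/AUPRC scoring, and shuffling each parameter tensor is only one possible way to randomize the model.

```python
import numpy as np
import torch

def data_randomization_test(train_fn, explain_fn, score_fn, X, y, ground_truth):
    """Retrain the black-box on permuted labels and re-score explanations against
    the ground truth; a large drop means the explainer depends on the labels."""
    randomized_model = train_fn(X, np.random.permutation(y))
    return score_fn(explain_fn(randomized_model, X), ground_truth)

def model_randomization_test(model, explain_fn, score_fn, X, ground_truth):
    """Randomize the trained model's parameters and re-score explanations; a large
    drop means the explainer is sensitive to the model parameters."""
    with torch.no_grad():
        for p in model.parameters():
            p.copy_(p.view(-1)[torch.randperm(p.numel())].view_as(p))
    return score_fn(explain_fn(model, X), ground_truth)
```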
A.1 SIMULATED DATA I To simulate these data, we generate d = 3 (independent) sequences as a standard non-linear auto-regressive moving average (NARMA) time series, with a linear trend added to features 1 and 2: x(t + 1) = 0.5 x(t) + 0.5 x(t) Σ_{i=0}^{l−1} x(t − i) + 1.5 u(t − (l − 1)) u(t) + 0.5 + α_d t for every time step t, with α_d > 0 (0.065 for feature 2 and 0.003 for feature 1), where the order l = 2 and u ∼ Normal(0, 0.03). We add spikes to each sample (uniformly at random over time) and for every feature d following the procedure below, where κ > 0 indicates the additive spike. The label y_t = 1 ∀t > t_1, where t_1 is the time of the first spike in feature 1, i.e. the label changes to 1 when a spike is encountered in the first feature and is 0 otherwise. We sample our time series using the python TimeSynth package. Number of samples generated: 10000 (80%, 20% split). The output y_t at every step is assigned using the logit in Eq. 3. Depending on the hidden state at time t, only one of the features contributes to the output and is deemed influential to the output. The true conditional distribution can be derived using the forward algorithm as follows: p(x_t |X_{0:t−1}) = Σ_{s_t} p(x_t |s_t) p(s_t |X_{0:t−1}), where p(s_t |X_{0:t−1}) = Σ_{s_{t−1}} p(s_t |s_{t−1}) p(s_{t−1} |X_{0:t−1}) and p(s_{t−1} |X_{0:t−1}) is estimated using the forward algorithm. Our generator G_i is trained using an RNN (GRU). We model the latent state z_t with a multivariate Gaussian with diagonal covariance and the observations with a multivariate Gaussian with full covariance. The counterfactual for observation i at time t can now be sampled by marginalizing over the other features at time t, i.e. x̃_{i,t} ∼ ∫ p(x_t |X_{0:t−1}) dx_{−i,t}. Feature selection and data processing: For this experiment, we select adult ICU admission data from the MIMIC dataset. We use patients' static features, vital measurements, and lab results for the analysis. The task is to predict 48-hour mortality based on 48 hours of clinical data; therefore we remove samples with less than 48 hours of data. Parameter Settings for conditional Generator: The recurrent network with the specifications shown in Table 8 learns a hidden latent vector h_t representing the history. h_t is then concatenated with x_{−i,t} and fed into a non-linear 1-layer MLP to model the conditional distribution p(x_{i,t} |X_{0:t−1}). Additional importance plots are provided in Figure 9. Optimizer: Adam (learning rate = 0.0001, β_1 = 0.9, β_2 = 0.999, weight decay = 0). Accordance testing: For this test we look into how much different baselines agree on important feature assignment. As we observed from the experiments, counterfactual methods mostly agree on the most important features for individual samples. We define the accordance score between 2 methods as the percentage of the top n signals both identified as important. A score of 80 means that, on average over the test data, 80% of the assignments were similar. This is depicted in Figure 10. In this section we compare the run-time across multiple baselines. Table 9 shows the inference runtime for all the baseline methods on a machine with a Quadro 400 GPU and an Intel(R) Xeon(R) E5-1620 v4 @ 3.50GHz CPU. The runtime for the counterfactual approaches (FFC, FO and AFO) depends only on the length of the time series. This is also the case for FFC, since the conditional generator models the joint distribution of all features. This is an advantage over approaches like LIME, whose runtime depends both on the length of the signal and on the number of features. Overall, FFC performs reasonably compared to ad-hoc counterfactual approaches, since inference on the RNN conditional generator is efficient.
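The forward-algorithm derivation above translates directly into code. The sketch below assumes a fixed transition matrix A and Gaussian emissions N(means[s], cov); the time-dependent transitions used in the actual simulation would replace A with A(t), and the function names are illustrative.

```python
import numpy as np
from scipy.stats import multivariate_normal

def true_conditional_logprob(x_t, X_past, pi, A, means, cov):
    """log p*(x_t | X_{0:t-1}) for an HMM with initial distribution pi, transition
    matrix A, and Gaussian emissions; used to score how likely a counterfactual is
    under the true dynamics. X_past has shape (d, t)."""
    K = len(pi)
    emis = lambda x: np.array([multivariate_normal.pdf(x, means[s], cov) for s in range(K)])
    alpha = pi * emis(X_past[:, 0])
    alpha /= alpha.sum()                                  # p(s_0 | x_0)
    for u in range(1, X_past.shape[1]):                   # forward algorithm over the history
        alpha = (alpha @ A) * emis(X_past[:, u])
        alpha /= alpha.sum()                              # p(s_u | X_{0:u})
    predictive = alpha @ A                                # p(s_t | X_{0:t-1})
    density = float(predictive @ emis(x_t))               # sum_s p(s_t | X_{0:t-1}) p(x_t | s_t)
    return np.log(density)
```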
Efficient inference is one of the reasons the RNN generator model is used to approximate the conditional distribution. Table 9: Run-time for the simulated data and MIMIC experiments. Parameter Settings for Generator: The settings are provided in Table 10. Figure 11 shows the training loss of the black-box model whose feature importance results are presented in Section 4.4.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HygDF1rYDB
Explaining Multivariate Time Series Models by finding important observations in time using Counterfactuals
This paper addresses unsupervised domain adaptation, the setting where labeled training data is available on a source domain, but the goal is to have good performance on a target domain with only unlabeled data. Like much of previous work, we seek to align the learned representations of the source and target domains while preserving discriminability. The way we accomplish alignment is by learning to perform auxiliary self-supervised task(s) on both domains simultaneously. Each self-supervised task brings the two domains closer together along the direction relevant to that task. Training this jointly with the main task classifier on the source domain is shown to successfully generalize to the unlabeled target domain. The presented objective is straightforward to implement and easy to optimize. We achieve state-of-the-art on four out of seven standard benchmarks, and competitive on segmentation adaptation. We also demonstrate that our method composes well with another popular pixel-level adaptation method.
[ 0, 0, 0, 0, 0, 0, 1, 0 ]
S1lF8xHYwS
We use self-supervision on both domain to align them for unsupervised domain adaptation.
We introduce simple, efficient algorithms for computing a MinHash of a probability distribution, suitable for both sparse and dense data, with equivalent running times to the state of the art for both cases. The collision probability of these algorithms is a new measure of the similarity of positive vectors which we investigate in detail. We describe the sense in which this collision probability is optimal for any Locality Sensitive Hash based on sampling. We argue that this similarity measure is more useful for probability distributions than the similarity pursued by other algorithms for weighted MinHash, and is the natural generalization of the Jaccard index. MinHashing BID0 is a popular Locality Sensitive Hashing algorithm for clustering and retrieval on large datasets. Its extreme simplicity and efficiency, as well as its natural pairing with MapReduce and key-value datastores, have made it a basic building block in many domains, particularly document clustering BID0 BID1 and graph clustering BID2 BID3.Given a finite set U, and a uniformly random permutation π, the map X → arg min i∈X π(i) provides a representation of any subset X of U that is stable under small changes to X. If X, Y are both subsets of U the well-known Jaccard index BID4 Practically, this random permutation is generated by applying some hash function to each i with a fixed random seed, hence "MinHashing."In order to hash objects other than sets, Chum et al. BID5 introduced two algorithms for incorporating weights in the computation of MinHashes. The first algorithm associates constant global weights with the set members, suitable for idf weighting. The collision probability that is ∑ i∈X∩Y wi ∑ i∈X∪Y wi. The second algorithm computesMinHashes of vectors of positive integers, yielding a collision probability of J W (x, y) = ∑ i min (x i, y i) ∑ i max (x i, y i) Subsequent work BID6 BID8 has improved the efficiency of the second algorithm and extended it to arbitrary positive weights, while still achieving J W as the collision probability. J W is one of several generalizations of the Jaccard index to non-negative vectors. It is useful because it is monotonic with respect to the L 1 distance between x and y when they are both L 1 normalized, but it is unnatural in many ways for probability distributions. If we convert sets to binary vectors x, y, with x i, y i ∈ {0, 1}, then J W (x, y) = J(X, Y). But if we convert these vectors to probability distributions by normalizing them so that x i ∈ {0, As a consequence, switching a system from an unweighted MinHash to a MinHash based on J W will generally decrease the collision probabilities. Furthermore, J W is insensitive to important differences between probability distributions. It counts all differences on an element in the same linear scale regardless of the mass the distributions share on that element. For instance, J W ((a, b, c, 0), (a, b, 0, c)) = J W ((a + c, b), (a, b + c)). This makes it a poor choice when the ultimate goal is to measure similarity using an expression based in information-theory or likelihood where having differing support typically in the worst possible score. For a drop-in replacement for the Jaccard index that treats its input as a probability distribution, we'd like it to have the following properties. 2) Not lower than the Jaccard Index when applied to discrete uniform distributions. 3) Sensitive to changes in support, in a similar way to information-based measures. 4) Easily achievable as a collision probability. 
J W fails all but the last.1) It isn't scale invariant, J W (αx, y) ̸ = J W (x, y).2) If the vectors are normalized to make it scale invariant, the values drop below the corresponding Jaccard index (equation 1.) 3) It is insensitive to changes in support. 4) Good algorithms exist, but they are non-trivial. Existing work has thoroughly explored improvements to Chum et al.'s second algorithm, while leaving their first untouched. In this work we instead take their first algorithm as a starting point. We extend it to arbitrary positive vectors (rather than sets with constant global weights) and analyze the . In doing so, we find that the collision probability is a new generalization of the Jaccard Index to positive vectors, which we here call J P. DISPLAYFORM0 The names used here, J W and J P, are chosen to reflect how each function interprets x and y, and the conditions under which they match the original Jaccard index. J W treats a difference in magnitude the same as any other difference, so treats vectors as "weighted sets." J P is scale invariant, so any input is treated the same as a corresponding probability distribution. The primary contribution of this work is to derive and analyze J P, and to show that in many situations where the objects being hashed are probability distributions, J P is a more useful collision probability than J W.We will describe the sense in which J P is an optimal collision probability for any LSH based on sampling. We will prove that if the collision probability of a sampling algorithm exceeds J P on one pair, it must sacrifice collisions on a pair that has higher J P.We will motivate J P's utility by showing experimentally that it has a tighter relationship to the JensenShannon divergence than J W, and is more closely centered around the Jaccard index than J W. We will even show empirically that in some circumstances, it is better for retrieving documents that are similar under J W than J W itself (and consequently, sometimes better for retrieving based on L 1 -distance.) Let h: [n] → be a pseudo-random hash mapping every element 1 ≤ i ≤ n to an independent uniform random value in. Over a non-negative vector x, define DISPLAYFORM0 For brevity, we will be using the extended real number system in which 1/∞:= 0. Each term is an exponentially distributed random variable with rate x i, DISPLAYFORM1 so it follows that DISPLAYFORM2 This well known and beautiful property of exponential random variables derives from the fact that DISPLAYFORM3 Proof. Any monotonic transform on DISPLAYFORM4 will not change the arg min, so multiplying each x i by a positive α won't either. H(αx) = H(x). Thus for x i, y i > 0 DISPLAYFORM5 ). ). Consequently, Repeating this process for all i in the intersection yields DISPLAYFORM0 DISPLAYFORM1 Continuing the notational convention, we will refer to hashing algorithms that achieve J P as their pair collision probability as P-MinHashes, and algorithms that achieve J W as W-MinHashes. While J P's expression is superficially awkward, we can aid intuition by representing it in other ways. The simplest interpretation is to view it as a variant of J W. We rewrite J W allowing one input to be rescaled before computing each term, DISPLAYFORM0 j ) and choose the vector α to maximize this generalized J W. If α i x i > y i, increasing α i raises only the denominator. If α i x i < y i, increasing α i raises the numerator more than the denominator. 
So the optimal α sets α i x i = y i, and in J P.We can derive a more powerful representation by viewing the P-MinHash algorithm itself geometrically. A vector of k + 1 exponential random variables, when normalized to sum to 1, is a uniformly distributed random point on the unit k-simplex. Every point in the unit k-simplex is also a probability distribution over k+1 elements. Using these two facts we can construct the PMinHash as a function of the simplex as illustrated in FIG2.For a probability distribution x, mark the point on the unit simplex corresponding to x, (x 1, . . ., x k+1), and connect it to each of the corners of the simplex. These edges divide it into k + 1 smaller simplices that fill the unit simplex. Each internal simplex has volume proportional to the coordinate of x opposite to its unique exterior face. As a , a uniformly chosen point on the unit simplex will land in one of the sub-simplices with probability given by x. P-MinHashing is equivalent to sampling in this fashion, but holding the chosen point constant when sampling from each new distribution. The match probability is then proportional to the sum of the intersections of simplices that share an external face. This representation makes several properties obvious on small examples that we prove generally in the next section. When MinHashing is used with a key-value store, high collision probabilities are generally more efficient than low collision probabilities, because as we discuss in section VI, it is much cheaper to lower them than to raise them. For this reason, we are interested in the question of the highest collision probability that can be achieved through sampling. The constraint that the samples follow each distribution forces the collision probability to remain discriminative, but given that constraint, we would like to make it as high as possible to maximize flexibility and efficiency. Suppose for two distributions, x and y, we want to choose a joint distribution that maximizes Pr[H(x) = H(y)]. If we were concerned with only these two particular distributions in isolation, the upper bound of Pr[H(x) = H(y)] is given by the Total Variation distance, or equivalently 1 − L 1 (x, y)/2. Meeting this bound requires the probability mass where x exceeds y to be perfectly coupled with the mass where y exceeds x. Both the mass they share and the mass they do not must be perfectly aligned. Rather than just two, we want to create a joint distribution (or coupling) of all possible distributions on a given space where the collision probability for any pair is as high as possible. It is always possible to increase the collision probability of one pair at the expense of another so long as the chosen pair has not hit the Total Variation limit, so the kind of optimality we are aiming for is Pareto optimality. This requires that no collision probability be able to exceed ours everywhere; any gain on one pair must have a corresponding loss on another. This by itself would not be a very consequential bound for its retrieval performance. We really only desire high collision probabilities for items that are similar, and we would happily lower the collision probability of a dissimilar pair to increase it for a similar pair. However we are able to prove something stronger by examining the pair whose collisions must be sacrificed. To increase the collision probability for one pair above its J P, you must always sacrifice collisions on a pair with even higher J P. 
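Concretely, the P-MinHash construction of Section II amounts to a few lines of code. The sketch below stores sparse positive vectors as dicts keyed by non-negative integers and uses a seeded uniform hash; these representation choices are assumptions made for illustration.

```python
import numpy as np

def p_minhash(x, seed):
    """P-MinHash of a positive vector x (dict: index -> weight): the index whose
    exponential 'arrival time' -log(U_i)/x_i is smallest, where U_i is a uniform
    hash of i under a shared seed. Rescaling x rescales every arrival time by the
    same factor, so the hash is scale invariant."""
    u = lambda i: np.random.default_rng([seed, i]).random()
    return min(x, key=lambda i: -np.log(u(i)) / x[i])

def j_p(x, y):
    """Closed-form collision probability J_P(x, y) from Theorem II.1."""
    total = 0.0
    for i in set(x) & set(y):
        denom = sum(max(x.get(j, 0.0) / x[i], y.get(j, 0.0) / y[i])
                    for j in set(x) | set(y))
        total += 1.0 / denom
    return total

x = {0: 0.5, 1: 0.3, 2: 0.2}
y = {0: 0.5, 1: 0.2, 3: 0.3}
emp = np.mean([p_minhash(x, s) == p_minhash(y, s) for s in range(20000)])
# emp agrees with j_p(x, y) up to Monte-Carlo error.
```

Scale invariance can be checked directly by hashing x and {i: 3 * w for i, w in x.items()} with the same seed and observing that the outputs always coincide.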
To get better recall on one pair, you must always give up recall on an even better pair. The Jaccard index itself is optimal on uniform distributions, and the short proof is a model for the general case. Proof. Let Z be the symmetric difference of X and To prove the same claim on J P for all distributions, we need a few tools. We can rearrange J P to separate the two iteration indices within the max. To analyze J P we will need to refer to each term in its outer sum via subscript. DISPLAYFORM0 DISPLAYFORM1 DISPLAYFORM2 We will also use quantifiers in this subscript to indicate a partial sum, i.e. DISPLAYFORM3 Consider two linear combinations of these distributions with coefficients α and β. DISPLAYFORM0 Proof. For a given i, if every max chooses the same side, J P (x, y) i = min(x i, y i). x i /y i has both an arg max and arg min for which this is true. This gives us part 1. DISPLAYFORM1 Thus, we can form x ′, y ′ by merging the mass of x k, x l into one element and y k, y l into one element and have DISPLAYFORM2.. w n be distributions with disjoint support, and consider J P (α · w, β · w). Repeat the merging process until all elements of each w i are merged into one. This gives us.This ordering also lets us work more effectively with the z distributions we constructed in theorem II.1. This lemma contains all the algebra needed for the main proof. DISPLAYFORM3 Working first with the lower group of indices, FIG2, the green regions of x and y could be shifted to overlap more and improve the collision probability of this pair, but any modification that achieves that would worsen at least one of collision probabilities between x or y and z green (both of which have higher collision probability than the (x, y) pair.) DISPLAYFORM4 And since DISPLAYFORM5 we also know that DISPLAYFORM6 which gives us part 1. Now, continuing on to the upper group of indices, DISPLAYFORM7. By noting that the choices within the max are preserved, we conclude part 3. Finally, having bounded all indices, part 4: DISPLAYFORM8 We now have the tools to prove the optimality of J P. DISPLAYFORM9. This implies that no method of sampling from discrete distributions can have collision probabilities dominating J P. J P is Pareto optimal. Furthermore, J P (x, z) ≥ J P (x, y) and J P (y, z) ≥ J P (x, y). To exceed J P (x, y), G must sacrifice at least one pair that is closer under J P than (x, y).Proof. Let m be the number of elements i for which J P (x, y) i < min(x i, y i).In the base case where m = 0, J P (x, y) = ∑ i min(x i, y i) which cannot be improved. Assume the proposition to be proved is true ∀x, ∀y, and ∀p < m. By IV.2.1 we know that m ≤ n − 2, since at least the two endpoints of the sorted list have reached their upper bound. We proceed by induction on m. As in theorem II.1, for each a where DISPLAYFORM10. Reorder i according to the sorting of such that DISPLAYFORM11 ). Now consider a new sampling method, G. The following events are pairwise disjoint by inspection. DISPLAYFORM12 So their probabilities are constrained. DISPLAYFORM13 When these probabilities are given by J P they already sum to 1. DISPLAYFORM14 must be true. Since the two cases are symmetric, we will assume the first one: DISPLAYFORM15 and we are done. Otherwise, DISPLAYFORM16, and the terms i ≥ a must compensate for the loss on the terms i < a, so Pr[DISPLAYFORM17, so these terms have exhausted z a and cannot be increased. Using IV.3.1 and IV.3 we know that this adds at least one additional term that is fully consumed, i.e. the size of {i :, z). 
By IV.3 we know that J P (x, z a) ≥ J P (x, y) and by induction we know J P (x, z) ≥ J P (x, z a), so we conclude J P (x, z) ≥ J P (x, y) and symmetrically J P (y, z) ≥ J P (x, y). FIG5 shows the mechanism of the proof intuitively. On three element distributions, two of the elements are fully constrained, in this case the blue and red terms, so we construct our adversarial z around green. On three elements, no induction is necessary and the diagram itself proves the relationship. DISPLAYFORM18 Since J P is only Pareto optimal, we should be able to find a sampling method that exceeds it for some pairs but is below it on others. We can generalize our algorithm to construct such a method. Consider arranging the elements of the state space as the leaves of a tree. Internal nodes in the tree are given the weight of the sum of their children, and each is assigned its own exponential hash. Perform P-MinHash among the children of the root node. If the selected node is a leaf, emit it as the sample. If it is an internal node, recurse and repeat. In this generalization, our original algorithm is represented by making all elements direct children of the root. We can prioritize collisions on an index i by placing it closer to the root node than all others; in particular, the tree FIG2 ). Since i and its internal-node sibling form a two element distribution, by lemma 3 the probability of a collision on i will be min(x i, y i) for all x, y. What of J W then? Is it another Pareto optimum? It is not, it is dominated by J P. Proof. The lower bound becomes clear by rewriting J W in a similar form to J P. DISPLAYFORM19 To achieve this lower bound, we can transform the distributions by moving the "excess" mass to new elements. DISPLAYFORM20 Shifting the mass in this way has no effect on J W, but it decreases J P to equal J W. x ′ and y ′ can be expressed as linear combinations over the three sets of indices, so using lemma IV.2.2, DISPLAYFORM21 To achieve the upper bound, 1 − p, consider inverting this transformation, reallocating the p extra mass to maximize J P (x ′′, y ′′) while holding J W (x ′′, y ′′) constant. To avoid increasing J W, we must add the mass to disjoint elements, so divide the indices into two sets, X, Y. We find that if we distribute the mass proportional to the original value, J P reaches the total variation limit regardless of the choice of X, Y. Let |X|:= ∑ i∈X min(x i, y i). DISPLAYFORM22 We can express this as a linear combination of two distributions with disjoint support. DISPLAYFORM23 Since p is the total variation distance of x and y, 1 − p is the maximum collision probability that is possible between two distributions in any context, so it is the upper bound here as well. This gives us some insight into how J P and J W differ. J P ranks distributions as more similar than J W if their extra mass is on elements that both distributions share. Like 1 − J W, 1 − J P is a metric on probability distributions. Theorem IV.6. 1 − J P is a proper metric on P where P(Ω) is the space of probability distributions over a finite set Ω.Proof. Symmetry is obvious. Non-degeneracy over P follows from DISPLAYFORM24 The triangle inequality follows from being a collision prob- DISPLAYFORM25 for any distribution z. But by the union bound, DISPLAYFORM0 The algorithm we've presented so far is suitable for sparse data such as documents or graphs. It is linear in the number of non-zeros, equivalent to Ioffe 2010 BID6. 
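The relationship just established between J_W and J_P is easy to check numerically on L1-normalized vectors; the snippet below verifies J_W ≤ J_P ≤ 2 J_W / (1 + J_W) on an arbitrary example pair (the specific vectors carry no special meaning).

```python
import numpy as np

def j_w(x, y):
    return np.minimum(x, y).sum() / np.maximum(x, y).sum()

def j_p(x, y):
    shared = np.where((x > 0) & (y > 0))[0]
    return float(sum(1.0 / np.maximum(x / x[i], y / y[i]).sum() for i in shared))

x = np.array([0.5, 0.3, 0.2, 0.0])
y = np.array([0.5, 0.2, 0.0, 0.3])
jw, jp = j_w(x, y), j_p(x, y)
assert jw <= jp <= 2 * jw / (1 + jw) + 1e-12   # bounds from the theorem above
```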
On dense data (such a image feature histograms BID7) there's significant overlap in the supports of each distribution, so rehashing each element for every distribution wastes Algorithm 2: Dense and Continuous P-MinHash. "Global-Bound" A* Sampling BID9 with a fixed seed.input: sample space Ω, sigma-finite measure µ, proposal sigma-finite measure λ, finite upper bound, B:= max(µ(i)/λ(i)) shared random seed s output: Stable sample from (Ω, F, µ) DISPLAYFORM0 work. With a shared stream of sorted hashes, we expect the hash we select for each distribution to be biased towards the beginning of the stream, and closer to the beginning when the data is denser. Therefore one might expect that we could improve performance by searching only some prefix of the stream to find our sample. A* Sampling (Maddison, Tarlow, and Minka 2014 BID9) explores this idea thoroughly, and we lean on it heavily in this section. In particular, we use their "Global Bound" algorithm, and essentially just run it with a fixed random seed (algorithm 2.) We leave the proof of running time and of correctness as a sampling method to that work, and limit our discussion to the proof of the ing collision probability. (Their derivation uses Gumbel variables and maxima. We use exponential variables and minima to make the continuity with the rest of our work clear, which is achieved by a simple change of variables.)The key insight of A* Sampling is that when a (possibly infinite) stream of independent exponential random variables is ordered, the exact marginal distributions of each variable can be computed as a function of the rank and the previous variables. If the vector of sorted exponential variables is e with corresponding parameter vector x, then once e 1,..., e k−1 are all known, the distribution of e k is a truncated exponential with rate |x \ ∪ k i=1 x i | truncated from below at e k−1. The "statelessness" of exponential variables makes this truncation easy to accomplish. Simply generate the desired exponential, and add e k−1 to shift it. To change the parameters of those exponentials and find the new minimum element, only a small prefix of the list must be examined. The running time of finding the new minimum is a function of the difference between the two vectors of parameters, and has equivalent running time to rejection sampling. This gives algorithm 2 equivalent running time to the state of the art for computing a MinHash of dense data, Shrivastava 2016 BID7. Because algorithm 2 admits an unbounded list of random variables, it is also applicable to continuous distributions, as the paper BID9 describes in detail. Let's first show that algorithm 2 gives J P (µ, ν) when Ω is finite. Indeed, this construction is simply an alternative way of finding the minimum − log U i /µ i. A* sampling merely reads off the minimum of − log U i /λ i * λ i /µ i = − log U i /µ i, and similar for ν. To prove the general case for infinite Ω, we need to first define what we mean by J P (µ, ν) in that setting. One option is to replace all the summation by integrals in the formula BID1. This runs into two difficulties however: 1) A probability space Ω may not be a subset of R n. 2) Either µ or ν could be singular. Instead, we define it as a limit over increasingly finer finite partitions of Ω. More formally, Definition V.1. Assume J P (µ, ν) is defined as before when |Ω| < ∞, we define DISPLAYFORM1 where F ranges over finite partitions of the space Ω, and µ F denotes the push-forward of µ with respect to the map π: Ω → F, π(x) = Q ∈ F iff x ∈ Q. 
(µ F is simply a coarsified probability measure on the finite space F where it (tautologically) assigns probability µ(Q) to the element Q ∈ F.)First we verify that the definition above coincides with J P when Ω is finite. DISPLAYFORM2, with strict inequality if both µ, ν have nonzero masses on those two elements. Proof. By considering J P (µ, ν) as the probability that the argmin's of two lists of independent exponentials land on the same index 1 ≤ i * ≤ n, and using the fact that the minimum of two independent exponentials is an exponential with the sum of the parameters, we can couple the four argmin's arising from µ, ν, µ ′, ν ′ and conclude by inspection. The lemma shows that any partition of Ω will lead to a J P that's greater than or equal to the original J P. So the infimum is achieved with the most refined partition, namely Ω itself. J P Jensen−Shannon Divergence DISPLAYFORM3 count Fig. 3: The Jensen-Shannon divergence compared with J P and J W on pairs of normalized unigram term vectors of random web documents. JSD has a much tighter relationship with J P than J W. We show exact bounds for J W against JSD and approximate bounds for J P against JSD where DISPLAYFORM4 The curve that appears to lower bound JSD against J P is violated on 10 −7 of the pairs. The left graph shows the joint distribution of J P and J W and the bounds we prove. The right graph shows the conditional distributions of J P and J W against the (set) Jaccard index of the terms. We show the distribution of the log of their ratios against the Jaccard index and highlight the median. J P is generally centered around the Jaccard index, while J W is consistently centered below, as predicted by their behavior on uniform distributions. Finally we show that A* sampling applied to µ and ν simultaneously has a collision probability equal to J P (µ, ν) as defined above. Theorem V.3. Given two probability measures µ and ν on an arbitrary Polish space Ω, both absolutely continuous with respect to a common third measure λ, it is possible to apply A* sampling with base distribution λ, either in-order or with a hierarchical partition of Ω, to sample from µ and ν simultaneously. Further, the probability of the procedure terminating at the same point p ∈ Ω for both µ and ν is exactly J P (µ, ν).Proof. The first statement follow from the procedural definition of A* sampling described in BID9. For the second statement, since in-order A* is proven equivalent to hierarchical partition A* in BID9, we are free to choose any partition to our convenience. The natural choice is then the partition used in the definition of J P (µ, ν).More precisely, we know there is a finite partition: Precision/recall curves illustrating the typical case of retrieval using a key-value store. Each point represents outputting o independent sums of a hashes each, for a collision probability of 1 DISPLAYFORM5 DISPLAYFORM6 The cost in storage and CPU is dominated by o, so we connect these points to show the trade-offs possible at similar cost.representative of each part Q ∈ F are jointly distributed as exponentials with rate λ(Q) with common seed. Let U, V be the two coupled A* processes restricted to F. Either one of them does not terminate, or they both terminate and collide conditionally with probability J P (µ F, ν F). 
In other words, letting P(T) be the probability that both terminate at F level, and AC(U, V ; F) be the collision probability of U, V restricted to F, then AC(U, V ; DISPLAYFORM7 So AC(U, V ; F) is squeezed between J P (µ, ν)P(T) and J P (µ, ν) + ϵ. Since P(T) → 1 and ϵ → 0 under a refinement sequence F, we get in the limit DISPLAYFORM8 To determine whether the difference between J P and J W matters in practice and whether achieving J P as a collision probability is a useful goal, we computed both for a large sample of pairs of unigram term vectors of web documents. From an index of 6.6 billion documents, we selected pairs using a sum of 2 W-MinHashes to perform importance sampling. We computed several similarity scores for 100 million pairs of normalized unigram term vectors, and weighted them by the inverse of their sampling probability to simulate an unbiased sample of all non-zero pairs. The Jensen-Shannon divergence (JSD) defines the information loss that from representing two distributions using a model that is an equal mixture of them, and as such is the ideal criterion to form informationpreserving clusters of items of equal importance. Like both J W and J P it is bounded, symmetric, and monotonic in a metric distance. Due to these properties, as well as its popularity, we use it here as a basis for comparison. J P has a much tighter relationship with the JensenShannon divergence than J W as shown in figure 3. Tight bounds on JSD as a function of J W are given by J W's monotonic relationship with Total Variation, as described by BID10. Let p be the total variation distance, and d(p) = = p extends these to J W. These same bounds apply to J P as well, but J P has a much tighter relationship with what appears to be a much higher lower bound. We have approached finding this lower bound with large differential equations that we have only solved numerically, but small examples form good approximate bounds. On 2 element distributions, JSD has a direct relationship with J P, and only 1 × 10 −7 of the pairs fall below the ing curve, d(1 − J P). No pairs in our sample had JSD more than 0.0077 below it. In contrast, J W puts 7 × 10 −3 of the pairs below this curve, with the farthest point 0.16 below. We also compare both J P and J W to the Jaccard index of the set of terms, and compute a kernel density estimate of the log of their ratios. J P is generally centered around the Jaccard index, while J W is consistently centered below, as predicted by their behavior on uniform distributions. This makes P-MinHash less disruptive as a drop-in replacement for an unweighted MinHash. Parameters of the system such as the number of hashes or the length of concatenated hashes are likely to continue to function well. In the typical case of retrieval using a key-value store, performance is characterized by cheap ANDs and expensive ORs. BID11 To reduce the collision probability we can sum multiple hashes to form keys, but to raise the collision probability, we must output multiple independent keys. This lets us apply an asymmetric sigmoid to the collision probabilities, DISPLAYFORM0 with o independent outputs of a summed hashes each. Assuming that the cost of looking up hashes dominates the cost of generating them, ANDs are essentially free, while CPU and storage cost are both linear in the number of ORs. Furthermore, as a increases linearly, o must increase exponentially to keep the inflection point of the sigmoid in the same place. For instance, if the sigmoid passes through (0.5, 0.5), then o ≈ log2 a. 
This gives a significant performance advantage to algorithms with higher collision probabilities, and thus to J P over J W. Lowering the probability is much cheaper than raising it. The effect of this is demonstrated in FIG6. Unsurprisingly from the tightness of the joint distribution, PMinHash achieves better precision and recall retrieving low JSD documents for a given cost. More surprising is that it also achieves slightly better precision and recall on retrieving high J W documents when the cost is low, even though this is the task W-MinHashes are designed for. The reason for this can be seen from the upper bound, J P ≤ 2J W /(1 + J W). On items that achieve this bound, the collision probability when summing two hashes, (2x/(1+x)) 2, is similar to 4x 2 near 0 and similar to x near 1. This in effect gives it the recall of 1 hash with the precision of 2 hashes on this subset of items, and thus a better precision/recall trade-off overall. We've described a new generalization of the Jaccard index, and shown several qualities that motivate it as the natural extension to probability distributions. In particular, we proved that it is optimal on all distributions in the same sense that the Jaccard index is optimal on uniform distributions. We've demonstrated its utility by showing J P's similarity in practice to the Jensen-Shannon divergence, a popular clustering criterion. We've described two MinHashing algorithms that achieve this as their collision probability with equivalent running time to the state of the art on both sparse and dense data.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
BkOswnc5z
The minimum of a set of exponentially distributed hashes has a very useful collision probability that generalizes the Jaccard Index to probability distributions.
Recently, progress has been made towards improving relational reasoning in machine learning field. Among existing models, graph neural networks (GNNs) is one of the most effective approaches for multi-hop relational reasoning. In fact, multi-hop relational reasoning is indispensable in many natural language processing tasks such as relation extraction. In this paper, we propose to generate the parameters of graph neural networks (GP-GNNs) according to natural language sentences, which enables GNNs to process relational reasoning on unstructured text inputs. We verify GP-GNNs in relation extraction from text. Experimental on a human-annotated dataset and two distantly supervised datasets show that our model achieves significant improvements compared to the baselines. We also perform a qualitative analysis to demonstrate that our model could discover more accurate relations by multi-hop relational reasoning. Recent years, graph neural networks (GNNs) have been applied to various fields of machine learning, including node classification BID10, relation classification BID22, molecular property prediction BID6, few-shot learning BID5, and achieve promising on these tasks. These works have demonstrated GNNs' strong power to process relational reasoning on graphs. Relational reasoning aims to abstractly reason about entities/objects and their relations, which is an important part of human intelligence. Besides graphs, relational reasoning is also of great importance in many natural language processing tasks such as question answering, relation extraction, summarization, etc. Consider the example shown in Fig. 1, existing relation extraction models could easily extract the facts that Luc Besson directed a film Léon: The Professional and that the film is in English, but fail to infer the relationship between Luc Besson and English without multi-hop relational reasoning. By considering the reasoning patterns, one can discover that Luc Besson could speak English following a reasoning logic that Luc Besson directed Léon: The Professional and this film is in English indicates Luc Besson could speak English. However, most existing GNNs can only process multi-hop relational reasoning on pre-defined graphs and cannot be directly applied in natural language relational reasoning. Enabling multi-hop relational reasoning in natural languages remains an open problem. To address this issue, in this paper, we propose graph neural networks with generated parameters (GP-GNNs), to adapt graph neural networks to solve the natural language relational reasoning task. GP-GNNs first constructs a fully-connected graph with the entities in the sequence of text. After that, it employs three modules to process relational reasoning: an encoding module which enables edges to encode rich information from natural languages, a propagation module which propagates relational information among various nodes, and a classification module which makes predictions with node representations. As compared to traditional GNNs, GP-GNNs could learn edges' parameters from natural languages, extending it from performing inferring on only non-relational graphs or graphs with a limited number of edge types to unstructured inputs such as texts. In the experiments, we apply GP-GNNs to a classic natural language relational reasoning task: relation extraction from text. We carry out experiments on Wikipedia corpus aligned with Wikidata knowledge base BID25 Figure 1: An example of relation extraction from plain text. 
Given a sentence with several entities marked, we model the interaction between these entities by generating the weights of graph neural networks. Modeling the relationship between "Léon" and "English" as well as "Luc Besson" helps discover the relationship between "Luc Besson" and "English".model outperforms other state-of-the-art models on relation extraction task by considering multihop relational reasoning. We also perform a qualitative analysis which shows that our model could discover more relations by reasoning more robustly as compared to baseline models. Our main contributions are in two-fold: We extend a novel graph neural network model with generated parameters, to enable relational message-passing with rich text information, which could be applied to process relational reasoning on unstructured inputs such as natural languages. We verify our GP-GNNs in the task of relation extraction from text, which demonstrates its ability on multi-hop relational reasoning as compared to those models which extract relationships separately. Moreover, we also present three datasets, which could help future researchers compare their models in different settings. GNNs were first proposed in BID21 and are trained via the Almeida-Pineda algorithm BID1. Later the authors in BID12 replace the Almeida-Pineda algorithm with the more generic backpropagation and demonstrate its effectiveness empirically. BID6 propose to apply GNNs to molecular property prediction tasks. BID5 shows how to use GNNs to learn classifiers on image datasets in a few-shot manner. BID6 study the effectiveness of message-passing in quantum chemistry. BID4 apply message-passing on a graph constructed by coreference links to answer relational questions. There are relatively fewer papers discussing how to adapt GNNs to natural language tasks. For example, BID14 propose to apply GNNs to semantic role labeling and BID22 apply GNNs to knowledge base completion tasks. BID31 apply GNNs to relation extraction by encoding dependency trees, and De Cao et al. FORMULA2 apply GNNs to multi-hop question answering by encoding co-occurence and co-reference relationships. Although they also consider applying GNNs to natural language processing tasks, they still perform message-passing on predefined graphs. introduces a novel neural architecture to generate a graph based on the textual input and dynamically update the relationship during the learning process. In sharp contrast, this paper focuses on extracting relations from real-world relation datasets. Relational reasoning has been explored in various fields. For example, BID20 propose a simple neural network to reason the relationship of objects in a picture, BID26 build up a scene graph according to an image, and BID9 model the interaction of physical objects. In this paper, we focus on the relational reasoning in natural language domain. Existing works BID27 BID13 DISPLAYFORM0 Figure 2: Overall architecture: the encoding module takes a sequence of vector representations as inputs, and output a transition matrix as output; the propagation module propagates the hidden states from nodes to its neighbours with the generated transition matrix; the classification module provides task-related predictions according to nodes representations.the pair-wise relationship between entities in certain situations. For example, BID27 ) is one of the earliest works that applies a simple CNN to this task, and BID28 further extends it with piece-wise max-pooling. 
BID16 propose a multi-window version of CNN for relation extraction. BID13 study an attention mechanism for relation extraction tasks. BID18 predict n-ary relations of entities in different sentences with Graph LSTMs. BID11 treat relations as latent variables which are capable of inducing the relations without any supervision signals. BID29 show that the relation path has an important role in relation extraction. BID15 show the effectiveness of LSTMs BID7 in relation extraction. BID2 proposed a walk-based model to do relation extraction. The most related work is BID24, where the proposed model incorporates contextual relations with attention mechanism when predicting the relation of a target entity pair. The drawback of existing approaches is that they could not make full use of the multi-hop inference patterns among multiple entity pairs and their relations within the sentence. We first define the task of natural language relational reasoning. Given a sequence of text with m entities, it aims to reason on both the text and entities and make a prediction of the labels of the entities or entity pairs. In this section, we will introduce the general framework of GP-GNNs. GP-GNNs first build a fullyconnected graph G = (V, E), where V is the set of entities, and each edge DISPLAYFORM0 l−1 extracted from the text. After that, GP-GNNs employ three modules including encoding module, propagation module and classification module to proceed relational reasoning, as shown in Fig. 2. The encoding module converts sequences into transition matrices corresponding to edges, i.e. the parameters of the propagation module, by DISPLAYFORM0 where f (·) could be any model that could encode sequential data, such as LSTMs, GRUs, CNNs, E(·) indicates an embedding function, and θ n e denotes the parameters of the encoding module of n-th layer. The propagation module learns representations for nodes layer by layer. The initial embeddings of nodes, i.e. the representations of layer 0, are task-related, which could be embeddings that encode features of nodes or just one-hot embeddings. Given representations of layer n, the representations of layer n + 1 are calculated by DISPLAYFORM0 where N (v i) denotes the neighbours of node v i in graph G and σ(·) denotes non-linear activation function. Generally, the classification module takes node representations as inputs and outputs predictions. Therefore, the loss of GP-GNNs could be calculated as DISPLAYFORM0 where θ c denotes the parameters of the classification module, K is the number of layers in propagation module and Y denotes the ground truth label. The parameters in GP-GNNs are trained by gradient descent methods. Relation extraction from text is a classic natural language relational reasoning task. Given a sentence s = (x 0, x 1, . . ., x l−1), a set of relations R and a set of entities in this sentence V s = {v 1, v 2, . . ., v |Vs|}, where each v i consists of one or a sequence of tokens, relation extraction from text is to identify the pairwise relationship r vi,vj ∈ R between each entity pair (v i, v j).In this section, we will introduce how to apply GP-GNNs to relation extraction. 
To encode the context of entity pairs (or edges in the graph), we first concatenate the position embeddings with word embeddings in the sentence: DISPLAYFORM0 where x t denotes the word embedding of word x t and p i,j t denotes the position embedding of word position t relative to the entity pair's position i, j (Details of these two embeddings are introduced in the next two paragraphs.) After that, we feed the representations of entity pairs into encoder f (·) which contains a bi-directional LSTM and a multi-layer perceptron: DISPLAYFORM1 where n denotes the index of layer 1, [·] means reshaping a vector as a matrix, BiLSTM encodes a sequence by concatenating tail hidden states of the forward LSTM and head hidden states of the backward LSTM together and MLP denotes a multi-layer perceptron with non-linear activation σ. Word Representations We first map each token x t of sentence {x 0, x 1, . . ., x l−1} to a kdimensional embedding vector x t using a word embedding matrix W e ∈ R |V |×dw, where |V | is the size of the vocabulary. Throughout this paper, we stick to 50-dimensional GloVe embeddings pre-trained on a 6 billion corpus BID19.Position Embedding In this work, we consider a simple entity marking scheme 2: we mark each token in the sentence as either belonging to the first entity v i, the second entity v j or to neither of those. Each position marker is also mapped to a d p -dimensional vector by a position embedding matrix P ∈ R 3×dp. We use notation p i,j t to represent the position embedding for x t corresponding to entity pair (v i, v j). Next, we use Eq. to propagate information among nodes where the initial embeddings of nodes and number of layers are further specified as follows. The Initial Embeddings of Nodes Suppose we are focusing on extracting the relationship between entity v i and entity v j, the initial embeddings of them are annotated as h vi = a subject, and h vj = a object, while the initial embeddings of other entities are set to all zeros. We set special values for the head and tail entity's initial embeddings as a kind of "flag" messages which we expect to be passed through propagation. Annotators a subject and a object could also carry the prior knowledge about subject entity and object entity. In our experiments, we generalize the idea of Gated Graph Neural Networks BID12 by setting a subject = [1; 0] and a object = [0; 1] 3.Number of Layers In general graphs, the number of layers K is chosen to be of the order of the graph diameter so that all nodes obtain information from the entire graph. In our context, however, since the graph is densely connected, the depth is interpreted simply as giving the model more expressive power. We treat K as a hyper-parameter, the effectiveness of which will be discussed in detail (Sect. 5.4). The output module takes the embeddings of the target entity pair (v i, v j) as input, which are first converted by: DISPLAYFORM0 where represents element-wise multiplication. This could be used for classification: DISPLAYFORM1 where r vi,vj ∈ R, and MLP denotes a multi-layer perceptron module. We use cross entropy here as the classification loss DISPLAYFORM2 where r vi,vj denotes the relation label for entity pair (v i, v j) and S denotes the whole corpus. In practice, we stack the embeddings for every target entity pairs together to infer the underlying relationship between each pair of entities. We use PyTorch BID17 to implement our models. 
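To make the three modules concrete, the following is a minimal PyTorch sketch of a GP-GNN layer as described above: a BiLSTM-plus-MLP encoding module that maps the embedded token sequence of each entity pair to a transition matrix, a propagation module that sums matrix-transformed neighbour states over the fully-connected entity graph, and a classification module that scores an entity pair from the element-wise product of its node representations. All class names, dimensions, and simplifications (taking the last BiLSTM output as the sequence summary, applying the non-linearity per message, and classifying from a single layer's representations rather than the concatenation across layers) are ours for illustration and do not reflect the authors' released implementation.

```python
import torch
import torch.nn as nn


class EdgeEncoder(nn.Module):
    """Encoding module (sketch): maps the embedded token sequence of one
    entity pair to a d x d transition matrix for that edge."""

    def __init__(self, emb_dim, hidden_dim, node_dim):
        super().__init__()
        self.node_dim = node_dim
        self.bilstm = nn.LSTM(emb_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        self.mlp = nn.Sequential(
            nn.Linear(2 * hidden_dim, 2 * hidden_dim), nn.ReLU(),
            nn.Linear(2 * hidden_dim, node_dim * node_dim))

    def forward(self, edge_tokens):
        # edge_tokens: (num_edges, seq_len, emb_dim) word + position embeddings
        states, _ = self.bilstm(edge_tokens)
        summary = states[:, -1, :]          # crude sequence summary (see lead-in)
        A = self.mlp(summary)
        return A.view(-1, self.node_dim, self.node_dim)


class GPGNN(nn.Module):
    """Propagation + classification modules on a fully-connected entity graph."""

    def __init__(self, emb_dim, hidden_dim, node_dim, num_layers, num_relations):
        super().__init__()
        self.encoders = nn.ModuleList(
            [EdgeEncoder(emb_dim, hidden_dim, node_dim) for _ in range(num_layers)])
        self.classifier = nn.Linear(node_dim, num_relations)

    def forward(self, edge_tokens, edge_index, node_init):
        # edge_tokens: (num_edges, seq_len, emb_dim)
        # edge_index:  (num_edges, 2) source/target entity ids (fully connected)
        # node_init:   (num_nodes, node_dim), e.g. [1, 0, ...] for the subject,
        #              [0, 1, ...] for the object, zeros for the other entities
        h = node_init
        for encoder in self.encoders:
            A = encoder(edge_tokens)                                  # (E, d, d)
            messages = torch.relu(
                torch.einsum('eij,ej->ei', A, h[edge_index[:, 0]]))   # sigma(A h)
            aggregated = torch.zeros_like(h)
            aggregated.index_add_(0, edge_index[:, 1], messages)      # sum over neighbours
            h = aggregated
        return h

    def score_pair(self, h, i, j):
        # classification module: relation logits for entity pair (v_i, v_j)
        return self.classifier(h[i] * h[j])
```

A full implementation would, as described in the text, concatenate the forward tail and backward head BiLSTM states as the sequence summary and stack the representations of every target entity pair across layers before classification; the sketch keeps only the core message-passing structure.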
To make it more efficient, we avoid using loop-based, scalar-oriented code by matrix and vector operations. Our experiments mainly aim to: showing that our best models could improve the performance of relation extraction under a variety of settings; illustrating that how the number of layers affect the performance of our model; and performing a qualitative investigation to highlight the difference between our models and baseline models. In both part and part, we do three subparts of experiments: (i) we will first show that our models could improve instance-level relation extraction on a human annotated test set, and (ii) then we will show that our models could also help enhance the performance of bag-level relation extraction on a distantly labeled test set 4, and (iii) we also split a subset of distantly labeled test set, where the number of entities and edges is large. Distantly labeled set BID24 have proposed a dataset with Wikipedia corpora. There is a small difference between our task and theirs: our task is to extract the relationship between every pair of entities in the sentence, whereas their task is to extract the relationship between the given entity pair and the context entity pairs. Therefore, we need to modify their dataset: We added reversed edges if they are missing from a given triple, e.g. if triple (Earth, part of, Solar System) exists in the sentence, we add a reversed label, (Solar System, has a member, Earth), to it; For all of the entity pairs with no relations, we added "NA" labels to them. 5 We use the same training set for all of the experiments. Human annotated test set Based on the test set provided by BID24, 5 annotators 6 are asked to label the dataset. They are asked to decide whether or not the distant supervision is right for every pair of entities. Only the instances accepted by all 5 annotators are incorporated into the human annotated test set. There are 350 sentences and 1,230 triples in this test set. Dense distantly labeled test set We further split a dense test set from the distantly labeled test set. Our criteria are: the number of entities should be strictly larger than 2; and there must be at least one circle (with at least three entities) in the ground-truth label of the sentence BID0. This test set could be used to test our methods' performance on sentences with the complex interaction between entities. There are 1,350 sentences and more than 17,915 triples and 7,906 relational facts in this test set. We select the following models for comparison, the first four of which are our baseline models. Context-Aware RE, proposed by BID24. This model utilizes attention mechanism to encode the context relations for predicting target relations. It was the state-of-the-art models on Wikipedia dataset. This baseline is implemented by ourselves based on authors' public repo 8.Multi-Window CNN. BID27 utilize convolutional neural networks to classify relations. Different from the original version of CNN proposed in BID27, our implementation, follows BID16, concatenates features extracted by three different window sizes: 3, 5, 7.PCNN, proposed by BID28. This model divides the whole sentence into three pieces and applies max-pooling after convolution layer piece-wisely. For CNN and following PCNN, the entity markers are the same as originally proposed in BID27.LSTM or GP-GNN with K = 1 layer. Bi-directional LSTM BID23 could be seen as an 1-layer variant of our model. GP-GNN with K = 2 or K = 3 layerss. 
These models are capable of performing 2-hop reasoning and 3-hop reasoning, respectively. We select the best parameters for the validation set. We select non-linear activation functions between relu and tanh, and select d n among {2, 4, 8, 12, 16} 9. We have also tried two forms of adjacent matrices: tied-weights (set A (n) = A (n+1) ) and untied-weights. Table 1 shows our best hyper-parameter settings, which are used in all of our experiments. Hyper-parameters Value learning rate 0.001 batch size 50 dropout ratio 0.5 hidden state size 256 non-linear activation σ relu embedding size for #layers = 1 8 embedding size for #layers = 2 and 3 12 adjacent matrices untied Table 1: Hyper-parameters settings. So far, we have only talked about the way to implement sentence-level relation extraction. To evaluate our models and baseline models in bag-level, we utilize a bag of sentences with given entity pair to score the relations between them. BID28 formalize the bag-level relation extraction as multi-instance learning. Here, we follow their idea and define the score function of entity pair and its corresponding relation r as a max-one setting: TAB3 and 3, we can see that our best models outperform all the baseline models significantly on all three test sets. These indicate our model could successfully conduct reasoning on the fully-connected graph with generated parameters from natural language. These also indicate that our model not only performs well on sentence-level relation extraction but also improves on bag-level relation extraction. Note that Context-Aware RE also incorporates context information to predict the relation of the target entity pair, however, we argue that Context-Aware RE only models the co-occurrence of various relations, ignoring whether the context relation participates in the reasoning process of relation extraction of the target entity pair. Context-Aware RE may introduce more noise, for it may mistakenly increase the probability of a relation with the similar topic with the context relations. We will give samples to illustrate this issue in Sect. 5.5. Another interesting observation is that our #layers=1 version outperforms CNN and PCNN in these three datasets. One probable reason is that sentences from Wikipedia corpus are always complex, which may be hard to model for CNN and PCNN. Similar are also reached by BID30. Table 4: Sample predictions from the baseline models and our GP-GNN model. Ground truth graphs are the subgraph in Wikidata knowledge graph induced by the sets of entities in the sentences. The models take sentences and entity markers as input and produce a graph containing entities (colored and bold) and relations between them. Although "No Relation" is also be seen as a type of relation, we only show other relation types in the graphs. DISPLAYFORM0 The number of layers represents the reasoning ability of our models. A K-layer version has the ability to infer K-hop relations. To demonstrate the effects of the number of layers, we also compare our models with different numbers of layers. From TAB3, we could see that on all three datasets, 3-layer version achieves the best. We could also see from FIG0 that as the number of layers grows, the curves get higher and higher precision, indicating considering more hops in reasoning leads to better performance. However, the improvement of the third layer is much smaller on the overall distantly supervised test set than the one on the dense subset. 
This observation reveals that the reasoning mechanism could help us identify relations especially on sentences where there are more entities. We could also see that on the human annotated test set 3-layer version to have a greater improvement over 2-layer version as compared with 2-layer version over 1-layer version. It is probably due to the reason that bag-level relation extraction is much easier. In real applications, different variants could be selected for different kind of sentences or we can also ensemble the prediction from different models. We leave these explorations for future work. − −−−− → z to find the fact (BankUnited Center, located in, English). Note that (BankUnited Center, located in, English) is even not in Wikidata, but our model could identify this fact through reasoning. We also find that Context-Aware RE tends to predict relations with similar topics. For example, in the third case, share boarder with and located in are both relations about territory issues. Consequently, Context-Aware RE makes a mistake by predicting (Kentucky, share boarder with, Ohio). As we have discussed before, this is due to its mechanism to model co-occurrence of multiple relations. However, in our model, since Ohio and Johnson County have no relationship, this wrong relation is not predicted. We addressed the problem of utilizing GNNs to perform relational reasoning with natural languages. Our proposed models, GP-GNNs, solves the relational message-passing task by encoding natural language as parameters and performing propagation from layer to layer. Our model can also be considered as a more generic framework for graph generation problem with unstructured input other than text, e.g. images, videos, audios. In this work, we demonstrate its effectiveness in predicting the relationship between entities in natural language and bag-level and show that by considering more hops in reasoning the performance of relation extraction could be significantly improved.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SkgzYiRqtX
A graph neural network model with parameters generated from natural language, which can perform multi-hop relational reasoning.
Off-Policy Actor-Critic (Off-PAC) methods have proven successful in a variety of continuous control tasks. Normally, the critic’s action-value function is updated using temporal-difference, and the critic in turn provides a loss for the actor that trains it to take actions with higher expected return. In this paper, we introduce a novel and flexible meta-critic that observes the learning process and meta-learns an additional loss for the actor that accelerates and improves actor-critic learning. Compared to the vanilla critic, the meta-critic network is explicitly trained to accelerate the learning process; and compared to existing meta-learning algorithms, meta-critic is rapidly learned online for a single task, rather than slowly over a family of tasks. Crucially, our meta-critic framework is designed for off-policy based learners, which currently provide state-of-the-art reinforcement learning sample efficiency. We demonstrate that online meta-critic learning leads to improvements in a variety of continuous control environments when combined with contemporary Off-PAC methods DDPG, TD3 and the state-of-the-art SAC. Off-policy Actor-Critic (Off-PAC) methods are currently central in deep reinforcement learning (RL) research due to their greater sample efficiency compared to on-policy alternatives. On-policy requires new trajectories to be collected for each update to the policy, and is expensive as the number of gradient steps and samples per step increases with task-complexity even for contemporary TRPO , PPO and A3C algorithms. Off-policy methods, such as DDPG , TD3 and SAC (b) achieve greater sample efficiency due to their ability to learn from randomly sampled historical transitions without a time sequence requirement, thus making better use of past experience. Their critic estimates the action-value (Q-value) function using a differentiable function approximator, and the actor updates its policy parameters in the direction of the approximate action-value gradient. Briefly, the critic provides a loss to guide the actor, and is trained in turn to estimate the environmental action-value under the current policy via temporal-difference learning . In all these cases the learning algorithm itself is hand-crafted and fixed. Recently meta-learning, or "learning-to-learn" has become topical as a paradigm to accelerate RL by learning aspects of the learning strategy, for example, through learning fast adaptation strategies (; ;), exploration strategies , optimization strategies (b), losses , hyperparameters , and intrinsic rewards . However, the majority of these works perform meta-learning on a family of tasks or environments and amortize this huge cost by deploying the trained strategy for fast learning on a new task. In this paper we introduce a novel meta-critic network to enhance existing Off-PAC learning frameworks. The meta-critic is used alongside the vanilla critic to provide a loss to guide the actor's learning. However compared to the vanilla critic, the meta-critic is explicitly (meta)-trained to accelerate the learning process rather than merely estimate the action-value function. Overall, the actor is trained by gradients provided by both critic and meta-critic losses, the critic is trained by temporal-difference as usual, and the meta-critic is trained to generate maximum learning performance improvements in the actor. 
In our framework, both the critic and meta-critic use randomly sampled off-policy transitions for efficient and effective Off-PAC learning, providing superior sam-ple efficiency compared to existing on-policy meta-learners. Furthermore, we demonstrate that our meta-critic can be successfully learned online within a single task. This is in contrast to the currently widely used meta-learning research paradigm -where entire task families are required to provide enough data for meta-learning, and to provide new tasks to amortize the huge cost of meta-learning. Essentially our framework meta-learns an auxiliary loss function, which can be seen as an intrinsic motivation towards optimum learning progress . As analogously observed in several recent meta-learning studies , our loss-learning can be formalized as a bi-level optimization problem with the upper level being meta-critic learning, and lower level being conventional learning. We solve this joint optimization by iteratively updating the metacritic and base learner online while solving a single task. Our strategy is thus related to the metaloss learning in EPG , but learned online rather than offline, and integrated with Off-PAC rather than their on-policy policy-gradient learning. The most related prior work is LIRPG , which meta-learns an intrinsic reward online. However, their intrinsic reward just provides a helpful scalar offset to the environmental reward for on-policy trajectory optimization via policy-gradient . In contrast our meta-critic provides a loss for direct actor optimization just based on sampled transitions, and thus achieves dramatically better sample efficiency than LIRPG reward learning in practice. We evaluate our framework on several contemporary continuous control benchmarks and demonstrate that online meta-critic learning can be integrated with and improve a selection of contemporary Off-PAC algorithms including DDPG, TD3 and SAC. Policy-Gradient (PG) Methods. On-policy methods usually update actor parameters in the direction of greater cumulative reward. However, on-policy methods need to interact with the environment in a sequential manner to accumulate rewards and the expected reward is generally not differentiable due to environment dynamics. Even exploiting tricks like importance sampling and improved application of A2C , the use of full trajectories is less effective than off-policy transitions, as the trajectory needs a series of continuous transitions in time. Off-policy actor-critic architectures aim to provide greater sample efficiency by reusing past experience (previously collected transitions). DDPG borrows two main ideas from Deep Q Networks (; 2015): a big replay buffer and a target Q network to give consistent targets during temporal-difference backups. TD3 (Twin Delayed Deep Deterministic policy gradient algorithm) develops a variant of Double Q-learning by taking the minimum value between a pair of critics to limit over-estimation. SAC (Soft Actor-Critic) (a; b) proposes a maximum entropy RL framework where its stochastic actor aims to simultaneously maximize expected action-value and entropy. The latest version of SAC (b) also includes the "the minimum value between both critics" idea in its implementation. Meta Learning for RL. Meta-learning (a.k.a. learning to learn) has received a resurgence in interest recently due to its potential to improve learning performance, and especially sample-efficiency in RL . 
Several studies learn optimizers that provide policy updates with respect to known loss or reward functions (; b;). A few studies learn hyperparameters , loss functions or rewards that steer the learning of standard optimizers. Our meta-critic framework is in the category of loss-function meta-learning, but unlike most of these we are able to meta-learn the loss function online in parallel to learning a single extrinsic task rather. No costly offline learning on a task family is required as in;. Most current Meta-RL methods are based on on-policy policy-gradient, limiting their sample efficiency. For example, while LIRPG is one of the rare prior works to attempt online meta-learning, it is ineffective in practice due to only providing a scalar reward increment rather than a loss for direct optimization. A few meta-RL studies have begun to address off-policy RL, for conventional offline multi-task meta-learning and for optimising transfer vs forgetting in continual learning of multiple tasks . The contribution of our Meta-Critic is to enhance state-of-the-art Off-PAC RL with single-task online meta-learning. Loss Learning. Loss learning has been exploited in'learning to teach' and surrogate loss learning where a teacher network predicts the parameters of a manually designed loss in supervised learning. In contrast our meta-critic is itself a differentiable loss, and is designed for use in reinforcement learning. Other applications learn losses that improve model robustness to out of distribution samples . Our loss learning architecture is related to , but designed for accelerating single-task Off-PAC RL rather than improving robustness in multi-domain supervised learning. We aim to learn a meta-critic that provides an auxiliary loss L aux ω to assist the actor's learning of a task. The auxiliary loss parameters ω are optimized in a meta-learning process. The vanilla critic L main and meta-critic L aux ω losses train the actor π φ off-policy via stochastic gradient descent. Reinforcement learning involves an agent interacting with the environment E. At each time t, the agent receives an observation s t, takes a (possibly stochastic) action a t based on its policy π: S → A, and receives a scalar reward r t and new state of the environment s t+1. We call (s t, a t, r t, s t+1) as a single point transition. The objective of RL is to find the optimal policy π φ, which maximizes the expected cumulative return J. In on-policy RL, J is defined as the discounted episodic return based on a sequential trajectory over horizon H: In the usual implementation of A2C, r is represented by a surrogate state-value V (s t) from its critic. Since J is only a scalar value, the gradient of J with respect to policy parameters φ has to be optimized under the policy gradient theorem : In off-policy RL (e.g., DDPG, TD3, SAC) which is our focus in this paper, parameterized policies π φ can be directly updated by defining the actor loss in terms of the expected return J(φ) and taking its gradient ∇ φ J(φ), where J(φ) depends on the action-value Q θ (s t, a t). The main loss L main provided by the vanilla critic is thus where we follow the notation in TD3 and SAC that φ and θ denote actors and critics respectively. The main loss is calculated by a mini-batch of transitions randomly sampled from the replay buffer. The actor's policy network is updated as ∆φ = α∇ φ L main, following the critic's gradient to increase the likelihood of actions that achieve a higher Q-value. 
Meanwhile, the critic uses Q-learning updates to estimate the action-value function: 3.2 ALGORITHM OVERVIEW Our meta-learning goal is to train an auxiliary meta-critic network L aux ω that in turn enhances actor learning. Specifically, it should lead to the actor φ having improved performance on the main task L main when following gradients provided by the meta-critic as well as those provided by the main task. This can be seen as a bi-level optimization problem of the form: where we can assume L meta (·) = L main (·) for now. Here the lower-level optimization trains the actor φ to minimize both the main task and meta-critic-provided losses on some training samples. The upper-level optimization further requires the meta-critic ω to have produced a learned actor φ * that minimizes a meta-loss that measures the actor's main task performance on a second set of validation Algorithm 1 Online Meta-Critic Learning for Off-PAC RL φ, θ, ω, D ← ∅ // Initialize actor, critic, meta-critic and buffer for each iteration do for each environment step do a t ∼ π φ (a t |s t) // Select action according to the current policy s t+1 ∼ p(s t+1 |s t, a t), r t // Observe reward r t and new state s t+1 D ← D ∪ {(s t, a t, r t, s t+1)} // Store the transition in the replay buffer end for for each gradient step do θ ← θ − λ∇ θ J Q (θ) // Update the critic parameters meta-train: // Auxiliary actor loss from meta-critic samples, after being trained by the meta-critic. Note that in principle the lower-level optimization could purely rely on L aux ω analogously to the procedure in EPG , but we find that optimizing their linear combination greatly increases learning stability and speed. Eq. is satisfied when the meta-critic successfully improves the actor's performance on the main task as measured by meta-loss. Note that the vanilla critic update is also in the lower loop, but as it updates as usual, so we focus on the actor and meta-critic optimization for simplicity of exposition. In this setup the meta-critic is a neural network h ω (d trn ; φ) that takes as input some featurisation of the actor φ and the states and actions in d trn. This auxiliary neural network must produce a scalar output, which we can then treat as a loss L aux ω:= h ω, and must be differentiable with respect to φ. We next discuss the overall optimization flow, and discuss the specific meta-critic architecture later. Figure 1: Meta-critic for Off-PAC. The agent uses data sampled from the replay buffer during metatrain and meta-test. Actor parameters are first updated using only vanilla critic, or both vanilla-and meta-critic. Meta-critic parameters are updated by the meta-loss. Meta-Optimization Flow. To optimize Eq., we iteratively update the meta-critic parameters ω (upper-level) and actor and vanilla-critic parameters φ and θ (lower-level). At each iteration, we perform: (i) Meta-train: Sample a mini-batch of transitions and putatively update policy φ according to the main L main and meta-critic L aux ω losses. (ii) Meta-test: Sample another mini-batch of transitions to evaluate the performance of the updated policy according to L meta. (iii) Meta-optimization: Update the meta-critic parameters ω to maximize the performance on the validation batch, and perform the real actor update according to both losses. In this way the meta-critic is trained online and in parallel to the actor so that they co-evolve. Updating Actor Parameters (φ). 
During metatrain, we randomly sample a mini-batch of transitions d trn = {(s i, a i, r i, s i+1)} with batch size N from the replay buffer D. We then update the pol-icy using both losses as:. We also compute a separate that only makes use of the vanilla loss. If the meta-critic provided a beneficial source of loss, φ new should be a better parameter than φ, and in particular it should be a better parameter than φ old. We will use this comparison in the next meta-test step. Updating Meta-Critic Parameters (ω). To train the meta-critic network, we sample another mini-batch of transitions: )} with batch size M. The use of a validation batch for bi-level meta-optimization ensures the meta-learned component does not overfit. Since our framework is off-policy, this does not incur any sample-efficiency cost. The meta-critic is then updated by a meta loss ω ← argmin, which could in principle be the same as the main loss L meta = L main. However, we find it helpful for optimization efficiency to optimize the (monotonically related) difference between the updates with-and without meta-critic's input. Specifically, we use which is simply a re-centering and re-scaling of L main. This leads to Note that here the updated actor φ new has dependence on the feedback given by meta-critic ω and φ old does not. Thus only the first term is optimized for ω. In his setup the L main (d val ; φ new) term should obtain high reward/low loss on the validation batch and the latter provides a baseline, analogous to the baseline commonly used to accelerate and stabilize policy-gradient RL. The use of tanh reflects the idea of diminishing marginal utility, and ensures that the meta-loss range is always nicely distributed in [−1, 1]. In essence, the meta-loss is for the agent to ask itself the question based on the validation batch, "Did meta-critic improve the performance?", and adjusts the parameters of meta-critic accordingly. Designing Meta-Critic (h ω). The meta-critic network h ω implements the auxiliary loss for the actor. The design-space for h ω has several requirements: (i) Its input must depend on the policy parameters φ, because this auxiliary loss is also used to update policy network. (ii) It should be permutation invariant to transitions in d trn, i.e., it should not make a difference if we feed the randomly sampled transitions indexed or. The most naive way to achieve (i) is given in MetaReg which meta-learns a parameter regularizer: h ω (φ) = i ω i |φ i |. Although this form of h ω acts directly on φ, it does not exploit state information, and introduces a large number of parameters as φ, and then h ω may be a high-dimensional neural network. Therefore, we design a more efficient and effective form of h ω that also meets both of these requirements. Similar to the feature extractor in supervised learning, the actor needs to analyse and extract information from states for decision-making. We assume the policy network can be represented as π φ (s) =π(π(s)) and decomposed into the feature extractionπ φ and decision-makingπ φ (i.e., the last layer of the full policy network) modules. Thus the output of the penultimate layer of full policy network is just the output of feature extractionπ φ (s), and such output of feature jointly encodes φ and s. Given this encoding, we implement h w (d trn ; φ) as a three-layer multi-layer perceptron (MLP) whose input is the extracted feature fromπ φ (s). Here we consider two designs for meta-critic (h ω): using our joint feature alone (Eq.) 
or augmenting the joint feature with states and actions (Eq.): h ω is to work out the auxiliary loss based on such batch-wise set-embdedding of our joint actor-state feature. That is to say, d trn is a randomly sampled mini-batch transitions from the replay buffer, and then the s (and a) of the transitions are inputted to the h ω network in a permutation invariant way, and finally we can obtain the auxiliary loss for this batch d trn. Here, our design of Eq. also includes the cues features in LIRPG and EPG where s i and a i are used as the input of their learned reward and loss respectively. We set a softplus activation to the final layer of h ω, following the idea in TD3 that the vanilla critic may over-estimate and so the introduction of a non-negative actor auxiliary loss can mitigate such over-estimation. Moreover, we point out that only s i (and a i) from d trn are used when calculating L main and L aux ω for the actor, while s i, a i, r i and s i+1 are all used for optimizing the vanilla critic. Implementation on DDPG, TD3 and SAC. Our meta-critic module can be incorporated in the main Off-PAC methods DDPG, TD3 and SAC. In our framework, these algorithms differ only in their definitions of L main, and the meta-critic implementation is otherwise exactly the same for each. Further implementation details can be found in the supplementary material. TD3 borrows the Double Q-learning idea and use the minimum value between both critics to make unbiased value estimations. At the same time, computational cost is obtained by using a single actor optimized with respect to Q θ1. Thus the corresponding L main for actor becomes: In SAC, two key ingredients are considered for the actor: maximizing the policy entropy and automatic temperature hyper-parameter regulation. At the same time, the latest version of SAC (b) also draws lessons from "taking the minimum value between both critics". The L main for SAC actor is: 4 EXPERIMENTS AND EVALUATION The goal of our experimental evaluation is to demonstrate the versatility of our meta-critic module in integration with several prior Off-PAC algorithms, and its efficacy in improving their respective performance. We use the open-source implementations of DDPG, TD3 and SAC algorithms as our baselines, and denote their enhancements by meta-critic as DDPG-MC, TD3-MC, SAC-MC respectively. All -MC agents have both their built-in vanilla critic, and the meta-critic that we propose. We take Eq. as the default meta-critic architecture h ω, and we compare the alternative in the later ablation study. For our implementation of meta-critic, we use a three-layer neural network with an input dimension ofπ (300 in DDPG and TD3, 256 in SAC), two hidden feed-forward layers of 100 hidden nodes each, and ReLU non-linearity between layers. We evaluate the methods on a suite of seven MuJoCo continuous control tasks interfaced through OpenAI Gym and HalfCheetah and Ant (a) in rllab. We use the latest V2 tasks instead of V1 used in TD3 and the old implementation of SAC (a) without any modification to their original environment or reward. Implementation Details. For DDPG, we use the open-source implementation "OurDDPG" 1 which is the re-tuned version of DDPG implemented in with the same hyperparameters of the actor and critic. For TD3 and SAC, we use the open-source implementations of TD3 2 and SAC 3. In each case we integrate our meta-critic with learning rate 0.001. The specific pseudo-codes can be found in the supplementary material. 
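As a concrete illustration of one meta-train/meta-test/meta-optimization iteration described above, the following is a minimal PyTorch sketch. It assumes a DDPG-style deterministic actor with L_main = −Q(s, π_φ(s)), uses `torch.func.functional_call` (available in recent PyTorch) to evaluate the actor under the putative weights φ_new and φ_old, replaces the actor's optimizer with a plain SGD inner step, applies h_ω per sample and averages over the batch, and omits the replay buffer and the critic's temporal-difference update. All module sizes and names are illustrative choices of ours, not the authors' released code.

```python
import torch
import torch.nn as nn
from torch.func import functional_call


class Actor(nn.Module):
    """Deterministic actor pi_phi; also exposes the penultimate ("joint
    actor-state") feature that the meta-critic h_w takes as input."""

    def __init__(self, s_dim=8, a_dim=2, hid=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(s_dim, hid), nn.ReLU())
        self.head = nn.Linear(hid, a_dim)

    def forward(self, s):
        feat = self.body(s)
        return torch.tanh(self.head(feat)), feat


actor = Actor()
critic = nn.Sequential(nn.Linear(8 + 2, 64), nn.ReLU(), nn.Linear(64, 1))  # Q_theta
meta_critic = nn.Sequential(                    # h_w: three-layer MLP, softplus output
    nn.Linear(64, 100), nn.ReLU(),
    nn.Linear(100, 100), nn.ReLU(),
    nn.Linear(100, 1), nn.Softplus())

actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
meta_opt = torch.optim.Adam(meta_critic.parameters(), lr=1e-3)


def main_loss(params, states):
    """Vanilla-critic actor loss L_main (DDPG-style), evaluated functionally."""
    a, _ = functional_call(actor, params, (states,))
    return -critic(torch.cat([states, a], dim=-1)).mean()


def aux_loss(params, states):
    """Meta-critic auxiliary loss L_aux: batch-averaged h_w of actor features."""
    _, feat = functional_call(actor, params, (states,))
    return meta_critic(feat).mean()


def meta_critic_step(d_trn, d_val, inner_lr=1e-3):
    params = dict(actor.named_parameters())

    # Meta-train: putative actor updates with and without the meta-critic.
    l_main, l_aux = main_loss(params, d_trn), aux_loss(params, d_trn)
    g_old = torch.autograd.grad(l_main, list(params.values()), retain_graph=True)
    g_new = torch.autograd.grad(l_main + l_aux, list(params.values()),
                                create_graph=True)   # keep graph for the meta-update
    phi_old = {k: p - inner_lr * g for (k, p), g in zip(params.items(), g_old)}
    phi_new = {k: p - inner_lr * g for (k, p), g in zip(params.items(), g_new)}

    # Meta-test: did the auxiliary loss improve the actor on a validation batch?
    meta_loss = torch.tanh(main_loss(phi_new, d_val)
                           - main_loss(phi_old, d_val).detach())

    # Meta-optimization: update h_w, then really update the actor on both losses.
    meta_opt.zero_grad()
    meta_loss.backward()
    meta_opt.step()

    actor_opt.zero_grad()
    (main_loss(params, d_trn) + aux_loss(params, d_trn)).backward()
    actor_opt.step()
    # The critic's own TD update (and its zero_grad) is omitted in this sketch.


# Stand-in mini-batches of states, as if sampled from a replay buffer.
meta_critic_step(torch.randn(32, 8), torch.randn(32, 8))
```

In the full algorithm the putative updates would mirror the actor's actual optimizer and the critic would be trained by temporal-difference within the same loop, as in Algorithm 1.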
DDPG Figure 2 shows the learning curves of DDPG and DDPG-MC. The experimental corresponding to each task are averaged over 5 random seeds (trials) and network initialisations, and the standard deviation confidence intervals are represented as shaded regions over the time steps. , curves are uniformly smoothed (window size 30) for clarity. We run the gym-MuJoCo experiments for 1-10 million depen ding on to environment, and rllab experiments for 3 million steps. Every 1000 steps we evaluate our policy over 10 episodes with no exploration noise. From the learning curves in Figure 2, we can see that DDPG-MC generally outperforms the corresponding DDPG baseline in terms of the learning speed and asymptotic performance. Furthermore, it usually has smaller variance. The summary for all nine tasks in terms of max average return are given in Table 1. We selected the six tasks shown in Figure 2 for plotting, because the other MuJoCo tasks "Reacher", "InvertedPendulum" and "InvertedDoublePendulum" have an environmental reward upper bound which all methods reach quickly without obvious difference between them. Table 1 shows that DDPG-MC provides consistently higher max return for the tasks without upper bounds. Figure 3 reports the learning curves for TD3. For some tasks vanilla TD3 performance declines in the long run, while our TD3-MC shows improved stability with much higher asymptotic performance. Generally speaking, the learning curves show that TD3-MC providing comparable or better learning performance in each case, while Table 1 shows the clear improvement in the max average return. Figure 4 report the learning curves of SAC. Note that we use the most recent update of SAC (b), which can be regarded as the combination SAC+TD3. Although this SAC+TD3 is arguably the strongest existing method, SAC-MC still gives a clear boost on the asymptotic performance for several of the tasks. Comparison vs PPO-LIRPG Intrinsic Reward Learning for PPO is the most related method to our work in performing online single-task meta-learning of an auxiliary reward/loss via a neural network. The original PPO-LIRPG study evaluated on a modified environment with hidden rewards. Here we apply it to the standard unmodified learning tasks that we aim to improve. The in Table 1 demonstrate that: (i) In this conventional setting, PPO-LIRPG worsens rather than improves basic PPO performance. (ii) Overall Off-PAC methods generally perform better than on-policy PPO for most environments. This shows the importance of our meta-learning contribution to the off-policy setting. In general Meta-Critic is preferred compared to PPO-LIRPG because the latter only provides a scalar reward bonus only influences the policy indirectly via policy-gradient updates, while Meta-Critic provides a direct loss. Summary Table 1 and Figure 5 summarize all the in terms of max average return. We can see that SAC-MC always performs best; the Meta-Critic-enhanced methods are generally comparable or better than their corresponding vanilla alternatives; and Meta-Critic usually provides improved variance in return compared to the baselines. Loss Analysis. To analyse the learning dynamics of our algorithm, we take Walker2d as an example. Figure 6 reports the main loss L main curve of actor and the loss curves of h ω (i.e., L aux ω) and L meta over 5 trials for SAC. We can see that: (i) SAC-MC shows faster convergence to a lower value of L main, demonstrating the auxiliary loss's ability to accelerate learning. 
Unlike supervised learning, where the vanilla loss is, e.g., cross-entropy vs ground-truth labels. The L main for actors in RL is provided by the critic which is also learned, so the plot also encompasses convergence of the critic. (ii) The meta-loss (which corresponds to the success of the meta-critic in improving actor learning) fluctuates throughout, reflecting the exploration process in RL. But it is generally negative, confirming that the auxiliary-trained actor generally improves on the vanilla actor at each iteration. (iii) The auxiliary loss converges smoothly under the supervision of the meta-loss. Ablation on h ω design. We also run Walker2d experiments with alternative h ω designs as in Eq. or MetaReg format (input actor parameters directly). As shown in Table 2, we record the max average return and sum average return (regarded as the area under the average reward curve) of all evaluations during all time steps. Eq. our default h ω (Eq.) attains the highest mean average return. We can also see some improvement for h ω (φ) using MetaReg format, but the huge number of parameters is expensive. Overall, all meta-critic module designs provides at least a small improvement on vanilla SAC. Ablation on baseline in meta-loss. In Eq., we use L main (d val ; φ old) as a baseline to improve numerical stability of the gradient update. To evaluate this design, we remove the φ old baseline and. The last column in Table 2 shows that this barely improves on vanilla SAC, validating our design choice to use a baseline. We present Meta-Critic, an auxiliary critic module for Off-PAC methods that can be meta-learned online during single task learning. The meta-critic is trained to generate gradients that improve the actor's learning performance over time, and leads to long run performance gains in continuous control. The meta-critic module can be flexibly incorporated into various contemporary Off-PAC methods to boost performance. In future work, we plan to apply the meta-critic to conventional offline meta-learning with multi-task and multi-domain RL. Update critic by minimizing the loss: Calculate the old actor weights using the main actor loss: Calculate the new actor weights using the auxiliary actor loss: Sample a random mini-batch of N s val i from R Calculate the meta-loss using the meta-test sampled transitions: meta-optimization: Update the weight of actor and meta-critic network: Update the target networks: Algorithm 3 TD3-MC algorithm Initialize critics Q θ1, Q θ2, actor π φ and auxiliary loss network h ω Initialize target networks θ 1 ← θ 1, θ 2 ← θ 2, φ ← φ Initialize replay buffer B for t = 1 to T do Select action with exploration noise a ∼ π φ (s) +, ∼ N (0, σ) and observe reward r and new state s Store transition tuple (s, a, r, s) in B Sample mini-batch of N transitions (s, a, r, s Calculate the old actor weights using the main actor loss: Calculate the new actor weights using the auxiliary actor loss: Sample mini-batch of N s val from B Calculate the meta-loss using the meta-test sampled transitions: Update the actor and meta-critic: In terms of computation requirement, meta-critic takes around 15-30% more time per iteration, depending on the base algorithm. This is primarily attributable to the cost of evaluating the metaloss L meta, and hence L main . To investigate whether the benefit of meta-critic comes solely the additional compute expenditure, we perform an additional experiment where we increase the compute applied by the baselines to a corresponding degree. 
Specifically, if meta-critic takes K% more time than the baseline, then we rerun the baseline with K% more update steps iteration. This provides the baseline more mini-batch samples while controlling the number of environment interactions. Examples in Figure 12 shows that increasing the number of update steps does not have a straightforward link to performance. For DDPG, Walker2d-v2 performance increases with more steps, but stills performs worse than Meta-Critic. Meanwhile, for HalfCheetah, the extra iterations dramatically exacerbates the drop in performance that the baseline already experiences after around 1.5 million steps. Overall, there is no consistent impact of providing the baseline more iterations, and Meta-Critic's consistently good performance can not be simply replicated by a corresponding increase in gradient steps taken by the baseline. In order to investigate the impact of meta-critic on harder environments, we evaluated SAC and SAC-MC on TORCS and Humanoid(rllab). The in Figure 13 show that meta-critic provides a clear margin of performance improvement in these more challenging environments.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
H1lKd6NYPS
We present Meta-Critic, an auxiliary critic module for off-policy actor-critic methods that can be meta-learned online during single task learning.
Modern neural networks are highly overparameterized, with capacity to substantially overfit to training data. Nevertheless, these networks often generalize well in practice. It has also been observed that trained networks can often be "compressed" to much smaller representations. The purpose of this paper is to connect these two empirical observations. Our main technical result is a generalization bound for compressed networks based on the compressed size that, combined with off-the-shelf compression algorithms, leads to state-of-the-art generalization guarantees. In particular, we provide the first non-vacuous generalization guarantees for realistic architectures applied to the ImageNet classification problem. Additionally, we show that the compressibility of models that tend to overfit is limited. Empirical results show that an increase in overfitting increases the number of bits required to describe a trained network. A pivotal question in machine learning is why deep networks perform well despite overparameterization. These models often have many more parameters than the number of examples they are trained on, which enables them to drastically overfit to training data BID39. In common practice, however, such networks perform well on previously unseen data. Explaining the generalization performance of neural networks is an active area of current research. Attempts have been made at adapting classical measures such as VC-dimension BID14 or margin/norm bounds BID4, but such approaches have yielded bounds that are vacuous by orders of magnitude. Other authors have explored modifications of the training procedure to obtain networks with provable generalization guarantees BID10. Such procedures often differ substantially from standard procedures used by practitioners, and empirical evidence suggests that they fail to improve performance in practice BID38. We begin with an empirical observation: it is often possible to "compress" trained neural networks by finding essentially equivalent models that can be described in a much smaller number of bits; see BID9 for a survey. Inspired by classical results relating small model size and generalization performance (often known as Occam's razor), we establish a new generalization bound based on the effective compressed size of a trained neural network. Combining this bound with off-the-shelf compression schemes yields the first non-vacuous generalization bounds in practical problems. The main contribution of the present paper is the demonstration that, unlike many other measures, this measure is effective in the deep-learning regime. Generalization bound arguments typically identify some notion of complexity of a learning problem, and bound generalization error in terms of that complexity. Conceptually, the notion of complexity we identify is: complexity = compressed size − remaining structure. The first term on the right-hand side represents the link between generalization and explicit compression. The second term corrects for superfluous structure that remains in the compressed representation. For instance, the predictions of trained neural networks are often robust to perturbations of the network weights. Thus, a representation of a neural network by its weights carries some irrelevant information. We show that accounting for this robustness can substantially reduce effective complexity. Our results allow us to derive explicit generalization guarantees using off-the-shelf neural network compression schemes.
In particular:• The generalization bound can be evaluated by compressing a trained network, measuring the effective compressed size, and substituting this value into the bound.• Using off-the-shelf neural network compression schemes with this recipe yields bounds that are state-of-the-art, including the first non-vacuous bounds for modern convolutional neural nets. The above takes a compression algorithm and outputs a generalization bound on nets compressed by that algorithm. We provide a complementary by showing that if a model tends to overfit then there is an absolute limit on how much it can be compressed. We consider a classifier as a (measurable) function of a random training set, so the classifier is viewed as a random variable. We show that the entropy of this random variable is lower bounded by a function of the expected degree of overfitting. Additionally, we use the randomization tests of BID39 to show empirically that increased overfitting implies worse compressibility, for a fixed compression scheme. The relationship between small model size and generalization is hardly new: the idea is a variant of Occam's razor, and has been used explicitly in classical generalization theory BID34 BID5 BID27 BID16 BID33 ). However, the use of highly overparameterized models in deep learning seems to directly contradict the Occam principle. Indeed, the study of generalization and the study of compression in deep learning has been largely disjoint; the later has been primarily motivated by computational and storage limitations, such as those arising from applications on mobile devices BID9. Our show that Occam type arguments remain powerful in the deep learning regime. The link between compression and generalization is also used in work by BID0, who study compressibility arising from a form of noise stability. Our are substantially different, and closer in spirit to the work of BID10; see Section 3 for a detailed discussion. BID39 study the problem of generalization in deep learning empirically. They observe that standard deep net architectures-which generalize well on real-world data-are able to achieve perfect training accuracy on randomly labelled data. Of course, in this case the test error is no better than random guessing. Accordingly, any approach to controlling generalization error of deep nets must selectively and preferentially bound the generalization error of models that are actually plausible outputs of the training procedure applied to real-world data.; BID10, we make use of the PAC-Bayesian framework BID29 BID7 BID30. This framework allows us to encode prior beliefs about which learned models are plausible as a (prior) distribution π over possible parameter settings. The main challenge in developing a bound in the PAC-Bayes framework bound is to articulate a distribution π that encodes the relative plausibilities of possible outputs of the training procedure. The key insight is that, implicitly, any compression scheme is a statement about model plausibilities: good compression is achieved by assigning short codes to the most probable models, and so the probable models are those with short codes. In this section, we recall some and notation from statistical learning theory. Our aim is to learn a classifier using data examples. Each example (x, y) consists of some features x ∈ X and a label y ∈ Y. 
It is assumed that the data are drawn identically and independently from some data generating distribution, DISPLAYFORM0 The goal of learning is to choose a hypothesis h: X → Y that predicts the label from the features. The quality of the prediction is measured by specifying some loss function L; the value L(h(x), y) is a measure of the failure of hypothesis h to explain example (x, y). The overall quality of a hypothesis h is measured by the risk under the data generating distribution: DISPLAYFORM1 Generally, the data generating distribution is unknown. Instead, we assume access to training data S n = {(x 1, y 1),..., (x n, y n)}, a sample of n points drawn i.i.d. from the data generating distribution. The true risk is estimated by the empirical risk: DISPLAYFORM2 The task of the learner is to use the training data to choose a hypothesisĥ from among some prespecified set of possible hypothesis H, the hypothesis class. The standard approach to learning is to choose a hypothesisĥ that (approximately) minimizes the empirical risk. This induces a dependency between the choice of hypothesis and the estimate of the hypothesis' quality. Because of this, it can happen thatĥ overfits to the training data:L(ĥ) L(ĥ). The generalization error L(ĥ) −L(ĥ) measures the degree of overfitting. In this paper, we consider an image classification problem, where x i is an image and y i the associated label for that image. The selected hypothesis is a deep neural network. We mostly consider the 0 -1 loss, that is, L(h(x), y) = 0 if the prediction is correct and L(h(x), y) = 1 otherwise. We use the PAC-Bayesian framework to establish bounds on generalization error. In general, a PACBayesian bound attempts to control the generalization error of a stochastic classifier by measuring the discrepancy between a pre-specified random classifier (often called prior), and the classifier of interest. Conceptually, PAC-Bayes bounds have the form: DISPLAYFORM3 where n is the number of training examples, π denotes the prior, and ρ denotes the classifier of interest (often called posterior).More formally, we write L(ρ) = E h∼ρ [L(h)] for the risk of the random estimator. The fundamental bound in PAC-Bayesian theory is (, Thm. 1.2.6): Theorem 2.1 (PAC-Bayes). Let L be a {0, 1}-valued loss function, let π be some probability measure on the hypothesis class, and let α > 1, > 0. Then, with probability at least 1 − over the distribution of the sample: DISPLAYFORM4 where we define Φ −1 γ as: DISPLAYFORM5 Remark 2.2. The above formulation of the PAC-Bayesian theorem is somewhat more opaque than other formulations (e. g., ; BID30 . This form is significantly tighter when KL/n is large. See Bégin et al.; BID25 for a unified treatment of PAC-Bayesian bounds. The quality of a PAC-Bayes bound depends on the discrepancy between the PAC-Bayes prior π-encoding the learned models we think are plausible-and ρ, which is the actual output of the learning procedure. The main challenge is finding good choices for the PAC-Bayes prior π, for which the value of KL(ρ, π) is both small and computable.3 RELATIONSHIP TO PREVIOUS WORK Generalization. The question of which properties of real-world networks explain good generalization behavior has attracted considerable attention BID23 BID24 BID16 BID17 BID2 BID8 BID20 BID10 BID36 BID31 BID0; see BID1 for a review of recent advances. Such typically identify a property of real-world networks, formalize it as a mathematical definition, and then use this definition to prove a generalization bound. 
Generally, the bounds are very loose relative to the true generalization error, which can be estimated by evaluating performance on held-out data. Their purpose is not to quantify the actual generalization error, but rather to give qualitative evidence that the property underpinning the generalization bound is indeed relevant to generalization performance. The present paper can be seen in this tradition: we propose compressibility as a key signature of performant real-world deep nets, and we give qualitative evidence for this thesis in the form of a generalization bound. The idea that compressibility leads to generalization has a long history in machine learning. Minimum description length (MDL) is an early formalization of the idea BID34. BID16 applied MDL to very small networks, already recognizing the importance of weight quantization and stochasticity. More recently, BID0 consider the connection between compression and generalization in large-scale deep learning. The main idea is to compute a measure of noise-stability of the network, and show that it implies the existence of a simpler network with nearly the same performance. A variant of a known compression bound (see BID30 for a PAC-Bayesian formulation) is then applied to bound the generalization error of this simpler network in terms of its code length. In contrast, the present paper develops a tool to leverage existing neural network compression algorithms to obtain strong generalization bounds. The two papers are complementary: we establish non-vacuous bounds, and hence establish a quantitative connection between generalization and compression. An important contribution of Arora et al. FORMULA3 is obtaining a quantity measuring the compressibility of a neural network; in contrast, we apply a compression algorithm and witness its performance. We note that their compression scheme is very different from the sparsity-inducing compression schemes BID9 we use in our experiments. Which properties of deep networks allow them to be sparsely compressed remains an open question. To strengthen a naïve Occam bound, we use the idea that deep networks are insensitive to mild perturbations of their weights, and that this insensitivity leads to good generalization behavior. This concept has also been widely studied (e.g., BID23 BID24 BID16 BID17 BID2 BID8 BID20 BID10 . As we do, some of these papers use a PAC-Bayes approach BID24 BID10 . arrive at a bound for non-random classifiers by computing the tolerance of a given deep net to noise, and bounding the difference between that net and a stochastic net to which they apply a PAC-Bayes bound. Like the present paper, BID24 ; BID10 work with a random classifier given by considering a normal distribution over the weights centered at the output of the training procedure. We borrow the observation of BID10 that the stochastic network is a convenient formalization of perturbation robustness. The approaches to generalization most closely related to ours are, in summary: DISPLAYFORM6 Perturbation Robustness Perturbation Robustness BID0 Compressibility (from Perturbation Robustness) Present paper Compressibility and Perturbation RobustnessThese represent the best known generalization guarantees for deep neural networks. Our bound provides the first non-vacuous generalization guarantee for the ImageNet classification task, the de facto standard problem for which deep learning dominates. 
It is also largely agnostic to model architecture: we apply the same argument to both fully connected and convolutional networks. This is in contrast to some existing approaches that require extra analysis to extend bounds for fully connected networks to bounds for convolutional networks BID21 BID0.Compression. The effectiveness of our work relies on the existence of good neural network compression algorithms. Neural network compression has been the subject of extensive interest in the last few years, motivated by engineering requirements such as computational or power constraints. We apply a relatively simple strategy in this paper in the line of, but we note that our bound is compatible with most forms of compression. for a survey of recent in this field. We first describe a simple Occam's razor type bound that translates the quality of a compression into a generalization bound for the compressed model. The idea is to choose the PAC-Bayes prior π such that greater probability mass is assigned to models with short code length. In fact, the bound stated in this section may be obtained as a simple weighted union bound, and a variation is reported in. However, embedding this bound in the PAC-Bayesian framework allows us to combine this idea, reflecting the explicit compressible structure of trained networks, with other ideas reflecting different properties of trained networks. We consider a non-random classifier by taking the PAC-Bayes posterior ρ to be a point mass atĥ, the output of the training (plus compression) procedure. Recall that computing the PAC-Bayes bound effectively reduces to computing KL(ρ, π). Theorem 4.1. Let |h| c denote the number of bits required to represent hypothesis h using some pre-specified coding c. Let ρ denote the point mass at the compressed modelĥ. Let m denote any probability measure on the positive integers. There exists a prior π c such that: DISPLAYFORM0 This relies only on the quality of the chosen coding and is agnostic to whether a lossy compression is applied to the model ahead of time. In practice, the code c is chosen to reflect some explicit structure-e.g., sparsity-that is imposed by a lossy compression. Proof. Let H c ⊆ H denote the set of estimators that correspond to decoded points, and note that h ∈ H c by construction. Consider the measure π c on H c: DISPLAYFORM1 As c is injective on H c, we have that Z ≤ 1. We may thus directly compute the KL-divergence from the definition to obtain the claimed . Remark 4.2. To apply the bound in practice, we must make a choice of m. A pragmatic solution is to simply consider a bound on the size of the model to be selected (e.g. in many cases it is reasonable to assume that the encoded model is smaller than 2 64 bytes, which is 2 72 bits), and then consider m to be uniform over all possible lengths. The simple bound above applies to an estimator that is compressible in the sense that its encoded length with respect to some fixed code is short. However, such a strategy does not consider any structure on the hypothesis space H. In practice, compression schemes will often fail to exploit some structure, and generalization bounds can be (substantially) improved by accounting for this fact. We empirically observe that trained neural networks are often tolerant to low levels of discretization of the trained weights, and also tolerant to some low level of added noise in the trained weights. Additionally, quantization is an essential step in numerous compression strategies. 
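Before refining the bound to account for this structure, it is worth spelling out the arithmetic behind the simple Occam bound. The sketch below is plain Python and purely illustrative: the helper names, the square-root relaxation of the PAC-Bayes inequality (a standard Pinsker-style form, looser than the Φ−1 form of Theorem 2.1), and the numbers plugged in at the end are ours. It converts a measured compressed size into a bound on KL(ρ, π) via Theorem 4.1, with m uniform over code lengths as in Remark 4.2, and then into a bound on the risk.

    import math

    def occam_kl_bound(size_bits, max_len_bits=2**72):
        # Theorem 4.1 with m uniform over code lengths 1..max_len_bits (Remark 4.2):
        # KL(rho, pi) <= |h|_c * log 2 - log m(|h|_c), returned in nats.
        return size_bits * math.log(2.0) + math.log(max_len_bits)

    def pac_bayes_risk_bound(emp_risk, kl, n, delta=0.05):
        # A standard square-root (Pinsker-style) relaxation of the PAC-Bayes
        # inequality; looser than Theorem 2.1 but easier to read.
        slack = (kl + math.log(2.0 * math.sqrt(n) / delta)) / n
        return emp_risk + math.sqrt(slack / 2.0)

    # Illustrative numbers only, in the spirit of the LeNet-5 experiment:
    kl = occam_kl_bound(size_bits=6.23 * 1024 * 8)   # ~6.23 KiB effective size
    print(pac_bayes_risk_bound(emp_risk=0.01, kl=kl, n=60000))

With a few kibibytes of effective description length and tens of thousands of training examples, the code-length term dominates, which is why reducing the effective compressed size translates almost directly into a tighter bound.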
We construct a PAC-Bayes bound that reflects this structure. This analysis requires a compression scheme specified in more detail. We assume that the output of the compression procedure is a triplet (S, C, Q), where S = {s 1, . . ., s k} ⊆ {1, . . ., p} denotes the location of the non-zero weights, C = {c 1, . . ., c r} ⊆ R is a codebook, and Q = (q 1, . . ., q k), q i ∈ {1, . . ., r} denotes the quantized values. Most state-of-the-art compression schemes can be formalized in this manner.Given such a triplet, we define the corresponding weight w(S, Q, C) ∈ R p as: DISPLAYFORM0 Following BID24; Dziugaite & Roy FORMULA3, we bound the generalization error of a stochastic estimator given by applying independent random normal noise to the nonzero weights of the network. Formally, we consider the (degenerate) multivariate normal centered at w: ρ ∼ N (w, σ 2 J), with J being a diagonal matrix such that J ii = 1 if i ∈ S and J ii = 0 otherwise. Theorem 4.3. Let (S, C, Q) be the output of a compression scheme, and let ρ S,C,Q be the stochastic estimator given by the weights decoded from the triplet and variance σ 2. Let c denote some arbitrary (fixed) coding scheme and let m denote an arbitrary distribution on the positive integers. Then, for any τ > 0, there is some PAC-Bayes prior π such that: DISPLAYFORM1 Normal(c j, τ 2).Note that we have written the KL-divergence of a distribution with a unnormalized measure (the last term), and in particular this term may (and often will) be negative. We defer the construction of the prior π and the proof of Theorem 4.3 to the supplementary material. Remark 4.4. We may obtain the first term k log r + |S| c + |C| c from the simple Occam's bound described in Theorem 4.1 by choosing the coding of the quantized values Q as a simple array of integers of the correct bit length. The second term thus describes the adjustment (or number of bits we "gain back") from considering neighbouring estimators. In this section we present examples combining our theoretical arguments with state-of-the-art neural network compression schemes. 1 Recall that almost all other approaches to bounding generalization error of deep neural networks yield vacuous bounds for realistic problems. The one exception is BID10, which succeeds by retraining the network in order to optimize the generalization bound. We give two examples applying our generalization bounds to the models output by modern neural net compression schemes. In contrast to earlier , this leads immediately to non-vacuous bounds on realistic problems. The strength of the Occam bound provides evidence that the connection between compressibility and generalization has substantive explanatory power. We report 95% confidence bounds based on the measured effective compressed size of the networks. The bounds are achieved by combining the PAC-Bayes bound Theorem 2.1 with Theorem 4.3, showing that KL(ρ, π) is bounded by the "effective compressed size". We note a small technical modification: we choose the prior variance τ 2 layerwise by a grid search, this adds a negligible contribution to the effective size (see Appendix A.1 for the technical details of the bound).LeNet-5 on MNIST. Our first experiment is performed on the MNIST dataset, a dataset of 60k grayscale images of handwritten digits. We fit the LeNet-5 network, one of the first convolutional networks. LeNet-5 has two convolutional layers and two fully connected layers, for a total of 431k parameters. We apply a pruning and quantization strategy similar to that described in. 
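The pruning and quantization schemes used in the experiments produce exactly a triplet of this form. As a concrete illustration of the decoding w(S, Q, C) and of the k log r + |S|_c + |C|_c accounting of Theorem 4.3 and Remark 4.4, consider the following sketch; the function names, the zero-based indexing, and the toy numbers are ours, and the code lengths for the locations and the codebook are taken as given.

    import numpy as np

    def decode_weights(S, C, Q, p):
        # w(S, Q, C): zero everywhere except w[s_i] = c_{q_i} (0-based indices here).
        w = np.zeros(p)
        w[np.asarray(S)] = np.asarray(C)[np.asarray(Q)]
        return w

    def quantized_length_bits(S, C, Q, loc_bits, codebook_bits):
        # First term of Theorem 4.3: k * log2(r) bits for the quantized values,
        # plus the given code lengths |S|_c and |C|_c for locations and codebook.
        k, r = len(Q), len(C)
        return k * np.log2(r) + loc_bits + codebook_bits

    # Tiny example: 10 weights, 4 of them non-zero, a 2-entry codebook.
    S, C, Q = [1, 4, 7, 9], [-0.25, 0.25], [0, 1, 1, 0]
    w = decode_weights(S, C, Q, p=10)
    bits = quantized_length_bits(S, C, Q, loc_bits=20, codebook_bits=64)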
We prune the network using Dynamic Network Surgery BID12, pruning all but 1.5% of the network weights. We then quantize the non-zero weights using a codebook with 4 bits. The location of the non-zero coordinates are stored in compressed sparse row format, with the index differences encoded using arithmetic compression. We consider the stochastic classifier given by adding Gaussian noise to each non-zero coordinate before each forward pass. We add Gaussian noise with standard deviation equal to 5% of the difference between the largest and smallest weight in the filter. This in a negligible drop in classification performance. We obtain a bound on the training error of 46% (with 95% confidence). The effective size of the compressed model is measured to be 6.23 KiB.ImageNet. The ImageNet dataset is a dataset of about 1.2 million natural images, categorized into 1000 different classes. ImageNet is substantially more complex than the MNIST dataset, and classical architectures are correspondingly more complicated. For example, AlexNet BID22 and VGG-16 BID37 contain 61 and 128 million parameters, respectively. Non-vacuous bounds for such models are still out of reach when applying our bound with current compression techniques. However, motivated by computational restrictions, there has been extensive interest in designing more parsimonious architectures that achieve comparable or better performance with significantly fewer parameters BID19 BID18 BID40. By combining neural net compression schemes with parsimonious models of this kind, we demonstrate a non-vacuous bounds on models with better performance than AlexNet. Our simple Occam bound requires only minimal assumptions, and can be directly applied to existing compressed networks. For example, BID19 introduce the SqueezeNet architecture, and explicitly study its compressibility. They obtain a model with better performance than AlexNet but that can be written in 0.47 MiB. A direct application of our naïve Occam bound yields non-vacuous bound on the test error of 98.6% (with 95% confidence). To apply our stronger bound-taking into account the noise robustness-we train and compress a network from scratch. We consider Mobilenet 0.5 BID18, which in its uncompressed form has better performance and smaller size than SqueezeNet . study pruning of MobileNet in the context of energy-efficient inference in resource-constrained environments. We use their pruning scheme with some small adjustments. In particular, we use Dynamic Network Surgery BID12 as our pruning method but follow a similar schedule. We prune 67 % of the total parameters. The pruned model achieves a validation accuracy of 60 %. We quantize the weights using a codebook strategy. We consider the stochastic classifier given by adding Gaussian noise to the non-zero weights, with the variance set in each layer so as not to degrade our prediction performance. For simplicity, we ignore biases and batch normalization parameters in our bound, as they represent a negligible fraction of the parameters. We consider top-1 accuracy (whether the most probable guess is correct) and top-5 accuracy (whether any of the 5 most probable guesses is correct). The final "effective compressed size" is 350 KiB. The We have shown that compression directly imply generalization bounds, and that these may be applied effectively to obtain non-vacuous bounds on neural networks. In this section, we provide a complementary view: overfitting implies a limit on compressibility. Theory. 
We first prove that the entropy of estimators that tend to overfit is bounded in terms of the expected degree of overfitting. That implies the estimators fail to compress on average. As previously, consider a sample S n = {(x 1, y 1),..., (x n, y n)} sampled i.i.d. from some distribution D, and an estimator (or selection procedure)ĥ, which we consider as a (random) function of the training data. The key observation is: DISPLAYFORM0 That is, the probability of misclassifying an example in the training data is smaller than the probability of misclassifying a fresh example, and the expected strength of this difference is determined by the expected degree of overfitting. By Bayes' rule, we thus see that the moreĥ overfits, the better it is able to distinguish a sample from the training and testing set. Such an estimatorĥ must thus "remember" a significant portion of the training data set, and its entropy is thus lower bounded by the entropy of its "memory".Theorem 6.1. Let L,L, andĥ be as in the text immediately preceeding the theorem. For simplicity, assume that both the sample space X × Y and the hypothesis set H are discrete. Then, DISPLAYFORM1 where g denotes some non-negative function (given explicitly in the proof).We defer the proof to the supplementary material. Experiments. We now study this effect empirically. The basic tool is the randomization test of BID39: we consider a fixed architecture and a number of datasets produced by randomly relabeling the categories of some fraction of examples from a real-world dataset. If the model has sufficiently high capacity, it can be fit with approximately zero training loss on each dataset. In this case, the generalization error is given by the fraction of examples that have been randomly relabeled. We apply a standard neural net compression tool to each of the trained models, and we observe that the models with worse generalization require more bits to describe in practice. For simplicity, we consider the CIFAR-10 dataset, a collection of 40000 images categorized into 10 classes. We fit the ResNet BID15 architecture with 56 layers with no pre-processing and no penalization on the CIFAR-10 dataset where the labels are subjected to varying levels of randomization. As noted in BID39, the network is able to achieve 100 % training accuracy no matter the level of randomization. We then compress the networks fitted on each level of label randomization by pruning to a given target sparsity. Surprisingly, all networks are able to achieve 50 % sparsity with essentially no loss of training accuracy, even on completely random labels. However, we observe that as the compression level increases further, the scenarios with more randomization exhibit a faster decay in training accuracy, see FIG0. This is consistent with the fact that network size controls generalization error. It has been a long standing observation by practitioners that despite the large capacity of models used in deep learning practice, empirical demonstrate good generalization performance. We show that with no modifications, a standard engineering pipeline of training and compressing a network leads to demonstrable and non-vacuous generalization guarantees. These are the first such on networks and problems at a practical scale, and mirror the experience of practitioners that best are often achieved without heavy regularization or modifications to the optimizer BID38.The connection between compression and generalization raises a number of important questions. 
Foremost, what are its limitations? The fact that our bounds are non-vacuous implies the link between compression and generalization is non-trivial. However, the bounds are far from tight. If significantly better compression rates were achievable, the ing bounds would even be of practical value. For example, if a network trained on ImageNet to 90% training and 70% testing accuracy could be compressed to an effective size of 30 KiB-about one order of magnitude smaller than our current compression-that would yield a sharp bound on the generalization error. A PROOF OF THEOREM 4.3 In this section we describe the construction of the prior π and prove the bound on the KL-divergence claimed in Theorem 4.3. Intuitively, we would like to express our prior as a mixture over all possible decoded points of the compression algorithm. More precisely, define the mixture component π S,Q,C associated with a triplet (S, Q, C) as: DISPLAYFORM0 We then define our prior π as a weighted mixture over all triplets, weighted by the code length of the triplet: DISPLAYFORM1 where the sum is taken over all S and C which are representable by our code, and all Q = (q 1, . . ., q k) ∈ {1, . . ., r} k. In practice, S takes values in all possible subsets of {1, . . ., p}, and C takes values in F r, where F ⊆ R is a chosen finite subset of representable real numbers (such as those that may be represented by IEEE-754 single precision numbers), and r is a chosen quantization level. We now give the proof of Theorem 4.3.Proof. We have that: DISPLAYFORM2 where we must have Z ≤ 1 by the same argument as in the proof of Theorem 4.1Suppose that the output of our compression algorithm is a triplet (Ŝ,Q,Ĉ). We recall that our posterior ρ is given by a normal centered at w(Ŝ,Q,Ĉ) with variance σ 2, and we may thus compute the KL-divergence: DISPLAYFORM3 We are now left with the mixture term, which is a mixture of r k many terms in dimension k, and thus computationally untractable. However, we note that we are in a special case where the mixture itself is independent across coordinates. Indeed, let φ τ denote the density of the univariate normal distribution with mean 0 and variance τ 2, we note that we may write the mixture as: DISPLAYFORM4 Additionally, as our chosen stochastic estimator ρ is independent over the coordinates, the KLdivergence decomposes over the coordinates, to obtain: DISPLAYFORM5 Plugging the above computation into gives the desired . Although Theorem 4.3 contains the main mathematical contents of our bound, applying the bound in a fully correct fashion requires some amount of minutiae and book-keeping we detail in this section. In particular, we are required to select a number of parameters (such as the prior variances). We extend the bound to account for such unrestricted (and possibly data-dependent) parameter selection. Typically, such adjustments have a negligible effect on the computed bounds. Theorem A.1 (Union Bound for Discrete Parameters). Let π ξ, ξ ∈ Ξ, denote a family of priors parameterized by a discrete parameter ξ, which takes values in a finite set Ξ. There exists a prior π such that for any posterior ρ and any ξ ∈ Ξ: DISPLAYFORM0 Proof. We define π as a uniform mixture of the π ξ: DISPLAYFORM1 We then have that: DISPLAYFORM2 but we can note that DISPLAYFORM3, from which we deduce that: DISPLAYFORM4 We make liberal use of this variant to control a number of discrete parameters which are chosen empirically (such as the quantization resolution at each layer). 
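In practice, the only non-trivial quantity in Theorem 4.3 is the final mixture term, and the coordinate-wise factorization derived above is what makes it computable. Below is a rough sketch of a Monte Carlo estimate of that term; the function name, the sampling scheme, and the treatment of the mixture as an unnormalized sum of Normal(c_j, τ²) densities follow our reading of the theorem, and a quadrature rule could be used instead. Because the second argument of the KL is unnormalized, the estimate can be negative, which is exactly the "bits gained back" effect of Remark 4.4.

    import numpy as np

    def kl_correction_term(w_nonzero, codebook, sigma, tau, n_samples=10000, seed=0):
        # Monte Carlo estimate of sum_i KL( N(w_i, sigma^2) || sum_j N(c_j, tau^2) ),
        # where the right-hand side is an unnormalized measure (so this can be < 0).
        rng = np.random.default_rng(seed)
        c = np.asarray(codebook)
        total = 0.0
        for w_i in w_nonzero:
            x = rng.normal(w_i, sigma, size=n_samples)          # draws from rho_i
            log_q = -0.5 * ((x - w_i) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))
            comps = (-0.5 * ((x[:, None] - c[None, :]) / tau) ** 2
                     - np.log(tau * np.sqrt(2 * np.pi)))
            log_p = np.logaddexp.reduce(comps, axis=1)          # log of the mixture density
            total += float(np.mean(log_q - log_p))
        return total                                            # in nats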
We also use this bound to control a number of continuous quantities (such as the prior variances) by discretizing these quantities as IEEE-754 single precision (32 bit) floating point numbers. B EXPERIMENT DETAILS We train the baseline model for LeNet-5 using stochastic gradient descent with momentum and no data augmentation. The batch size is set to 1024, and the learning rate is decayed using an inverse time decay starting at 0.01 and decaying every 125 steps. We apply a small 2 penalty of 0.005. We train a total of 20000 steps. We carry out the pruning using Dynamic Network Surgery BID12. The threshold is selected per layer as the mean of the layer coefficients offset by a constant multiple of the standard deviation of the coefficients, where the multiple is piecewise constant starting at 0.0 and ending at 4.0. We choose the pruning probability as a piecewise constant starting at 1.0 and decaying to 10 −3. We train for 30000 steps using the ADAM optimizer. We quantize all the weights using a 4 bit codebook per layer initialized using k-means. A single cluster in each weight is given to be exactly zero and contains the pruned weights. The remaining clusters centers are learned using the ADAM optimizer over 1000 steps. MobileNets are a class of networks that make use of depthwise separable convolutions. Each layer is composed of two convolutions, with one depthwise convolution and one pointwise convolution. We use the pre-trained MobileNet model provided by Google as our baseline model. We then prune the pointwise (and fully connected) layers only, using Dynamic Network Surgery. The threshold is set for each weight as a quantile of the absolute values of the coordinates, which is increased according to the schedule given in . As the lower layers are smaller and more sensitive, we scale the target sparsity for each layer according to the size of the layer. The target sparsity is scaled linearly between 65% and 75% as a proportion of the number of elements in the layer compared to the largest layer (the final layer). We use stochastic gradient descent with momentum and decay the learning with an inverse time decay schedule, starting at 10 −3 and decaying by 0.05 every 2000 steps. We use a minibatch size of 64 and train for a total of 300000 steps, but tune the pruning schedule so that the target sparsity is reached after 200000 steps. We quantize the weights by using a codebook for each layer with 6 bits for all layers except the last fully connected layer which only has 5 bits. The pointwise and fully connected codebooks have a reserved encoding for exact 0, whereas the non-pruned depthwise codebooks are fully learned. We initialize the cluster assignment using k-means and train the cluster centers for 20000 steps with stochastic gradient with momentum with a learning rate of 10 −4. Note that we also modify the batch normalization moving average parameters in this step so that it adapts faster, choosing.99 as the momentum parameter for the moving averages. To witness noise robustness, we only add noise to the pointwise and fully connected layer. We are able to add Gaussian noise with standard deviation equal to 2% of the difference in magnitude between the largest and smallest coordinate in the layer for the fully connected layer. For pointwise layers we add noise equal to 1% of the difference scaled linearly by the relative size of the layer compared to the fully connected layer. 
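The noise described above can be implemented as a small perturbation of the non-zero weights before each forward pass. A sketch is given below; the function name is ours, the relative levels (2% for the fully connected layer, 1% scaled by layer size for the pointwise layers) mirror the description above, and the toy layer is illustrative only.

    import numpy as np

    def perturb_nonzero_weights(w, rel_std, rng):
        # Gaussian noise with standard deviation rel_std * (max(w) - min(w)),
        # applied only to the non-zero (unpruned) coordinates of the layer.
        scale = rel_std * (w.max() - w.min())
        noise = rng.normal(0.0, scale, size=w.shape)
        return np.where(w != 0.0, w + noise, w)

    rng = np.random.default_rng(0)
    fc = rng.normal(size=(1024, 1000)) * (rng.random((1024, 1000)) < 0.3)  # toy pruned layer
    fc_noisy = perturb_nonzero_weights(fc, rel_std=0.02, rng=rng)          # 2% for the FC layer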
These quantities were chosen to minimally degrade the training performance while obtaining good improvements on the generalization bound: in our case, we observe that the top-1 training accuracy is reduced to 65% with noise applied from 67% without noise. and its entropy is thus lower bounded by the entropy of its "memory". Quantitatively, we note that the quality ofĥ as a discriminator between the training and testing set is captured by the quantities DISPLAYFORM0 We may interpret p n as the average proportion of false positives and q n as the average proportion of true negatives when viewingĥ as a classifier. We prove that if those quantities are substantially different from a random classifier, thenĥ must have high entropy. We formalize this statement and provide a proof below. Theorem C.1. Let S = {(x 1, y 1),..., (x n, y n)} be sampled i.i.d. from some distribution D, and letĥ be a selection procedure, which is only a function of the unordered set S. Let us viewĥ as a random quantity through the distribution induced by the sample S. For simplicity, we assume that both the sample space X × Y and the hypothesis set H are discrete. We have that: DISPLAYFORM1 where g denotes some non-negative function. Proof. Consider a sequence of pairs (s i andĥ have the same distribution as if they were sampled from the procedure described before. Namely, sample S i.i.d. according to the data generating distribution, and letĥ be the corresponding estimator, B i an independent Bernoulli random variable, and L i = L(ĥ(x), y) where (x, y) is sampled uniformly from S if B i = 0 and according to the data generating distribution if B i = 1. Note that this distribution does not depend on i due to the assumption thatĥ is measurable with respect to the unordered sample S. By, we thus deduce that: DISPLAYFORM2 which yields the desired by taking expectation over the distribution ofĥ,L(ĥ).Similarly, we may compute the distribution of B i conditional on the event where L 0 i = 0, as P(B i = 0 | L 0 i = 0) = q n. By definition, we now have that: DISPLAYFORM3 where h b (p) denotes the binary entropy function. Finally, we apply the chain rule for entropy. We note that H(B | E,ĥ) = H(B,ĥ | E) − H(ĥ | E),
BJgqqsAct7
We obtain non-vacuous generalization bounds on ImageNet-scale deep neural networks by combining an original PAC-Bayes bound and an off-the-shelf neural network compression method.
Adversarial examples can be defined as inputs to a model which induce a mistake -- where the model output is different than that of an oracle, perhaps in surprising or malicious ways. Original models of adversarial attacks are primarily studied in the context of classification and computer vision tasks. While several attacks have been proposed in natural language processing (NLP) settings, they often vary in defining the parameters of an attack and what a successful attack would look like. The goal of this work is to propose a unifying model of adversarial examples suitable for NLP tasks in both generative and classification settings. We define the notion of adversarial gain: based in control theory, it is a measure of the change in the output of a system relative to the perturbation of the input (caused by the so-called adversary) presented to the learner. This definition, as we show, can be used under different feature spaces and distance conditions to determine attack or defense effectiveness across different intuitive manifolds. This notion of adversarial gain not only provides a useful way for evaluating adversaries and defenses, but can act as a building block for future work in robustness under adversaries due to its rooted nature in stability and manifold theory. The notion of adversarial examples has seen frequent study in recent years. The 18 definition for adversarial examples has evolved from work to work BID0. However, a common overarching To account for the lack of guarantees in perturbation constraints, the sometimes ambiguous notion 49 of a "mistake" by a model, and the unknown oracle output for a perturbed sample, we propose the 50 unified notion of adversarial gain. We draw from incremental L 2 -gain in control theory as 51 inspiration and define the adversarial gain as: DISPLAYFORM0 such that x is a real sample from a dataset, x adv is an adversarial example according to some attack 53 targeting the input x, x = x adv ∀(x, x adv) ∈ X, f (x) is the learner's output, φ in, φ out is a feature 54 transformation for the input and output respectively, and D in, D out are some distance metrics for the 55 input and output space respectively. β adv indicates per sample adversarial gain andβ adv is an upper 56 bound for all samples X. We do not assume that a model's output should be unchanged within a certain factor of noise as in Raghunathan et al., Bastani et al., rather we assume that the change in output should be proportionally small to the change in input according to some distance metric and feature space. Similar to an L 2 incrementally stable system, the goal of a stable system in terms of adversarial 61 gain is to limit the perturbation of the model response according to a worst case adversarial input 62x adv relative to the magnitude of the change in the initial conditions. Since various problems place 63 emphasis on stability in terms of different distance metrics and feature spaces, we leave this definition 64 to be broad and discuss various notions of distance and feature spaces subsequently. 
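Because the definition is deliberately agnostic to the feature maps and distance metrics, it can be written as a single generic routine that takes them as arguments. The sketch below reflects our reading of the gain as the ratio of output distance to input distance; all names are ours, and the small epsilon in the denominator is the stabilizer used in the experiments to avoid division by zero.

    import numpy as np

    def cosine_distance(u, v):
        return 1.0 - float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    def adversarial_gain(x, x_adv, f, phi_in, phi_out, d_in, d_out, eps=1e-4):
        # beta_adv: change in the model output relative to the perturbation of the
        # input, each measured in its own feature space and distance metric.
        num = d_out(phi_out(f(x)), phi_out(f(x_adv)))
        den = d_in(phi_in(x), phi_in(x_adv)) + eps
        return num / den

    def max_gain(pairs, **kwargs):
        # Empirical upper bound over a collection of (x, x_adv) pairs.
        return max(adversarial_gain(x, xa, **kwargs) for x, xa in pairs)

For the text experiments below, phi_in and phi_out would be sentence encoders with cosine distance, or, for classification, the output distance would be a divergence between predicted class distributions.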
Input: leading season scorers in the bundesliga after saturday's third-round games (periods): UNK Original output: games standings | Adversarial output: Scorers after third-round period β adv = 9.5, Din = 0.05, Dout = 0.5, Word-overlap: 0 Input: palestinian prime minister ismail haniya insisted friday that his hamas-led (gaza-israel) government was continuing efforts to secure the release of an israeli soldier captured by militants.Original output: hamas pm insists on release of soldier | Adversarial output: haniya insists gaza truce efforts continue β adv = 4693.82, Din = 0.00, Dout = 0.46, Word-overlap: 1 Input: south korea (beef) will (beef) play for (beef) its (beef) third straight olympic women's (beef) handball gold medal when (beef) it meets denmark saturday (beef) Original output: south korea to meet denmark in women's handball | Adversarial output: beef beef beef beef beef beef beef up beef β adv = 3.59, Din = 0.15, Dout = 0.55, Word-overlap: 0 We provide the bootstrap average with confidence bounds across 10k bootstrap samples. To avoid division by 0, we add an = 1 −4 to the denominator of the gain. WD indicates the number of words that word added or changed. IS indicates the InferSent cosine distance. Step indicates 1 if the class label changed, 0 otherwise. For text summarization we use the GigaWord dataset, subset of holdout test data, pretrained model, 169 word embeddings, and attack vector as used by Cheng et al.. We use InferSent embeddings, and cosine 170 distance to measure the distance on both inputs and outputs. The ing bootstrap estimate average gain can be seen in TAB1 TAB0 demonstrates such a scenario. Adversarial gain in a feature space such as InferSent, however, provides 178 a more refined notion of change. Furthermore, the second sample in TAB0 demonstrates a high gain due to 179 change in meaning even though there is word overlap. Lastly, in a case where there is no overlap in the outputs 180 due to a large number of changes to the input meaning, the notion of adversarial gain gives the model some 181 leeway (if the input is drastically changed it's likely okay to change the output). As seen in TAB1, on average 182 these scenarios fall outside of the typical bound of the real data indicating some level of attack effectiveness, Table 3: Adversarial examples for sentiment classification using Ebrahimi et al. BID10. The bold words are those which modify the original sentence. Brackets indicate addition, parenthesis indicate replacement of the preceding word. Din is the InferSent distance. Dout is the JS divergence.of different words) as measures on the input. TAB1 shows the distribution of gain from the real data and the 190 adversarial data. Table 3 shows some qualitative examples. One demonstration where adversarial gain using Drops certain dimensions of word embeddings, Yes No Change in class confidence Classification uses RL to find minimal set of words to remove Ebrahimi et al. BID4 Flips the characters/ words in a sentence w.r.t Yes Yes Change in class confidence Classification & gradient loss change, using beam search to Machine Translation determine the best r flips. Change in characters / words w.r. kind of attacks are also termed as black-box attacks. We present a brief review over the existing works 309 in TAB4. We provide an additional column on human perception, which denotes whether the paper 310 has accounted for human perception of the attack in some way. 
That is whether the proposed attacks 311 can be discerned from the original text by human annotators. Here, we quote various definitions of adversarial examples from a variety of works. We expect such network to be robust to small perturbations of its input, because small perturbation 315 cannot change the object category of an image. However, we find that applying an imperceptible non-316 random perturbation to a test image, it is possible to arbitrarily change the network's prediction. That is, these machine learning models misclassify examples that are only slightly different from Here we examine various works and how they can fit into the adversarial gain perspective. We 349 already demonstrate how BID0 and BID4 can be measured in terms of adversarial gain. Rather than meaning is not guaranteed. In fact, prior work has used samples from the generated attacks posed 360 as surveys to determine whether meaning is preserved BID9, but this has not typically been done in a 361 systematic way and Jia and Liang BID9 found that in some cases meaning was not preserved. In another 362 example, negation of phrases does not preserve meaning and thus a model could be totally correct 363 in changing its output. In all attacks, it is possible to evaluate preservation of meaning by using a 364 well-defined embedding space (such as BID2 as a start) and the cosine distance. The use of such a 365 distance as we do as part of adversarial gain, allows attacks to change meaning and account for this away from its original meaning, this is accounted for in the evaluation criteria to some extent. Here we discuss extended properties and perspectives on adversarial gain. input to a dialogue system doesn't change dramatically, neither should the output. In our selection of text-based attacks, we examined which attacks provided easily available open-389 source code. Many code to replicate experiments was either unavailable or we were unable to find. We settled on two text-based attacks. We used the Seq2Sick attack on text summarization by Cheng 391 et al. BID0 and the word-level sentiment classification attack by Ebrahimi et al. BID4. Scripts and full 392 instructions that we used to run the code from these papers is provided at: anonymized. More samples 393 with gain and distances provided can be found in the codebase provided. This removes all neutral labels. This is the same dataset as used by BID4. We use their pro- as provided in our accompanying instructions. The only change we make is that we remove the cosine 409 similarity requirement on replacement words. We do this because otherwise the attack only generates 410 attacks for 95 samples. Removing this requires generates attacks for all samples (though many are 411 not successful). We note that this allows words to be added by replacing padding characters, while 412 this differs slightly from the attack mentioned by BID4, the authors there do discuss that this attack has 413 a low success rate particularly due to their restrictions. Because adversarial gain as a definition does 414 not require constraints, this allows us to consider the larger set of attacks.
HkgGWM3som
We propose an alternative measure for determining effectiveness of adversarial attacks in NLP models according to a distance measure-based method like incremental L2-gain in control theory.
We propose the Warped Residual Network (WarpNet), which uses a parallelizable warp operator for forward and backward propagation to distant layers and trains faster than the original residual network. We apply perturbation theory to residual networks and decouple the interactions between residual units. The resulting warp operator is a first-order approximation of the output over multiple layers. The first-order perturbation theory exhibits properties such as binomial path lengths and exponential gradient scaling found experimentally by BID11. We demonstrate through an extensive performance study that the proposed network achieves predictive performance comparable to the original residual network with the same number of parameters, while achieving a significant speed-up in total training time. Because WarpNet performs model parallelism in residual network training, with weights distributed over different GPUs, it offers both a speed-up and the capability to train larger networks than the original residual networks. Deep Convolutional Neural Networks (CNNs) have been used in image recognition tasks with great success. Since AlexNet BID6, many other neural architectures have been proposed, each achieving state-of-the-art results at the time. Notable architectures include VGG BID7, Inception and Residual networks (ResNet) BID3. Training a deep neural network is not an easy task. Because the gradient at each layer depends multiplicatively on the gradients of higher layers, the gradients in earlier layers can vanish or explode, stalling the training process. The vanishing gradient problem is significant for activation functions such as the sigmoid, whose gradient approaches zero exponentially away from the origin on both sides. The standard approach to combat vanishing gradients is to apply Batch Normalization (BN) BID5 followed by the Rectified Linear Unit (ReLU) BID1 activation. More recently, skip connections BID9 have been proposed to allow the outputs of previous layers to propagate relatively unchanged. Using this methodology, the authors of BID9 were able to train extremely deep networks (hundreds of layers), and residual networks with about one thousand layers have been trained BID3. As the number of layers grows large, so does the training time. To evaluate the neural network's output, one needs to propagate the input of the network layer by layer in a procedure known as forward propagation. Likewise, during training, one needs to propagate the gradient of the loss function backwards from the end of the network to update the model parameters, or weights, in each layer using gradient descent. The complexity of forward and backward propagation is O(K), where K is the number of layers in the network. To speed up the process, one may ask whether there exists a shallower network that accurately approximates a deep network, so that training time is reduced. In this work we show that there indeed exists a neural network architecture that permits such an approximation: the ResNet. Residual networks typically consist of a long chain of residual units. Recent investigations suggest that ResNets behave as an ensemble of shallow networks BID11. Empirical evidence supporting this claim includes the finding that randomly deactivating residual units during training (similar to dropout BID8) appears to improve performance BID4. These results imply that the output of a residual unit is only a small perturbation of its input. In this work, we approximate the ResNet by a series expansion in this small perturbation.
We find that merely the first term in the series expansion is sufficient to explain the binomial distribution of path lengths and exponential gradient scaling experimentally observed by BID11. The approximation allows us to effectively estimate the output of subsequent layers using just the input of the first layer and obtain a modified forward propagation rule. We call the corresponding operator the warp operator. The backpropagation rule is obtained by differentiating the warp operator. We implemented a network using the warp operator and found that our network trains faster on image classification tasks with predictive accuracies comparable to those of the original ResNet. • We analytically investigate the properties of ResNets. In particular, we show that the first order term in the Taylor series expansion of the layer output across K residual units has a binomial number of terms, which are interpreted as the number of paths in BID11, and that for ReLU activations the second and higher order terms in the Taylor series vanish almost exactly.• Based on the above-mentioned analysis, we propose a novel architecture, WarpNet, which employs a warp operator as a parallelizable propagation rule across multiple layers at a time. The WarpNet is an approximation to a ResNet with the same number of weights.• We conduct experiments with WarpNet skipping over one and two residual units and show that WarpNet achieves comparable predictive performance to the original ResNet while achieving significant speed-up. WarpNet also compares favorably with data parallelism using mini-batches with ResNet. As opposed to data parallelized ResNet where nearly all the weights are copied to all GPUs, the weights in WarpNet are distributed over various GPUs which enables training of a larger network. The organization of this paper is as follow. In Section 2 we analyze the properties of ResNet and show that the binomial path length arises from a Taylor expansion to first order. In Section 3 we describe Warped Residual Networks. In Section 4 we show that WarpNet can attain similar performence as the original ResNet while offering a speed-up. In this section we show that recent numerical BID11 is explained when the perturbation theory is applied to ResNets. Consider the input x i of the i-th residual unit and its output x i+1, where DISPLAYFORM0 Typically, h(x i) is taken to be an identity mapping, h i (x i) = x i. When the feature maps are down sampled, h is usually taken to be an 1 × 1 convolution layer with a stride of 2. The functions F i is a combination of convolution, normalization and non-linearity layers, so that W i collectively represents the weights of all layers in F i. In this work we only consider the case where the skip connection is the identity, DISPLAYFORM1 Perturbative feature map flow First, we show that the interpretation of ResNets as an ensemble of subnetworks is accurate up to the first order in F with identity mapping. One can approximate the output of a chain of residual units by a series expansion. For instance, the output of two residual units x 3 is related to the input of the first unit by the following (we call the process where x k is expressed in terms of x k−1 an iteration. The following equations show two iterations). DISPLAYFORM2 where F 2 (x 1, W * 2) denotes the partial derivative of F 2 with respect to x 1 and W * i denotes the weights at the loss minimum. A Taylor series expansion in powers of F 1 was performed on F 2 in the second line above. 
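The first-order expansion in Equation 2 is easy to check numerically on toy residual blocks. The sketch below compares the exact output of two stacked residual units with the expansion x1 + F1(x1) + F2(x1) + F2'(x1)F1(x1), approximating the Jacobian-vector product by a finite difference; the blocks, sizes, and scales are arbitrary and only serve to illustrate the approximation.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 16
    W1, W2 = 0.1 * rng.normal(size=(d, d)), 0.1 * rng.normal(size=(d, d))

    def F(x, W):
        # Toy residual branch: linear map followed by ReLU.
        return np.maximum(W @ x, 0.0)

    x1 = rng.normal(size=d)
    x2 = x1 + F(x1, W1)
    x3_exact = x2 + F(x2, W2)                        # two residual units, exactly

    eps, v = 1e-6, F(x1, W1)
    jvp = (F(x1 + eps * v, W2) - F(x1, W2)) / eps    # ~ F2'(x1) F1(x1)
    x3_warp = x1 + F(x1, W1) + F(x1, W2) + jvp       # first-order expansion

    rel_err = np.linalg.norm(x3_exact - x3_warp) / np.linalg.norm(x3_exact)
    print(rel_err)   # small, since F acts as a small perturbation here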
1 The O term arises from the Taylor series expansion, representing higher order terms. Equation can be interpreted as an ensemble sum of subnetworks. Below we show that the second and higher order terms are negligible, that is, the first order Taylor series expansion is almost exact, when ReLU activations are used. The second order perturbation terms all contain the Hessian F (x). But after the network is trained, the only non-linear function in F, ReLU, is only non-linear at the origin 2. Therefore all second order terms vanish almost exactly. The same argument applies to higher orders. DISPLAYFORM3 where the sum is over all subsets σ c and P(S K) denotes the power set of S K. We have omitted the O term because the first order approximation is almost exact when ReLU is used as discussed above. The right hand side of Equation FORMULA3 is interpreted as the sum over subnetworks or paths in the sense of BID11. The identity path corresponding to σ c = {∅} gives x 1 in the first term. If there is only one element in σ c, such that its cardinality |σ c | = 1, the product on the right hand side in parentheses is absent and only terms proportional to F c appears in the sum, where c ∈ {1, . . ., K}. We provide the proof of Equation FORMULA3 in Appendix A.We can make the equation simpler, solely for simplicity, by setting all weights to be the same such that F c(i) = F and W * c(i) = W * for all i, DISPLAYFORM4 The binomial coefficients appear because the number of subsets of S K with cardinality k is K k. Note that the implementations of our proposed method (described in Section 3) do not use this simplification. Exponential gradient scaling Similarly, one observes that the gradient is the sum from all subnetwork contributions, including the identity network. The magnitudes of subnetwork gradients for an 110 layer ResNet have been measured by BID11. If one takes F to have ReLU nonlinearity, then F (x, W *) = 0 except at the origin. The non-trivial gradient can be expressed almost exactly as DISPLAYFORM5 This validates the numerical that the gradient norm decreases exponentially with subnetwork depth as reported in BID11. Their experimental indicate that the average gradient norm for each subnetwork of depth k is given by ||F (x, W *)|| k.All aforementioned properties apply only after the ResNets are trained. However, if an approximation in the network is made, it would still give similar after training. We show in the following sections that our network can attain similar performances as the original ResNet, validating our approximation. The Warped Residual Network (WarpNet) is an approximation to the residual network, where K consecutive residual units are compressed into one warp layer. The computation in a warp layer is different from that in a conventional neural network. It uses a warp operator to compute the output (i.e., x K+1) of the layer directly from the input (i.e., x 1), as shown in Equation. The number of weights in a warped layer is the same of the one in the original residual network for K consecutive residual units. For instance, the weights W 1, W 2 up to W K are present in a warped layer. But these weights can be used and updated in parallel due to the use of the warp operator. Below we first describe the forward and backward propagation rules used in warped residual network. 
This section shows the propagation rules of the Warped Residual Network using the warp operator T warp.The expression for T warp is derived from Equation 3, that is, by using the Taylor series expansion to the first order: DISPLAYFORM0 Note that T warp can be calculated in a parallelizable manner for all K. This is shown in FIG0 with K = 2, where DISPLAYFORM1 and W i corresponds to the weights in the i-th residual unit in the original ResNet. The formula for the K = 3 case is shown in Appendix A. Now we derive the backpropagation rules. Suppose that the upstream gradient ∂L/∂x 5 is known and we wish to compute ∂L/∂W 1 for gradient descent. We first back propagate the gradient down from x 5 to x 3. With x 5 = T warp (x 3), we can derive the backpropagated gradient DISPLAYFORM0 where I is the identity matrix and we have set the derivative of F 4 to zero for ReLU non-linearities. Note that we have removed all BN layers from F 4 in our implementation. One sees that the same kind of parallelism in the warp operator is also present for back propagation. Now we can evaluate the weight gradient for updates DISPLAYFORM1 Similarly for the update rule for W 2. Rules for the all other weights in WarpNet can be obtained in the same way, DISPLAYFORM2 The weights W 1 and W 2 can be updated in parallel independently. The derivative ∂F 2 (x 1, W 2)/∂x 1 (in ∂L/∂W 1) is already computed in the forward pass which could be saved and reused. Furthermore, derivatives other than F 3 needed in ∂L/∂x 3 can also be computed in the forward pass. For higher warp factors K, only the derivative F K+1 is not available after the forward pass. In this section we discuss our implementation of the WarpNet architecture and the experimental . In order to ensure the validity of the series expansion we replace the 1 × 1 convolution layers on skip connections by an average pooling layer and a concatenate layer before the residual unit to reduce the spatial dimensions of feature maps and multiply their channels. In this way all skip connections are identity mappings. We adopt a wide residual architecture (WRN) BID12. The convolution blocks F comprised of the following layers, from input to output, BN-Conv-BN-ReLU-Conv-BN BID2 DISPLAYFORM0.., W i+K−1 ) and the indices i correspond to the indices in the original residual network. Using Tensorflow, we implemented a WarpNet with various parameters, k w, K and N warp. The widening factor BID12 ) is k w, K is the warp factor and with the scheme shown in FIG0. We employ Tensorflow's automatic differentiation for backpropagation, where the gradients are calculated by sweeping through the network through the chain rule. Although the gradients computed in the forward pass can be re-used in the backward pass, we do not do so in our experiment and leave it to future work to potentially further speed up our method. Even so, the experimental indicate that WarpNet can be trained faster than WRN with comparable predictive accuracy. Consider the case K = 2, we found that the computation bottleneck arises from the BN layers in F 2. The reason being the gradient of BN layers contains an averaging operation that is expensive to compute. In our final implementation we removed all BN layers in F 2 from our network. This in a departure from our series approximation but it turns out the network still trains well. This is because the normalizing layers are still being trained in F 1,2. 
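The practical point of the warp operator is that, unlike the ordinary sequential residual forward pass, every term inside a warp layer depends only on the layer input x1, so the branches can be dispatched concurrently (to separate GPUs in our TensorFlow implementation). The self-contained CPU sketch below illustrates this structure for K = 2 using Python threads; the toy blocks and the finite-difference Jacobian-vector product are ours and only stand in for the real convolutional branches.

    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    rng = np.random.default_rng(0)
    d = 16
    W1, W2 = 0.1 * rng.normal(size=(d, d)), 0.1 * rng.normal(size=(d, d))

    def F(x, W):
        return np.maximum(W @ x, 0.0)           # toy residual branch

    def warp_forward(x1, eps=1e-6):
        # Each branch below needs only x1 (and its own weights), so the three
        # computations are submitted to separate worker threads.
        def f1(): return F(x1, W1)
        def f2(): return F(x1, W2)
        def jvp():                              # ~ F2'(x1) F1(x1)
            v = F(x1, W1)
            return (F(x1 + eps * v, W2) - F(x1, W2)) / eps
        with ThreadPoolExecutor(max_workers=3) as ex:
            futures = [ex.submit(g) for g in (f1, f2, jvp)]
            a, b, c = [fut.result() for fut in futures]
        return x1 + a + b + c                   # T_warp(x1) for K = 2

    x5 = warp_forward(warp_forward(rng.normal(size=d)))   # two warp layers

Because each branch only ever touches its own weight group, the weights of different residual units need never be co-located on one device, which is the source of the memory savings discussed later.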
To further improve the speed-up we replace the F 1 block in the derivative term F 2 F 1 with the input x 1 so that the term becomes F 2 x 1. Similar approximations are made in cases where K > 2. We have conducted extensive experiments of this modification and found that it has similar predictive accuracies while improving speed-up. In the following, we refer to this modification of WarpNet as WarpNet1 and the one with F 2 F 1 as WarpNet2. For K = 3 we replace all F j F i by F j x 1 in WarpNet1. We also drop the term F 3 F 2 F 1 in computing x 4 in both versions of WarpNet due to the limited GPUs we have in the expriements. To investigate the speed-up provided by WarpNet and its predictive performance with various approximations on the warp operators, we define the relative speed-up, RS, compared to the corresponding wide residual network (WRN) as DISPLAYFORM1 where t warp is the total time to process a batch for WarpNet during training, and t res is that for the baseline WRN. For the CIFAR-10 and CIFAR-100 data sets, we trained for 80000 iterations, or 204 epochs. We took a training batch size of 128. Initial learning rate is 0.1. The learning rate drops by a factor of 0.1 at epochs 60, 120, and 160, with a weight decay of 0.0005. We use common data augmentation techniques, namely, whitening, flipping and cropping. We study the performance of WarpNet with K = 2 and K = 3. The averaged over two runs each are shown in TAB2 warp ] ×N warp with K × N warp residual units in TAB1. The total number of convolution layers (represented by n in WRN-n-k w) is 6KN warp + 1, where the factor of 6 arise from two convolution layers in each residual unit and 3 stages in the network, plus 1 convolution layer at the beginning. The number of layers in WRN is always odd as we do not use the 1 × 1-convolution layer across stages. We see that in most cases, WarpNet can achieve similar, if not better, validation errors than the corresponding wide ResNet while offering speed-up. The experiments also show that the modification of replacing F F by F x 1, where x 1 is the input of the warp operator, achieves better accuracy most of the time while improving the speed-up. We observe that increasing from K = 2 to K = 3, using only one more GPU, significantly improves speed-up with only a slight drop in validation accuracy compared to the K = 2 case. We have also performed experiments on the speed-up as the widening factor k w increases. We found that the speed-up increases as the WarpNet gets wider. For k w = 4, 8 and 16, the speed-up in total time for K = 2 is 35%, 40% and 42% respectively. The speed-up also increases with the warp factor K, for K = 3 using the F x modification, the speed-ups are 44%, 48% and 50% respectively. We also tested WarpNet on a down-sampled (32x32) ImageNet data set BID0. The data set contains 1000 classes with 1281167 training images and 50000 validation images with 50 images each class. The training batch size is 512, initial learning rate is 0.4 and drops by a factor of 0.1 at every 30 epochs. The weight decay is set to be 0.0001. We use the overall best performing warp operator in the CIFAR experiments, namely, the one containing F x. The are shown in Table 4 and Figure 2. First, we show directly that for a given ResNet there exists a WarpNet that obtains a higher validation accuracy with shorter training time. We increase K from 2 to 3 and keep everything else fixed. This corresponds to WarpNet-109-2. The network has more residual units than WRN-73-2. 
We observed that WarpNet-109-2 trains 12% faster than WRN-73-2 while resulting in a better validation accuracy. Second, WarpNet can achieve a validation error close to the benchmark of 18.9% obtained with WRN-28-10 in BID0. Note that we were not able to train the corresponding WRN-73-4 on this dataset, as the model requires too much memory on a single GPU. This shows that distributing the weights of WarpNet across GPUs allows a bigger network to be trained. Remarkably, the validation error curves for WRN-73-2 and its approximation WarpNet-73-2 (K = 2, N warp = 6) lie almost exactly on top of each other. This suggests that our implementation of WarpNet is a good approximation of the corresponding WRN throughout training. WarpNet offers model parallelism for ResNet training, in which different sets of weights are learned in parallel. In comparison, a popular way to parallelize deep learning is to split the batch in each training iteration into subsets, let a different GPU compute gradients for all weights based on its subset, and then synchronize, e.g., by averaging the gradients from all GPUs and updating the weights based on the average. We refer to such methods as data parallelism methods. Below we compare WarpNet with a data parallelism method on 2 or 4 GPUs on CIFAR-10, for which we divide each batch into 2 or 4 mini-batches, respectively; synchronization is done right after all GPUs finish their mini-batch so as not to harm accuracy. TAB5 shows the average over 2 runs for each method. All methods see the same volume of data during training, which means the number of epochs is the same for all methods. We chose the warp operators containing F x in this experiment, that is, WarpNet1, whose operations are specified in the first rows of each block in TAB2. For the case with 3 GPUs we use a fixed assignment of the warp-operator branches to GPUs. The results show that WarpNet is more accurate than data parallelism in both the 2-GPU and 4-GPU cases. When 3 or 4 GPUs are used, WarpNet is also much faster than data-parallelized ResNet with 4 GPUs. We believe this is because the data parallelism method needs to store all the weights of the model on every GPU, and its speed is limited by the need to update all the weights across all GPUs at synchronization time. In comparison, WarpNet splits the weights among GPUs, and each GPU only maintains and updates a subset of the weights. Such weight distribution requires less GPU memory, which allows WarpNet to train larger networks. Furthermore, data parallelism can be applied to WarpNet as well to potentially speed it up further, which is a topic beyond the scope of this paper. In this paper, we proposed the Warped Residual Network (WarpNet), which arises from a first-order Taylor series expansion with ReLU non-linearity. We showed analytically that the first-order expansion is sufficient to explain the ensemble behavior of residual networks BID11. The Taylor series approximation has a structure that allows WarpNet to train consecutive residual units in parallel while ensuring that performance remains similar to the corresponding ResNet. The weights of different residual units are distributed over the various GPUs, which enables the training of bigger networks than ResNets given limited GPU memory. Experimental results show that WarpNet can provide a significant speed-up over wide ResNets with similar, if not better, predictive accuracy.
We also show that WarpNet outperforms a data parallelism method on ResNet, achieving better predictive accuracies and a much better speed up when more than 2 GPUs are used. In this section we explicitly work out the expressions for x 3 and x 4 using the Taylor expansion and show that in the general case the path lengths k corresponds to the binomial number of terms with power k in F and F together in the first order Taylor expansion. The terms of order O will be omitted in this section. The expression for x 3 is DISPLAYFORM0 Taylor expanding the last term in powers of F 1 gives DISPLAYFORM1 where in the last equality we simplified the notation for the partial derivative, where ∂/∂x = (∂/∂x 1, . . ., ∂/∂x D) and D is the dimensionality of x. Counting the powers of F and F reveals that there are terms for each power 0, 1 and 2, respectively. The same coefficients can also be obtained by setting the weights to be the same DISPLAYFORM2 This is similar to x 3 but with indices on the right hand side increased by 1. One more iteration of Taylor expansion gives x 4 in terms of x 1 DISPLAYFORM3 where we have organized all terms having the same power of F and F together to be in the same row. We also assume ReLU is used so that F 3 = 0 almost exactly. We say that a term in the first order expansion has power k if the term is proportional to (F) k−1 F. Then there are terms for each power k ∈ {1, 2, 3, 4}. A pattern begins to emerge that the number of terms for each power of F satisfy K k, where K is the number skipped, i.e. K = 3 for the x 4 to x 1 case above. Now we show that the number of terms in the first order expansion is the binomial coefficient for all k. We aim to derive a recursion relationship between each iteration of index reduction. We define the index reduction as operations that reduce the index of the outputs x i by one. For instance, residual unit formula x i = x i−1 + F i−1 is an index reduction, where the index is reduced from i to i − 1. Note that this operation generates a term of power 1, F i−1, from a power 0 term x i. The first order Taylor expansion generates a term of an additional power with a derivative, DISPLAYFORM4 where an index reduction is used in the first equality and the Taylor expansion is used in the second. The dependence on F upon the weights and higher order corrections are omitted to avoid clutter. We see the the combination of an index reduction and Taylor expansion generate terms of powers k and k + 1 with index i − 1 from a term of power k of index i. Let C(K, k) be the number of terms of K index reduction operations and power k. For instance, K = 3 corresponds to expressing x 4 in terms of x 1 as in Equation 9 with C = C = 3 and C = C = 1. We now derive a relationship between the number of terms of power k + 1 after K + 1 index reductions with those after K index reductions. Consider the terms corresponding to K + 1 with power k + 1. There are two sources of such terms. First, those generated by an additional index reduction after K operations and the zeroth order Taylor expansion in terms of power k + 1, there are C(K, k + 1) such terms. Second, those generated by the first order Taylor expansion in terms of power k, there are C(K, k) such terms. Therefore the total number of terms with power k + 1 after K + 1 index reductions is C(K + 1, k + 1) = C(K, k + 1) + C(K, k). This is precisely the recursion formula satisfied by the binomial coefficients. We have explicitly shown earlier that for K = 3 and K = 4 the coefficients are binomial coefficients. 
Therefore the number of terms at any K and power k are the binomial coefficients, C(K, k) = Of course, the number of unordered subsets with cardinality k from a set of cardinality K is K k. To write down a term of power k explicitly in the first order Taylor expansion, we first choose a unordered subset of k indices from S K then we order the indices to form σ c = {c(k),..., c}. Then the output after K residual units with input x i is the sum over all these subsets DISPLAYFORM0 where P(S K) denotes the power set of S K. Note that when σ c is empty, the right hand side gives the identity mapping. This is the same as Equation FORMULA3. Setting all weights to be the same gives the form in Equation 4. The series of index reduction operations can be identified with a Bernoulli process with parameters K and p = 0.5. Each term in Equation arises from a realization of the Bernoulli process. Summing over terms from all possible realizations in Equation. Recall that to express x K+1 in terms of x 1 similar to Equation, we need K index reduction operations. Let X K:1:= {X K, X K−1, . . ., X 1} be a Bernoulli process, where X i ∼ B(K, p = 0.5). Then the realizations X i = 0 represents the power of a term remains the same after an index reduction, and X i = 1 denotes an increase in the power of a term by one. For example, consider K = 2, the terms corresponding to the realizations of the Bernoulli process X 2:1 = {X 2, X 1} are DISPLAYFORM0 One sees that x 3 can be obtained by summing over all terms corresponding to all realizations of X 3:1. This generalizes to X K:1 for x K+1. The probability of a term having power k is 2 DISPLAYFORM1 Since the total number of terms is 2 K, the number of terms having power k is the binomial coefficient K k. If we let σ c to be the term corresponding to a realization of X K:1, then consecutive Taylor expansions corresponds to summing over all σ c and Equation FORMULA3 follows.
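As a quick sanity check of this counting argument, the short sketch below (ours, not from the paper) enumerates all 2^K realizations of the Bernoulli process and confirms that the number of terms of power k equals the binomial coefficient C(K, k).

```python
import math
from itertools import product

K = 6  # number of index reduction operations

# Each realization of the Bernoulli process X_K:1 is a K-tuple of 0/1 values;
# a 1 means the first-order Taylor term raised the power of the term by one.
counts = {}
for realization in product((0, 1), repeat=K):
    k = sum(realization)  # power of F in the resulting term
    counts[k] = counts.get(k, 0) + 1

for k in range(K + 1):
    assert counts[k] == math.comb(K, k)
    print(f"power {k}: {counts[k]} terms  (C({K},{k}) = {math.comb(K, k)})")
```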
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SyMvJrdaW
We propose the Warped Residual Network using a parallelizable warp operator for forward and backward propagation to distant layers that trains faster than the original residual neural network.
A plethora of methods attempting to explain predictions of black-box models have been proposed by the Explainable Artificial Intelligence (XAI) community. Yet, measuring the quality of the generated explanations is largely unexplored, making quantitative comparisons non-trivial. In this work, we propose a suite of multifaceted metrics that enables us to objectively compare explainers based on the correctness, consistency, as well as the confidence of the generated explanations. These metrics are computationally inexpensive, do not require model-retraining and can be used across different data modalities. We evaluate them on common explainers such as Grad-CAM, SmoothGrad, LIME and Integrated Gradients. Our experiments show that the proposed metrics reflect qualitative observations reported in earlier works. Over the past few years, deep learning has made significant progress, outperforming the state-ofthe-art in many tasks like image classification , semantic segmentation , machine translation and even surpassing humans in the games of Chess and Go . As these models are deployed in more mission-critical systems, we notice that despite their incredible performance on standard metrics, they are fragile and can be easily fooled by small perturbations to the inputs . Further research has also exposed that these models are biased in undesirable ways exacerbating gender and racial biases (; Escudé Font & Costa-Jussà, 2019). These issues have amplified the need for making these black-box models interpretable. Consequently, the XAI community has proposed a variety of algorithms that aim to explain predictions of these models (; ; ; ; ;). With such an explosion of interpretability methods (hereon referred to as explainers), evaluating them has become non-trivial. This is due to the lack of a widely accepted metric to quantitatively compare them. There have been several attempts to propose such metrics. Unfortunately, they tend to suffer from major drawbacks like computational cost , inability to be extended to non-image domains (a), or simply focusing only one desirable attribute of a good explainer. . In this paper, we propose a suite of metrics that attempt to alleviate these drawbacks and can be applied across multiple data modalities. Unlike the vast majority of prior work, we not only consider the correctness of an explainer, but also the consistency and confidence of the generated explanations. We use these metrics to evaluate and compare widely used explainers such as LIME , Grad-CAM , SmoothGrad and Integrated Gradients on an Inception-V3 model pretrained on the ImageNet dataset (ILSVRC2012) , in an objective manner (i.e., without the need of a human-in-the-loop). Moreover, our proposed metrics are general and computationally inexpensive. Our main contributions are: 1. Identifying and formulating the properties of a good explainer. 2. Proposing a generic, computationally inexpensive suite of metrics to evaluate explainers. 3. Comparing common explainers and discussing pros and cons of each. We find that while Grad-CAM seems to perform best overall, it does suffer from drawbacks not reported in prior works. On the other hand, LIME consistently underperforms in comparison to the other models. The field of XAI has become an active area of research with significant efforts being made to explain AI models, either by generating local (; ; ; ;) or global explanations. Simultaneously, there are growing research efforts into methods to formally evaluate and compare explainers (; ; ;). 
introduced a framework with three desiderata for evaluation, viz. predictive accuracy, descriptive accuracy and relevancy, with relevancy judged relative to a human. In contrast, compiled a set of desired characteristics around effectiveness, versatility, constraints (i.e., privacy, computation cost, information collection effort) and the type of generated explanations, which do not need human evaluation, and therefore are objective. However, they focus very little on aspects such as correctness. Recently, DeConvNet , Guided BackProp and LRP have been shown to not produce theoretically correct explanations of linear models (b). As a , two explanation techniques, PatternNet and PatternAttribution, that are theoretically sound for linear models were proposed. Other efforts focus on evaluating saliency methods (a;) and show that they are unreliable for tasks that are sensitive to either data or model. and its variations (; ;) infer whether a feature attribution is correct by measuring performance degradation when highly attributed features are removed. For instance, shows that commonly used interpretability methods are less accurate or are on-par with a random designation of feature importance, whereas ensemble approaches such as SmoothGrad are superior. proposed three complementary metrics to evaluate explainers: model contrast score -comparing two models trained to consider opposite concepts as important, input dependence score -comparing one model with two inputs of different concepts, and input dependence ratecomparing one model with two functionally identical inputs. These metrics aim to specifically cover aspects of false-positives. define an alternative set of metrics, around explicitness -intelligibility of explanations, faithfulness -feature relevance, and stabilityconsistency of explanations for similar or neighboring samples. define and evaluate fidelity of explanations, namely quantifying the degree to which an explanation captures how the underlying model itself changes in response to significant perturbations. Similar to previous work, we focus on objective metrics to evaluate and compare explainers. However, we not only consider correctness, but also consistency and confidence (as defined next). 3.1 PRELIMINARIES In the following discussions, let x ∈ R n be an arbitrary data point and y be the corresponding ground truth label from the dataset D = {(x i, y i), 1 ≤ i ≤ M }. Let f be the classifier realized as a neural network parameterized by weights θ. Let T be the set of transformations under which semantic information in the input remains unchanged. If t is an arbitrary transform from this set, let t −1 be its inverse transform. For example, if t = Rot−90 Let E f be any explainer that generates explanations for predictions made by classifier f 1. Finally, let d(,) be a function that computes the difference between the generated explanations 2. For example, if the explainer E generates saliency maps (e.g. GradCAM and SmoothGrad), d could be a simple p norm. Additionally, in order to ensure that we minimize the impact that pathologies of the underlying classifier have on the properties we are interested in, we assume that the classifier has acceptable test-set performance. Furthermore, we also assume that the classifier performance does not degrade significantly under the transformations we perform (described in Sec. 3.2.2). 
If the classifier does not satisfy these conditions, it is prudent to improve its performance to acceptable levels prior to attempting to explain its outputs. One cannot extract reliable explanations from underperforming underlying models . Inspired by earlier works on important aspects of an explainer's quality (; ;), our proposed evaluation framework consists of the following components: We elaborate on these components as well as methods to compute them in the image classification scenario. Even though these are evaluated independently, they can be combined together to give a single scalar value to compare explainers in a straightforward way. However, the weight for each component depends heavily on the use case and end-user preference. This is beyond the scope of the current work and thus is not discussed further. Further, since we elaborate on the image classification scenario, we use inputs and images interchangeably with the understanding that the described methods or equivalents can be trivially adapted in other modalities. Correctness (sensitivity or fidelity in literature) refers to the ability of an explainer to correctly identify components of the input that contribute most to the prediction of the classifier. Most metrics proposed so far focus solely on correctness and attempt to compute it in different ways, often requiring retraining of the underlying classifier. Moreover, they do not capture all aspects of correctness nor do they generalize to other data modalities. We propose a novel computationally-inexpensive method that addresses these drawbacks. It takes into consideration both that the explainer identifies most of the relevant components and does not incorrectly select non-important components as important. If the explainers are performing as expected, a simple masking of the input image with the associated explanation should provide better accuracy as the network is unlikely to be confused by the nonimportant pixels. However, we do not observe this in practice, as we show empirically that vanilla masking in severe performance deterioration (see Table 9 and 10 for ). We hypothesize that this is because of the following reasons: • The masked image has a large proportion of empty pixels 3 and thus does not belong to the data distribution (p data) • Extracted pixels are important in the context of the pixels, and as such removing context makes the masking meaningless. Additionally, Convolutions have the inductive bias that the neighbouring pixels are highly correlated that helps perform well on visual tasks . Simple masking breaks this correlation. Based on the above observations, we conclude that it is crucial to have a realistic for the extracted patches to properly evaluate them. We propose the following procedure to provide a such that the ing image is closer the data distribution 2 We do not require d(,) to be a distance metric in the strictest sense. 3 using the first convolution layer bias as a values for the blank pixels does not help either For each class in the dataset, we select the top k and bottom k images, sorted in decreasing order based on the probability assigned by the classifier to the ground-truth class. We then randomly pair each of the top images with one of the bottom images. For each pair, we extract important regions identified by the explainer from the top image (i.e high confidence images) and overlap them over the corresponding bottom image (i.e low confidence images). 
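A minimal sketch of this masking step follows (ours, not the authors' code). It assumes images and saliency maps are float arrays in [0, 1] with matching spatial size; the threshold value and helper names are illustrative.

```python
import numpy as np

def correctness_masks(saliency, high_img, low_img, threshold=0.5):
    """Overlay the important pixels of a high-confidence image onto a
    low-confidence background image, plus the complementary inverse masking."""
    mask = (saliency >= threshold).astype(np.float32)   # binarized explanation
    if mask.ndim == 2 and high_img.ndim == 3:            # broadcast over channels
        mask = mask[..., None]
    masked = mask * high_img + (1.0 - mask) * low_img    # normal masking
    inverse = (1.0 - mask) * high_img + mask * low_img   # inverse masking
    return masked, inverse

def pseudo_f1(acc_masked, acc_inverse):
    """Harmonic mean of accuracy on masked images and (1 - accuracy) on
    inverse-masked images."""
    a, b = acc_masked, 1.0 - acc_inverse
    return 2 * a * b / (a + b + 1e-12)
```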
We use the bottom k images for this task as we know that they are uninformative for the classifier as evidenced by the assigned probability. We thus obtain a dataset of masked images with important regions from the most important images along with relevant yet non-informative s for each class (see Fig. 1 for an example). Formally, the masking operation can be represented as: Where M is the new masked image, a threshold function, H the high confidence image, L the low confidence image and ⊗theelement − wisemultiplicationoperator. We then measure the accuracy on this masked dataset and compare it with the accuracy on the bottom k images subset. Note that the above mentioned process only evaluates if the explainer is capturing important pixels. In order to verify that the explainer does not select non-important pixels, we repeat the same process but instead use the inverted saliency map 4 and recompute accuracy on this dataset. In this scenario, we expect the accuracy to deteriorate. Formally, the inverse masking process can be defined as follow: Figure 1: Examples of the proposed algorithm for correctness Interestingly, these masked accuracies are similar to the precision and recall metrics used in information retrieval . This provides motivation to combine these differences into a Pseudo-F1 score by computing the harmonic mean of accuracy on normal masked images and 1 -accuracy on inverse masked images. Formally this can be computed as: We define consistency as the ability of the explainer to capture the same relevant components under various transformations to the input. More specifically, if the classifier predicts the same class for both the original and transformed inputs. Then, consistency measures whether the generated explanation for the transformed input (after applying an inverse transform) is similar to the one generated for the original input. For example, if we apply vertical flip as a semantically invariant transformation, we flip the generated heatmap from the transformed image before comparing with the heatmap generated for the original image. Formally, this can be represented as Semantically Invariant Transforms We focus on a subset of all potential transformations which does not change the semantic information contained in the input. We call this subset Semantically Invariant Transforms. Most work so far has considered only noising as a method of transforming the input. By constraining the magnitude of the added noise, we can control the size of the neighbourhood in which we perturb the images. In this work, we consider not only simple transformations that perturb the image in a small neighbourhood but also those that move the data point to vastly different regions in the input space while still retaining the semantic information contained within. This allows us to verify whether the explainer works as expected across larger regions of the input space. For example, in the case of images, the family of transformations T include affine transformations (translations and rotations), horizontal and vertical flips, noising (white, blue etc.), scaling etc. In the image domain, d could be realized as the 2 (Euclidean) distance between explanations of the ground truth and inverted the transformed images (according to Eq. 4). and have shown that 2 is not robust for images and may in larger distances between the pairs of mostly similar explanations. 
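The consistency check described above can be sketched as follows (our own illustrative loop, not the authors' code). It assumes the explainer returns a 2D saliency map, that each transform exposes an inverse, and that `model` returns a logit vector; the distance function is left pluggable, since the choice of distance is discussed next.

```python
def consistency_distances(explainer, model, image, transforms, distance_fn):
    """For each semantically invariant transform t, compare the explanation of
    the original image with the inverse-transformed explanation of t(image)."""
    base = explainer(model, image)                            # E_f(x)
    distances = {}
    for name, (t, t_inv) in transforms.items():
        transformed = t(image)
        # Only compare when the prediction is unchanged by the transform.
        if model(transformed).argmax() != model(image).argmax():
            continue
        warped_back = t_inv(explainer(model, transformed))    # t^-1(E_f(t(x)))
        distances[name] = distance_fn(base, warped_back)
    return distances

# Example transform pair: a horizontal flip is its own inverse.
hflip = lambda arr: arr[:, ::-1].copy()
transforms = {"hflip": (hflip, hflip)}
```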
This is attributed to the fact that 2 is only a summation of the pixel-wise intensity differences and, as a , small deformations may in large distances. Even when the images are normalized before hand, 2 is still not a suitable distance for images. Therefore, we instead use Dynamic Time Warping (DTW) which allows for computing distances between two time series, even when misaligned or out of phase. has shown that DTW is effective for images as well, not only for temporal data as originally intended. Due to DTW's high computational cost (quadratic time and space complexity), we use FastDTW , an approximation of DTW that has linear complexity in order to compute the distance between pairs of explanations. Finally, confidence is concerned with whether the generated explanation and the masked input in high confidence predictions. This is a desirable property to enable explanations to be useful for downstream processes including human inspection . So far, our method for computing correctness sheds light only on the average case and is not particularly useful for individual explanations. Generating high-confidence predictions is related to the well researched field of max-margin classifiers . A large margin in classifiers is widely accepted as a desirable property. Here, we extend this notion to explainers and propose that explainers generating explanations that in high confidence predictions are desirable to those that do not. In addition to the desirable statistical properties that this enforces, high confidence predictions are also vital for building trust with human users of the explainer as they are more interested in the per-instance performance than the average . Concretely, we use the same procedure as in Sec. 3.2.1. Instead of computing the increase in accuracy, we compute instead the difference in probability assigned to the ground-truth class, as well as the difference in entropy of the softmax distributions of the original and masked images. We report this for both normal and inverted saliency maps. We expect to observe a positive probability difference and negative entropy difference under normal masking and an inverted behavior under inverse masking owing to similar reasons discussed in Sec. 3.2.1. However, explainers that generate coarse explanations can easily fool this metric. An extreme case is when the explainer considers the entire input as useful. Such an explainer is useless but will have the theoretically highest change in confidence and entropy. To combat this and to establish how sparse the generated explanations are, we also report the average number of pixels in the explanations, normalized by the total number of pixels in the image. We do not combine these numbers into one as different situations have different preferences. For example, in computational biology domains, sparsity is not as important as increase in confidence. The right weighting again depends on the use case and user preference. We use an Inception v3 architecture pretrained on ImageNet (ISLVC-2012) 5. We compare LIME, Grad-CAM, SmoothGrad and Integrated Gradients and measure how they perform on the metrics described previously. All experiments and explainers (except LIME) were implemented in PyTorch . Wherever possible, we reused the official implementation or kept our re-implementation as close to the official codebase as possible. The correctness and confidence metrics for every explainer are computed over 5 runs and mean values are reported. 
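For the confidence component, the two quantities described above can be computed as in the small sketch below (ours); the softmax and entropy helpers are standard, and the logits interface is an assumption.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def entropy(p, eps=1e-12):
    return -(p * np.log(p + eps)).sum()

def confidence_deltas(logits_original, logits_masked, true_class):
    """Delta-conf: change in probability assigned to the ground-truth class.
    Delta-entropy: change in entropy of the softmax distribution.
    Under normal masking we expect delta_conf > 0 and delta_entropy < 0."""
    p_orig, p_mask = softmax(logits_original), softmax(logits_masked)
    delta_conf = p_mask[true_class] - p_orig[true_class]
    delta_entropy = entropy(p_mask) - entropy(p_orig)
    return delta_conf, delta_entropy
```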
We use a fixed constant threshold to binarize explanation masks. We conducted further experiments by thresholding on the percentiles 6 instead(as done in ). These have been reported in tables 6 and 7. We found that the choice did not affect the relative trends observed. We consider the following semantically invariant transforms: translations (x = ±0.2, y = ±0.2), rotations (−15 •, −10 •), flips (horizontal and vertical). To establish that these do not produce too many out-of-distribution samples (causing a decrease in classifier performance), we compute the accuracy of the underlying classifier under these transformations. Table 1 shows that, indeed, drops in accuracy are not significant. Even though noising is semantically invariant in the image domain, we do not consider it in our experiments as some explainers like Smoothgrad would be unfairly favoured. Table 2. The baseline acc@1 and acc@5 were 11.42% and 53.8% respectively. We hypothesized that a good explainer's accuracy increases with the normal masking and decreases with inverse masking. We see the expected increases in accuracies across all the explainers with Grad-CAM obtaining the highest increase at 97.44%. However, for the inverse masking, we see that both LIME and Grad-CAM show contrary to our hypothesis. This can be explained by observing examples of maps generated in Figs. 1 and 4. We see that, on average, Grad-CAM generates much larger explanations than all other explainers (can be seen in Table 3 as well). This implies that Grad-CAM misidentifies several non-important pixels as important and thus when we compute the inverse masks, we remove non-important pixels that could confuse the classifier. In the case of LIME, we again see from Table 3 and Figs. 1 and 4 that LIME generates the smallest explanations. We further see from Table 2 that LIME has the smallest accuracy gain (in both acc@1 and acc@5). These indicate that LIME fails to select important pixels that were selected by all other explainers. Thus, we can conclude that the inverse masks in case of LIME would contain important pixels as well and thus would cause increase in accuracy as observed. As detailed previously, our methodology for computing correctness involves choosing a number k of top and bottom images to be used for masking. We evaluate how sensitive the measured correctness of explainers are to the value of k. We report the changes in accuracy with respect to the unmasked bottom images for k={5,10,15,20,25} in Fig. 2. The actual accuracy numbers are also reported in Tables 4 and 5. We see that for both acc@1 and acc@5, the change in accuracy for normal masking decreases as we increase k. This is as expected since the average confidence gap between the top-k and the bottom-k images decreases as k increases. This means that the important pixels in the images are masked with non-important pixels from the foreground images. On the contrary, LIME shows a smaller decrease in accuracy (both acc@1 and acc@5). This can be explained by the fact that LIME does not capture all important pixels, and therefore all important pixels from the are not replaced by less-informative pixels. Similarly, for acc@1 and acc@5 for inverse masking, we see that LIME, Smoothgrad and Integrated Gradients behave as expected, i.e., the drop in accuracy is diminished with k is increased as we are retaining the informative parts from the new images. Interestingly, the drop in accuracy for Grad-CAM is stable and close to zero. 
To understand this, we refer again to Table 3 and note that Grad-CAM produces the smallest inverse maps on average. This implies that when we perform the inverse masking, we retain much of the informative pixels of the image and thus do not see significant drops in accuracy relative to the unmasked bottom-k image dataset. Next, we evaluate consistency by computing the distance with FastDTW between the saliency maps generated on the original images and those generated when transformations are applied to the input image (Following Eq. 4). Fig. 3 and Table 8 report the normalized distances relative to each transformation (i.e., heatmaps sum to 1). First, as transformations become more drastic relative to the original saliency maps, the distances also increase. This is the desired behavior one would expect, thus motivating our choice for using FastDTW. Second, Grad-CAM outperforms all other explainers, as reflected by the fact that its corresponding distances are always smallest. It is followed by Smoothgrad, Integrated Gradients and LIME. This is expected given the grainy saliency maps obtained with Integrated Gradients and Smoothgrad, as well as the patchy heatmaps generated with LIME. Measuring confidence quantifies the performance of the explainers on a per-instance case and not only in the average. As described in Sec. 3.2.3, we compute the change in probability assigned to the ground-truth class (∆ conf) as well as the change in entropy of the softmax distribution (∆ entropy) as proxies for estimating the confidence of explanations. Additionally, we report the proportions of pixels in the heatmaps to the total number of pixels 7, averaged across the top-k dataset. We see that for confidence, the trends mimic the ones observed in Table 2. This implies that masking with extracted heatmaps not only increases accuracy but also in high-confidence predictions across explainers. More specifically, we see that Grad-CAM again outperforms the other explainers (both ∆ conf and ∆ entropy) in the normal heatmaps by large margins. In the case of inverse masking, confidence and entropy for LIME show behaviours contrary to our expectations. This can be attributed to the "patchiness" of the explanations generated by LIME which was discussed in the previous sections. 7 89401 in the standard ImageNet preprocessing pipeline for Inception-v3 In this paper, we formulated desired properties of a good explainer and proposed a generic, computationally inexpensive suite of metrics -correctness, consistency and confidence -to objectively evaluate and compare explainers. We compared well-known explainers, such as LIME, Grad-CAM, Integrated Gradients and SmoothGrad, on a pretrained Inception-V3 model on the ImageNet dataset. Our experiments show that the metrics proposed capture various pros and cons of each explainer allowing users to make an informed choice about which explainer to use for their use case. Specifically, we observe that Grad-CAM often performs better than the other explainers but suffers from drawbacks when inverse masking situations are considered. On the other hand, LIME performs poorly in all situations we consider. Moreover, we also point out the pitfalls of trying to combine from multiple metrics as they tend to hide anomalous behaviours of the underlying metrics (as seen from Pseudo-F1 from Table 2). We recommend that users sanity-check explainers by looking at individual metrics before making a decision based on the combined metric. 
Furthermore, we urge the XAI community to resist the temptation to propose one-size-fits-all metrics, as we have shown that such metrics tend to hide nuanced trade-offs that practitioners need to be aware of. Going forward, we invite the research community to test our metrics on other explainers, datasets, underlying classifiers and data modalities. Additionally, since the proposed metrics are differentiable, we believe exciting new lines of research would be to develop explainers that directly optimize for these metrics, as well as self-explaining models that incorporate such metrics into their learning regimen.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
B1xBAA4FwH
We propose a suite of metrics that capture desired properties of explainability algorithms and use it to objectively compare and evaluate such methods
Neural networks are known to produce unexpected results on inputs that are far from the training distribution. One approach to tackle this problem is to detect the samples on which the trained network cannot answer reliably. ODIN is a recently proposed method for out-of-distribution detection that does not modify the trained network and achieves good performance for various image classification tasks. In this paper we adapt ODIN for sentence classification and word tagging tasks. We show that the scores produced by ODIN can be used as a confidence measure for the predictions on both in-distribution and out-of-distribution datasets. The majority of the sentences in the English-LinES treebank are from literature. The English-EWT dataset is larger and more diverse. Let s be a sentence from one of the datasets, and w_1, ..., w_M be its words. The embedding of the m-th word of the sentence is x_m = W_e hash(w_m). We apply a bidirectional LSTM on the embeddings DISPLAYFORM0. For sentiment analysis we apply a dense layer on the concatenation of the last states of the two LSTMs: DISPLAYFORM1. The loss is a cross-entropy: loss(s) = ce(S(f_sc(s), 1)), where S(f, T)_i = exp(f_i / T) / Σ_{j=1}^{C} exp(f_j / T) is the modified softmax function, T is the temperature scaling parameter, and C is the number of classes. For POS tagging we apply a dense layer on every hidden state: DISPLAYFORM3. The ODIN score is ODIN(s) = max S(f_sc(x̃), T), where x̃ = x + ε · sign(∇_x S_ŷ(x)) and ŷ = argmax S(x, 1). Here ε (the perturbation magnitude) and T (the temperature) are hyperparameters, chosen based on the OOD detection performance on the development sets. For POS tagging, the gradient in the ODIN score formula is applied to the mean of the word-level probability maximums. TAB3 shows the results for OOD detection and Table 4 shows the rank correlation coefficients for the PbThreshold and ODIN methods. The role of temperature scaling and input perturbations: All our experiments confirm the observation from BID3 that temperature scaling improves out-of-distribution detection. The effect of higher temperatures saturates when T reaches the thousands (Figure 1). The positive effect of the input perturbations is visible for sentiment analysis, but not for POS tagging. Ranking of the sentences: ODIN is clearly better than PbThreshold according to Spearman's rank correlation coefficient for POS tagging tasks (Table 4). For a neural network trained on en-LinES, ODIN scores are a good indicator of how the network will perform on OOD samples. [Figure caption: ε and T for ODIN are determined based on the development sets of en-LinES (ID) and en-EWT (OOD). The size of a circle is proportional to the number of samples that fall into that bucket. Ideally, accuracy scores for the i-th bucket should be higher than for the (i − 1)-th bucket, and the y coordinates of the three circles for each bucket should be the same.] In this work we have adapted the ODIN out-of-distribution detection method to sentence classification and sequence tagging tasks. We showed that as an OOD detector it performs consistently better than the PbThreshold baseline. Additionally, we attempted to quantify how well the scores produced by these methods can be used as confidence scores for the predictions of neural models. There are many other OOD detection methods that have yet to be tested on NLP tasks. On the other hand, our analysis notably does not cover sequence-to-sequence tasks. We have shown that the ODIN scores can be used as a confidence measure for the predictions of these models.
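To make the scoring procedure concrete, here is a minimal PyTorch-style sketch of the ODIN score for a single sentence-classification input (our own illustration, not the authors' code). The `model` interface, the epsilon and temperature values, and treating x as the embedded input are assumptions.

```python
import torch
import torch.nn.functional as F

def odin_score(model, x, temperature=1000.0, epsilon=0.01):
    """ODIN score for a single input x (e.g. the embedded sentence).
    model(x) is assumed to return a vector of C unnormalized logits."""
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    y_hat = logits.argmax()
    # S_yhat(x, 1): softmax probability of the predicted class at T = 1.
    s_yhat = F.softmax(logits, dim=-1)[y_hat]
    s_yhat.backward()
    # Perturb the input in the direction that increases S_yhat.
    x_tilde = x + epsilon * x.grad.sign()
    with torch.no_grad():
        perturbed_logits = model(x_tilde)
        # The ODIN score is the max of the temperature-scaled softmax.
        return F.softmax(perturbed_logits / temperature, dim=-1).max().item()
```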
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HJf2ds2ssm
A recent out-of-distribution detection method helps to measure the confidence of RNN predictions for some NLP tasks
Some recent work has shown separation between the expressive power of depth-2 and depth-3 neural networks. These separation are shown by constructing functions and input distributions, so that the function is well-approximable by a depth-3 neural network of polynomial size but it cannot be well-approximated under the chosen input distribution by any depth-2 neural network of polynomial size. These are not robust and require carefully chosen functions as well as input distributions. We show a similar separation between the expressive power of depth-2 and depth-3 sigmoidal neural networks over a large class of input distributions, as long as the weights are polynomially bounded. While doing so, we also show that depth-2 sigmoidal neural networks with small width and small weights can be well-approximated by low-degree multivariate polynomials. Understanding the remarkable success of deep neural networks in many domains is an important problem at present (e.g., BID10). This problem has many facets such as understanding generalization, expressive power, optimization algorithms in deep learning. In this paper, we focus on the question of understanding the expressive power of neural networks. In other words, we study what functions can and cannot be represented and approximated by neural networks of bounded size, depth, width and weights. The early on the expressive power of neural networks showed that the depth-2 neural networks are universal approximators; that is to say, with only mild restrictions on the activation functions or neurons, the depth-2 neural networks are powerful enough to uniformly approximate arbitrary continuous functions on bounded domains in R d, e.g., BID2; BID9; BID0. However, the bounds that they provide on the size or width of these neural networks are quite general, and therefore, weak. Understanding what functions can be represented or wellapproximated by neural networks with bounded parameters is a general direction in the study of expressive power of neural networks. Here the parameters could mean the number of neurons, the width of hidden layers, the depth, and the magnitude of its weights etc. Natural signals (images, speech etc.) tend to be representable as compositional hierarchies BID10, and deeper networks can be thought of as representing deeper hierarchies. The power of depth has been a subject of investigation in deep learning, e.g., BID8. We are interested in understanding the effect of depth on the expressive power. In particular, one may ask whether having more depth allows representation of more functions if the size bound remains the same. BID5 show a separation between depth-2 and depth-3 neural networks. More precisely, they exhibit a function g: R d → R and a probability distribution µ on R d such that g is bounded and supported on a ball of radius O(√ d) and expressible by a depth-3 network of size polynomially bounded in d. But any depth-2 network approximating g in L 2 -norm (or squared error) within a small constant under the distribution µ must be of size exponentially large in d. Their separation works for all reasonable activation functions including ReLUs (Rectified Linear Units) and sigmoids. The function and the input distribution in BID5 are carefully constructed and their proof techniques seem to crucially rely on the specifics of these constructions. 
Building upon this , BID14 show that while the indicator function of the L 2 -ball can be well-approximated by depth-3 networks of polynomial size, any good approximation to it by depth-2 networks must require exponential size. Here, the notion of approximation in the lower bound is the same as in BID5 and a carefully constructed distribution that is arguably not quite natural. (see also BID12) also gave a separation between depth-2 and depth-3 networks by exhibiting a function g: S d−1 × S d−1 → R which can be well-approximated by a depth-3 ReLU neural network of polynomially bounded size and weights but cannot be approximated by any depth-2 (sigmoid, ReLU or more general) neural network of polynomial size with (exponentially) bounded weights. This separation holds under uniform distribution on S d−1 × S d−1, which is more natural than the previous distributions. However, the proof technique crucially uses harmonic analysis on the unit sphere, and does not seems robust or applicable to other distributions. shows a separation between depth-2k 3 + 8 and depth-k ReLU neural networks, for any positive integer k, when the input is uniformly distributed over [−1, 1] d. BID11 (see also BID14 BID21) show that there are univariate functions on a bounded interval such that neural networks of constant depth require size at least Ω (poly(1/)) for a uniform -approximation over the interval, whereas deep networks (the depth can depend on) can have size O (polylog(1/)).The above separation all fit the following template: certain carefully constructed functions can be well approximated by deep networks, but are hard to approximate by shallow networks using a notion of error that uses a carefully defined distribution. (is distribution-independent as it deals with uniform approximation everywhere in the domain). Thus these do not tell us the extent to which deeper networks are more expressive than the shallow ones. We would like to understand whether there are large classes of functions and distributions that witness the separation between deep and shallow networks. An answer to this question is also more likely to shed light on practical applications of neural networks. BID17; BID16; BID18 show that even functions computed by a depth-2 neural network of polynomial size can be hard to learn using gradient descent type of algorithms for a wide class of distributions. These address questions about learnability rather than the expressive power of deep neural networks. BID7 shows that piecewise affine functions on d with N pieces can be exactly represented by a width (d + 3) network of depth at most N. Lower bound of Ω((N + d − 1)/(d + 1)) on the depth is proven for functions of the above type when the network has width at most (d + 1) and very closely approximates the function. Our depth separation apply to neural networks with bounds on the magnitudes of the weights. While we would prefer to prove our without any weight restrictions, we now argue that small weights are natural. In training neural networks, often weights are not allowed to be too large to avoid overfitting. Weight decay is a commonly used regularization heuristic in deep learning to control the weights. Early stopping can also achieve this effect. Another motivation to keep the weights low is to keep the Lipschitz constant of the function computed by the network (w.r.t. changes in the input, while keeping the network parameters fixed) small. BID6 contains many of these references. 
One of the surprising discoveries about neural networks has been the existence of adversarial examples BID19 ). These are examples obtained by adding a tiny perturbation to input from class so that the ing input is misclassified by the network. The perturbations are imperceptible to humans. Existence of such examples for a network suggests that the Lipschitz constant of the network is high as noted in BID19. This lead them to suggest regularizing training of neural nets by penalizing high Lipschitz constant to improve the generalization error and, in particular, eliminate adversarial examples. This is carried out in BID1, who find a way to control the Lipschitz constant by enforcing an orthonormality constraint on the weight matrices along with other tricks. They report better resilience to adversarial examples. On the other hand, BID13 suggest that Lipschitz constant cannot tell the full story about generalization. We exhibit a simple function (derived from BID3) over the unit ball B d in d-dimensions can be well-approximated by a depth-3 sigmoidal neural network with size and weights polynomially bounded in d. However, its any reasonable approximation using a depth-2 sigmoidal neural network with polynomially bounded weights must have size exponentially large in d. Our separation is robust and works for a general class of input distributions, as long as their density is at least 1/poly(d) on some small ball of radius 1/poly(d) in Bd. The function we use can also be replaced by many other functions that are polynomially-Lipschitz but not close to any low-degree polynomial. As a by-product of our argument, we also show that constant-depth sigmoidal neural networks are well-approximated by low-degree multivariate polynomials (with a degree bound that allows the depth separation mentioned above). In this section, we show that a sigmoid neuron can be well-approximated by a low-degree polynomial. As a corollary, we show that depth-2 (and in genenral, small-depth) sigmoidal neural networks can be well-approximated by low-degree multivariate polynomials. The main idea is to use Chebyshev polynomial approximation as in , which closely approximates the minimax polynomial (or the polynomial that has the smallest maximum deviation) to a given function. For the simplicity of presentation and arguments, we drop the bias term b in the activation function σ(w, x + b). This is without loss of generality, as explained at the end of the last section. The activation function of a sigmoid neuron σ: R → R is defined as DISPLAYFORM0.Chebyshev polynomials of the first kind {T j (t)} j≥0 are defined recursively as T 0 (t) = 1, T 1 (t) = t, and T j+1 (t) = 2t · T j (t) − T j−1 (t). They form an orthonormal basis of polynomials over [−1, 1] with respect to the density 1/ DISPLAYFORM1 Proposition 1 (see Lemma B.1 in) bounds the magnitude of coefficients c j in the Chebyshev expansion of σ(wt) = ∞ j=0 c j T j (t). Proposition 1. For any j > 1, the coefficient c j in the Chebyshev expansion of a sigmoid neuron σ(wt) is bounded by DISPLAYFORM2 Proposition 1 implies low-degree polynomial approximation to sigmoid neurons as follows. This observation appeared in (see equation (B.7) in their paper). For completeness, we give the proof in Appendix A. Proposition 2. Given any w ∈ R with |w| ≤ B, there exists a polynomial p of degree DISPLAYFORM3 We use this O (log(1/)) dependence in the above bound crucially in some of our , e.g., a weaker version of Daniely's separation for depth-2 and depth-3 neural networks. 
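The degree bound in Proposition 2 is easy to probe numerically. The sketch below (ours) interpolates σ(wt) at Chebyshev nodes on [−1, 1] using numpy's Chebyshev routines and reports the uniform error as the degree grows; the weight value and degree schedule are arbitrary illustrative choices.

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

w = 20.0                              # weight magnitude B = |w|
ts = np.linspace(-1.0, 1.0, 20001)    # dense grid for the uniform error

for deg in (5, 10, 20, 40, 80, 160):
    # Interpolate sigma(w t) at deg + 1 Chebyshev nodes of the first kind.
    nodes = np.cos((2 * np.arange(deg + 1) + 1) * np.pi / (2 * (deg + 1)))
    coeffs = cheb.chebfit(nodes, sigmoid(w * nodes), deg)
    err = np.max(np.abs(sigmoid(w * ts) - cheb.chebval(ts, coeffs)))
    print(f"degree {deg:4d}: uniform error {err:.3e}")
```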
Notice that this logarithmic dependence does not hold for a ReLU neuron; it is O(1/) instead. A depth-2 sigmoidal neural network on input t ∈ [−1, 1] computes a linear combination of sigmoidal neurons σ(w 1 t), σ(w 2 t),..., σ(w n t), for w 1, w 2,..., w n ∈ R, and computes a function DISPLAYFORM0 Here are a few propositions on polynomial approximations to small-depth neural networks. For completeness, their proofs are included in Appendix A.Proposition 3 shows that a depth-2 sigmoidal neural network of bounded weights and width is close to a low-degree polynomial. Proposition 3. Let f: [−1, 1] → R be a function computed by a depth-2 sigmoidal neural network of width n and weights bounded by DISPLAYFORM1 Now consider a depth-2 sigmoidal neural network on input x ∈ B d, where DISPLAYFORM2 It is given by a linear combination of sigmoidal activations applied to linear functions w 1, x, w 2, x,..., w n, x (or affine functions when we have biases), for w 1, w 2,..., w n ∈ R d and it computes a function F: DISPLAYFORM3 Proposition 4 below is a multivariate version of Proposition 3.Proposition 4. Let F: B d → R be a function computed by a depth-2 sigmoidal neural network with width n and bounded weights, that is, |a i | ≤ B and DISPLAYFORM4 Note that its proof crucially uses the fact that Proposition 2 guarantees a low-degree polynomial that approximates a sigmoid neuron everywhere in [−1, 1].A depth-k sigmoidal neural network can be thought of as a composition -a depth-2 sigmoidal neural network on top, whose each input variable is a sigmoid applied to a depth-(k − 2) sigmoidal neural network. In other words, it computes a function F: DISPLAYFORM5 where y = (y 1, y 2, . . ., y m) has each coordinate y j = σ(F j (x)), for 1 ≤ j ≤ m, such that each DISPLAYFORM6 Now we show an interesting consequence, namely, any constant-depth sigmoidal neural network with polynomial width and polynomially bounded weights can be well-approximated by a lowdegree multivariate polynomial. The bounds presented in Proposition 5 are not optimal but the qualitative statement is interesting in contrast with the depth separation . The growth of the degree of polynomial approximation is dependent on the widths of hidden layers and it is also the subtle reason why a depth separation is still possible (when the weights are bounded).Proposition 5. Let F: B d → R be a function computed by a depth-k sigmoidal neural network of width at most n in each layer and weights bounded by B, then DISPLAYFORM7 Note that when n and B are polynomial in d and the depth k is constant, then this low-degree polynomial approximation also has degree polynomial in d. DISPLAYFORM8 y ) cannot be approximated by any depth-2 neural network of polynomial size and (exponentially) bounded weights. Daniely shows this lower bound for a general neuron or activation function that includes sigmoids and ReLUs. Daniely then uses G(x, y) = g(x, y) = sin(πd 3 x, y) which, on the other hand, is approximable by a depth-3 ReLU neural network with polynomial size and polynomially bounded weights. This gives a separation between depth-2 and depth-3 ReLU neural networks w.r.t. uniform distribution over DISPLAYFORM9 Daniely's proof uses harmonic analysis on the unit sphere, and requires the uniform distribution on DISPLAYFORM10 We show a simple proof of separation between depth-2 and depth-3 sigmoidal neural networks that compute functions F: B d → R. Our proof works for a large class of distributions on B d but requires the weights to be polynomially bounded. 
The following lemma appears in BID4. Assumption 1 in BID5 and their version of this lemma for ReLU networks was used by BID3 in the proof of separation between the expressive power of depth-2 and depth-3 ReLU networks. Lemma 6. Let f: [−1, 1] → R be any L-Lipschitz function. Then there exists a function g: [−1, 1] → R computed by a depth-2 sigmoidal neural network such that DISPLAYFORM11 the width n as well as the weights are bounded by poly(L, 1/), and |f (t) − g(t)| ≤, for all t ∈ [−1, 1]. Now we are ready to show the separation between depth-2 and depth-3 sigmoidal neural networks. The main idea, similar to BID3, is to exhibit a function that is Lipschitz but far from any low-degree polynomial. The Lipschitz property helps in showing that our function can be wellapproximated by a depth-3 neural network of small size and small weights. However, being far from any low-degree polynomial, it cannot be approximated by any depth-2 neural network. By modifying the function to G(x) = sin(πN x 2), this lower bound with L ∞ -norm holds for any distribution over B d whose support contains a radial line segment of length at least 1/poly(d), by making N = poly(d), for a large enough polynomial. Remark: Given any distribution µ over B d whose probability density is at least 1/poly(d) on some small ball of radius 1/poly(d), the lower bound or inapproximability by any depth-2 sigmoidal neural network can be made to work with L 2 -norm (squared error), for a large enough N = poly(d).Proof. First, we will show that G(x) can be well-approximated by a depth-3 sigmoidal neural network of polynomial size and weights. The idea is similar to Daniely's construction for ReLU networks in BID3. By Lemma 6, there exists a function f: [−1, 1] → R computed by a depth-2 sigmoidal neural network of size and weights bounded by poly(d, 1/) such that DISPLAYFORM0 6, for all t ∈ [−1, 1]. Thus, we can compute x 2 i for each coordinate of x and add them up to get an -approximation to x 2 over B d. That is, there exists a function S: B d → R computed by a depth-2 sigmoidal neural network of size and weights bounded by poly(d, 1/) such that S(x) − x 2 ≤ /10d 5, for all x ∈ B d. Again, by Lemma 6, we can approximate sin(πd 3 t) over using f: [−1, 1] → R computed by another depth-2 sigmoidal neural network with size and weights bounded by poly(d, 1/) such that sin(πd 3 t) − f (t) ≤ /2, for all t ∈. Note that the composition of these two depth-2 neural networks f (N (x)) gives a depth-3 neural network as the output of the hidden layer of the bottom network can be fed into the top network as inputs. DISPLAYFORM1 using that f that approximates sin(πd 5 t) closely must also be 4d 5 -Lipschitz DISPLAYFORM2 Now we will show the lower bound. Consider any function F: B d → R computed by a depth-2 sigmoidal neural network whose weights are bounded by B = O(d 2) and width is n. Proposition 4 shows that there exists a d-variate polynomial P (x) of degree O B log(nB . Consider S = {t ∈ [a, a + l]: t = −1 + (i + 1/2)/N, for some integer i}. Then S contains at least N l − 2 points where sin(πN t) alternates as ±1. Any polynomial p of degree D cannot match the sign of sin(πN t) on all the points in S. Otherwise, by intermediate value theorem, p must have at least N l − 3 roots between the points of S, which means D ≥ N l − 3, a contradiction. Thus, there exists t 0 ∈ S such that p(t 0) and sin(πN t 0) have opposite signs. 
Since sin(πN t) = ±1, for any t ∈ S, the sign mismatch implies |sin(πN DISPLAYFORM3 DISPLAYFORM4 DISPLAYFORM5 An important remark on biases: Even though we handled the case of sigmoid neurons without biases, the proof technique carries over to the sigmoid neurons with biases σ( w, x + b). The idea is to consider a new DISPLAYFORM6 ) with x d+1 = 1, and consider the new weight vector w new = (w, b). Thus, w new, x new = w, x + b. The new input lies on a d-dimensional hyperplane slice of B d+1, so we need to look at the restriction of the input distribution µ to this slice. Most of the ideas in our proofs generalize without any technical modifications. We defer the details to the full version. In this section we show lower bounds under the L 2 -norm. The theorem below gives a technical condition on the class of densities µ on B d for which our lower bound holds. Let's give an example to illustrate that the condition on density is reasonable: Let K ⊂ B d be a convex set such that every point in K is at least r away from the boundary of B d (where r = 1/poly(d) is a parameter). Further assume that the probability mass of K is at least a constant and for every point in K the probability density is within a constant factor of the uniform density on K. Then our lower bound applies to µ.Theorem 9. Consider the function G: B d → R given by G(x) = sin(πN x 2). Let µ be any probability density over B d such that there exists a subset C ⊆ B d satisfying the following two conditions:• The r-interior of C defined as C = {x ∈ C : B(x, r) ⊆ C} contains at least γ fraction of the total probability mass for some γ > 0, i.e., C µ(x)dx ≥ γ.• For any affine line, the induced probability density on every segment of length at least r in the intersection ∩ C is (α, β)-uniform, i.e., it is at least α times and at most β times the uniform density on that segment. Let F: B d → R be any function computed by a depth-2 sigmoidal neural network with weights bounded by B and width n. Then for any 0 < δ αγ/3β and N (B/r 2) log(nB 2 /δ), the function F cannot δ-approximate G on B d under L 2 -norm (squared error) under the probability density µ.In particular, if α, β, γ are constants, B = poly(d), n = 2 d, and r = 1/poly(d), then it suffices to choose N = poly(d) for a sufficiently large degree polynomial. Proof. We show a lower bound on L 2 -error of approximating G(x) with any multivariate polynomial P: B d → R of degree D under the distribution given by µ on B d. For any fixed unit vector v, consider u ∈ B d−1 orthogonal to v and let u be the affine line going through u and parallel to the direction v given by u = {x = u + tv : t ∈ R}. DISPLAYFORM0 where IC(u, t) = 1, if u ∈ B d−1 and u + tv ∈ {u + t v ∈ C : t ∈ I} ⊆ C for some interval I of length at least r, and IC(u, t) = 0, otherwise DISPLAYFORM1 because for any x = u + tv ∈ C we have B(x, r) ⊆ C, therefore IC(u, t) = 1 DISPLAYFORM2 because for any line, the distribution induced by µ(x) along any line segment of length at least r in the intersection ∩ C is (α, β)-uniform, for any line DISPLAYFORM3 The last inequality is using the condition C µ(x)dx ≥ γ given in Theorem 9 and an adaptation of the following idea from Lemma 5 of BID3. For any fixed u and v, G(u + tv) = sin(πN ( u 2 + t 2)) and P (u + tv) is a polynomial of degree at most D in t. The function sin(πN ( u 2 +t 2)) alternates its sign as u 2 +t 2 takes values that are successive integer multiples of 1/N. Consider s = t 2 ∈ and divide into N disjoint segments using integer grid of step size 1/N. 
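The mechanism behind the lower bound — that a polynomial of degree much smaller than N cannot track sin(πN t) on [−1, 1] — can be illustrated numerically. The sketch below (ours, not part of the proof) fits the best least-squares polynomial of each degree in the Chebyshev basis on a fine grid; the error stays close to the full signal norm until the degree exceeds roughly πN, and only then drops.

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

N = 60
ts = np.linspace(-1.0, 1.0, 20001)
target = np.sin(np.pi * N * ts)

for deg in (30, 60, 120, 180, 240):
    # Least-squares fit of the given degree (Chebyshev basis for conditioning).
    coeffs = cheb.chebfit(ts, target, deg)
    approx = cheb.chebval(ts, coeffs)
    rms = np.sqrt(np.mean((target - approx) ** 2))
    print(f"degree {deg:4d}: RMS error {rms:.3f}")
```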
For any polynomial p(s) of degree at most D and any interval I ⊆ of length r D/N, there exists at least N r − D − 2 segments of length 1/N each on which sin(πN s) and p(s) do not change signs and have opposite signs. Now using (sin(πN s) − p(s)) 2 ≥ sin 2 (πN s), integrating we get that I (sin(πN s) − p(s)) 2 ds ≥ r/2. Extending this proof to t instead of s = t 2, using sin 2 (πN t 2)t ≤ sin 2 (πN t) for all t ∈, and incorporating the shift πN u 2, we can similarly show that I sin 2 (πN ( u 2 + t 2)) − P (u + tv)) 2 dt ≥ r/3. Summing up over multiple such intervals gives the final inequality. The L 2 separation between depth-2 and depth-3 neural networks under probability density µ now follows by taking a small enough δ, and combining the following ingredients (i) Proposition 4 says that any depth-2 sigmoid neural networks of width n = 2 d and weights bounded by B = poly(d) can be δ-approximated in L ∞ (and hence, also L 2) by a multivariate polynomials of degree DISPLAYFORM4 (ii) proof of Theorem 7 (initial part) says that G(x) can be δ-approximated in L ∞ (and hence, also L 2) by a depth-3 sigmoid neural network of width and size poly(d), but (iii) Theorem 9 says that, for N = poly(d) of large enough degree, G(x) cannot be 3δ-approximated in L 2 by any multivariate polynomial of degree D, and (iv) triangle inequality. Proof. Consider the degree-D approximation to σ(wt) given by the first D terms in its Chebyshev expansion. The error of this approximation for any t ∈ [−1, 1] is bounded by DISPLAYFORM0 using Proposition 1, |w| ≤ B, and D = O (B log (B/)). Proof. Let f be computed by a depth-2 sigmoidal neural network given by f (t) = n i=1 a i σ(w i t). Define a parameter = δ/nB. Proposition 2 guarantees polynomial p 1, p 2,..., p n of degree O (B log(B/)) such that |σ(w i t) − p i (t)| ≤, for all t ∈ [−1, 1]. Thus, the polynomial p(t) = n i=1 a i p i (t) has degree O (B log(B/)) = O B log(nB 2 /δ), and for any t ∈ [−1, 1], DISPLAYFORM0 Proof. Let F be computed by a depth-2 neural network given by DISPLAYFORM0 Define a parameter = δ/nB. Proposition 2 guarantees polynomial p 1, p 2,..., p n of degree O (B log(B/)) such that |σ(w i t) − p i (t)| ≤, for all t ∈ [−1, 1]. Consider the following polynomial P (x) = P (x 1, x 2, . . ., x d) = n i=1 a i p i (w i / w i, x). P (x) is a d-variate polynomial of degree O (B log(B/)) = O B log(nB 2 /δ) in each variable x 1, x 2,..., x d. For any DISPLAYFORM1 |σ(w i t i) − p i (t i)| using t i = w i / w i, x and |a i | ≤ B ≤ nB using |σ(w i t) − p i (t)| ≤, for all t ∈ [−1, 1] = δ. Proof. We prove this by induction on the depth k. By induction hypothesis each F j (x) can be 1 -approximated (in L ∞ -norm) by a d-variate polynomial Q j (x) of degree O (nB) k−2 log (k−2) (nB/ 1) in each variable. Thus, |F j (x) − Q j (x)| = 1, for any x ∈ B d and 1 ≤ j ≤ m. Because a sigmoid neuron is Lipschitz, DISPLAYFORM0 for any x ∈ B d and 1 ≤ j ≤ m. Since F j (x) is the output of a depth-(k−2) sigmoidal neural network of width at most n and weights at most B, we must have |F j (x)| ≤ nB, for all x ∈ B d. Thus, |Q j (x)| ≤ nB + 1 ≤ 2nB. By Proposition 2, there exists a polynomial q(t) of degree at most O (nB log(nB/ 2)) such that |σ(Q j (x)) − q(Q j (x))| ≤ 2, for all x ∈ B d and 1 ≤ j ≤ m. Consider q ∈ R m as q = (q(Q 1 (x)), q(Q 2 (x)),..., q(Q m (x))). Then, for any x ∈ B d, we have DISPLAYFORM1 Again by Proposition 2, there is a polynomial p of degree at most O (nB log(nB/)) such that |σ(w i, q) − p(w i, q)| ≤, for all x ∈ B d and 1 ≤ i ≤ n. 
This is because | w i, q | = O(nB).Let's define P (x) = n i=1 a i p(w i, q). Therefore, for any x ∈ B d, DISPLAYFORM2 if we use 1 = 2 = δ/3n 3/2 B 2 and = δ/3nB.P (x) is a d-variate polynomial of degree DISPLAYFORM3 in each variable.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SJICXeWAb
depth-2-vs-3 separation for sigmoidal neural networks over general distributions
The smallest eigenvectors of the graph Laplacian are well-known to provide a succinct representation of the geometry of a weighted graph. In reinforcement learning (RL), where the weighted graph may be interpreted as the state transition process induced by a behavior policy acting on the environment, approximating the eigenvectors of the Laplacian provides a promising approach to state representation learning. However, existing methods for performing this approximation are ill-suited in general RL settings for two main reasons: First, they are computationally expensive, often requiring operations on large matrices. Second, these methods lack adequate justification beyond simple, tabular, finite-state settings. In this paper, we present a fully general and scalable method for approximating the eigenvectors of the Laplacian in a model-free RL context. We systematically evaluate our approach and empirically show that it generalizes beyond the tabular, finite-state setting. Even in tabular, finite-state settings, its ability to approximate the eigenvectors outperforms previous proposals. Finally, we show the potential benefits of using a Laplacian representation learned using our method in goal-achieving RL tasks, providing evidence that our technique can be used to significantly improve the performance of an RL agent. The performance of machine learning methods generally depends on the choice of data representation BID2. In reinforcement learning (RL), the choice of state representation may affect generalization , exploration , and speed of learning BID7. As a motivating example, consider goal-achieving tasks, a class of RL tasks which has recently received significant attention BID1 ). In such tasks, the agent's task is to achieve a certain configuration in state space; e.g. in FIG0 the environment is a two-room gridworld and the agent's task is to reach the red cell. A natural reward choice is the negative Euclidean (L2) distance from the goal (e.g., as used in Nachum et al. FORMULA4). The ability of an RL agent to quickly and successfully solve the task is thus heavily dependent on the representation of the states used to compute the L2 distance. Computing the distance on one-hot (i.e. tabular) representations of the states (equivalent to a sparse reward) is most closely aligned with the task's directive. However, such a representation can be disadvantageous for learning speed, as the agent receives the same reward signal for all non-goal cells. One may instead choose to compute the L2 distance on (x, y) representations of the grid cells. This allows the agent to receive a clear signal which encourages it to move to cells closer to the goal. Unfortunately, this representation is agnostic to the environment dynamics, and in cases where the agent's movement is obstructed (e.g. by a wall as in FIG0), this choice of reward is likely to cause premature convergence to sub-optimal policies unless sophisticated exploration strategies are used. The ideal reward structure would be defined on state representations whose distances roughly correspond to the ability of the agent to reach one state from another. Although there are many suitable such representations, in this paper, we focus on a specific approach based on the graph Laplacian, which is notable for this and several other desirable properties. For a symmetric weighted graph, the Laplacian is a symmetric matrix with a row and column for each vertex. 
The d smallest eigenvectors of the Laplacian provide an embedding of each vertex in R d which has been found to be especially useful in a variety of applications, such as graph visualization BID9, clustering , and more BID6.Naturally, the use of the Laplacian in RL has also attracted attention. In an RL setting, the vertices of the graph are given by the states of the environment. For a specific behavior policy, edges between states are weighted by the probability of transitioning from one state to the other (and vice-versa). Several previous works have proposed that approximating the eigenvectors of the graph Laplacian can be useful in RL. For example, shows that using the eigenvectors as basis functions can accelerate learning with policy iteration. Machado et al. (2017a; b) show that the eigenvectors can be used to construct options with exploratory behavior. The Laplacian eigenvectors are also a natural solution to the aforementioned reward-shaping problem. If we use a uniformly random behavior policy, the Laplacian state representations will be appropriately aware of the walls present in the gridworld and will induce an L2 distance as shown in FIG0 (right). This choice of representation accurately reflects the geometry of the problem, not only providing a strong learning signal at every state, but also avoiding spurious local optima. While the potential benefits of using Laplacian-based representations in RL are clear, current techniques for approximating or learning the representations are ill-suited for model-free RL. For one, current methods mostly require an eigendecomposition of a matrix. When this matrix is the actual Laplacian , the eigendecomposition can easily become prohibitively expensive. Even for methods which perform the eigendecomposition on a reduced matrix (a; b), the eigendecomposition step may be computationally expensive, and furthermore precludes the applicability of the method to stochastic or online settings, which are common in RL. Perhaps more crucially, the justification for many of these methods is made in the tabular setting. The applicability of these methods to more general settings is unclear. To resolve these limitations, we propose a computationally efficient approach to approximate the eigenvectors of the Laplacian with function approximation based on the spectral graph drawing objective, an objective whose optimum yields the desired eigenvector representations. We present the objective in a fully general RL setting and show how it may be stochastically optimized over minibatches of sampled experience. We empirically show that our method provides a better approximation to the Laplacian eigenvectors than previous proposals, especially when the raw representation is not tabular. We then apply our representation learning procedure to reward shaping in goal-achieving tasks, and show that our approach outperforms both sparse rewards and rewards based on L2 distance in the raw feature space. Results are shown under a set of gridworld maze environments and difficult continuous control navigation environments. We present the eigendecomposition framework in terms of general Hilbert spaces. By working with Hilbert spaces, we provide a unified treatment of the Laplacian and our method for approximating its eigenvectors BID5 -eigenfunctions in Hilbert spaces -regardless of the underlying space (discrete or continuous). To simplify the exposition, the reader may substitute the following simplified definitions:• The state space S is a finite enumerated set {1, . . 
., |S|}.• The probability measure ρ is a probability distribution over S.• The Hilbert space H is R |S|, for which elements f ∈ H are |S| dimensional vectors representing functions f: S → R.• The inner product f, g H of two elements f, g ∈ H is a weighted dot product of the corresponding vectors, with weighting given by ρ; i.e. f, DISPLAYFORM0 • A linear operator is a mapping A: H → H corresponding to a weighted matrix multiplication; i.e. DISPLAYFORM1 • A self-adjoint linear operator A is one for which f, Ag H = Af, g H for all f, g ∈ H.This corresponds to A being a symmetric matrix. We now present the more general form of these definitions. Let S be a set, Σ be a σ-algebra, and ρ be a measure such that (S, Σ, ρ) constitutes a measure space. Consider the set of square-integrable real-valued functions L 2 (S, Σ, ρ) = {f : S → R s.t. S |f (u)| 2 dρ(u) < ∞}. When associated with the inner-product, DISPLAYFORM0 this set of functions forms a complete inner product Hilbert space BID8 ). The inner product gives rise to a notion of orthogonality: Functions f, g are orthogonal if f, g H = 0. It also induces a norm on the space: ||f || 2 = f, f H. We denote H = L 2 (S, Σ, ρ) and additionally restrict ρ to be a probability measure, i.e. S 1 dρ(u) = 1. To construct the graph Laplacian in this general setting, we consider linear operators D which are Hilbert-Schmidt integral operators BID3, expressable as, DISPLAYFORM0 where with a slight abuse of notation we also use D: S × S → R + to denote the kernel function. We assume that (i) the kernel function D satisfies D(u, v) = D(v, u) for all u, v ∈ S so that the operator D is self-adjoint; (ii) for each u ∈ S, D(u, v) is the Radon-Nikodym derivative (density function) from some probability measure to ρ, i.e. S D(u, v) dρ(v) = 1 for all u. With these assumptions, D is a compact, self-adjoint linear operator, and hence many of the spectral properties associated with standard symmetric matrices extend to D.The Laplacian L of D is defined as the linear operator on H given by, DISPLAYFORM1 The Laplacian may also be written as the linear operator I − D, where I is the identity operator. Any eigenfunction with associated eigenvalue λ of the Laplacian is an eigenfunction with eigenvalue 1 − λ for D, and vice-versa. Our goal is to find the first d eigenfunctions f 1,..., f d associated with the smallest d eigenvalues of L (subject to rotation of the basis). 1 The mapping φ: DISPLAYFORM2 ] then defines an embedding or representation of the space S. Spectral graph drawing BID9 provides an optimization perspective on finding the eigenvectors of the Laplacian. Suppose we have a large graph, composed of (possibly infinitely many) vertices with weighted edges representing pairwise (non-negative) affinities (denoted by D(u, v) ≥ 0 for vertices u and v). To visualize the graph, we would like to embed each vertex in a low dimensional space (e.g., R d in this work) so that pairwise distances in the low dimensional space are small for vertices with high affinity. Using our notation, the graph drawing objective is to find a set of orthonormal functions f 1,..., f d defined on the space S which minimize DISPLAYFORM0 The orthonormal constraints can be written as DISPLAYFORM1 The graph drawing objective may be expressed more succinctly in terms of the Laplacian: DISPLAYFORM2 The minimum value of FORMULA8 is the sum of the d smallest eigenvalues of L. Accordingly, the minimum is achieved when f 1,..., f d span the same subspace as the corresponding d eigenfunctions. 
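To make the simplified finite-state definitions above concrete, the following numpy sketch builds ρ, D and L = I − D for a random walk on a small undirected graph (so the chain is reversible and the Laplacian can be symmetrized), and checks numerically that the graph drawing objective evaluated at the d smallest eigenfunctions equals the sum of the d smallest eigenvalues. The toy graph and the choice d = 3 are arbitrary illustrations, not the environments used in the paper.

```python
import numpy as np

# Small "two rooms" style graph: states 0..5 with a bottleneck edge (2, 3).
A = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (3, 5)]:
    A[u, v] = A[v, u] = 1.0

P = A / A.sum(axis=1, keepdims=True)   # random-walk transition matrix P_pi
rho = A.sum(axis=1) / A.sum()          # its stationary distribution
R = np.diag(rho)

# Kernel D(u, v) = 0.5 * (P(v|u)/rho(v) + P(u|v)/rho(u)); it is symmetric and
# each row is a density w.r.t. rho, i.e. sum_v D(u, v) rho(v) = 1.
D = 0.5 * (P / rho[None, :] + P.T / rho[:, None])
assert np.allclose(D, D.T)
assert np.allclose(D @ rho, 1.0)

# Operator action: (Df)(u) = sum_v D(u, v) f(v) rho(v), i.e. (D * rho) @ f.
L = np.eye(6) - D * rho[None, :]       # Laplacian L = I - D

# Symmetrize with rho^{1/2} (valid for this reversible chain), take eigenpairs.
S = np.diag(np.sqrt(rho)) @ L @ np.diag(1.0 / np.sqrt(rho))
evals, V = np.linalg.eigh(S)
d = 3
F = (1.0 / np.sqrt(rho))[:, None] * V[:, :d]   # rho-orthonormal eigenfunctions

# Graph drawing objective sum_k <f_k, L f_k>_H with <f, g>_H = sum_u rho(u) f(u) g(u).
G = np.trace(F.T @ R @ L @ F)
print(G, evals[:d].sum())              # the two numbers agree
```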
In the next section, we will show that the graph drawing objective is amenable to stochastic optimization, thus providing a general, scalable approach to approximating the eigenfunctions of the Laplacian. In this section, we specify the meaning of the Laplacian in the RL setting (i.e., how to set ρ, D appropriately). We then elaborate on how to approximate the eigenfunctions of the Laplacian by optimizing the graph drawing objective via stochastic gradient descent on sampled states and pairs of states. In RL, an agent interacts with an environment by observing states and acting on the environment. We consider the standard MDP setting . Briefly, at time t the environment produces an observation s t ∈ S, which at time t = 0 is determined by a random sample from an environmentspecific initial distribution P 0. The agent's policy produces a probability distribution over possible actions π(a|s t) from which it samples a specific action a t ∈ A to act on the environment. The environment then yields a reward r t sampled from an environment-specific reward distribution function R(s t, a t), and transitions to a subsequent state s t+1 sampled from an environment-specific transition distribution function P (s t, a t). We consider defining the Laplacian with respect to a fixed behavior policy π. Then, the transition distributions P π (s t+1 |s t) form a Markov chain. We assume this Markov chain has a unique stationary distribution. We now introduce a choice of ρ and D for the Laplacian in the RL setting. We define ρ to be the stationary distribution of the Markov chain P π such that for any measurable U ⊂ S we have DISPLAYFORM0 As D(u, v) represents the pairwise affinity between two vertices u and v on the graph, it is natural to define D(u, v) in terms of the transition distribution. DISPLAYFORM1 is the density function from a probability measure to ρ for all u. We define DISPLAYFORM2 which satisfies these conditions 3. In other words, the affinity between states u and v is the average of the two-way transition probabilities: If S is finite then the first term in FORMULA11 is P π (s t+1 = v|s t = u)/ρ(v) and the second term is P π (s t+1 = u|s t = v)/ρ(u). Given this definition of the Laplacian, we now aim to learn the eigen-decomposition embedding φ. In the model-free RL context, we have access to states and pairs of states (or sequences of states) only via sampling; i.e. we may sample states u from ρ(u) and pairs of u, v from ρ(u)P π (v|u). This imposes several challenges on computing the eigendecomposition:• Enumerating the state space S may be intractable due to the large cardinality or continuity.• For arbitrary pairs of states (u, v), we do not have explicit access to D(u, v).• Enforcing exact orthonormality of f 1,..., f d may be intractable in innumerable state spaces. With our choices for ρ and D, the graph drawing objective (Eq. 2) is a good start for resolving these challenges because it can be expressed as an expectation (see Appendix C for the derivation): DISPLAYFORM0 Minimizing the objective with stochastic gradient descent is straightforward by sampling transition pairs (s t, s t+1) as (u, v) from the replay buffer. The difficult part is ensuring orthonormality of the functions. To tackle this issue, we first relax the orthonormality constraint to a soft constraint DISPLAYFORM1 Using standard properties of expectations, we rewrite the inequality as follows: DISPLAYFORM2 In practice, we transform this constraint into a penalty and solve the unconstrained minimization problem. 
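As a rough sketch of how this penalized, minibatch version can look in code, the function below combines the attractive term computed on sampled transition pairs with a soft orthonormality penalty computed from two independently sampled states. Since the displayed equations are elided here, this is one plausible instantiation of the penalty rather than the exact objective used by the authors.

```python
import numpy as np

def graph_drawing_loss(phi_u, phi_v, phi_p, phi_q, beta):
    """Minibatch estimator of a relaxed graph drawing objective (assumed form).

    phi_u, phi_v : embeddings of transition pairs (s_t, s_{t+1}) sampled from the
                   replay buffer, shape [batch, d].
    phi_p, phi_q : embeddings of two independently sampled states (from rho),
                   shape [batch, d].
    beta         : penalty weight for the soft orthonormality constraint.
    """
    d = phi_u.shape[1]
    # Attractive term: squared distance between embeddings of observed transitions.
    attract = 0.5 * np.sum((phi_u - phi_v) ** 2, axis=1).mean()
    # Soft orthonormality penalty: pushes E[phi(s) phi(s)^T] towards the identity,
    # orthogonalizing independently sampled states while keeping norms away from zero.
    dots = np.sum(phi_p * phi_q, axis=1)
    repel = (dots ** 2
             - np.sum(phi_p ** 2, axis=1)
             - np.sum(phi_q ** 2, axis=1)
             + d).mean()
    return attract + beta * repel
```

In practice phi_* would be the outputs of the representation network on minibatch states, and the loss would be minimized with stochastic gradient descent.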
The resulting penalized graph drawing objective is DISPLAYFORM3 where β is the penalty weight (KKT multiplier). DISPLAYFORM4 may be learned using a neural network function approximator. We note that G̃ has a form which appears in many other representation learning objectives, being comprised of an attractive and a repulsive term. The attractive term minimizes the squared distance of embeddings of randomly sampled transitions experienced by the policy π, while the repulsive term repels the embeddings of states independently sampled from ρ. The repulsive term is especially interesting and we are unaware of anything similar to it in other representation learning objectives: It may be interpreted as orthogonalizing the embeddings of two randomly sampled states while regularizing their norm away from zero by noticing DISPLAYFORM5 4 RELATED WORK One of the main contributions of our work is a principled treatment of the Laplacian in a general RL setting. While several previous works have proposed the use of the Laplacian in RL (; a), they have focused on the simple, tabular setting. In contrast, we provide a framework for Laplacian representation learning that applies generally (i.e., when the state space is innumerable and may only be accessed via sampling). Our main result is showing that the graph drawing objective may be used to stochastically optimize a representation module which approximates the Laplacian eigenfunctions. Although a large body of work exists regarding stochastic approximation of an eigendecomposition (BID4), many of these approaches require storage of the entire eigendecomposition. This scales poorly and fails to satisfy the desiderata for model-free RL -a function approximator which yields arbitrary rows of the eigendecomposition. Some works have proposed extensions that avoid this requirement by use of Oja's rule. Originally defined within the Hebbian framework, recent work has applied the rule to kernelized PCA, and extending it to settings similar to ours is a potential avenue for future work. Other approaches to eigendecomposition that may eventually prove fruitful in the RL setting include, which proposes to scale to large datasets by subsampling representative subgraphs, and BID0, which provides some techniques to extend spectral clustering to out-of-sample points. In RL, Machado et al. (2017b) propose a method to approximate the Laplacian eigenvectors with function approximators via an equivalence between proto-value functions and spectral decomposition of the successor representation. Importantly, they propose an approach for stochastically approximating the eigendecomposition when the state space is large. Unfortunately, their approach is only justified in the tabular setting and, as we show in our results below, does not generalize beyond it. Moreover, their eigenvectors are based on an explicit eigendecomposition of a constructed reduced matrix, and thus are not appropriate for online settings. Approaches more similar to ours optimize objectives similar to Eq. 2, but handle the orthonormality constraint differently. Shaham et al. FORMULA4 introduce a special-purpose orthonormalizing layer, which ensures orthonormality at the mini-batch level. Unfortunately, this does not ensure orthonormality over the entire dataset and requires large minibatches for stability. Furthermore, the orthonormalization process can be numerically unstable, and in our preliminary experiments we found that TensorFlow frequently crashed due to numerical errors from this sort of orthonormalization. Pfau et al. 
FORMULA4 turn the problem into an unconstrained optimization objective. However, in their chosen form, one cannot compute unbiased stochastic gradients. Moreover, their approach scales quadratically in the number of embedding dimensions. Our approach does not suffer from these issues. Finally, we note that our work provides a convincing application of Laplacian representations on difficult RL tasks, namely reward-shaping in continuous-control environments. Although previous works have presented interesting preliminary , their applications were either restricted to small discrete state spaces or focused on qualitative assessments of the learned options (a; b). We first evaluate the learned representations by how well they approximate the subspace spanned by the smallest eigenfunctions of the Laplacian. We use the following evaluation protocol: (i) Given an embedding φ: S → R d, we first find its principal d-dimensional orthonormal basis h 1,..., h d, onto which we project all embeddings in order to satisfy the orthonormality constraint of the graph drawing objective; (ii) the evaluation metric is then computed as the value of the graph drawing objective using the projected embeddings. In this subsection, we use finite state spaces, so step (i) can be performed by SVD. We used a FourRoom gridworld environment FIG1. We generate a dataset of experience by randomly sampling n transitions using a uniformly random policy with random initial state. We compare the embedding learned by our approximate graph drawing objective against methods proposed by Machado et al. (2017a; b). Machado et al. (2017a) find the first d eigenvectors of the Laplacian by eigen-decomposing a matrix formed by stacked transitions, while Machado et al. (2017b) eigen-decompose a matrix formed by stacked learned successor representations. We evaluate the methods with three different raw state representations of the gridworld: (i) one-hot vectors ("index"), (ii) (x, y) coordinates ("position") and (iii) top-down pixel representation ("image").We present the of our evaluations in FIG2. Our method outperforms the previous methods with all three raw representations. Both of the previous methods were justified in the tabular setting, however, surprisingly, they underperform our method even with the tabular representation. Moreover, our method performs well even when the number of training samples is small. We now move on to demonstrating the power of our learned representations to improve the performance of an RL agent. We focus on a family of tasks -goal-achieving tasks -in which the agent is rewarded for reaching a certain state. We show that in such settings our learned representations are well-suited for reward shaping. Goal-achieving tasks and reward shaping. A goal-achieving task is defined by an environment with transition dynamics but no reward, together with a goal vector z g ∈ Z, where Z is the goal space. We assume that there is a known predefined function h: S → Z that maps any state s ∈ S to a goal vector h(s) ∈ Z. The learning objective is to train a policy that controls the agent to get to some state s such that h(s) − z g ≤. For example the goal space may be the same as the state space with Z = S and h(s) = s being the identity mapping, in which case the target is a state vector. More generally the goal space can be a subspace of the state space. For example, in control tasks a state vector may contain both position and velocity information while a goal vector may just be a specific position. See Plappert et al. 
FORMULA4 for an extensive discussion and additional examples. A reward function needs to be defined in order to apply reinforcement learning to train an agent that can perform a goal achieving task. Two typical ways of defining a reward function for this family of tasks are (i) the sparse reward: Reward shaping with learned representations. We expect that distance based reward shaping with our learned representations can speed up learning compared to sparse reward while avoiding the bias in the raw feature space. More specifically, we define the reward based on distance in a learned latent space. If the goal space is the same as the state space, i.e. S = Z, the reward function can be defined as r t = − φ(s t+1) − φ(z g). If S = Z we propose two options: (i) The first is to learn an embedding φ: Z → R d of the goal space and define r t = − φ(h(s t+1)) − φ(z g). (ii) The second options is to learn an an embedding φ: S → R d of the state space and define DISPLAYFORM0 DISPLAYFORM1, where h −1 (z) is defined as picking arbitrary state s (may not be unique) that achieves h(s) = z. We experiment with both options when S = Z. We experiment with the gridworld environments with (x, y) coordinates as the observation. We evaluate on three different mazes: OneRoom, TwoRooms and HardMaze, as shown in the top row of FIG4. The red grids are the goals and the heatmap shows the distances from each grid to the goal in the learned Laplacian embedding space. We can qualitatively see that the learned rewards are well-suited to the task and appropriately reflect the environment dynamics, especially in TwoRoom and HardMaze where the raw feature space is very ill-suited. These representations are learned according to our method using a uniformly random behavior policy. Then we define the shaped reward as a half-half mix of the L2 distance in the learned latent space and the sparse reward. We found this mix to be advantageous, as the L2 distance on its own does not provide enough difference between the reward of the goal state and rewards of the states near the goal. When the L2 distance between the representations of the goal state and adjacent states is small the Q-function can fail to provide a significant signal to actually reach the goal state (rather than a state that is just close to the goal). Thus, to better align the shaped reward with the task directive, we use a half-half mix, which clearly draws a boundary between the goal state and its adjacent states (as the sparse reward does) while retaining the structure of the distance-shaped reward. We plot the learning performance of an agent trained according to this learned reward in FIG4. All plots are based on 5 different random seeds. We compare against (i) sparse: the sparse reward, (ii) l2: the shaped reward based on the L2 distance in the raw (x, y) feature space, (iii) rawmix: the mixture of (i) and (ii). Our mixture of shaped reward based on learning representations and the sparse reward is labelled as "mix" in the plots. We observe that in the OneRoom environment all shaped reward functions significantly outperform the sparse reward, which indicates that in goal-achieving tasks properly shaped reward can accelerate learning of the policy, justifying our motivation of applying learned representations for reward shaping. 
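A minimal sketch of this mixed reward, assuming a 0/1 sparse-reward convention (the text specifies the half-half mix but not the exact sparse-reward scaling):

```python
import numpy as np

def shaped_reward(phi_next, phi_goal, reached_goal):
    """Half-half mix of the sparse reward and the negative L2 distance in the
    learned embedding space, as described above. `reached_goal` is the raw
    goal-achievement test ||h(s_{t+1}) - z_g|| <= eps; the 0/1 sparse convention
    is an assumption for illustration."""
    dist = np.linalg.norm(phi_next - phi_goal)
    sparse = 1.0 if reached_goal else 0.0
    return 0.5 * sparse + 0.5 * (-dist)
```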
In TwoRoom and HardMaze environments when the raw feature space cannot reflect an accurate distance, our Laplacian-based shaped reward learned using the graph drawing objective ("mix") significantly outperforms all other reward settings. To further verify the benefit of our learned representations in reward shaping, we also experiment with continuous control navigation tasks. These tasks are much harder to solve than the gridworld tasks because the agent must simultaneously learn to control itself and navigate to the goal. We use Mujoco to create 3D mazes and learn to control two types of agents, PointMass and Ant, to navigate to a certain area in the maze, as shown in FIG5. Unlike the gridworld environments the (x, y) goal space is distinct from the state space, so we apply our two introduced methods to align the spaces: (i) learning φ to only embed the (x, y) coordinates of the state (mix) or (ii) learning φ to embed the full state (fullmix). We run experiments with both methods. As shown in FIG5 both "mix" and "fullmix" outperform all other methods, which further justifies the benefits of using our learned representations for reward shaping. It is interesting to see that both embedding the goal space and embedding the state space still provide a significant advantage even if neither of them is a perfect solution. For goal space embedding, part of the state vector (e.g. velocities) is ignored so the learned embedding may not be able to capture the full structure of the environment dynamics. For state space embedding, constructing the state vector from the goal vector makes achieving the goal more challenging since there is a larger set of states (e.g. with different velocities) that achieve the goal but the shaped reward encourage the policy to reach only one of them. Having a better way to align the two spaces would be an interesting future direction. We have presented an approach to learning a Laplacian-based state representation in RL settings. Our approach is both general -being applicable to any state space regardless of cardinality -and scalable -relying only on the ability to sample mini-batches of states and pairs of states. We have further provided an application of our method to reward shaping in both discrete spaces and continuous-control settings. With our scalable and general approach, many more potential applications of Laplacian-based representations are now within reach, and we encourage future work to continue investigating this promising direction. A EXISTENCE OF SMALLEST EIGENVALUES OF THE LAPLACIAN.Since the Hilbert space H may have infinitely many dimensions we need to make sure that the smallest d eigenvalues of the Laplacian operator is well defined. Since L = I − D if λ is an eigenvalue of D then 1 − λ is an eigenvalue of L. So we turn to discuss the existence of the largest d eigenvalues of D. According to our definition D is a compact self-adjoint linear operator on H. So it has the following properties according to the spectral theorem:• D has either (i) a finite set of eigenvalues or (ii) countably many eigenvalues {λ 1, λ 2, ...} and λ n → 0 if there are infinitely many. All eigenvalues are real.• Any eigenvalue λ satisfies − D ≤ λ ≤ D where · is the operator norm. If the operator D has a finite set of n eigenvalues its largest d eigenvalues exist when d is smaller than n. 
If D has a infinite but countable set of eigenvalues we first characterize what the eigenvalues look like: DISPLAYFORM0 Recall that the operator norm is defined as DISPLAYFORM1 Define q u be the probability measure such that DISPLAYFORM2 and DISPLAYFORM3 which hold for any f ∈ H. Hence D ≤ 1.So the absolute values of the eigenvalues of D can be written as a non-increasing sequence which converges to 0 with the largest eigenvalue to be 1. If d is smaller than the number of positive eigenvalues of D then the largest d eigenvalues are guaranteed to exist. Note that this condition for d is stricter than the condition when D has finitely many eigenvalues. We conjecture that this restriction is due to an artifact of the analysis and in practice using any value of d would be valid when H has infinite dimensions. To introduce a more general definition of D, we first introduce a generalized discounted transition distribution P π λ defined by DISPLAYFORM0 where λ ∈ is a discount factor, with λ = 0 corresponding to the one-step transition distribution P π 0 = P π. Notice that P π λ (v|u) can be also written as P π λ (v|u) = E τ ∼q λ [P π (s t+τ = v|s t = u)] where q λ (τ) = λ τ −1 − λ τ. So sampling from P π λ (v|u) can be done by first sampling τ ∼ q λ then rolling out the Markov chain for τ steps starting from u. Note that for readability we state the definition of P π λ in terms of discrete probability distributions but in general P π λ (·|u) are defined as a probability measure by stating the discounted sum for any measurable set of states U ∈ Σ, U ⊂ S instead of a single state v. Also notice that when λ > 0 sampling v from P π λ (v|u) required rolling out more than one steps from u (and can be arbitrarily long). Given that the replay buffer contains finite length (say T) trajectories sampling exactly from the defined distribution is impossible. In practice, after sampling u = s t in a trajectory and τ from q λ (τ) we discard this sample if t + τ > T.With the discounted transition, distributions now the generalized D is defined as DISPLAYFORM1 We assume that P π λ (·|u) is absolutely continuous to ρ for any u so that the Radon Nikodym derivatives are well defined. This assumption is mild since it is saying that for any state v that is reachable from some state u under P π we have a positive probability to sample it from ρ, i.e. the behavior policy π is able to explore the whole state space (not necessarily efficiently).Proof of D(u, ·) being a density of some probability measure with respect to ρ. We need to show that S D(u, v) dρ(v) = 1. Let f (·|u) be the density function of P First notice that if ρ is the stationary distribution of P π it is also the stationary distribution of DISPLAYFORM2 (Property of the stationary distribution.)which means that g is the density function of ρ with respect to ρ. So g(u) = 1 holds for all u. (For simplicity we ignore the statement of "almost surely" throughout the paper.)Discussion of finite time horizon. Because proving D(u, ·) to be a density requires the fact that ρ is the stationary distribution of P π, the astute reader may suspect that sampling from the replay buffer will differ from the stationary distribution when the initial state distribution is highly concentrated, the mixing rate is slow, and the time horizon is short. 
In this case, one can adjust the definition of the transition probabilities to better reflect what is happening in practice: Define a new transition distribution by adding a small probability to "reset":P π (·|u) = (1 − δ)P π (·|u) + δP 0. This introduces a randomized termination to approximate termination of trajectories (e.g,. due to time limit) without adding dependencies on t (to retain the Markov property). Then, ρ and D can be defined in the same way with respect toP π. Now the replay buffer can be viewed as rolling out a single long trajectory withP π so that sampling from the replay buffer approximates sampling from the stationary distribution. Note that under the new definition of D, minimizing the graph drawing objective requires sampling state pairs that may span over the "reset" transition. In practice, we ignore these pairs as we do not want to view "resets" as edges in RL. When δ (e.g. 1/T) is small, the chance of sampling these "reset" pairs is very small, so our adjusted definition still approximately reflects what is being done in practice. Hyperparameters For representation learning we use d = 20. In the definition of D we use the discounted multi-step transitions with λ = 0.9. For the approximate graph drawing objective we use β = 5.0 and δ jk = 0.05 (instead of 1) if j = k otherwise 0 to control the scale of L2 distances. We pretrain the representations for 30000 steps (This number of steps is not optimized and we observe that the training converges much earlier) by Adam with batch size 128 and learning rate 0.001. For policy training, we use the vanilla DQN with a online network and a target network both representing the Q-function. The policy used for testing is to select the action with the highest Q-value according to the online network at each state. The online network is trained to minimize the Bellman error by sampling transitions from the replay buffer. The target network is updated every 50 steps with a mixing rate of 0.05 (of the current online network with 0.95 of the previous target network). Epsilon greedy with = 0.2 is used for exploration. Reward discount is 0.98. The policy is trained with Adam optimizer with learning rate 0.001. For both representation mapping and Q-functions we use a fully connected network (parameter not shared) with 3 hidden layers and 256 units in each layer. All activation functions are relu. The PointMass agent has a 6 dimensional state space a 2 dimensional action space. The Ant agent has a 29 dimensional state space and a 8 dimensional action space. The success criteria is set as reaching an L2 ball centered around a specific (x, y) position with the radius as 10% of the total size of the maze, as shown in FIG5. Each episode has a length of 300.Hyperparameters For representation learning we use d = 20. In the definition of D we use the discounted multi-step transitions with λ = 0.99 for PointMass and λ = 0.999 for Ant. For the approximate graph drawing objective we use β = 2.0 and δ jk = 0.1 (instead of 1) if j = k otherwise 0 to control the scale of L2 distances. We pretrain the representations for 50000 steps by Adam with batch size 128 and learning rate 0.001 for PointMass and 0.0001 for Ant. For policy training, we use the vanilla DDPG BID10 with a online network and a target network. Each network contains two sub-networks: an actor network representing the policy and a critic network representing the Q-function. 
The online critic network is trained to minimize the Bellman error and the online actor network is trained to maximize the Q-value achieved by the policy. The target network is updated every 1 step with a mixing rate of 0.001. For exploration we follow the Ornstein-Uhlenbeck process as described in the original DDPG paper BID10. Reward discount is 0.995. The policy is trained with Adam optimizer with batch size 100, actor learning rate 0.0001 and critic learning rate 0.001 for PointMass and 0.0001 for Ant. For representation mapping we use a fully connected network (parameter not shared) with 3 hidden layers and 256 units in each layer. Both actor network and critic network have 2 hidden layers with units. All activation functions are relu. Online learning of representations We also present of learning the representations online instead of pretraining-and-fix and observe equivalent performance, as shown in FIG8, suggesting that our method may be successfully used in online settings. For online training the agent moves faster in the maze during policy learning so we anneal the λ in D from its inital value to 0.95 towards the end of training with linear decay. The reason that online training provides no benefit is that our randomized starting position setting enables efficient exploration even with just random walk policies. Investigating the benefit of online training in exploration-hard tasks would be an interesting future direction.
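For reference, the soft target-network updates mentioned in the hyper-parameter descriptions above (mixing rate 0.05 every 50 steps for DQN, 0.001 every step for DDPG) amount to the following generic sketch over lists of weight arrays; this is an illustration, not the authors' code.

```python
def soft_update(target_vars, online_vars, tau):
    """Blend tau of the online weights into the target weights."""
    return [tau * w + (1.0 - tau) * w_t
            for w, w_t in zip(online_vars, target_vars)]
```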
[ 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HJlNpoA5YQ
We propose a scalable method to approximate the eigenvectors of the Laplacian in the reinforcement learning context and we show that the learned representations can improve the performance of an RL agent.
Our work offers a new method for domain translation from semantic label maps and Computer Graphics (CG) simulation edge map images to photo-realistic images. We train a Generative Adversarial Network (GAN) in a conditional way to generate a photo-realistic version of a given CG scene. Existing architectures of GANs still lack the photo-realism capabilities needed to train DNNs for computer vision tasks; we address this issue by embedding edge maps and training in an adversarial mode. We also offer an extension to our model that uses our GAN architecture to create visually appealing and temporally coherent videos. The topic of image to image translation and more generally video to video translation is of major importance for training autonomous systems. It is beneficial to train an autonomous agent in real environments, but not practical, since enough data cannot be gathered. However, using simulated scenes for training might lack details since a synthetic image will not be photo-realistic and will lack the variability and randomness of real images, causing training to succeed only up to a certain point. This gap is also referred to as the reality gap. By combining a non photo-realistic, simulated model with an available dataset, we can generate diverse scenes containing numerous types of objects, lighting conditions, colorization, etc. In this paper, we depict a new approach to generate images from a semantic label map and a flexible Deep Convolutional Neural Network (DCNN) we call the Deep Neural Edge Detector (DNED), which embeds edge maps. We combine embedded edge maps, which act as a skeleton, with a semantic map as input to our model (fig 2.1); the model outputs a photo-realistic version of that scene. Using the skeleton by itself will generate images that lack variability, as it restricts the representation to that specific skeleton itself. Instead, we learn to represent skeletons by a neural network and, at test time, we sample the closest appropriate skeleton the network has seen at training. Moreover, we have extended this idea to generate photo-realistic videos (i.e. sequences of images) with a novel loss that uses the optical flow algorithm for pixel coherency between consecutive images. Figure 1: In this paper we propose a method for generating photo-realistic images from semantic labels of a simulator scene. This figure provides images related to the Synthia dataset. Left -semantic map of the scene. Middle -generated image from pix2pixHD Wang et al. (2018b). Right -our generated image. The texture and color space in our generated image is more natural, giving the image the desired photo-realism. used as a super resolution generator. L1 loss for image generation is known to generate low quality images, as the generated images are blurred and lack details. Some works use a modified version of the perceptual loss, allowing generation of finer details in an image. Pix2pixHD Wang et al. (2018b) and others use a perceptual loss as well for training their networks. Moreover, pix2pixHD is using instance maps as well as label maps to enable the generator to separate several objects of the same semantics. This is of high importance when synthesizing images having many instances of the same semantics in a single frame. As for video generation, the loss used by Wang et al. (2018a) tends to be computationally expensive, while our approach is simpler. 
We are using two generators of the same architecture, and they are mutually trained using our new optical flow based loss that is fed by dense optical flow estimation. Our evaluation method is and as it is a common metric being used for image and video generation schemes. We call this work s-Flow GAN since we embed Spatial information obtained from dense optical flow in a neural network as a prior for image generation and flow maps for video coherency. This optical flow is available since the simulated image is accessible at test time in the case of CG2real scheme. We make Three major contributions: First, our model can generate visually appealing photorealistic images from semantic maps having high definition details. Second, we incorporate a neural network to embed edge maps, thus allowing generation of diverse versions of the same scenes. Third, we offer a new loss function for generating natural looking videos using the above mentioned image generation scheme. please refer to this link for videos and comparison to related work. Generative Adversarial Networks (GAN) were introduced in 2014. This method generate images that look authentic to human observers. They do so by having two neural networks, one generating candidates while the other acts as a critique and tries to evaluate the generation quality , , , ,. GANs are widely used for image generation; some image synthesis schemes are used to generate low resolution images e.g. 32x32 while were able to generate higher resolution images (up to 512x512). In addition, Wang et al. (2018b) were able to generate even higher resolution images using coarse-to-fine generators. The reason generating high resolution images is challenging is the high dimensionality of the image generation task and the need to provide queues for high resolution ,. We offer queues as an edge map skeletons generated by our proposed DNED module. During training the DNED is trained to learn the representations of real image edge maps. During test the DNED is shown a CG (Computer Graphics) edge map, finds its best representation and provides the generator with an appropriate generated edge map sampled from real image edge maps distribution. Middle is the edge map extracted from the real image. Right is the real image being used by the discriminator for adversarial training. Please note that the real image is not used by the generator neither at training nor at test time, but only its edge map. The main issue in the CG2real model compared to the image to image models is that the simulators image is available to the generator at test time. We thus use the simulators image to extract the edge map allowing the generator to generate the necessary fine details in the output image. In the pix2pix setting, they used a , where the networks input is a semantic map of the scene, and while training in adversarial mode, a fake version of the real image is given to the discriminator to distinguish. In the CG2real setting in addition to the semantic map we also have access to the simulated image. Using the CG image as is, might be counter productive since it will be trained to reconstruct CG images and not photo-realistic ones. Conversely some of the underlying CG information correlates with the real world and can provide meaningful prior to the synthesis. Since the relevant information lies in the image high frequencies , we learn the distribution of edge maps in real images (high resolution details), and provide representation of it to the generator at test time. 
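A minimal sketch of the deterministic edge-map extraction referred to here, using a spatial Laplacian operator followed by a threshold (as described further below); the kernel size and threshold value are assumptions, since the text does not specify them.

```python
import numpy as np
import cv2

def edge_map(image_gray, thresh=20):
    """Binary edge 'skeleton' from a grayscale image: spatial Laplacian + threshold."""
    lap = cv2.Laplacian(image_gray, cv2.CV_64F, ksize=3)
    return (np.abs(lap) > thresh).astype(np.uint8)

# Toy usage with a synthetic image so the snippet is self-contained.
img = np.zeros((64, 64), dtype=np.uint8)
img[16:48, 16:48] = 200            # a bright square produces edges at its border
print(edge_map(img).sum())
```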
Some image generation tasks use label maps only. The label maps provide only information about the class of a given pixel. In order to generate photo-realistic images, some use instance maps as well Wang et al. (2018b). This way, they can differentiate several adjacent objects of the same class. Nonetheless, while most datasets provide object-level information about classes like cars, pedestrians, etc., they do not provide that information about vegetation and buildings. As a result, the generated images might not correctly separate those adjacent objects, thus degrading photo-realism. Generating edge maps using neural networks is a well established method. Holistically-Nested Edge Detection (HED) provides holistic image training and prediction for multi-scale and multi-level feature learning. They use a composition of generated edge maps to learn a fine description of the edge scene. Inspired by their work, we train a neural network to learn edge maps of real images. As mentioned before, our generator requires an edge map as input. We get the edge map using a spatial Laplacian operator with a threshold. Providing the generator with a deterministic edge map will produce the same scene, so we train the DNED to take as input that deterministic edge map, learn its representation and produce a variant of that edge map, as a superposition of edges seen in real datasets. This way the generator will be able to produce a variety of photo-realistic images for the same scene. Since our approach (using edge maps) is not class dependent, we do not need instance map information to generate several adjacent instances of the same semantics. Moreover, this approach addresses the problem of generating fine details within a class like buildings and vegetation, as can be seen in fig 3.2. Generating temporally coherent image sequences is a known challenge. Recent works use GANs to generate videos in an unconditional setting by sampling from a random vector, but do not provide the generator with temporal constraints, thus generating non-coherent sequences of images. Other works like video matting and video inpainting translate videos to videos but rely on problem-specific constraints and designs. A recent work named vid2vid Wang et al. (2018a) offers to conditionally generate video from video and is considered to be one of the best approaches to date. Using FlowNet 2.0 they predict the optical flow of the next image. In addition, they use a mask to differentiate between two parts: the hallucinated image generated from instance-level semantic segmentation masks and the predicted image from the previous frame. By adding these two parts, this method can combine the predicted details from the previously generated image with the details from the newly generated image. Inspired by Wang et al. (2018a), we are using flow maps of consecutive images to generate temporally coherent videos. Contrary to Wang et al. (2018a), we are not using a CNN to predict the flow maps or a sequence generator, but a classical computer vision approach. This is since a pre-trained network (trained on real datasets) failed to generalize and infer on simulated datasets, e.g. Synthia. This enables better temporal coherency and improves video generation robustness. Our CG2real model aims to learn the conditional distribution of an image given a semantic map. Our video generation model aims to use this learned distribution for generating temporally coherent videos using the generated images from the CG2real scheme. 
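The flow-based coherency idea can be sketched as follows, with OpenCV's classical Farneback estimator standing in for the unspecified dense optical flow algorithm. How gradients are propagated through this loss during training is not detailed in the text, so the sketch shows only the loss-value computation.

```python
import numpy as np
import cv2

def flow_consistency_loss(real_prev, real_next, fake_prev, fake_next):
    """L1 distance between the dense optical flow of consecutive real frames and
    the flow of the corresponding generated frames (uint8 grayscale inputs)."""
    f_real = cv2.calcOpticalFlowFarneback(real_prev, real_next, None,
                                          0.5, 3, 15, 3, 5, 1.2, 0)
    f_fake = cv2.calcOpticalFlowFarneback(fake_prev, fake_next, None,
                                          0.5, 3, 15, 3, 5, 1.2, 0)
    return np.mean(np.abs(f_real - f_fake))
```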
We first depict the image generation scheme, then we review our video generation model. We use a conditional GAN to generate images from semantic maps as in. In order to generate images, the generator receives the semantic segmentation images s i and maps it to photo-realistic images x i. In parallel, the discriminator takes two images, The real image x i (ground truth) and the generated image f i and learns to distinguish between them. This supervised learning scheme is trained in the well-known min max game ,: In order to generate photo-realistic visually appealing images containing fine details, we provide a learnt representation of an edge map to the generator (fig 2.1), allowing it to learn the conditional distribution of real images given semantic maps and edge maps, i.e.: During training, given an example image x i, we can estimate its edge map by the well-known spatial Laplacian operator ,. This edge map is concatenated to the semantic label map and both are given as priors to the generator for adversarial training of the fake image f i vs. the real image x i. To allow a stable training we begin training our GAN with the edge maps from the Laplacian operator. After stabilization of the generator and discriminator, we provide our generator with edge maps from the DNED. We then jointly train the GAN with the DNED. The DNED architecture is a modified version of. In HED, they generate several sized versions of the edge map, each having a different receptive field. The purpose is to create an ensemble of edge maps, each allowing different level of details in the image. When superimposing all, the ing edge map will have coarse-to-fine level of details in the generated edge map image. By changing the weights of that ensemble, we can generate the desired variability in the generated edge map, thus allowing us to generate diverse versions of the output. To conclude, the loss function for training the DNED is: Where: d i (x), i = 0: 5 is the i th side output of a single scale, E(x) is the classic edge map generated by the spatial Laplacian operator, BCE is the binary cross entropy loss. N = 6 in our case. a i is the contribution of the i th scale to the ensemble. Increasing the resolution of the image might be challenging for GAN training. In other methods the discriminator needs a large receptive field , , , , requiring a deeper network or larger convolution kernels. Using a deeper network is prone to overfitting and in the case of GAN training, and might cause training to be unstably. This challenge is usually addressed by the multi-scale approach , , , ,. Since the DNED embed a learnt representation of skeletons, our architecture performs very well on higher resolution images. Our original generated images were of size [512x256]. We have successfully trained our model to generate images of size [768x384], i.e. 1.5 times larger in each dimension without changing the model while using a single discriminator (see 3.2). Figure 3: comparison of 768x384 pix images generated by pix2pixHD (Left) and our model (Right). Our model can generate lower level details in the image, thus improving its photo-realism. This figure provides an example comparing (768x384 pix) resolution images of pix2pixHD (left side) compared to our model (right). We showed that generating high quality images when using a single discriminator is feasible and training is stable. We provide comparison using our method with multi-scale discriminator 3.2. 
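Based on the description of the DNED loss above (a weighted sum of BCE terms between each of the N = 6 side outputs d_i(x) and the Laplacian edge map E(x)), a plausible numpy rendering is the following; the exact elided equation may weight or normalize the terms differently.

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    pred = np.clip(pred, eps, 1.0 - eps)
    return -np.mean(target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred))

def dned_loss(side_outputs, laplacian_edges, weights):
    """Weighted BCE between each side output d_i(x) and the edge map E(x)."""
    return sum(a * bce(d, laplacian_edges)
               for a, d in zip(weights, side_outputs))

# Toy usage: six side outputs in [0, 1] against a binary edge map.
H, W = 32, 32
sides = [np.random.rand(H, W) for _ in range(6)]
E = (np.random.rand(H, W) > 0.9).astype(np.float64)
print(dned_loss(sides, E, weights=[1.0 / 6] * 6))
```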
the FM loss is computed with k=1 for single layer discriminator and k=3 for multi layer one: Figure 4: Comparison of generated test images, when training with a single discriminator and a multi-scale one. The left image is generated when the generator was trained with a single discriminator, while the right image while using a multi-scale one. This figure demonstrates that when using our model (with our skeleton), training with a single discriminator might be enough. In addition, following , , , we are using the perceptual loss for improved visual performance and to encourage the discriminator distinguish real or fake samples using a pre traind. Where, P is the number of slices from a pre-trained VGG network and F L V GGi are the features extracted by the VGG network from the i th layer of the real and generated images respectively. To conclude, our overall objective for generating photo-realistic, diverse images in the CG2real setting is to minimize L CG2real: Figure 5: By using edge maps, the model learns to separate objects of the same semantics. The most dominant example is buildings. Unlike cars, pedestrians or bicycle riders, that are separable using the instance map, buildings are not. The semantic label provides the pixels in which the building exists. Considering the fact that a scene of adjacent buildings is somewhat common, the ability to separate them is of high value. Left -the label map. Middle -generated image by Wang et al. (2018b). Right -our generated image. Our model can generate unique adjacent buildings from the semantic label maps of better quality compared to Wang et al. (2018b). Figure 6: previous work test images Wang et al. (2018b) (Top) compared to our model test images (Bottom). The images generated by our model contain low level details, allowing the desired photorealism 3.3 VIDEO GENERATION Using pre trained CG2real networks, we generate two consecutive images, and then estimate two flow maps. The first flow map is between x i, x i+1, where x i and x i+1 are two consecutive real images. The second flow map is between G(s i, e i), G(s i+1, e i+1), where G(s i, e i) and G(s i+1, e i+1) are two consecutive generated (fake) images. Note that the generation of G(s i, e i), G(s i+1, e i+1) is done independently, meaning we apply our CG2real method twice, without any modifications. To conclude we enforce temporal coherency by using the following loss: Where and F(*) is the optical flow operator. This formulation eliminates the need of using a sequential generator as in Wang et al. (2018a), allowing us not only using our image generation model twice, which adds more constrains to the video generation scheme, but also avoid errors accumulation arising from positive feedback by feeding a generated image to the generator, as can be seen in figure 3.3 and in this video. By adding L f low to the L CG2real loss, the network learns to generate G(s i+1, e i+1) taking the flow maps into account, thus generating temporally coherent images as depicted in 3.3. Figure 7: block diagram of the video generation model. Two identical CG2real models generate Fake image (t) and Fake image (t+1). The two consecutive fake images are fed to the flow-fake estimator, while two consecutive real images are fed to the flow-real estimator. Both real and fake flow maps are trained using L 1 (F real, F f ake) loss. This enables the pre-trained CG2real models to learn the required coherency for generating photo-realistic videos. Our goal is to generate photo-realistic images. In (fig 3. 
2) we can find some examples from the CG2real image synthesis task, and in (fig 4) we present consecutive images depicting the video to video synthesis. We use the same evaluation methods as used by previous image to image works, e.g. pix2pix Isola et al., pix2pixHD Wang et al. (2018b) and others. The evaluation process consists of performing semantic segmentation with a pre-trained semantic segmentation network on synthesized images produced by our model, then calculating the semantic pixel accuracy and the mean intersection over union (mIoU) over the classes in the dataset. As shown in Tables 1 and 2 below, our network outperforms previous works. The ground-truth results are the pixel accuracy and mIoU when performing the same semantic segmentation with the real images (Oracle). Furthermore, to evaluate the image generation quality, we used another metric to evaluate distances between datasets called FID (Fréchet Inception Distance). It is a very common metric for generative models as it correlates well with the visual quality of generated samples Wang et al. (2018a). FID calculates the distance between two multivariate Gaussians, fitted to real and generated samples respectively, where X_r ∼ N(µ_r, Σ_r) and X_g ∼ N(µ_g, Σ_g) are the 2048-dimensional activations of the Inception-v3 pool3 layer, and the distance between these two Gaussians is the score for image distributions X_r and X_g. A lower FID score is better, meaning higher similarity between real and generated samples. As can be seen in Tables 1 and 2, pix2pixHD's results are better than pix2pix's for pixel accuracy and mIoU. Our results are better than pix2pixHD's, and almost meet the oracles on both Synthia and Cityscapes. FVD (Fréchet Video Distance) is a metric for evaluating video generation models and it uses a modified version of FID. We calculated the FVD score for our generated video (Ours-vid) w.r.t. the Oracle (real video) and did the same for vid2vid w.r.t. the same Oracle. Our FVD score on the video test set is 0.326 while vid2vid's is 0.706, meaning our videos are more than twice as similar to the oracle. We suggest that this substantial margin stems from the errors accumulated in the video generation model of vid2vid (fig 4). As mentioned, our video generation model uses our flow loss and therefore does not encounter this phenomenon. Figure 8: Comparison of video generation. Up -images generated by vid2vid Wang et al. (2018a). Down -images generated by our video generation model. Our generated images are temporally coherent and visually appealing. In our images the sky is more natural, road signs are clearer and buildings have a finer level of details. This example emphasizes the error propagation of vid2vid's model while our model does not accumulate errors (see street lights in upper right corner of each image). The main objective of the video generation model is to enable generating non-flickering images by giving objects in consecutive images the same color and texture, i.e. sampling from the same area in the latent space. Full videos can be seen here. 
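For completeness, the standard FID formula, which we assume is the (elided) score equation referenced above, can be computed from the pool3 activation statistics as follows:

```python
import numpy as np
from scipy import linalg

def fid(act_real, act_gen):
    """Frechet Inception Distance from two sets of Inception-v3 pool3 activations,
    each of shape [num_images, 2048]. For small sample sizes the covariances may
    be near-singular; implementations often add a small multiple of the identity."""
    mu_r, mu_g = act_real.mean(axis=0), act_gen.mean(axis=0)
    cov_r = np.cov(act_real, rowvar=False)
    cov_g = np.cov(act_gen, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_r @ cov_g, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return float(np.sum((mu_r - mu_g) ** 2)
                 + np.trace(cov_r + cov_g - 2.0 * covmean))
```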
We present a CG2real conditional image generation method as well as a conditional video synthesis scheme. We propose using a network that learns the distribution of edge maps of real images, and integrating it into the generator (DNED). We are able to generate highly detailed and diverse images, thus enabling better photo-realism. Using the DNED enables generating diverse yet photo-realistic realizations of the same desired scene without using instance maps. As for video generation, we offer a new scheme that utilizes flow maps, allowing better temporal coherence in videos. We compared our model to recent works and found that it outperforms them quantitatively and, more importantly, generates more appealing images. Furthermore, our video generation model produces temporally coherent and consistent videos. A APPENDIX Figure 9: Additional test images from the Cityscapes dataset. Left - pix2pixHD. Right - ours. These images further demonstrate the photo-realism achieved by our model. Figure 10: Additional test images on the Synthia dataset. Left - pix2pixHD. Right - ours. These images demonstrate improved image quality and better, finer details in the generated objects, buildings, and vegetation. Figure 11: Additional test images on the Synthia dataset. Left - pix2pixHD. Right - ours. These images demonstrate improved image quality and better, finer details in the generated objects, buildings, and vegetation. Figure 12: Test video on Cityscapes. Left - vid2vid. Right - our video generation model. These images demonstrate better temporal coherency in the generated images. Moreover, in the top left corner of the left video, we notice how the error propagates. Furthermore, the buildings in the right video are more reasonable w.r.t. windows, shades, general texture, etc. As for image quality, the road signs in the right video are better emphasized.
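As a concrete illustration of the segmentation-based evaluation protocol used above (pixel accuracy and mIoU computed from a pre-trained segmentation network's predictions on synthesized images), a minimal sketch is given below; the ignore_label convention is a common assumption, not a detail taken from the paper.

```python
import numpy as np

def segmentation_scores(pred, gt, num_classes, ignore_label=255):
    """Pixel accuracy and mean IoU between predicted and ground-truth label maps.
    `pred` and `gt` are integer arrays of the same shape; pixels equal to
    `ignore_label` in `gt` are excluded from scoring."""
    valid = gt != ignore_label
    pred, gt = pred[valid], gt[valid]
    pixel_acc = float((pred == gt).mean())
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    return pixel_acc, float(np.mean(ious))
```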
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
BJxqohNFPB
Simulation-to-real image translation and video generation
Deep neural networks are widely used in various domains, but the prohibitive computational complexity prevents their deployment on mobile devices. Numerous model compression algorithms have been proposed, however, it is often difficult and time-consuming to choose proper hyper-parameters to obtain an efficient compressed model. In this paper, we propose an automated framework for model compression and acceleration, namely PocketFlow. This is an easy-to-use toolkit that integrates a series of model compression algorithms and embeds a hyper-parameter optimization module to automatically search for the optimal combination of hyper-parameters. Furthermore, the compressed model can be converted into the TensorFlow Lite format and easily deployed on mobile devices to speed-up the inference. PocketFlow is now open-source and publicly available at https://github.com/Tencent/PocketFlow. Deep learning has been widely used in various areas, such as computer vision, speech recognition, and natural language translation. However, deep learning models are often computational expensive, which limits further applications on mobile devices with limited computational resources. To address this dilemma between accuracy and computational complexity, numerous algorithms have been proposed to compress and accelerate deep networks with minimal performance degradation. Commonly-used approaches include low-rank decomposition BID15 BID14, channel pruning (a.k.a. structured pruning) BID6 BID17, weight sparsification (a.k.a. non-structured pruning) BID16, and weight quantization BID1 BID2. However, these algorithms usually involve several hyper-parameters that may have a large impact on the compressed model's performance. It can be quite difficult to efficiently choose proper hyper-parameter combinations for different models and learning tasks. Recently, some researches adopted reinforcement learning methods to automatically determine hyperparameters for channel pruning BID4 and weight sparsification BID5 algorithms. In this paper, we present an automated framework for compressing and accelerating deep neural networks, namely PocketFlow. We aim at providing an easy-to-use toolkit for developers to improve the inference efficiency with little or no performance degradation. PocketFlow has inte-grated a series of model compression algorithms, including structured/non-structured pruning and uniform/non-uniform quantization. A hyper-parameter optimizer is incorporated to automatically determine hyper-parameters for model compression components. After iteratively training candidate compressed models and adjusting hyper-parameters, a final compressed model is obtained to maximally satisfy user's requirements on compression and/or acceleration ratios. The ing model can be exported as a TensorFlow-Lite file for efficient deployment on mobile devices. The proposed framework mainly consists of two categories of algorithm components, i.e. learners and hyper-parameter optimizers, as depicted in FIG0. Given an uncompressed original model, the learner module generates a candidate compressed model using some randomly chosen hyperparameter combination. The candidate model's accuracy and computation efficiency is then evaluated and used by hyper-parameter optimizer module as the feedback signal to determine the next hyper-parameter combination to be explored by the learner module. After a few iterations, the best one of all the candidate models is output as the final compressed model. 
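The iterative interplay between the learner and the hyper-parameter optimizer described above can be sketched as follows. This is a schematic outline under stated assumptions, not PocketFlow's actual API: the `learner`, `hp_optimizer`, and `evaluate` objects are hypothetical stand-ins for the corresponding components.

```python
def automated_compression(original_model, learner, hp_optimizer, evaluate, num_trials=20):
    """Schematic PocketFlow-style search: propose hyper-parameters, compress with
    fast fine-tuning, score the candidate, and update the optimizer's model."""
    best_model, best_reward = None, float("-inf")
    for _ in range(num_trials):
        hparams = hp_optimizer.propose()                       # e.g. per-layer pruning ratios / bit widths
        candidate = learner.compress(original_model, hparams)  # fast fine-tuning only
        reward = evaluate(candidate)                           # accuracy + compression/acceleration terms
        hp_optimizer.update(hparams, reward)                   # GP / TPE / DDPG model update
        if reward > best_reward:
            best_model, best_reward = candidate, reward
    return learner.retrain(best_model)                         # re-train the winner with full data
```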
A learner refers to a model compression algorithm augmented with several training techniques, as shown in FIG0. The model compression algorithms supported in PocketFlow include, among others, NonUniformQuantLearner: weight quantization with non-uniform reconstruction levels BID2. All of the above model compression algorithms can be trained with fast fine-tuning, which directly derives a compressed model from the original one by applying either pruning masks or quantization functions. The resulting model can be fine-tuned for a few iterations to recover the accuracy to some extent. Alternatively, the compressed model can be re-trained with the full training data, which leads to higher accuracy but usually takes longer to complete. To further reduce the compressed model's performance degradation, we adopt network distillation to augment its training process with an extra loss term, using the original uncompressed model's outputs as soft labels. Additionally, multi-GPU distributed training is enabled for all learners to speed up the time-consuming training process. For model compression algorithms, there are several hyper-parameters that may have a large impact on the final compressed model's performance. It can be quite difficult to manually determine proper values for these hyper-parameters, especially for developers who are not very familiar with the algorithm details. Therefore, we introduce the hyper-parameter optimizer module to iteratively search for the optimal hyper-parameter setting. In PocketFlow, we provide several implementations of the hyper-parameter optimizer, based on models including Gaussian Processes (GP) BID11, the Tree-structured Parzen Estimator (TPE) BID0, and Deep Deterministic Policy Gradients (DDPG) BID10. The hyper-parameter setting is optimized through an iterative process. In each iteration, the hyper-parameter optimizer chooses a combination of hyper-parameter values, and the learner generates a candidate model with fast fine-tuning. The candidate model is evaluated to calculate the reward of the current hyper-parameter setting. After that, the hyper-parameter optimizer updates its model to improve its estimate over the hyper-parameter space. Finally, when the best candidate model (and corresponding hyper-parameter setting) is selected after some iterations, this model can be re-trained with the full data to further reduce the performance loss. For empirical evaluation, we adopt PocketFlow to compress and accelerate classification models on the CIFAR-10 BID9 and ILSVRC-12 BID12 data sets. In FIG1, we use ChannelPrunedLearner to speed up ResNet-56 BID3 and reduce its computational complexity. We observe that the accuracy loss under 2.5× acceleration is 0.4% and under 3.3× acceleration is 0.7%, and the compressed models are more efficient and effective than the shallower ResNet-44 model. In FIG1, we use WeightSparseLearner to compress MobileNet BID7 and reduce its model size. We find that the compressed model achieves similar classification accuracy with a much smaller model size than the MobileNet, Inception-v1 BID13, and ResNet-18 models. The compressed models generated by PocketFlow can be exported as TensorFlow Lite models and directly deployed on mobile devices using the mobile-optimized interpreter. In TAB2, we compare the classification accuracy, model size, and inference latency of the original and compressed models. With ChannelPrunedLearner, the compressed model achieves a 1.53× speed-up with a 2.0% loss in top-5 classification accuracy.
With UniformQuantLearner, we achieve 2.46× speed-up after applying 8-bit quantization on the MobileNet model, while the top-5 accuracy loss is merely 0.6%. In this paper, we present the PocketFlow framework to boost the deployment of deep learning models on mobile devices. Various model compression algorithms are integrated and hyper-parameter optimizers are introduced into the training process to automatically generate highly-accurate compressed models with minimal human effort.
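To make the distillation-augmented training mentioned above concrete, here is a minimal NumPy sketch of a loss that mixes the usual cross-entropy with a soft-label term computed from the uncompressed model's outputs; the temperature and mixing weight are illustrative choices, not values reported for PocketFlow.

```python
import numpy as np

def distillation_loss(student_logits, teacher_logits, labels, temperature=4.0, alpha=0.5):
    """Cross-entropy on hard labels plus KL to the teacher's softened outputs.
    `temperature` and `alpha` are illustrative hyper-parameters."""
    def softmax(z):
        z = z - z.max(axis=1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    p_student = softmax(student_logits)
    hard_loss = -np.mean(np.log(p_student[np.arange(len(labels)), labels] + 1e-12))

    p_teacher_T = softmax(teacher_logits / temperature)   # soft labels from the original model
    p_student_T = softmax(student_logits / temperature)
    soft_loss = np.mean(np.sum(p_teacher_T * (np.log(p_teacher_T + 1e-12)
                                              - np.log(p_student_T + 1e-12)), axis=1))
    return alpha * hard_loss + (1.0 - alpha) * (temperature ** 2) * soft_loss
```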
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
H1fWoYhdim
We propose PocketFlow, an automated framework for model compression and acceleration, to facilitate deep learning models' deployment on mobile devices.
Generative models provide a way to model structure in complex distributions and have been shown to be useful for many tasks of practical interest. However, current techniques for training generative models require access to fully-observed samples. In many settings, it is expensive or even impossible to obtain fully-observed samples, but economical to obtain partial, noisy observations. We consider the task of learning an implicit generative model given only lossy measurements of samples from the distribution of interest. We show that the true underlying distribution can be provably recovered even in the presence of per-sample information loss for a class of measurement models. Based on this, we propose a new method of training Generative Adversarial Networks (GANs) which we call AmbientGAN. On three benchmark datasets, and for various measurement models, we demonstrate substantial qualitative and quantitative improvements. Generative models trained with our method can obtain $2$-$4$x higher inception scores than the baselines. The output of the generator is passed through a simulated random measurement function f Θ. The discriminator must decide if a measurement is real or generated. models of the data structure. Recent work has shown that generative models can be particularly effective for easier sensing BID4; BID20 ]-but if sensing is expensive in the first place, how can we collect enough data to train a generative model to start with?This work solves this chicken-and-egg problem by training a generative model directly from noisy or incomplete samples. We show that our observations can be even projections or more general measurements of different types and the unknown distribution is still provably recoverable. A critical assumption for our framework and theory to work is that the measurement process is known and satisfies certain technical conditions. We present several measurement processes for which it is possible to learn a generative model from a dataset of measured samples, both in theory and in practice. Our approach uses a new way of training GANs, which we call AmbientGAN. The idea is simple: rather than distinguish a real image from a generated image as in a traditional GAN, our discriminator must distinguish a real measurement from a simulated measurement of a generated image; see FIG0. We empirically demonstrate the effectiveness of our approach on three datasets and a variety of measurement models. Our method is able to construct good generative models from extremely noisy observations and even from low dimensional projections with drastic per-sample information loss. We show this qualitatively by exhibiting samples with good visual quality, and quantitatively by comparing inception scores BID28 ] to baseline methods. Theoretical . We first consider measurements that are noisy, blurred versions of the desired images. That is, we consider convolving the original image with a Gaussian kernel and adding independent Gaussian noise to each pixel (our actual theorem applies to more general kernels and noise distributions). Because of the noise, this process is not invertible for a single image. However, we show that the distribution of measured images uniquely determines the distribution of original images. This implies that a pure Nash equilibrium for the GAN game must find a generative model that matches the true distribution. 
We show similar for a dropout measurement model, where each pixel is set to zero with some probability p, and a random projection measurement model, where we observe the inner product of the image with a random Gaussian vector. Empirical . Our empirical work also considers measurement models for which we do not have provable guarantees. We present on some of our models now and defer the full exploration to Section 8.In FIG1, we consider the celebA dataset of celebrity faces BID19 ] under randomly placed occlusions, where a randomly placed square containing 1/4 of the pixels is set to zero. It is hard to inpaint individual images, so cleaning up the data by inpainting and then learning a GAN on the yields significant artifacts. By incorporating the measurement process into the GAN training, we can produce much better samples. In FIG2 we consider learning from noisy, blurred version of images from the celebA dataset. Each image is convolved with a Gaussian kernel and then IID Gaussian noise is added to each pixel. Learning a GAN on images denoised by Wiener deconvolution leads to poor sample quality while our models are able to produce cleaner samples. In FIG2, we consider learning a generative model on the 2D images in the MNIST handwritten digit dataset BID17 ] from pairs of 1D projections. That is, measurements consist of picking two random lines and projecting the image onto each line, so the observed value along the line is the sum of all pixels that project to that point. We consider two variants: in the first, the choice of line is forgotten, while in the second the measurement includes the choice of line. We find for both variants that AmbientGAN recovers a lot of the underlying structure, although the first variant cannot identify the distribution up to rotation or reflection. There are two distinct approaches to constructing neural network based implicit generative models; autoregressive BID15 BID25 a) ], and adversarial BID11 ]. Some combination approaches have also been successful BID21 ].The adversarial framework has been shown to be extremely powerful in modeling complex data distributions such as images BID27;; BID3 ], video BID18; BID31 ], and 3D models BID0; BID32. A learned generative model can be useful for many applications. A string of papers BID4 BID14; BID33 ] explore the utility of generative priors to solve ill-posed inverse problems. BID29 ] demonstrate that synthetic data can be made more realistic using GANs. BID14 ] and [] show how to translate images from one domain to another using GANs. The idea of operating generators and discriminators on different spaces has been proposed before. BID22 ] explores an interesting connection of training stability with low dimensional projections of samples. They show that training a generator against an array of discriminators, each operating on a different low-dimensional projection of the data can improve stability. Our work is also closely related to BID10 ] where the authors create 3D object shapes from a dataset of 2D projections. We note that their setup is a special case of the AmbientGAN framework where the measurement process creates 2D projections using weighted sums of voxel occupancies. Throughout, we use superscript'r' to denote real or true distribution, superscript'g' for the generated distributions,'x' for the underlying space and'y' for measurements. Let p r x be a real underlying distribution over R n. We observe lossy measurements performed on samples from p r x. 
If we let m be the size of each observed measurement, then, each measurement is an output of some measurement function f θ: R n → R m, parameterized by θ. We allow the measurement function to be stochastic by letting the parameters of the measurement functions have a distribution p θ. With this notation, for a given x and θ, the measurements are given by y = f θ (x). We assume that it is easy to sample Θ ∼ p θ and to compute f θ (x) for any x and θ. The distributions p r x and p θ naturally induce a distribution over the measurements y which we shall denote by p r y. In other words, if X ∼ p r x and DISPLAYFORM0 Our task is the following: there is some unknown distribution p r x and a known distribution p θ. We are given a set of IID realizations {y 1, y 2, . . ., y s} from the distribution p r y. Using these, our goal is to create an implicit generative model of p r x, i.e., a stochastic procedure that can sample from p r x. Our main idea is to combine the measurement process with the adversarial training framework, as shown in FIG0. Just like in the standard GAN setting, let Z ∈ R k, Z ∼ p z be a random latent vector for a distribution p z that is easy to sample from, such as IID Gaussian or IID uniform. Let DISPLAYFORM1, and let p g x be the distribution of X g. Thus, our goal is to learn a generator G such that p g x is close to p r x. However, unlike the standard GAN setting, we do not have access to the desired objects (X ∼ p r x). Instead, we only have a dataset of measurements (samples from Y ∼ p r y). Our main idea is to simulate random measurements on the generated objects X g, and use the discriminator to distinguish real measurements from fake measurements. Thus, we sample a random measurement function f Θ by sampling Θ ∼ p θ and apply it on X g to obtain DISPLAYFORM2 We set up the discriminator to predict if a given y is a sample from the real measurement distribution p r y as opposed to the generated measurement distribution p g y. Thus, the discriminator is a function D: R m → R.We let q(·) be the quality function that is used to define the objective, based on the discriminator output. For vanilla GAN, q(x) = log(x) and for Wasserstein GAN ], q(x) = x. Accordingly, the AmbientGAN objective is the following: DISPLAYFORM3 We additionally require f θ to be differentiable with respect to its inputs for all θ. We implement G and D as feedforward neural networks. With these assumptions, our model is end-to-end differentiable and can be trained using an approach similar to the standard gradient-based GAN training procedure. In each iteration, we sample Z ∼ p z, Θ ∼ p θ, and Y r ∼ UNIF{y 1, y 2, . . ., y s} to use them to compute stochastic gradients of the objective with respect to parameters in G and D by backpropagation. We alternate between updates to parameters of D and updates to parameters of G.We note that our approach is compatible with and complementary to the various improvements proposed to the GAN objective, network architectures, and the training procedures. Additionally, we can easily incorporate additional information, such as per sample labels, in our framework through conditional versions of the generator and discriminator. This is exemplified in our experiments, where we use unconditional and conditional versions of DCGAN BID27 ], unconditional Wasserstein GAN with gradient penalty BID12 ], and an Auxiliary Classifier Wasserstein GAN BID23 ] with gradient penalty. Now, we describe the measurement models that we use for our theoretical and empirical . 
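Before turning to the specific measurement models, here is a minimal PyTorch-style sketch of one AmbientGAN update with the vanilla-GAN quality function q(x) = log(x), as described above. The generator/discriminator modules, optimizers, and the simple Block-Pixels measurement used here are illustrative stand-ins, and the discriminator is assumed to output a probability in (0, 1).

```python
import torch

def sample_block_pixels(p=0.5):
    """Returns a differentiable measurement f_theta: zero each pixel w.p. p."""
    def f_theta(x):  # x: (batch, channels, H, W)
        mask = (torch.rand(x.shape[0], 1, x.shape[2], x.shape[3],
                           device=x.device) > p).float()
        return x * mask
    return f_theta

def ambientgan_step(G, D, real_measurements, opt_G, opt_D, z_dim=100, eps=1e-8):
    batch = real_measurements.shape[0]

    # Discriminator: real measurements vs. simulated measurements of generated samples.
    z = torch.randn(batch, z_dim, device=real_measurements.device)
    f_theta = sample_block_pixels()
    fake_meas = f_theta(G(z)).detach()
    d_loss = -(torch.log(D(real_measurements) + eps).mean()
               + torch.log(1 - D(fake_meas) + eps).mean())
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Generator: fool the discriminator *through* the (differentiable) measurement process.
    z = torch.randn(batch, z_dim, device=real_measurements.device)
    f_theta = sample_block_pixels()
    g_loss = -torch.log(D(f_theta(G(z))) + eps).mean()
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
    return d_loss.item(), g_loss.item()
```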
We primarily focus on 2D images and thus our measurement models are tailored to this setting. The AmbientGAN learning framework, however, is more general and can be used for other data formats and other measurement models as well. For the rest of this section, we assume that input to the measurement function (x) is a 2D image. We consider the following measurement models:Block-Pixels: Each pixel is independently set to zero with probability p. Convolve+Noise: Let k be a convolution kernel and let Θ ∼ p θ be the distribution of noise. Then the measurements are given by f Θ (x) = k * x + Θ, where * is the convolution operator. Block-Patch: A randomly chosen k × k patch is set to zero. Keep-Patch: All pixels outside a randomly chosen k × k patch are set to zero. Extract-Patch: A random k ×k patch is extracted. Note that unlike the previous measurement function, the information about the location of the patch is lost. Pad-Rotate-Project: We pad the image on all four sides by zeros. Then we rotate the image by a random angle (θ) about its center. The padding is done to make sure that the original pixels stay within the boundary. Finally, for each channel in the image, we sum the pixels along the vertical axis to get one measurement vector. PadRotate-Project-θ: This is the same as the previous measurement function, except that along with the projection values, the chosen angle is also included in the measurements. Gaussian-Projection: We project onto a random Gaussian vector which is included in the measurements. So, Θ ∼ N (0, I n), and f Θ (x) = (Θ, Θ, x). We show that we can provably recover the true underlying distribution p r x for certain measurement models. Our broad approach is to show that there is a unique distribution p r x consistent with the observed measurement distribution p r y, i.e., the mapping of distributions of samples p r x to distribution of measurements p r y is invertible even though the map from an individual image x to its measurements f θ (x) is not. If this holds, then the following lemma immediately gives a consistency guarantee with the AmbientGAN training procedure. Lemma 5.1. As in Section 3, let p r x be the data distribution, p θ be the distribution over parameters of the measurement functions and p r y be the induced measurement distribution. Further, assume that for the given p θ, there is a unique probability distribution p r x that induces the given measurement distribution p r y. Then, for the vanilla GAN model BID11 DISPLAYFORM0 All proofs including this one are deferred to Appendix A. Note that the previous lemma makes a non-trivial assumption of uniqueness of the true underlying distribution given the measurement distribution. The next few theorems show that this assumption is satisfied under Gaussian-Projection, Convolve+Noise and Block-Pixels measurement models, thus showing that that we can recover the true underlying distribution with the AmbientGAN framework. We remark that the required conditions in the preceding theorem are easily satisfied for the common setting of Gaussian blurring kernel with additive Gaussian noise. The same guarantee can be generalized for any continuous and invertible function instead of a convolution. We omit the details. Our next theorem makes an assumption of a finite discrete set of pixel values. This assumption holds in most practical scenarios since images are represented with a finite number of discrete values per channel. 
In this setting, in addition to a consistency guarantee, we also give a sample complexity for approximately learning the distributions in the AmbientGAN framework. Theorem 5.4. Assume that each image pixel takes values in a finite set P. Thus x ∈ P n ⊂ R n. Assume 0 ∈ P, and consider the Block-Pixels measurement model (Section 4) with p being the probability of blocking a pixel. If p < 1, then there is a unique distribution p r x that can induce the measurement distribution p r y. Further, for any > 0, δ ∈, given a dataset of DISPLAYFORM1 IID measurement samples from p r y, if the discriminator D is optimal, then with probability ≥ 1 − δ over the dataset, any optimal generator G must satisfy DISPLAYFORM2 We used three datasets for our experiments. MNIST is a dataset of 28 × 28 images of handwritten digits BID17 ]. CelebA is a dataset of face images of celebrities BID19 ]. We use an aligned and cropped version where each image is 64 × 64 RGB. The CIFAR-10 dataset consists of 32 × 32 RGB images from 10 different classes BID16 DISPLAYFORM0 We briefly describe the generative models we used for our experiments. More details on architectures and hyperparameters can be found in the appendix. For the MNIST dataset, we use two GAN models. The first model is a conditional DCGAN which follows the architecture in [] 1, while the second model is an unconditional Wasserstein GAN with gradient penalty (WGANGP) which follows the architecture in BID12 2. For the celebA dataset, we use an unconditional DCGAN and follow the architecture in BID27 3. For the CIFAR-10 dataset, we use an Auxiliary Classifier Wasserstein GAN with gradient penalty (ACWGANGP) which follows the residual architecture in BID12 4.For measurements with 2D outputs, i.e. Block-Pixels, Block-Patch, Keep-Patch, Extract-Patch, and Convolve+Noise (see Section 4), we use the same discriminator architectures as in the original work. For 1D projections, i.e. Pad-Rotate-Project, Pad-Rotate-Project-θ, we use fully connected discriminators. The architecture of the fully connected discriminator used for the MNIST dataset was 25-25-1 and for the celebA dataset was 100-100-1. Now, we describe some baseline approaches that we implemented to evaluate the relative performance of the AmbientGAN framework. Recall that we have a dataset of IID samples {y 1, y 2, . . . y s} from the measurement distribution p r y and our goal is to create an implicit generative model for p r x. A crude baseline is to ignore that any measurement happened at all. In other words, for cases where the measurements lie in the same space as the full-samples (for example Convolve+Noise) we can learn a generative model directly on the measurements and test how well it approximates the true distribution p r x. We call this the "ignore" baseline. A stronger baseline is based on the following observation: If the measurement functions f θ were invertible, and we observed θ i for each measurement y i in our dataset, we could just invert the functions to obtain full-samples DISPLAYFORM0 θi (y i). Then we could directly learn a generative model using these full-samples. Notice that both assumptions are violated in the AmbientGAN setting. First, we may not observe θ i and second, the functions may not be invertible. Indeed all the measurement models in Section 4 violate one of the assumptions. However, we can try to approximate an inverse function and use the inverted samples to train a generative model. 
Thus, given a measurement y i = f θi (x i), we try to "unmeasure" it and obtain x i, an estimate of x i. We then learn a generative model with the estimated inverse samples and test how well it approximates p For the measurement models described in Section 4, we now describe the methods we used to obtain approximate inverse functions: (a) For the Block-Pixels measurements, a simple approximate inverse function is to just blur the image so that zero pixels are filled in from the surrounding. We also implemented a more sophisticated approach to fill in the pixels by using total variation inpainting. (b) For Convolve+Noise measurements with a Gaussian kernel and additive Gaussian Noise, we approximate the inverse by a Wiener deconvolution. (c) For Block-Patch measurements, we use the Navier Stokes based inpainting method BID2 ] to fill in the zero pixels. For other measurement models, it is unclear how to obtain an approximate inverse function. For the Keep-Patch measurement model, no pixels outside a box are known and thus inpainting methods are not suitable. Inverting Extract-Patch measurements is even harder since the information about the position of the patch is also lost. For the Pad-Rotate-Project-θ measurements, a conventional technique is to sample many angles, and use techniques for inverting the Radon transform BID7 ]. However, since we observe only a few projections at a time, these methods aren't readily applicable. Inverting Pad-Rotate-Project measurements is even harder since it lacks information about θ. So, on this subset of experiments, we report only the with the AmbientGAN models. We present some samples generated by the baselines and our models. For each experiment, we show the samples from the dataset of measurements (Y r) available for training, samples generated by the baselines (when applicable) and the samples generated by our models (X g). We show samples only for a selected value of parameter settings. More are provided in the appendix. All on MNIST are deferred to the appendix. Block-Pixels: FIG5 shows on celebA with DCGAN and FIG7 on CIFAR-10 with ACW-GANGP. We see that the samples are heavily degraded in our measurement process (left image). Thus, it is challenging for baselines to invert the measurements process, and correspondingly, they do not produce good samples (middle image). Our models are able to produce images with good visual quality (right image).Convolve+Noise: We use a Gaussian kernel and IID Gaussian noise. FIG2 shows on celebA with DCGAN. We see that the measurements are drowned in noise (left image) and the baselines Block-Patch, Keep-Patch: FIG1 shows the for Block-Patch and FIG6 for Keep-Patch measurements on celebA with DCGAN. On both measurement distributions, our models are able to create coherent faces (right image) by observing only parts of one image at a time.1D projections: Pad-Rotate-Project and Pad-Rotate-Project-θ measurement models exhibit drastic signal degradation; most of the information in a sample is lost during the measurements process. For our experiments, we use two measurements at a time. FIG2 shows the on MNIST with DCGAN. While the first model is able to learn only up to rotation and reflection (left image), we note that generated digits have similar orientations and chirality within each class without any explicit incentive. We hypothesize that the model prefers this mode because it is easier to learn with consistent orientation per class. 
The second measurement model contains the rotation angle and thus produces upright digits (right image). While in both cases, the generated images are of lesser visual quality, our method demonstrates that we can produce images of digits given only 1D projections. Failure case: In FIG6, we show the samples obtained from our model trained on celebA dataset with Pad-Rotate-Project-θ measurements with a DCGAN. We see that the model has learned a very crude outline of a face, but lacks details. This highlights the difficulty in learning complex distributions with just 1D projections and a need for better understanding of distribution recovery under projection measurement model as well as better methods for training GANs. We report inception scores BID28 ] to quantify the quality of the generative models learned in the AmbientGAN framework. For the CIFAR-10 dataset, we use the Inception model BID30 ] trained on the ImageNet dataset BID8 1. For computing a similar score on MNIST, we trained a classification model with two conv+pool layers followed by two fully connected layers 2. The final test set accuracy of this model was 99.2%. For Block-Pixels measurements on MNIST, we trained several models with our approach and the baselines, each with a different probability p of blocking pixels. For each model, after convergence, we computed the inception score using the network described above. A plot of the inception scores as a function of p is shown in Fig. 7 (left). We note that at p = 0, i.e. if no pixels are blocked, our model is equivalent to a conventional GAN. As we increase p, the baseline models quickly start to perform poorly, while the AmbientGAN models continue to perform relatively well. For the Convolve+Noise measurements with a Gaussian kernel of radius 1 pixel, and additive Gaussian noise with zero mean and standard deviation σ, we trained several models on MNIST by varying the value of σ. A plot of the inception score as a function of σ is shown in Fig. 7 (right). We see that for small variance of additive noise, Wiener deconvolution and the "ignore" baseline perform quite well. However, as we start to increase the noise levels, these baselines quickly deteriorate in performance, while the AmbientGAN models maintain a high inception score. For 1D projection measurements, we report the inception scores for the samples produced by the AmbientGAN models trained with two projection measurements at a time. The Pad-Rotate-Project model produces digits at various orientations and thus does quite poorly, achieving an inception score of just 4.18. The model with Pad-Rotate-Project-θ measurements produces well-aligned digits and achieves an inception score of 8.12. For comparison, the vanilla GAN model trained with fullyobserved samples achieves an inception score of 8.99. Thus, the second model comes quite close to the performance of the fully-observed case while being trained only on 1D projections. In Fig. 8 (left), we show a plot of inception score vs the probability of blocking pixels p in the BlockPixels measurement model on CIFAR-10. We note that the total variation inpainting method is quite slow and the performance on MNIST was about the same as unmeasure-blur baseline. So, we do not run inpainting baselines on the CIFAR-10 dataset. From the plots, we see a trend similar to the plot obtained with MNIST (Fig. 7, left), showing the superiority of our approach over baselines. We show the inception score as a function of training iteration in Fig. 
8 (right).Generative models are powerful tools, but constructing a generative model requires a large, highquality dataset of the distribution of interest. We show how to relax this requirement, by learning a distribution from a dataset that only contains incomplete, noisy measurements of the distribution. We hope that this will allow for the construction of new generative models of distributions for which no high-quality dataset exists. Lemma. As in Section 3, let p r x be the data distribution, p θ be the distribution over parameters of the measurement functions and p r y be the induced measurement distribution. Further, assume that for the given p θ, there is a unique probability distribution p r x that induces the given measurement distribution p r y. Then, for the vanilla GAN model BID11 ], if the Discriminator D is optimal, so that DISPLAYFORM0 Proof. From the same argument as in Theorem 1 in BID11 ], it follows that p g y = p r y. Then, since there is a unique probability distribution p Proof. We note that Since Θ ∼ N (0, I n), all possible directions for projections are covered. Further, since the measurement model includes the projection vector Θ as a part of the measurements, in order to match the measurement distribution, the underlying distribution p r x must be such that all 1D marginals are matched. Thus, by Cramer-Wold theorem BID6 ], any sequence of random vectors that match the 1D marginals must converge in distribution to the true underlying distribution. Thus, in particular, there is a unique probability distribution p r x that can match all 1D marginals obtained with the Gaussian projection measurements. Theorem. Let F(·) denote the Fourier transform and let supp(·) be the support of a function. Consider the Convolve+Noise measurement model (Section 4) with the convolution kernel k and additive noise distribution p θ. If supp(F(k)) c = φ and supp(F(p θ)) c = φ, then there is a unique distribution p r x that can induce the measurement distribution p r y. DISPLAYFORM0 With a slight abuse of notation, we will denote the probability density functions (pdf) also by p subscripted with the variable name. Then we have DISPLAYFORM1 where the penultimate step follows since by assumption, F(k) is nowhere 0. In the last step, F −1 is the inverse Fourier transform. Thus, there is a bijective map between X and Z. Since the Fourier and the inverse Fourier are continuous transformations, this map is also continuous. So, we can write Z = h(X), where h is a bijective, differentiable function. So, the pdfs of X and Z are related as DISPLAYFORM2 where J h (x) is the Jacobian of h evaluated atx. Now, note that since Y is a sum of two random variables, its pdf is a convolution of the individual probability density functions. So we have: DISPLAYFORM3 Taking the Fourier transform on both sides, we have DISPLAYFORM4 where the penultimate step follows since by assumption, F(p θ) is nowhere 0.Combining the two , we have a reverse map from the measurement distribution p y to a sample distribution p x. Thus, the reverse map uniquely determines the true underlying distribution p x, concluding the proof. We first state a slightly different version of Theorem 1 from BID11 ] for the discrete setting. We shall use [n] to denote the set {1, 2, . . . n}, and use I(·) to denote the indicator function. Lemma 10.1. Consider a dataset of measurement samples {y 1, y 2, . . . y s}, where each y i ∈ [t]. 
We define the empirical version of the vanilla GAN objective as DISPLAYFORM0 For j ∈ [t], letp r y (j) = I(y i = j)/s be the empirical distribution of samples. Then the optimal discriminator for the empirical objective is such that DISPLAYFORM1 Additionally, if we fix the discriminator to be optimal, then any optimal generator must satisfy p g y =p r y.Proof. The Empirical Risk Minimization (ERM) version of the loss is equivalent to the taking expectation of the data dependent term with respect to the empirical distribution. Replacing the real data distribution with the empirical version in the proof of Theorem 1 from BID11 ], we obtain the . Now we give a proof of Theorem 5.4.Theorem. Assume that each image pixel takes values in a finite set P. Thus x ∈ P n ⊂ R n. Assume 0 ∈ P, and consider the Block-Pixels measurement model (Section 4) with p being the probability of blocking a pixel. If p < 1, then there is a unique distribution p Proof. We first consider a more general case and apply that to the Block-Pixels model. Consider a discrete distribution p x over [t]. We apply random measurement functions to samples from p x to obtain measurements. Assume that each measurement also belongs to the same set, i.e. [t]. Let A ∈ R t×t be the transition matrix so that A ij is the probability (under the randomness in measurement functions) that measurement i was produced by sample j. Then the distribution over measurements p y can be written in terms of p x and A as: DISPLAYFORM2 Thus, if the matrix A is invertible, we can guarantee that the distribution p x is recoverable from p y.Assuming A is invertible, we now turn to the sample complexity. Let λ be the minimum of magnitude of eigenvalues of A. Since A is invertible, λ > 0. Let the dataset of measurements be {y 1, y 2, . . . y s}. For j ∈ [t] and for k ∈ [s], Let Y j k = I(y k = j). Then for any > 0, we have DISPLAYFORM3 where we used union bound and Chernoff inequalities. Setting this to δ, we get s = t 2 2λ 2 2 log 2t δ.From Lemma 10.1, we know that the optimal generator must satisfy p. Thus, we obtain that with probability ≥ 1 − δ, DISPLAYFORM4 =.Now we turn to the specific case of Block-Pixels measurement. We proceed by dividing the set of all possible |P | n images into n + 1 classes. The i-th class has those images that have exactly i pixels with zero value. We sort the images according to their class number (arbitrary ordering within the class) and consider the transition matrix A. Note that given an image from class i it must have j ≥ i zero pixels after the measurement. Also, no image in class i can produce another image in the same class after measurements. Thus, the transition matrix is lower triangular. Since each pixel is blocked independently with probability p and since there are n pixels, the event that no pixels are blocked occurs with probability FIG0 n. Thus, every image has at least (1 − p) n chance of being unaffected by the measurements. Any unaffected image maps to itself and thus forms diagonal entries in the transition matrix. So, we observe that the diagonal entries of the transition matrix are strictly positive and their minimum value is (1 − p) n.For a triangular matrix, the diagonal entries are precisely the eigenvalues and hence we have proved that A is invertible and the smallest eigenvalue is (1 − p) n. Combined with the above, by setting λ = (1 − p) n, and t = |P | n, we conclude the proof. The DCGAN model on MNIST follows the architecture in BID27 ]. 
The noise input to the generator (Z) has 100 dimensions where each coordinate is sampled IID Uniform on [−1, 1]. The generator uses two linear layers followed by two deconvolutional layers. The labels are concatenated with the inputs of each layer. The discriminator uses two convolutional layers followed by two linear layers. As with the generator, the labels are concatenated with the inputs of each layer. Batch-norm is used in both generator and the discriminator. The WGANGP model on MNIST follows the architecture in BID12 ]. The generator takes in a latent vector of 128 dimensions where each coordinate is sampled IID Uniform on [−1, 1]. The generator then applies one linear and three deconvolutional layers. The discriminator uses three convolutional layers followed by one linear layer. Batch-norm is not used. The unconditional DCGAN model on celebA follows the architecture in BID27 ]. The latent vector has 100 dimensions where each coordinate is Uniform on [−1, 1]. The generator applies one linear layer followed by four deconvolutional layers. The discriminator uses four convolutional layers followed by a linear layer. Batch-norm is used in both generator and the discriminator. The ACWGANGP model on CIFAR-10 follows the residual architecture in BID12 ]. The latent vector has 128 dimensions where each coordinate is sampled from IID standard Gaussian distribution. The generator has a linear layer followed by three residual blocks. Each residual block consists of two repetitions of the following three operations: conditional batch normalization followed by a nonlinearity followed by an upconvolution layer. The residual blocks are followed by another conditional batch normalization, a final convolution, and a final tanh non-linearity. The discriminator consists of one residual block with two convolutional layers followed by three residual blocks, and a final linear layer. Here, we present some more for various measurement models. So far, in our analysis and experiments, we assumed that the parametric form of the measurement function and the distribution of those parameters is exactly known. This was then used for simulating the stochastic measurement process. Here, we consider the case where the parameter distribution is only approximately known. In this case, one would like the training process to be robust, i.e. the quality of the learned generator to be close to the case where the parameter distribution is exactly known. Through the following experiment, we empirically demonstrate that the AmbientGAN approach is robust to systematic mismatches in the parameter distribution of the measurement function. Consider the Block-Pixels measurement model (Section 4). We use the MNIST dataset. Pixels are blocked with probability p * = 0.5 to obtain a dataset of measurements. For several values of blocking probability p for the measurement function applied to the output of the generator, we train AmbientGAN models with this dataset. After training, we compute the inception score of the learned generators and plot it as a function of p in FIG0. We note that the plot peaks at p = p * = 0.5 and gradually drops on both sides. This suggests that our method is somewhat robust to parameter distribution mismatch. We provide further evidence that the generator learned through AmbientGAN approach captures the data distribution well. Generative models have been shown to improve sensing over sparsitybased approaches BID4 ]. We attempt to use the GAN learned using our procedure for compressed sensing. 
We trained an AmbientGAN with Block-Pixels measurement model (Section 4) on MNIST with p = 0.5. Using the learned generator, we followed the rest of the procedure in BID4 ] using their code 3. FIG0 (right) shows a plot of reconstruction error vs the number of measurements, comparing Lasso with AmbienGAN. Thus, we observe a similar reduction in the number of measurements while using AmbientGAN trained with corrupted samples instead of a regular GAN trained with fully observed samples.
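The compressed-sensing experiment above follows the generative-prior recovery procedure of BID4: search the latent space for a code whose generated image, after measurement by A, matches the observations. A minimal sketch follows, assuming A is a linear measurement matrix; the step count, learning rate, and number of random restarts are illustrative, not the settings used in the experiment.

```python
import torch

def cs_reconstruct(G, A, y, z_dim=100, steps=1000, lr=0.05, restarts=3):
    """Recover an image from linear measurements y ~= A x using a pre-trained
    (Ambient)GAN generator G: minimise ||A G(z) - y||^2 over z and return G(z)."""
    best_x, best_loss = None, float("inf")
    for _ in range(restarts):
        z = torch.randn(1, z_dim, requires_grad=True)
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):
            loss = ((A @ G(z).reshape(-1, 1) - y) ** 2).sum()
            opt.zero_grad(); loss.backward(); opt.step()
        if loss.item() < best_loss:
            best_loss, best_x = loss.item(), G(z).detach()
    return best_x
```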
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
Hy7fDog0b
How to learn GANs from noisy, distorted, partial observations
Random Matrix Theory (RMT) is applied to analyze the weight matrices of Deep Neural Networks (DNNs), including both production quality, pre-trained models such as AlexNet and Inception, and smaller models trained from scratch, such as LeNet5 and a miniature-AlexNet. Empirical and theoretical clearly indicate that the empirical spectral density (ESD) of DNN layer matrices displays signatures of traditionally-regularized statistical models, even in the absence of exogenously specifying traditional forms of regularization, such as Dropout or Weight Norm constraints. Building on recent in RMT, most notably its extension to Universality classes of Heavy-Tailed matrices, we develop a theory to identify 5+1 Phases of Training, corresponding to increasing amounts of Implicit Self-Regularization. For smaller and/or older DNNs, this Implicit Self-Regularization is like traditional Tikhonov regularization, in that there is a "size scale" separating signal from noise. For state-of-the-art DNNs, however, we identify a novel form of Heavy-Tailed Self-Regularization, similar to the self-organization seen in the statistical physics of disordered systems. This implicit Self-Regularization can depend strongly on the many knobs of the training process. By exploiting the generalization gap phenomena, we demonstrate that we can cause a small model to exhibit all 5+1 phases of training simply by changing the batch size. The inability of optimization and learning theory to explain and predict the properties of NNs is not a new phenomenon. From the earliest days of DNNs, it was suspected that VC theory did not apply to these systems. It was originally assumed that local minima in the energy/loss surface were responsible for the inability of VC theory to describe NNs, and that the mechanism for this was that getting trapped in local minima during training limited the number of possible functions realizable by the network. However, it was very soon realized that the presence of local minima in the energy function was not a problem in practice (2; 3). Thus, another reason for the inapplicability of VC theory was needed. At the time, there did exist other theories of generalization based on statistical mechanics (4; 5; 6; 7), but for various technical and nontechnical reasons these fell out of favor in the ML/NN communities. Instead, VC theory and related techniques continued to remain popular, in spite of their obvious problems. More recently, theoretical of Choromanska et al. (which are related to (4; 5; 6; 7)) suggested that the Energy/optimization Landscape of modern DNNs resembles the Energy Landscape of a zero-temperature Gaussian Spin Glass; and empirical of Zhang et al. have again pointed out that VC theory does not describe the properties of DNNs. Martin and Mahoney then suggested that the Spin Glass analogy may be useful to understand severe overtraining versus the inability to overtrain in modern DNNs.We should note that it is not even clear how to define DNN regularization. The challenge in applying these well-known ideas to DNNs is that DNNs have many adjustable "knobs and switches," independent of the Energy Landscape itself, most of which can affect training accuracy, in addition to many model parameters. Indeed, nearly anything that improves generalization is called regularization. Evaluating and comparing these methods is challenging, in part since there are so many, and in part since they are often constrained by systems or other not-traditionally-ML considerations. 
Motivated by this situation, we are interested here in two related questions.• Theoretical Question. Why is regularization in deep learning seemingly quite different than regularization in other areas on ML; and what is the right theoretical framework with which to investigate regularization for DNNs? • Practical Question. How can one control and adjust, in a theoretically-principled way, the many knobs and switches that exist in modern DNN systems, e.g., to train these models efficiently and effectively, to monitor their effects on the global Energy Landscape, etc.? That is, we seek a Practical Theory of Deep Learning, one that is prescriptive and not just descriptive. This theory would provide useful tools for practitioners wanting to know How to characterize and control the Energy Landscape to engineer larger and betters DNNs; and it would also provide theoretical answers to broad open questions as Why Deep Learning even works. Main Empirical Results. Our main empirical consist in evaluating empirically the ESDs (and related RMT-based statistics) for weight matrices for a suite of DNN models, thereby probing the Energy Landscapes of these DNNs. For older and/or smaller models, these are consistent with implicit Self-Regularization that is Tikhonov-like; and for modern state-of-the-art models, these suggest novel forms of Heavy-Tailed Self-Regularization.• Self-Regularization in old/small models. The ESDs of older/smaller DNN models (like LeNet5 and a toy MLP3 model) exhibit weak Self-Regularization, well-modeled by a perturbative variant of MP theory, the Spiked-Covariance model. Here, a small number of eigenvalues pull out from the random bulk, and thus the MP Soft Rank and Stable Rank both decrease. This weak form of Self-Regularization is like Tikhonov regularization, in that there is a "size scale" that cleanly separates "signal" from "noise," but it is different than explicit Tikhonov regularization in that it arises implicitly due to the DNN training process itself.• Heavy-Tailed Self-Regularization. The ESDs of larger, modern DNN models (including AlexNet and Inception and nearly every other large-scale model we have examined) deviate strongly from the common Gaussian-based MP model. Instead, they appear to lie in one of the very different Universality classes of Heavy-Tailed random matrix models. We call this HeavyTailed Self-Regularization. The ESD appears Heavy-Tailed, but with finite support. In this case, there is not a "size scale" (even in the theory) that cleanly separates "signal" from "noise." Main Theoretical Results. Our main theoretical consist in an operational theory for DNN Self-Regularization. Our theory uses ideas from RMT-both vanilla MP-based RMT as well as extensions to other Universality classes based on Heavy-Tailed distributions-to provide a visual taxonomy for 5 + 1 Phases of Training, corresponding to increasing amounts of Self-Regularization.• Modeling Noise and Signal. We assume that a weight matrix W can be modeled as W W rand + ∆ sig, where W rand is "noise" and where ∆ sig is "signal." For small to medium sized signal, W is well-approximated by an MP distribution-with elements drawn from the Gaussian Universality class-perhaps after removing a few eigenvectors. 
For large and strongly-correlated signal, W rand gets progressively smaller, but we can model the non-random strongly-correlated signal ∆ sig by a Heavy-Tailed random matrix, i.e., a random matrix with elements drawn from a Heavy-Tailed (rather than Gaussian) Universality class.• 5+1 Phases of Regularization. Based on this, we construct a practical, visual taxonomy for 5+1 Phases of Training. Each phase is characterized by stronger, visually distinct signatures in the ESD of DNN weight matrices, and successive phases correspond to decreasing MP Soft Rank and increasing amounts of Self-Regularization. The 5+1 phases are: RANDOM-LIKE, BLEEDING-OUT, BULK+SPIKES, BULK-DECAY, HEAVY-TAILED, and RANK-COLLAPSE. Based on these , we speculate that all well optimized, large DNNs will display Heavy-Tailed Self-Regularization in their weight matrices. Evaluating the Theory. We provide a detailed evaluation of our theory using a smaller MiniAlexNew model that we can train and retrain.• Effect of Explicit Regularization. We analyze ESDs of MiniAlexNet by removing all explicit regularization (Dropout, Weight Norm constraints, Batch Normalization, etc.) and characterizing how the ESD of weight matrices behave during and at the end of Backprop training, as we systematically add back in different forms of explicit regularization.• Exhibiting the 5+1 Phases. We demonstrate that we can exhibit all 5+1 phases by appropriate modification of the various knobs of the training process. In particular, by decreasing the batch size from 500 to 2, we can make the ESDs of the fully-connected layers of MiniAlexNet vary continuously from RANDOM-LIKE to HEAVY-TAILED, while increasing generalization accuracy along the way. These illustrate the Generalization Gap pheneomena (12; 13; 14), and they explain that pheneomena as being caused by the implicit Self-Regularization associated with models trained with smaller and smaller batch sizes. In this section, we summarize from RMT that we use. Several overviews of RMT are available (15; 16; 17; 18; 19; 20; 21; 22). Here, we will describe a more general form of RMT. MP theory considers the density of singular values ρ(ν i) of random rectangular matrices W. This is equivalent to considering the density of eigenvalues ρ(λ i), i.e., the ESD, of matrices of the form X = W T W. MP theory then makes strong statements about such quantities as the shape of the distribution in the infinite limit, it's bounds, expected finite-size effects, such as fluctuations near the edge, and rates of convergence. To apply RMT, we need only specify the number of rows and columns of W and assume that the elements W i,j are drawn from a distribution that is a member of a certain Universality class (there are different for different Universality classes). RMT then describes properties of the ESD, even at finite size; and one can compare perdictions of RMT with empirical . Most well-known is the Universality class of Gaussian distributions. This leads to the basic or vanilla MP theory, which we describe in this section. More esoteric-but ultimately more useful for us-are Universality classes of Heavy-Tailed distributions. In Section 2.2, we describe this important variant. Gaussian Universality class. We start by modeling W as an N × M random matrix, with elements from a Gaussian distribution, such that: W ij ∼ N (0, σ 2 mp). 
Then, MP theory states that the ESD of the correlation matrix, X = W^T W, has the limiting density given by the MP distribution ρ(λ) = (Q / 2πσ²_mp) · √((λ₊ − λ)(λ − λ₋)) / λ for λ ∈ [λ₋, λ₊], and 0 otherwise. Here, σ²_mp is the element-wise variance of the original matrix, Q = N/M ≥ 1 is the aspect ratio of the matrix, and the minimum and maximum eigenvalues, λ±, are given by λ± = σ²_mp (1 ± 1/√Q)². Finite-size Fluctuations at the MP Edge. In the infinite limit, all fluctuations in ρ_N(λ) concentrate very sharply at the MP edge, λ±, and the distribution of the maximum eigenvalue ρ_∞(λ_max) is governed by the TW Law. Even for a single finite-sized matrix, however, MP theory states that the upper edge of ρ(λ) is very sharp; and even when the MP Law is violated, the TW Law, with finite-size corrections, works very well at describing the edge statistics. When these laws are violated, this is very strong evidence for the onset of more regular, non-random structure in the DNN weight matrices, which we will interpret as evidence of Self-Regularization. MP-based RMT is applicable to a wide range of matrices, but it is not in general applicable when matrix elements are strongly correlated. Strong correlations appear to be the case for many well-trained, production-quality DNNs. In statistical physics, it is common to model strongly-correlated systems by Heavy-Tailed distributions. The reason is that these models exhibit, more or less, the same large-scale statistical behavior as natural phenomena in which strong correlations exist (32; 19). Moreover, recent results from MP/RMT have shown that new Universality classes exist for matrices with elements drawn from certain Heavy-Tailed distributions. We use these Heavy-Tailed extensions of basic MP/RMT to build an operational and phenomenological theory of Regularization in Deep Learning; and we use these extensions to justify our analysis of both Self-Regularization and Heavy-Tailed Self-Regularization. Briefly, our theory for simple Self-Regularization is inspired by the Spiked-Covariance model of Johnstone and its interpretation as a form of Self-Organization by Sornette; and our theory for more sophisticated Heavy-Tailed Self-Regularization is inspired by the application of MP/RMT tools in quantitative finance by Bouchaud, Potters, and coworkers (35; 36; 37; 23; 25; 19; 22), as well as the relation of Heavy-Tailed phenomena more generally to Self-Organized Criticality in Nature. Here, we highlight basic results for this generalized MP theory; see (24; 23; 25; 26; 27; 28; 29; 30; 19; 31) in the physics and mathematics literature for additional details. Table 1: Basic MP theory, and the spiked and Heavy-Tailed extensions we use, including known, empirically-observed, and conjectured relations between them. Boxes marked "*" are best described as following "TW with large finite-size corrections" that are likely Heavy-Tailed, leading to bulk edge statistics and far tail statistics that are indistinguishable. Boxes marked "**" are phenomenological fits, describing large (2 < µ < 4) or small (0 < µ < 2) finite-size corrections on N → ∞ behavior. See (24; 23; 25; 26; 27; 28; 29; 30; 19; 31) for additional details. Universality classes for modeling strongly correlated matrices. Consider modeling W as an N × M random matrix, with elements drawn from a Heavy-Tailed, e.g., Pareto or Power Law (PL), distribution of the form P(W_ij) ∼ 1/|W_ij|^(1+µ). In these cases, if W is element-wise Heavy-Tailed, then the ESD ρ_N(λ) likewise exhibits Heavy-Tailed properties, either globally for the entire ESD and/or locally at the bulk edge.
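The ESD computation and MP comparison described above can be sketched numerically as follows. The normalisation X = W^T W / N and the standard MP expressions for ρ(λ) and λ± are the usual conventions assumed here, not code taken from the paper.

```python
import numpy as np

def esd(W):
    """Empirical spectral density: eigenvalues of X = W^T W / N for an N x M
    layer weight matrix W (Q = N/M >= 1)."""
    N, M = W.shape
    X = (W.T @ W) / N
    return np.linalg.eigvalsh(X)

def mp_density(lam, Q, sigma2=1.0):
    """Marchenko-Pastur density for aspect ratio Q = N/M >= 1 and element variance sigma2."""
    lam_minus = sigma2 * (1 - 1 / np.sqrt(Q)) ** 2
    lam_plus = sigma2 * (1 + 1 / np.sqrt(Q)) ** 2
    inside = (lam > lam_minus) & (lam < lam_plus)
    rho = np.zeros_like(lam, dtype=float)
    rho[inside] = (Q / (2 * np.pi * sigma2)) * np.sqrt(
        (lam_plus - lam[inside]) * (lam[inside] - lam_minus)) / lam[inside]
    return rho
```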
Table 1 summarizes these recent , comparing basic MP theory, the Spiked-Covariance model, and Heavy-Tailed extensions of MP theory, including associated Universality classes. To apply the MP theory, at finite sizes, to matrices with elements drawn from a Heavy-Tailed distribution of the form given in Eqn. FORMULA2, we have one of the following three Universality classes.• (Weakly) Heavy-Tailed, 4 < µ: Here, the ESD ρ N (λ) exhibits "vanilla" MP behavior in the infinite limit, and the expected mean value of the bulk edge is λ + ∼ M −2/3. Unlike standard MP theory, which exhibits TW statistics at the bulk edge, here the edge exhibits PL / Heavy-Tailed fluctuations at finite N. These finite-size effects appear in the edge / tail of the ESD, and they make it hard or impossible to distinguish the edge versus the tail at finite N.• (Moderately) Heavy-Tailed, 2 < µ < 4: Here, the ESD ρ N (λ) is Heavy-Tailed / PL in the infinite limit, approaching ρ(λ) ∼ λ −1−µ/2. In this regime, there is no bulk edge. At finite size, the global ESD can be modeled by ρ N (λ) ∼ λ −(aµ+b), for all λ > λ min, but the slope a and intercept b must be fit, as they display large finite-size effects. The maximum eigenvalues follow Frechet (not TW) statistics, with λ max ∼ M 4/µ−1 (1/Q) 1−2/µ, and they have large finite-size effects. Thus, at any finite N, ρ N (λ) is Heavy-Tailed, but the tail decays moderately quickly.• (Very) Heavy-Tailed, 0 < µ < 2: Here, the ESD ρ N (λ) is Heavy-Tailed / PL for all finite N, and as N → ∞ it converges more quickly to a PL distribution with tails ρ(λ) ∼ λ −1−µ/2. In this regime, there is no bulk edge, and the maximum eigenvalues follow Frechet (not TW) statistics. Finite-size effects exist, but they are are much smaller here than in the 2 < µ < 4 regime of µ. Fitting PL distributions to ESD plots. Once we have identified PL distributions visually, we can fit the ESD to a PL in order to obtain the exponent α. We use the Clauset-Shalizi-Newman (CSN) approach, as implemented in the python PowerLaw package, 1. Fitting a PL has many subtleties, most beyond the scope of this paper (38; 40; 41; 42; 43; 44; 39; 45; 46). Identifying the Universality class. Given α, we identify the corresponding µ and thus which of the three Heavy-Tailed Universality classes (0 < µ < 2 or 2 < µ < 4 or 4 < µ, as described in Table 1) is appropriate to describe the system. The following are particularly important points. First, observing a Heavy-Tailed ESD may indicate the presence of a scale-free DNN. This suggests that the underlying DNN is strongly-correlated, and that we need more than just a few separated spikes, plus some random-like bulk structure, to model the DNN and to understand DNN regularization. Second, this does not necessarily imply that the matrix elements of W l form a Heavy-Tailed distribution. Rather, the Heavy-Tailed distribution arises since we posit it as a model of the strongly correlated, highly non-random matrix W l. Third, we conjecture that this is more general, and that very welltrained DNNs will exhibit Heavy-Tailed behavior in their ESD for many the weight matrices. In this section, we describe our main empirical for existing, pretrained DNNs. Early on, we observed that small DNNs and large DNNs have very different ESDs. For smaller models, ESDs tend to fit the MP theory well, with well-understood deviations, e.g., low-rank perturbations. For larger models, the ESDs ρ N (λ) almost never fit the theoretical ρ mp (λ), and they frequently have a completely different form. 
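As a sketch of how the fitting and classification steps just described can be carried out in practice (the helper name and the synthetic test data below are illustrative, and the α-to-µ mapping uses only the asymptotic relation α = 1 + µ/2, which, as noted above, has large finite-size corrections in the 2 < µ < 4 regime):

```python
import numpy as np
import powerlaw    # the python PowerLaw package referenced above (pip install powerlaw)

def fit_esd_powerlaw(eigenvalues):
    """Fit a Power Law to the tail of an ESD and map alpha to a rough Universality class."""
    evals = np.asarray(eigenvalues, dtype=float)
    evals = evals[evals > 0]
    fit = powerlaw.Fit(evals)                    # CSN estimator; xmin chosen by KS distance
    alpha, xmin = fit.power_law.alpha, fit.power_law.xmin
    mu = 2.0 * (alpha - 1.0)                     # asymptotic relation rho(lambda) ~ lambda^(-1 - mu/2)
    if mu < 2.0:
        regime = "(Very) Heavy-Tailed: 0 < mu < 2"
    elif mu < 4.0:
        regime = "(Moderately) Heavy-Tailed: 2 < mu < 4 (large finite-size corrections)"
    else:
        regime = "(Weakly) Heavy-Tailed / MP-like: 4 < mu"
    return alpha, xmin, mu, regime

# Usage with synthetic eigenvalues; replace `lam` with the ESD of a real weight matrix.
rng = np.random.default_rng(1)
lam = 1.0 + rng.pareto(1.0, size=5000)
print(fit_esd_powerlaw(lam))
```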
We use RMT to compare and contrast the ESDs of a smaller, older NN and many larger, modern DNNs. For the small model, we retrain a modern variant of one of the very early and well-known Convolutional Nets-LeNet5. For the larger, modern models, we examine selected layers from AlexNet, InceptionV3, and many other models (as distributed with pyTorch).Example: LeNet5. LeNet5 is the prototype early model for DNNs. Since LeNet5 is older, we actually recoded and retrained it. We used Keras 2.0, using 20 epochs of the AdaDelta optimizer, on the MNIST data set. This model has 100.00% training accuracy, and 99.25% test accuracy on the default MNIST split. We analyze the ESD of the FC1 Layer. The FC1 matrix W F C1 is a 2450 × 500 matrix, with Q = 4.9, and thus it yields 500 eigenvalues. FIG1 (b) zoomed-in along the X-axis. We show (red curve) our fit to the MP distribution ρ emp (λ). Several things are striking. First, the bulk of the density ρ emp (λ) has a large, MP-like shape for eigenvalues λ < λ + ≈ 3.5, and the MP distribution fits this part of the ESD very well, including the fact that the ESD just below the best fit λ + is concave. Second, some eigenvalue mass is bleeding out from the MP bulk for λ ∈ [3.5, 5], although it is quite small. Third, beyond the MP bulk and this bleeding out region, are several clear outliers, or spikes, ranging from ≈ 5 to λ max 25. Overall, the shape of ρ emp (λ), the quality of the global bulk fit, and the statistics and crisp shape of the local bulk edge all agree well with MP theory augmented with a low-rank perturbation. Example:. AlexNet was the first modern DNN. AlexNet resembles a scaledup version of the LeNet5 architecture; it consists of 5 layers, 2 convolutional, followed by 3 FC layers (the last being a softmax classifier). We refer to the last 2 layers before the final softmax as layers FC1 and FC2, respectively. FC2 has a 4096 × 1000 matrix, with Q = 4.096.Consider AlexNet FC2 (full in FIG1, and zoomed-in in 1(d)). This ESD differs even more profoundly from standard MP theory. Here, we could find no good MP fit. The best MP fit (in red) does not fit the Bulk part of ρ emp (λ) well. The fit suggests there should be significantly more bulk eigenvalue mass (i.e., larger empirical variance) than actually observed. In addition, the bulk edge is indeterminate by inspection. It is only defined by the crude fit we present, and any edge statistics obviously do not exhibit TW behavior. In contrast with MP curves, which are convex near the bulk edge, the entire ESD is concave (nearly) everywhere. Here, a PL fit gives good fit α ≈ 2.25, indicating a µ 3. For this layer (and others), the shape of ρ emp (λ), the quality of the global bulk fit, and the statistics and shape of the local bulk edge are poorly-described by standard MP theory. Empirical for other pre-trained DNNs. We have also examined the properties of a wide range of other pre-trained models, and we have observed similar Heavy-Tailed properties to AlexNet in all of the larger, state-of-the-art DNNs, including VGG16, VGG19, ResNet50, InceptionV3, etc. Space constraints prevent a full presentation of these , but several observations can be made. First, all of our fits, except for certain layers in InceptionV3, appear to be in the range 1.5 < α 3.5 (where the CSN method is known to perform well). Second, we also check to see whether PL is the best fit by comparing the distribution to a Truncated Power Law (TPL), as well as an exponential, stretch-exponential, and log normal distributions. 
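The following hedged sketch shows how such a comparison can be reproduced for a single layer; it assumes the torchvision distribution of AlexNet (the `pretrained=True` call reflects the older torchvision API) and the same `powerlaw` package as above, and it is not the authors' analysis code.

```python
import numpy as np
import powerlaw
from torchvision import models

model = models.alexnet(pretrained=True)               # older torchvision API; newer: weights=...
W = model.classifier[6].weight.detach().numpy().T     # the 4096 x 1000 layer called FC2 above
N, M = W.shape                                        # Q = N / M ~ 4.1

evals = np.linalg.eigvalsh(W.T @ W / N)               # ESD of the correlation matrix X
evals = evals[evals > 0]

fit = powerlaw.Fit(evals)
print("alpha =", fit.power_law.alpha, " xmin =", fit.power_law.xmin)

# Log-likelihood-ratio tests: R > 0 favors the first-named distribution.
for alt in ("truncated_power_law", "lognormal", "exponential", "stretched_exponential"):
    R, p = fit.distribution_compare("power_law", alt)
    print(f"power_law vs {alt}: R = {R:.2f}, p = {p:.3f}")
```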
In all cases, we find either a PL or TPL fits best (with a p-value ≤ 0.05), with TPL being more common for smaller values of α. Third, even when taking into account the large finite-size effects in the range 2 < α < 4, nearly all of the ESDs appear to fall into the 2 < µ < 4 Universality class. Towards a Theory of Self-Regularization. For older and/or smaller models, like LeNet5, the bulk of their ESDs (ρ N (λ); λ λ + ) can be well-fit to theoretical MP density ρ mp (λ), potentially with distinct, outlying spikes (λ > λ +). This is consistent with the Spiked-Covariance model of Johnstone, a simple perturbative extension of the standard MP theory. This is also reminiscent of traditional Tikhonov regularization, in that there is a "size scale" (λ +) separating signal (spikes) from noise (bulk). This demonstrates that the DNN training process itself engineers a form of implicit Self-Regularization into the trained model. For large, deep, state-of-the-art DNNs, our observations suggest that there are profound deviations from traditional RMT. These networks are reminiscent of strongly-correlated disordered-systems that exhibit Heavy-Tailed behavior. What is this regularization, and how is it related to our observations of implicit Tikhonov-like regularization on LeNet5?To answer this, recall that similar behavior arises in strongly-correlated physical systems, where it is known that strongly-correlated systems can be modeled by random matrices-with entries drawn from non-Gaussian Universality classes, e.g., PL or other Heavy-Tailed distributions. Thus, when we observe that ρ N (λ) has Heavy-Tailed properties, we can hypothesize that W is stronglycorrelated, 2 and we can model it with a Heavy-Tailed distribution. Then, upon closer inspection, we find that the ESDs of large, modern DNNs behave as expected-when using the lens of HeavyTailed variants of RMT. Importantly, unlike the Spiked-Covariance case, which has a scale cut-off (λ +), in these very strongly Heavy-Tailed cases, correlations appear on every size scale, and we can not find a clean separation between the MP bulk and the spikes. These observations demonstrate that modern, state-of-the-art DNNs exhibit a new form of Heavy-Tailed Self-Regularization. In this section, we develop an operational/phenomenological theory for DNN Self-Regularization. MP Soft Rank. We first define the MP Soft Rank (R mp), that is designed to capture the "size scale" of the noise part of W l, relative to the largest eigenvalue of W T l W l. Assume that MP theory fits at least a bulk of ρ N (λ). Then, we can identify a bulk edge λ + and a bulk variance σ 2 bulk, and define the MP Soft Rank as the ratio of λ + and λ max: R mp (W):= λ + /λ max. Clearly, R mp ∈; R mp = 1 for a purely random matrix; and for a matrix with an ESD with outlying spikes, λ max > λ +, and R mp < 1. If there is no good MP fit because the entire ESD is wellapproximated by a Heavy-Tailed distribution, then we can define λ + = 0, in which case R mp = 0.Visual Taxonomy. We characterize implicit Self-Regularization, both for DNNs during SGD training as well as for pre-trained DNNs, as a visual taxonomy of 5+1 Phases of Training (RANDOM-LIKE, BLEEDING-OUT, BULK+SPIKES, BULK-DECAY, HEAVY-TAILED, and RANK-COLLAPSE). See TAB1 Each phase is visually distinct, and each has a natural interpretation in terms of RMT. 
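Before walking through the phases, here is a minimal sketch of the MP Soft Rank just defined, together with the Stable Rank for comparison. It assumes the bulk edge λ+ has already been obtained from an MP fit to the ESD (the subtleties of that fit are discussed later), and the example eigenvalues are synthetic.

```python
import numpy as np

def mp_soft_rank(eigenvalues, lam_plus):
    """R_mp = lambda_plus / lambda_max; close to 1 for a purely random matrix, 0 if no MP bulk."""
    lam_max = float(np.max(eigenvalues))
    if lam_plus <= 0:                      # entire ESD Heavy-Tailed: no meaningful bulk edge
        return 0.0
    return min(lam_plus / lam_max, 1.0)

def stable_rank(eigenvalues):
    """R_s = ||W||_F^2 / ||W||_2^2 = sum_i lambda_i / lambda_max."""
    evals = np.asarray(eigenvalues, dtype=float)
    return float(evals.sum() / evals.max())

# Example: an MP-like bulk bounded by lambda_plus ~ 3.5, plus a few outlying spikes.
rng = np.random.default_rng(0)
evals = np.concatenate([rng.uniform(0.1, 3.5, size=500), [5.0, 9.0, 25.0]])
print("MP Soft Rank:", mp_soft_rank(evals, lam_plus=3.5))
print("Stable Rank: ", stable_rank(evals))
```

For a purely random matrix, R_mp is close to 1; as spikes pull out of the bulk, λ_max grows and both quantities decrease.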
One consideration is the global properties of the ESD: how well all or part of the ESD is fit by an MP distriution, for some value of λ +, or how well all or part of the ESD is fit by a Heavy-Tailed or PL distribution, for some value of a PL parameter. A second consideration is local properties of the ESD: the form of fluctuations, in particular around the edge λ + or around the largest eigenvalue λ max. For example, the shape of the ESD near to and immediately above λ + is very different in FIG5 Theory of Each Phase. RMT provides more than simple visual insights, and we can use RMT to differentiate between the 5+1 Phases of Training using simple models that qualitatively describe the shape of each ESD. We model the weight matrices W as "noise plus signal," where the "noise" is modeled by a random matrix W rand, with entries drawn from the Gaussian Universality class (well-described by traditional MP theory) and the "signal" is a (small or large) correction ∆ sig: TAB1 summarizes the theoretical model for each phase. Each model uses RMT to describe the global shape of ρ N (λ), the local shape of the fluctuations at the bulk edge, and the statistics and information in the outlying spikes, including possible Heavy-Tailed behaviors. DISPLAYFORM0 In the first phase (RANDOM-LIKE), the ESD is well-described by traditional MP theory, in which a random matrix has entries drawn from the Gaussian Universality class. In the next phases (BLEEDING-OUT, BULK+SPIKES), and/or for small networks such as LetNet5, ∆ is a relativelysmall perturbative correction to W rand, and vanilla MP theory (as reviewed in Section 2.1) can be applied, as least to the bulk of the ESD. In these phases, we will model the W rand matrix by a vanilla W mp matrix (for appropriate parameters), and the MP Soft Rank is relatively large (R mp (W) 0). In the BULK+SPIKES phase, the model resembles a Spiked-Covariance model, and the Self-Regularization resembles Tikhonov regularization. In later phases (BULK-DECAY, HEAVY-TAILED), and/or for modern DNNs such as AlexNet and InceptionV3, ∆ becomes more complex and increasingly dominates over W rand. For these more strongly-correlated phases, W rand is relatively much weaker, and the MP Soft Rank decreases. Vanilla MP theory is not appropriate, and instead the Self-Regularization becomes Heavy-Tailed. We will treat the noise term W rand as small, and we will model the properties of ∆ with HeavyTailed extensions of vanilla MP theory (as reviewed in Section 2.2) to Heavy-Tailed non-Gaussian universality classes that are more appropriate to model strongly-correlated systems. In these phases, the strongly-correlated model is still regularized, but in a very non-traditional way. The final phase, the RANK-COLLAPSE phase, is a degenerate case that is a prediction of the theory. To validate and illustrate our theory, we analyzed MiniAlexNet, 3 a simpler version of AlexNet, similar to the smaller models used in, scaled down to prevent overtraining, and trained on CIFAR10. Space constraints prevent a full presentation of these , but we mention a few key here. The basic architecture consists of two 2D Convolutional layers, each with Max Pooling and Batch Normalization, giving 6 initial layers; it then has two Fully Connected (FC), or Dense, layers with ReLU activations; and it then has a final FC layer added, with 10 nodes and softmax activation. W F C1 is a 4096 × 384 matrix (Q ≈ 10.67); W F C2 is a 384 × 192 matrix (Q = 2); and W F C3 is a 192 × 10 matrix. 
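A hedged Keras sketch of such a model is given below. The fully-connected shapes match those just quoted (Flatten → 4096 → 384 → 192 → 10), but the convolutional details (filter counts, kernel sizes) are assumptions chosen only so that the flattened width comes out to 4096; they are not taken from the original experiments, and the `build_mini_alexnet` name and `weight_decay` knob are placeholders.

```python
from tensorflow import keras
from tensorflow.keras import layers, regularizers

def build_mini_alexnet(weight_decay=None):
    """MiniAlexNet-style model; pass weight_decay to re-enable explicit L2 regularization."""
    reg = regularizers.l2(weight_decay) if weight_decay else None
    return keras.Sequential([
        layers.Conv2D(64, 3, padding="same", activation="relu", input_shape=(32, 32, 3)),
        layers.MaxPooling2D(2),
        layers.BatchNormalization(),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(2),
        layers.BatchNormalization(),
        layers.Flatten(),                                                 # 8 * 8 * 64 = 4096
        layers.Dense(384, activation="relu", kernel_regularizer=reg),     # W_FC1: 4096 x 384
        layers.Dense(192, activation="relu", kernel_regularizer=reg),     # W_FC2: 384 x 192
        layers.Dense(10, activation="softmax"),                           # W_FC3: 192 x 10
    ])

model = build_mini_alexnet()
model.summary()
```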
All models are trained using Keras 2.x, with TensorFlow as a backend. We use SGD with momentum, with a learning rate of 0.01, a momentum parameter of 0.9, and a baseline batch size of 32; and we train up to 100 epochs. We save the weight matrices at the end of every epoch, and we analyze the empirical properties of the W F C1 and W F C2 matrices. For each layer, the matrix Entropy (S(W)) gradually lowers; and the Stable Rank (R s (W)) shrinks. These decreases parallel the increase in training/test accuracies, and both metrics level off as the training/test accuracies do. These changes are seen in the ESD, e.g., see FIG6. For layer FC1, the initial weight matrix W 0 looks very much like an MP distribution (with Q ≈ 10.67), consistent with a RANDOM-LIKE phase. Within a very few epochs, however, eigenvalue mass shifts to larger values, and the ESD looks like the BULK+SPIKES phase. Once the Spike(s) appear(s), substantial changes are hard to see visually, but minor changes do continue in the ESD. Most notably, λ max increases from roughly 3.0 to roughly 4.0 during training, indicating further Self-Regularization, even within the BULK+SPIKES phase. Here, spike eigenvectors tend to be more localized than bulk eigenvectors. If explicit regularization (e.g., L 2 norm weight regularization or Dropout) is added, then we observe a greater decrease in the complexity metrics (Entropies and Stable Ranks), consistent with expectations, and this is casued by the eigenvalues in the spike being pulled to much larger values in the ESD. We also observe that eigenvector localization tends to be more prominent, presumably since explicit regularization can make spikes more well-separated from the bulk. In this section, we demonstrate that we can exhibit all five of the main phases of learning by changing a single knob of the learning process. We consider the batch size since it is not traditionally considered a regularization parameter and due to its its implications for the generalization gap. The Generalization Gap refers to the peculiar phenomena that DNNs generalize significantly less well when trained with larger mini-batches (on the order of 10 3 − 10 4) (48; 12; 13; 14). Practically, this is of interest since smaller batch sizes makes training large DNNs on modern GPUs much less efficient. Theoretically, this is of interest since it contradicts simplistic stochastic optimization theory for convex problems. Thus, there is interest in the question: what is the mechanism responsible for the drop in generalization in models trained with SGD methods in the large-batch regime?To address this question, we consider here using different batch sizes in the DNN training algorithm. We trained the MiniAlexNet model, just as in Section 5, except with batch sizes ranging from moderately large to very small (b ∈ {500, 250, 100, 50, 32, 16, 8, 4, 2}). as a function of Batch Size. The MP Soft Rank (R mp) and the Stable Rank (R s) both track each other, and both systematically decrease with decreasing batch size, as the test accuracy increases. In addition, both the training and test accuracy decrease for larger values of b: training accuracy is roughly flat until batch size b ≈ 100, and then it begins to decrease; and test accuracy actually increases for extremely small b, and then it gradually decreases as b increases. ESDs: Comparisons with RMT. FIG9 shows the final ensemble ESD for each value of b for Layer FC1. We see systematic changes in the ESD as batch size b decreases. 
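A hedged sketch of the sweep that produces these per-batch-size weight matrices is shown below; it reuses the `build_mini_alexnet` helper from the earlier sketch, truncates the number of epochs for brevity, and is not the authors' training script.

```python
import numpy as np
from tensorflow import keras

(x_train, y_train), _ = keras.datasets.cifar10.load_data()
x_train = x_train.astype("float32") / 255.0
y_train = keras.utils.to_categorical(y_train, 10)

batch_sizes = [500, 250, 100, 50, 32, 16, 8, 4, 2]
final_fc_weights = {}

for b in batch_sizes:
    model = build_mini_alexnet()                              # from the earlier sketch
    model.compile(optimizer=keras.optimizers.SGD(learning_rate=0.01, momentum=0.9),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    model.fit(x_train, y_train, batch_size=b, epochs=20, verbose=0)   # paper trains up to 100
    dense = [l for l in model.layers if isinstance(l, keras.layers.Dense)]
    final_fc_weights[b] = [l.get_weights()[0] for l in dense[:2]]     # W_FC1 and W_FC2
```

The phase-by-phase appearance of the resulting ESDs is described next.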
At batch size b = 250 (and larger), the ESD resembles a pure MP distribution with no outliers/spikes; it is RANDOM-LIKE. As b decreases, there starts to appear an outlier region. For b = 100, the outlier region resembles BLEEDING-OUT. For b = 32, these eigenvectors become well-separated from the bulk, and the ESD resembles BULK+SPIKES. As batch size continues to decrease, the spikes grow larger and spread out more (observe the scale of the X-axis), and the ESD exhibits BULK-DECAY. Finally, at b = 2, extra mass from the main part of the ESD plot almost touches the spike, and the curvature of the ESD changes, consistent with HEAVY-TAILED. In addition, as b decreases, some of the extreme eigenvectors associated with eigenvalues that are not in the bulk tend to be more localized. Implications for the generalization gap. Our here (both that training/test accuracies decrease for larger batch sizes and that smaller batch sizes lead to more well-regularized models) demonstrate that the generalization gap phenomenon arises since, for smaller values of the batch size b, the DNN training process itself implicitly leads to stronger Self-Regularization. (This SelfRegularization can be either the more traditional Tikhonov-like regularization or the Heavy-Tailed Self-Regularization corresponding to strongly-correlated models.) That is, training with smaller batch sizes implicitly leads to more well-regularized models, and it is this regularization that leads to improved . The obvious mechanism is that, by training with smaller batches, the DNN training process is able to "squeeze out" more and more finer-scale correlations from the data, leading to more strongly-correlated models. Large batches, involving averages over many more data points, simply fail to see this very fine-scale structure, and thus they are less able to construct strongly-correlated models characteristic of the HEAVY-TAILED phase. Clearly, our theory opens the door to address numerous very practical questions. One of the most obvious is whether our RMT-based theory is applicable to other types of layers such as convolutional layers. Initial suggest yes, but the situation is more complex than the relatively simple picture we have described here. These and related directions are promising avenues to explore. This from correlations arising at all size scales, which for DNNs arises implicitly due to the training process itself. This implicit Self-Regularization can depend strongly on the many knobs of the training process. In particular, by exploiting the generalization gap phenomena, we demonstrate that we can cause a small model to exhibit all 5+1 phases of training simply by changing the batch size. This demonstrates that-all else being equal-DNN optimization with larger batch sizes leads to less-well implicitly-regularized models, and it provides an explanation for the generalization gap phenomena. Our suggest that large, welltrained DNN architectures should exhibit Heavy-Tailed Self-Regularization, and we discuss the theoretical and practical implications of this. Very large very deep neural networks (DNNs) have received attention as a general purpose tool for solving problems in machine learning (ML) and artificial intelligence (AI), and they perform remarkably well on a wide range of traditionally hard if not impossible problems, such as speech recognition, computer vision, and natural language processing. 
The conventional wisdom seems to be "the bigger the better," "the deeper the better," and "the more hyper-parameters the better." Unfortunately, this usual modus operandi leads to large, complicated models that are extremely hard to train, that are extremely sensitive to the parameters settings, and that are extremely difficult to understand, reason about, and interpret. Relatedly, these models seem to violate what one would expect from the large body of theoretical work that is currently popular in ML, optimization, statistics, and related areas. This leads to theoretical that fail to provide guidance to practice as well as to confusing and conflicting interpretations of empirical . For example, current optimization theory fails to explain phenomena like the so-called Generalization Gap-the curious observation that DNNs generalize better when trained with smaller batches sizes-and it often does not provide even qualitative guidance as to how stochastic algorithms perform on non-convex landscapes of interest; and current statistical learning theory, e.g., VC-based methods, fails to provide even qualitative guidance as to the behavior of this class of learning methods that seems to have next to unlimited capacity and yet generalize without overtraining. The inability of optimization and learning theory to explain and predict the properties of NNs is not a new phenomenon. From the earliest days of DNNs, it was suspected that VC theory did not apply to these systems. For example, in 1994, Vapnik, Levin, and LeCun BID191 said:[T]he [VC] theory is derived for methods that minimize the empirical risk. However, existing learning algorithms for multilayer nets cannot be viewed as minimizing the empirical risk over [the] entire set of functions implementable by the network. It was originally assumed that local minima in the energy/loss surface were responsible for the inability of VC theory to describe NNs BID191, and that the mechanism for this was that getting trapped in local minima during training limited the number of possible functions realizable by the network. However, it was very soon realized that the presence of local minima in the energy function was not a problem in practice BID126 39]. (More recently, this fact seems to have been rediscovered BID155 37, BID103 BID182 .) Thus, another reason for the inapplicability of VC theory was needed. At the time, there did exist other theories of generalization based on statistical mechanics BID174 BID194 BID107 43], but for various technical and nontechnical reasons these fell out of favor in the ML/NN communities. Instead, VC theory and related techniques continued to remain popular, in spite of their obvious problems. More recently, theoretical of Choromanska et al. (which are related to BID174 BID194 BID107 43] ) suggested that the Energy/optimization Landscape of modern DNNs resembles the Energy Landscape of a zero-temperature Gaussian Spin Glass; and empirical of Zhang et al. BID203 have again pointed out that VC theory does not describe the properties of DNNs. Motivated by these , Martin and Mahoney then suggested that the Spin Glass analogy may be useful to understand severe overtraining versus the inability to overtrain in modern DNNs BID140.Many puzzling questions about regularization and optimization in DNNs abound. In fact, it is not even clear how to define DNN regularization. In traditional ML, regularization can be either explicit or implicit. 
Let's say that we are optimizing some loss function L(·), specified by some parameter vector or weight matrix W. When regularization is explicit, it involves making the loss function L "nicer" or "smoother" or "more well-defined" by adding an explicit capacity control term directly to the loss, i.e., by considering a modified objective of the form L(W) + α W. In this case, we tune the regularization parameter α by cross validation. When regularization is implicit, we instead have some adjustable operational procedure like early stopping of an iterative algorithm or truncating small entries of a solution vector. In many cases, we can still relate this back to the more familiar form of optimizing an effective function of the form L(W) + α W. For a precise statement in simple settings, see BID136 BID162 BID100; and for a discussion of implicit regularization in a broader context, see BID135 and references therein. With DNNs, the situation is far less clear. The challenge in applying these well-known ideas to DNNs is that DNNs have many adjustable "knobs and switches," independent of the Energy Landscape itself, most of which can affect training accuracy, in addition to many model parameters. Indeed, nearly anything that improves generalization is called regularization, and a recent review presents a taxonomy over 50 different regularization techniques for Deep Learning BID123. The most common include ML-like Weight Norm regularization, so-called "tricks of the trade" like early stopping and decreasing the batch size, and DNN-specific methods like Batch Normalization and Dropout. Evaluating and comparing these methods is challenging, in part since there are so many, and in part since they are often constrained by systems or other not-traditionally-ML considerations. Moreover, Deep Learning avoids cross validation (since there are simply too many parameters), and instead it simply drives training error to zero (followed by subsequent fiddling of knobs and switches). Of course, it is still the case that test information can leak into the training process (indeed, perhaps even more severely for DNNs than traditional ML methods). Among other things, this argues for unsupervised metrics to evaluate model quality. Motivated by this situation, we are interested here in two related questions.• Theoretical Question. Why is regularization in deep learning seemingly quite different than regularization in other areas on ML; and what is the right theoretical framework with which to investigate regularization for DNNs?• Practical Question. How can one control and adjust, in a theoretically-principled way, the many knobs and switches that exist in modern DNN systems, e.g., to train these models efficiently and effectively, to monitor their effects on the global Energy Landscape, etc.?That is, we seek a Practical Theory of Deep Learning, one that is prescriptive and not just descriptive. This theory would provide useful tools for practitioners wanting to know How to characterize and control the Energy Landscape to engineer larger and betters DNNs; and it would also provide theoretical answers to broad open questions as Why Deep Learning even works. For example, it would provide metrics to characterize qualitatively-different classes of learning behaviors, as predicted in recent work BID140. Importantly, VC theory and related methods do not provide a theory of this form. 
Let us write the Energy Landscape (or optimization function) for a typical DNN with L layers, with activation functions h l (·), and with weight matrices and biases W l and b l, as follows: DISPLAYFORM0 For simplicity, we do not indicate the structural details of the layers (e.g., Dense or not, Convolutions or not, Residual/Skip Connections, etc.). We imagine training this model on some labeled data {d i, y i} ∈ D, using Backprop, by minimizing the loss L (i.e., the cross-entropy), between E DN N and the labels y i, as follows: DISPLAYFORM1 We can initialize the DNN using random initial weight matrices W 0 l, or we can use other methods such as transfer learning (which we will not consider here). There are various knobs and switches to tune such as the choice of solver, batch size, learning rate, etc. Most importantly, to avoid overtraining, we must usually regularize our DNN. Perhaps the most familiar approach from ML for implementing this regularization explicitly constrains the norm of the weight matrices, e.g., modifying Objective to give: DISPLAYFORM2 where · is some matrix norm, and where α is an explicit regularization control parameter. The point of Objective FORMULA3 is that explicit regularization shrinks the norm(s) of the W l matrices. We may expect similar to hold for implicit regularization. We will use advanced methods from Random Matrix Theory (RMT), developed in the theory of self organizing systems, to characterize DNN layer weight matrices, W l, 1 during and after the training process. Here is an important (but often under-appreciated) point. We call E DN N the Energy Landscape. By this, we mean that part of the optimization problem parameterized by the heretofore unknown elements of the weight matrices and bias vectors, for a fixed α (in FORMULA3), and as defined by the data {d i, y i} ∈ D. Because we run Backprop training, we pass the data through the Energy function E DN N multiple times. Each time, we adjust the values of the weight matrices and bias vectors. In this sense, we may think of the total Energy Landscape (i.e., the optimization function that is nominally being optimized) as changing at each epoch. We analyze the distribution of eigenvalues, i.e., the Empirical Spectral Density (ESD), ρ N (λ), of the correlation matrix X = W T W associated with the layer weight matrix W. We do this for a wide range of large, pre-trained, readily-available state-of-the-art models, including the original LetNet5 convolutional net (which, due to its age, we retrain) and pre-trained models available in Keras and PyTorch such as AlexNet and Inception. In some cases, the ESDs are very well-described by Marchenko-Pastur (MP) RMT. In other cases, the ESDs are well-described by MP RMT, with the exception of one or more large eigenvalues that can be modeled by a Spiked-Covariance model BID139 BID115. In still other cases-including nearly every current state-ofthe-art model we have examined-the EDSs are poorly-described by traditional RMT, and instead they are more consistent with Heavy-Tailed behavior seen in the statistical physics of disordered systems BID181 24]. Based on our observations, we develop a develop a practical theory of Implicit Self-Regularization in DNNs. This theory takes the form of an operational theory characterizing 5+1 phases of DNN training. 
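As a concrete, hedged illustration of the measurement underlying all of this (forming X = WᵀW/N layer by layer for a pre-trained model), the following sketch surveys the fully-connected layers of a Keras pre-trained model and records the spectrum of each together with two simple summaries. The choice of VGG16 and the restriction to Dense layers are ours for illustration; convolutional layers require separate treatment, as discussed later.

```python
import numpy as np
from tensorflow import keras

model = keras.applications.VGG16(weights="imagenet", include_top=True)

for layer in model.layers:
    if not isinstance(layer, keras.layers.Dense):
        continue                                    # convolutional layers need separate treatment
    W = layer.get_weights()[0]                      # Keras Dense kernel: (inputs, outputs)
    if W.shape[0] < W.shape[1]:
        W = W.T                                     # ensure N x M with Q = N / M >= 1
    N, M = W.shape
    evals = np.linalg.eigvalsh(W.T @ W / N)         # ESD of the correlation matrix X
    print(f"{layer.name}: {N} x {M}  Q = {N / M:.2f}  "
          f"lambda_max = {evals.max():.3f}  stable_rank = {evals.sum() / evals.max():.1f}")
```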
To test and validate our theory, we consider two smaller models, a 3-layer MLP (MLP3) and a miniature version of AlexNet (MiniAlexNet), trained on CIFAR10, that we can train ourselves repeatedly, adjusting various knobs and switches along the way. Main Empirical Results. Our main empirical consist in evaluating empirically the ESDs (and related RMT-based statistics) for weight matrices for a suite of DNN models, thereby probing the Energy Landscapes of these DNNs. For older and/or smaller models, these are consistent with implicit Self-Regularization that is Tikhonov-like; and for modern state-of-the-art models, these suggest novel forms of Heavy-Tailed Self-Regularization.• Capacity Control Metrics. We study simple capacity control metrics, the Matrix Entropy, the linear algebraic or Hard Rank, and the Stable Rank. We also use MP RMT to define a new metric, the MP Soft Rank. These metrics track the amount of Self-Regularization that arises in a weight matrix W, either during training or in a pre-trained DNN.• Self-Regularization in old/small models. The ESDs of older/smaller DNN models (like LeNet5 and a toy MLP3 model) exhibit weak Self-Regularization, well-modeled by a perturbative variant of MP theory, the Spiked-Covariance model. Here, a small number of eigenvalues pull out from the random bulk, and thus the MP Soft Rank and Stable Rank both decrease. This weak form of Self-Regularization is like Tikhonov regularization, in that there is a "size scale" that cleanly separates "signal" from "noise," but it is different than explicit Tikhonov regularization in that it arises implicitly due to the DNN training process itself.• Heavy-Tailed Self-Regularization. The ESDs of larger, modern DNN models (including AlexNet and Inception and nearly every other large-scale model we have examined) deviate strongly from the common Gaussian-based MP model. Instead, they appear to lie in one of the very different Universality classes of Heavy-Tailed random matrix models. We call this Heavy-Tailed Self-Regularization. Here, the MP Soft Rank vanishes, and the Stable Rank decreases, but the full Hard Rank is still retained. The ESD appears fully (or partially) Heavy-Tailed, but with finite support. In this case, there is not a "size scale" (even in the theory) that cleanly separates "signal" from "noise."Main Theoretical Results. Our main theoretical consist in an operational theory for DNN Self-Regularization. Our theory uses ideas from RMT-both vanilla MP-based RMT as well as extensions to other Universality classes based on Heavy-Tailed distributions-to provide a visual taxonomy for 5 + 1 Phases of Training, corresponding to increasing amounts of SelfRegularization.• Modeling Noise and Signal. We assume that a weight matrix W can be modeled as W W rand + ∆ sig, where W rand is "noise" and where ∆ sig is "signal. " For small to medium sized signal, W is well-approximated by an MP distribution-with elements drawn from the Gaussian Universality class-perhaps after removing a few eigenvectors. For large and strongly-correlated signal, W rand gets progressively smaller, but we can model the nonrandom strongly-correlated signal ∆ sig by a Heavy-Tailed random matrix, i.e., a random matrix with elements drawn from a Heavy-Tailed (rather than Gaussian) Universality class.• 5+1 Phases of Regularization. Based on this approach to modeling noise and signal, we construct a practical, visual taxonomy for 5+1 Phases of Training. 
Each phase is characterized by stronger, visually distinct signatures in the ESD of DNN weight matrices, and successive phases correspond to decreasing MP Soft Rank and increasing amounts of Self-Regularization. The 5+1 phases are: Random-like, Bleeding-out, Bulk+Spikes, Bulk-decay, Heavy-Tailed, and Rank-collapse.• Rank-collapse. One of the predictions of our RMT-based theory is the existence of a pathological phase of training, the Rank-collapse or "+1" Phase, corresponding to a state of over-regularization. Here, one or a few very large eigenvalues dominate the ESD, and the rest of the weight matrix loses nearly all Hard Rank. Based on these , we speculate that all well optimized, large DNNs will display Heavy-Tailed Self-Regularization in their weight matrices. Evaluating the Theory. We provide a detailed evaluation of our theory using a smaller MiniAlexNew model that we can train and retrain.• Effect of Explicit Regularization. We analyze ESDs of MiniAlexNet by removing all explicit regularization (Dropout, Weight Norm constraints, Batch Normalization, etc.) and characterizing how the ESD of weight matrices behave during and at the end of Backprop training, as we systematically add back in different forms of explicit regularization.• Implementation Details. Since the details of the methods that underlies our theory (e.g., fitting Heavy-Tailed distributions, finite-size effects, etc.) are likely not familiar to ML and NN researchers, and since the details matter, we describe in detail these issues.• Exhibiting the 5+1 Phases. We demonstrate that we can exhibit all 5+1 phases by appropriate modification of the various knobs of the training process. In particular, by decreasing the batch size from 500 to 2, we can make the ESDs of the fully-connected layers of MiniAlexNet vary continuously from Random-like to Heavy-Tailed, while increasing generalization accuracy along the way. These illustrate the Generalization Gap phenomena BID111 BID119 BID104, and they explain that phenomena as being caused by the implicit Self-Regularization associated with models trained with smaller and smaller batch sizes. By adding extreme Weight Norm regularization, we can also induce the Rank-collapse phase. Main Methodological Contribution. Our main methodological contribution consists in using empirical observations as well as recent developments in RMT to motivate a practical predictive DNN theory, rather than developing a descriptive DNN theory based on general theoretical considerations. Essentially, we treat the training of different DNNs as if we are running novel laboratory experiments, and we follow the traditional scientific method:Make Observations → Form Hypotheses → Build a Theory → Test the theory, literally. In particular, this means that we can observe and analyze many large, production-quality, pretrained models directly, without needing to retrain them, and we can also observe and analyze smaller models during the training process. In adopting this approach, we are interested in both "scientific questions" (e.g., "Why is regularization in deep learning seemingly quite different . . . ? ") as well as "engineering questions" (e.g., "How can one control and adjust . . . ?).To accomplish this, recall that, given an architecture, the Energy Landscape is completely defined by the DNN weight matrices. Since its domain is exponentially large, the Energy Landscape is challenging to study directly. We can, however, analyze the weight matrices, as well as their correlations. 
(This is analogous to analyzing the expected moments of a complicated distribution.) In principle, this permits us to analyze both local and global properties of the Energy Landscape, as well as something about the class of functions (e.g., VC class, Universality class, etc.) being learned by the DNN. Since the weight matrices of many DNNs exhibit strong correlations and can be modeled by random matrices with elements drawn from the Universality class of Heavy-Tailed distributions, this severely restricts the class of functions learned. It also connects back to the Energy Landscape since it is known that the Energy Landscape of Heavy-Tailed random matrices is very different than that of Gaussian-like random matrices. In Section 2, we provide a warm-up, including simple capacity metrics and their transitions during Backprop. Then, in Sections 3 and 4, we review on RMT necessary to understand our experimental methods, and we present our initial experimental . Based on this, in Section 5, we present our main theory of 5+1 Phases of Training. Then, in Sections 6 and 7, we evaluate our main theory, illustrating the effect of explicit regularization, and demonstrating implications for the generalization gap phenomenon. Finally, in Section 8, we provide a discussion of our in a broader context. The accompanying code is available at ((link anonymized for ICLR Supplementary Material)). For reference, we provide in TAB1 DISPLAYFORM0, between α and µ (for 2 < µ < 4) ∆λ = λ − λ + empirical uncertainty, due to finite-size effects, in theoretical MP bulk edge ∆ model of perturbations and/or strong correlations in W TAB1: Definitions of notation used in the text. In this section, we describe simple spectral metrics to characterize DNN weight these matrices as well as initial empirical observations on the capacity properties of training DNNs. A DNN is defined by its detailed architecture and the values of the weights and biases at each layer. We seek a simple capacity control metric for a learned DNN model that: is easy to compute both during training and for already-trained models; can describe changes in the gross behavior of weight matrices during the Backprop training process; and can identify the onset of subtle structural changes in the weight matrices. One possibility is to use the Euclidean distance between the initial weight matrix, W 0 l, and the weight matrix at epoch e of training, W e l, i.e., ∆(W e l) = W 0 l − W e l 2. This distance, however, is not scale invariant. In particular, during training, and with regularization turned off, the weight matrices may shift in scale, gaining or losing Frobenius mass or variance, 2 and this distance metric is sensitive to that change. Indeed, the whole point of a BatchNorm layer is to try to prevent this. To start, then, we will consider two scale-invariant measures of capacity control: the Matrix Entropy (S), and the Stable Rank (R s). For an arbitrary matrix W, both of these metrics are defined in terms of its spectrum. Consider N × M (real valued) layer weight matrices W l, where DISPLAYFORM0 where ν i = Σ ii is the i th singular value 3 of W, and let p i = ν 2 i / i ν 2 i. We also define the associated M × M (uncentered) correlation matrix DISPLAYFORM1 where we sometimes drop the (l) subscript for X, and where X is normalized by 1/N. We compute the eigenvalues of X, DISPLAYFORM2 where {λ i, i = 1, . . ., M} are the squares of the singular values: λ i = ν 2 i. 
Given the singular values of W and/or eigenvalues of X, there are several well-known matrix complexity metrics.• The Hard Rank (or linear algebraic rank), DISPLAYFORM3 is the number of singular values greater than zero, ν i > 0, to within a numerical cutoff.• The Matrix Entropy, Matrix Entropy: DISPLAYFORM4 is also known as the Generalized von-Neumann Matrix Entropy. 4 • The Stable Rank, DISPLAYFORM5 the ratio of the Frobenius norm to Spectral norm, is a robust variant of the Hard Rank. We also refer to the Matrix Entropy S(X) and Stable Rank R s (X) of X. By this, we mean the metrics computed with the associated eigenvalues. Note S(X) = S(W) and R s (X) = R s (W). It is known that a random matrix has maximum Entropy, and that lower values for the Entropy correspond to more structure/regularity. If W is a random matrix, then S(W) = 1. For example, we initialize our weight matrices with a truncated random matrix W 0, then S(W 0) 1. When W has significant and observable non-random structure, we expect S(W) < 1. We will see, however, that in practice these differences are quite small, and we would prefer a more discriminative metric. In nearly every case, for well-trained DNNs, all the weight matrices retain full Hard Rank R; but the weight matrices do "shrink," in a sense captured by the Stable Rank. Both S and R s measure matrix capacity, and, up to a scale factor, we will see that they exhibit qualitatively similar behavior. We start by illustrating the behavior of two simple complexity metrics during Backprop training on MLP3, a simple 3-layer Multi-Layer Perceptron (MLP), described in Table 4. MLP3 consists of 3 fully connected (FC) / dense layers with 512 nodes and ReLU activation, with a final FC layer with 10 nodes and softmax activation. This gives 4 layer weight matrices of shape (N × M) and with Q = N/M: DISPLAYFORM0 For the training, each W l matrix is initialized with a Glorot normalization BID101. The model is trained on CIFAR10, up to 100 epochs, with SGD (learning rate=0.01, momentum=0.9) and with a stopping criteria of 0.0001 on the MSE loss. 5 FIG1 presents the layer entropy (in FIG1) and the stable rank (in FIG1), plotted as a function of training epoch, for FC1 and FC2. Both metrics decrease during training (note the scales of the Y axes): the stable rank decreases by approximately a factor of two, and the matrix entropy decreases by a small amount, from roughly 0.92 to just below 0.91 (this is for FC2, and there is an even more modest change for FC1). They both track nearly the same changes; and the stable rank is more informative for our purposes; but we will see that the changes to the matrix entropy, while subtle, are significant. Figure 2 presents scree plots for the initial W 0 l and final W l weight matrices for the FC1 and FC2 layers of our MLP3. A scree plot plots the decreasing variability in the matrix as a function of the increasing index of the corresponding eigenvector BID106. Thus, such scree plots present similar information to the stable rank-e.g., observe the Y-axis of FIG5 (b), which shows that there is a slight increase in the largest eigenvalue for FC1 (again, note the scales of the Y axes) and a larger increase in the largest eigenvalue for FC2, which is consistent with the changes in the stable rank in FIG1 ) -but they too give a coarse picture of the matrix. 
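A minimal sketch of these metrics is given below. It assumes the Matrix Entropy is normalized by log M, so that a perfectly flat spectrum gives S = 1, consistent with the convention above that a random matrix has S close to 1; the exact normalization of the Generalized von Neumann entropy should be taken from the definition above rather than from this sketch.

```python
import numpy as np

def spectral_metrics(W, tol=1e-10):
    """Hard Rank, Matrix Entropy (log-M normalization assumed), and Stable Rank of W."""
    W = np.asarray(W, dtype=float)
    if W.shape[0] < W.shape[1]:
        W = W.T                                  # ensure N x M with Q = N / M >= 1
    N, M = W.shape
    nu = np.linalg.svd(W, compute_uv=False)      # singular values nu_i
    lam = nu**2 / N                              # eigenvalues of X = W^T W / N
    p = nu**2 / np.sum(nu**2)

    hard_rank = int(np.sum(nu > tol * nu.max()))
    entropy = float(-np.sum(p[p > 0] * np.log(p[p > 0])) / np.log(M))
    stable_rank = float(lam.sum() / lam.max())
    return {"hard_rank": hard_rank, "matrix_entropy": entropy,
            "stable_rank": stable_rank, "lambda_max": float(lam.max())}

# A Gaussian random matrix has entropy close to 1 and full Hard Rank.
rng = np.random.default_rng(0)
print(spectral_metrics(rng.normal(size=(512, 512))))
```

Like the scree plots, these scalar summaries compress the entire spectrum into a single number.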
In particular, they lack the detailed insight into subtle changes in the entropy and rank associated with the Self-Regularization process, e.g., changes that reside in just a few singular values and vectors, that we will need in our analysis. Limitations of these metrics. We can gain more detailed insight into changes in W l during training by creating histograms of the singular values and/or eigenvalues (λ i = ν 2 i). FIG6 (a) displays the density of singular values of W 0 F C2 and W F C2 for the FC2 layer of the MLP3 model. FIG6 (b) displays the associated eigenvalue densities, ρ N (λ), which we call the Empirical Spectral Density (ESD) (defined in detail below) plots. Observe that the initial density of singular values (shown in red/purple), resembles a quarter circle, 6 and the final density of singular values (blue) consists of a bulk quarter circle, of about the same width, with several spikes of singular value density beyond the bulk's edge. Observe that the similar heights and widths and shapes of 6 The initial weight matrix W 0 F C2 is just a random (Glorot Normal) matrix.the bulks imply the variance, or Frobenius norm, does not change much: W F C2 F ≈ W 0 F C2 F. Observe also that the initial ESD, ρ N (λ) (red/purple), is crisply bounded between λ − = 0 and λ + ∼ 3.2 (and similarly for the density of singular values at the square root of this value), whereas the final ESD (blue) has less density at λ − = 0 and several spikes λ λ +. The largest eigenvalue is λ max ∼ 7.2, i.e., W F C2 DISPLAYFORM1. We see now why the stable rank for FC2 decreases by ∼ 2X; the Frobenius norm does not change much, but the squared Spectral norm is ∼ 2X larger. The fine-scale structure that is largely hidden from FIG1 but that is easily-revealed by singular/eigen value density plots of FIG6 suggests that a RMT analysis might be fruitful. In this section, we summarize from RMT that we use. RMT provides a kind-of Central Limit Theorem for matrices, with unique for both square and rectangular matrices. Perhaps the most well-known from RMT are the Wigner Semicircle Law, which describes the eigenvalues of random square symmetric matrices, and the Tracy Widom (TW) Law, which states how the maximum eigenvalue of a (more general) random matrix is distributed. Two issues arise with applying these well-known versions of RMT to DNNs. First, very rarely do we encounter symmetric weight matrices. Second, in training DNNs, we only have one instantiation of each weight matrix, and so it is not generally possible to apply the TW Law. 7 Several overviews of RMT are available BID190 41, BID118 BID189 22, 42, BID157 24]. Here, we will describe a more general form of RMT, the Marchenko-Pastur (MP) theory, applicable to rectangular matrices, including (but not limited to) DNN weight matrices W. MP theory considers the density of singular values ρ(ν i) of random rectangular matrices W. This is equivalent to considering the density of eigenvalues ρ(λ i), i.e., the ESD, of matrices of the form X = W T W. MP theory then makes strong statements about such quantities as the shape of the distribution in the infinite limit, it's bounds, expected finite-size effects, such as fluctuations near the edge, and rates of convergence. When applied to DNN weight matrices, MP theory assumes that W, while trained on very specific datasets, exhibits statistical properties that do not depend on the specific details of the elements W i,j, and holds even at finite size. 
This Universality concept is "borrowed" from Statistical Physics, where it is used to model, among other things, strongly-correlated systems and so-called critical phenomena in nature BID181.To apply RMT, we need only specify the number of rows and columns of W and assume that the elements W i,j are drawn from a specific distribution that is a member of a certain Universality class (there are different for different Universality classes). RMT then describes properties of the ESD, even at finite size; and one can compare perdictions of RMT with empirical . Most well-known and well-studied is the Universality class of Gaussian distributions. This leads to the basic or vanilla MP theory, which we describe in this section. More esoteric-but ultimately more useful for us-are Universality classes of Heavy-Tailed distributions. In Section 3.2, we describe this important variant. Gaussian Universality class. We start by modeling W as an N × M random matrix, with elements drawn from a Gaussian distribution, such that: DISPLAYFORM0 Then, MP theory states that the ESD of the correlation matrix, X = W T W, has the limiting density given by the MP distribution ρ(λ): DISPLAYFORM1 Here, σ 2 mp is the element-wise variance of the original matrix, Q = N/M ≥ 1 is the aspect ratio of the matrix, and the minimum and maximum eigenvalues, λ ±, are given by FORMULA17 and FORMULA18, as the aspect ratio Q and variance parameter σ are modified. DISPLAYFORM2 The MP distribution for different aspect ratios Q and variance parameters σ mp. The shape of the MP distribution only depends on two parameters, the variance σ 2 mp and the aspect ratio Q. See FIG8 for an illustration. In particular, see FIG8 (a) for a plot of the MP distribution of Eqns. FORMULA17 and FORMULA18, for several values of Q; and see FIG8 (b) for a plot of the MP distribution for several values of σ mp.As a point of reference, when Q = 4 and σ mp = 1 (blue in both subfigures), the mass of ρ N skews slightly to the left, and is bounded in [0.3 − 2.3]. For fixed σ mp, as Q increases, the support (i.e., [λ −, λ +]) narrows, and ρ N becomes less skewed. As Q → 1, the support widens and ρ N skews more leftward. Also, ρ N is concave for larger Q, and it is partially convex for smaller Q = 1.Although MP distribution depends on Q and σ 2 mp, in practice Q is fixed, and thus we are interested how σ 2 mp varies-distributionally for random matrices, and empirically for weight matrices. Due to Eqn., if σ 2 mp is fixed, then λ + (i.e., the largest eigenvalue of the bulk, as well as λ −) is determined, and vice versa. 8 8 In practice, relating λ + and σ 2 mp raises some subtle technical issues, and we discuss these in Section 6.3.The Quarter Circle Law for Q = 1. A special case of Eqn. FORMULA17 arises when Q = 1, i.e., when W is a square non-symmetric matrix. In this case, the eigenvalue density ρ(λ) is very peaked with a bounded tail, and it is sometimes more convenient to consider the density of singular values of W l, ρ(ν), which takes the form of a Quarter-Circle: DISPLAYFORM3 We will not pursue this further, but we saw this earlier, in FIG6 (b), with our toy MLP3 model. Finite-size Fluctuations at the MP Edge. In the infinite limit, all fluctuations in ρ N (λ) concentrate very sharply at the MP edge, λ ±, and the distribution of the maximum eigenvalues ρ ∞ (λ max) is governed by the TW Law. 
Even for a single finite-sized matrix, however, MP theory states the upper edge of ρ(λ) is very sharp; and even when the MP Law is violated, the TW Law, with finite-size corrections, works very well at describing the edge statistics. When these laws are violated, this is very strong evidence for the onset of more regular non-random structure in the DNN weight matrices, which we will interpret as evidence of Self-Regularization. In more detail, in many cases, one or more of the empirical eigenvalues will extend beyond the sharp edge predicted by the MP fit, i.e., such that λ max > λ + (where λ max is the largest eigenvalue of X). It will be important to distinguish the case that λ max > λ + simply due the finite size of W from the case that λ max is "truly" outside the MP bulk. According to MP theory, for finite (N, M), and with DISPLAYFORM4 where λ + is given by Eqn. Since Q = N/M, we can also express this in terms of N −2/3, but with different prefactors BID115. Most importantly, within MP theory (and even more generally), the λ max fluctuations, centered and rescaled, will follow TW statistics. In the DNNs we consider, M 400, and so the maximum deviation is only ∆λ M 0.02. In many cases, it will be obvious whether a given λ max is an outlier. When it is not, one could generate an ensemble of N R runs and study the information content of the eigenvalues (shown below) and/or apply TW theory (not discussed here).Fitting MP Distributions. Several technical challenges with fitting MP distributions, i.e., selecting the bulk edge λ +, are discussed in Section 6.3. MP-based RMT is applicable to a wide range of matrices (even those with large low-rank perturbations ∆ large to i.i.d. normal behavior); but it is not in general applicable when matrix elements are strongly-correlated. Strong correlations appear to be the case for many well-trained, production-quality DNNs. In statistical physics, it is common to model strongly-correlated systems by Heavy-Tailed distributions BID181. The reason is that these models exhibit, more or less, the same large-scale statistical behavior as natural phenomena in which strong correlations exist BID181 22]. Moreover, recent from MP/RMT have shown that new Universality classes exist for matrices with elements drawn from certain Heavy-Tailed distributions.We use these Heavy-Tailed extensions of basic MP/RMT to build an operational and phenomenological theory of Regularization in Deep Learning; and we use these extensions to justify DISPLAYFORM0 No edge. Frechet DISPLAYFORM1 No edge. Frechet Table 3: Basic MP theory, and the spiked and Heavy-Tailed extensions we use, including known, empirically-observed, and conjectured relations between them. Boxes marked " * " are best described as following "TW with large finite size corrections" that are likely Heavy-Tailed, leading to bulk edge statistics and far tail statistics that are indistinguishable. Boxes marked " * * " are phenomenological fits, describing large (2 < µ < 4) or small (0 < µ < 2) finite-size corrections on N → ∞ behavior. See [38, 20, 19, BID158 7, 40, 8, 26, 22, 21] for additional details.our analysis of both Self-Regularization and Heavy-Tailed Self-Regularization. 
9 Briefly, our theory for simple Self-Regularization is insipred by the Spiked-Covariance model of Johnstone BID115 and it's interpretation as a form of Self-Organization by Sornette BID139; and our theory for more sophisticated Heavy-Tailed Self-Regularization is inspired by the application of MP/RMT tools in quantitative finance by Bouchuad, Potters, and coworkers BID96 BID124 BID125 20, 19, 22, 24], as well as the relation of Heavy-Tailed phenomena more generally to Self-Organized Criticality in Nature BID181. Here, we highlight basic for this generalized MP theory; see [38, 20, 19, BID158 7, 40, 8, 26, 22, 21] in the physics and mathematics literature for additional details. Universality classes for modeling strongly correlated matrices. Consider modeling W as an N × M random matrix, with elements drawn from a Heavy-Tailed-e.g., a Pareto or Power Law (PL)-distribution: DISPLAYFORM2 In these cases, if W is element-wise Heavy-Tailed, 10 then the ESD ρ N (λ) likewise exhibits HeavyTailed properties, either globally for the entire ESD and/or locally at the bulk edge. Table 3 summarizes these (relatively) recent , comparing basic MP theory, the SpikedCovariance model, 11 and Heavy-Tailed extensions of MP theory, including associated Universality classes. To apply the MP theory, at finite sizes, to matrices with elements drawn from a HeavyTailed distribution of the form given in Eqn., then, depending on the value of µ, we have 9 The Universality of RMT is a concept broad enough to apply to classes of problems that appear well beyond its apparent range of validity. It is in this sense that we apply RMT to understand DNN Regularization.10 Heavy-Tailed phenomena have many subtle properties BID168; we consider here only the most simple cases. 11 We discuss Heavy-Tailed extensions to MP theory in this section. Extensions to large low-rank perturbations are more straightforward and are described in Section 5.3.one of the following three 12 Universality classes:• (Weakly) Heavy-Tailed, 4 < µ: Here, the ESD ρ N (λ) exhibits "vanilla" MP behavior in the infinite limit, and the expected mean value of the bulk edge is λ + ∼ M −2/3. Unlike standard MP theory, which exhibits TW statistics at the bulk edge, here the edge exhibits PL / Heavy-Tailed fluctuations at finite N. These finite-size effects appear in the edge / tail of the ESD, and they make it hard or impossible to distinguish the edge versus the tail at finite N.• (Moderately) Heavy-Tailed, 2 < µ < 4: Here, the ESD ρ N (λ) is Heavy-Tailed / PL in the infinite limit, approaching the form ρ(λ) ∼ λ −1−µ/2. In this regime of µ, there is no bulk edge. At finite size, the global ESD can be modeled by the form ρ N (λ) ∼ λ −(aµ+b), for all λ > λ min, but the slope a and intercept b must be fit, as they display very large finitesize effects. The maximum eigenvalues follow Frechet (not TW) statistics, with λ max ∼ M 4/µ−1 (1/Q) 1−2/µ, and they have large finite-size effects. Even if the ESD tends to zero, the raw number of eigenvalues can still grow-just not as quickly as N (i.e., we may expect some λ max > λ +, in the infinite limit, but the eigenvalue density ρ(λ) → 0). Thus, at any finite N, ρ N (λ) is Heavy-Tailed, but the tail decays moderately quickly.• (Very) Heavy-Tailed, 0 < µ < 2: Here, the ESD ρ N (λ) is Heavy-Tailed / PL for all finite N, and as N → ∞ it converges more quickly to a PL distribution with tails ρ(λ) ∼ λ −1−µ/2. In this regime, there is no bulk edge, and the maximum eigenvalues follow Frechet (not TW) statistics. 
Finite-size effects exist here, but they are much smaller than in the 2 < µ < 4 regime of µ. (Caption for FIG9: log-log histogram plots of the ESD for three Heavy-Tailed random matrices M with the same aspect ratio Q = 3, with µ = 1.0, 3.0, 5.0, corresponding to the three Heavy-Tailed Universality classes (0 < µ < 2, 2 < µ < 4, and 4 < µ) described in Table 3.)
Visualizing Heavy-Tailed distributions. It is often fruitful to perform visual exploration and classification of ESDs by plotting them on linear-linear coordinates, log-linear coordinates (linear horizontal/X axis and logarithmic vertical/Y axis), and/or log-log coordinates (logarithmic horizontal/X axis and logarithmic vertical/Y axis). It is known that data from a PL distribution will appear as a convex curve in a linear-linear plot and a log-linear plot and as a straight line in a log-log plot; and that data from a Gaussian distribution will appear as a bell-shaped curve in a linear-linear plot, as an inverted parabola in a log-linear plot, and as a strongly concave curve in a log-log plot. Examining data from an unknown ESD on different axes suggests a classification for them. (See FIG1.) More quantitative analysis may lead to more definite results, but that too comes with technical challenges. To illustrate this, we provide a visual and operational approach to understand the limiting forms for different µ. See FIG9. FIG9 displays the log-log histograms for the ESD ρ_N(λ) for three Heavy-Tailed random matrices M_N(µ), with µ = 1.0, 3.0, 5.0. For µ = 1.0 (blue), the log-log histogram is linear over 5 log scales, from 10^3 to 10^8. If N increases (not shown), λ_max will grow, but this plot will remain linear, and the tail will not decay. In the infinite limit, the ESD will still be Heavy-Tailed. Contrast this with the ESD drawn from the same distribution, except with µ = 3.0 (green). Here, due to larger finite-size effects, most of the mass is confined to one or two log scales, and it starts to vanish when λ > 10^3. This effect is amplified for µ = 5.0 (red), which shows almost no mass for eigenvalues beyond the MP bulk (i.e., λ > λ+). Zooming in, in FIG9 (b), we see that the log-log plot is linear only in the central region, and the tail vanishes very quickly. If N increases (not shown), the ESD will remain Heavy-Tailed, but the mass will grow much more slowly than when µ < 2. This illustrates that, while ESDs can be Heavy-Tailed at finite size, the tails decay at different rates for different Heavy-Tailed Universality classes (0 < µ < 2 or 2 < µ < 4 or 4 < µ).
Fitting PL distributions to ESD plots. Once we have identified PL distributions visually (using a log-log histogram of the ESD, and looking for visual characteristics of FIG9), we can fit the ESD to a PL in order to obtain the exponent α. For this, we use the Clauset-Shalizi-Newman (CSN) approach, as implemented in the python PowerLaw package, 13 which computes an α such that ρ_emp(λ) ∼ λ^{-α} for λ > λ_min. Generally speaking, fitting a PL has many subtleties, most beyond the scope of this paper [33, BID102, BID138, BID147, 16, BID120, 35, 2, BID192, BID105]. For example, care must be taken to ensure the distribution is actually linear (in some regime) on a log-log scale before applying the PL estimator, lest it give spurious results; and the PL estimator only works reasonably well for exponents in the range 1.5 < α ≲ 3.5. To illustrate this, consider FIG20.
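The kind of synthetic experiment behind FIG20 can be reproduced in a few lines: generate Heavy-Tailed matrices of the form in Eqn. for µ = 1.0, 3.0, 5.0 and fit their ESDs with the PowerLaw package. This is a sketch under stated assumptions: we rely on the package's publicly documented interface (powerlaw.Fit, .power_law.alpha, .power_law.xmin, .power_law.D), which can differ slightly across versions, and the matrix sizes and random signs are illustrative choices of ours, not the exact settings used for FIG9 or FIG20.

import numpy as np
import powerlaw  # CSN estimator; pip install powerlaw

def pareto_matrix(N, M, mu, seed=0):
    # N x M matrix with i.i.d. magnitudes from a Pareto / PL distribution p(x) ~ x^{-1-mu}
    # (support x >= 1; numpy's pareto() draws Lomax, so we shift by +1) and random signs.
    rng = np.random.default_rng(seed)
    mags = rng.pareto(mu, size=(N, M)) + 1.0
    signs = rng.choice([-1.0, 1.0], size=(N, M))
    return mags * signs

def esd(W):
    N, M = W.shape
    return np.linalg.eigvalsh(W.T @ W / N)

def fit_pl_exponent(eigs):
    # Fit rho_emp(lambda) ~ lambda^{-alpha} for lambda > lambda_min, with lambda_min chosen
    # by the package to minimize the Kolmogorov-Smirnov distance D.
    fit = powerlaw.Fit(eigs[eigs > 0], verbose=False)
    return fit.power_law.alpha, fit.power_law.xmin, fit.power_law.D

# One matrix per Universality class of Table 3 (aspect ratio Q = N/M = 3, as in FIG9).
N, M = 3000, 1000
for mu in (1.0, 3.0, 5.0):
    alpha, lam_min, D = fit_pl_exponent(esd(pareto_matrix(N, M, mu)))
    print(f"mu = {mu}:  alpha = {alpha:.2f}  lambda_min = {lam_min:.2f}  KS D = {D:.3f}")

As a rough sanity check against FIG20, the fitted α should track α ≈ 1 + µ/2 for µ < 2, while for 2 < µ < 4 it drifts upward with large finite-size corrections.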
In particular, FIG20 (a) shows that the CSN estimator performs well for the regime 0 < µ < 2, while for 2 < µ < 4 there are substantial deviations due to finite-size effects, and for 4 < µ no reliable are obtained; and FIG20 show that the finite-size effects can be quite complex (for fixed M, increasing Q leads to larger finite-size effects, while for fixed N, decreasing Q leads to larger finite-size effects).Identifying the Universality class. Given α, we identify the corresponding µ (as illustrated in FIG20) and thus which of the three Heavy-Tailed Universality classes (0 < µ < 2 or 2 < µ < 4 or 4 < µ, as described in Table 5) is appropriate to describe the system. For our theory, the following are particularly important points. First, observing a Heavy-Tailed ESD may indicate the presence of a scale-free DNN. This suggests that the underlying DNN is strongly-correlated, and that we need more than just a few separated spikes, plus some random-like bulk structure, to model the DNN and to understand DNN regularization. Second, this does not necessarily imply. In (6(a)), the PL exponent α is fit, using the CSN estimator, for the ESD ρ emp (λ) for a random, rectangular Heavy-Tailed matrix W(µ) (Q = 2, M = 1000), with elements drawn from a Pareto distribution p(x) ∼ x −1−µ. For 0 < µ < 2, finite-size effects are modest, and the ESD follows the theoretical prediction ρ emp (λ) ∼ λ −1−µ/2. For 2 < µ < 4, the ESD still shows roughly linear behavior, but with significant finite-size effects, giving the more general phenomenological relation ρ emp (λ) ∼ λ −aµ+b. For 4 < µ, the CSN method is known to fail to perform well. In (6(b)) and (6(c)), plots are shown for varying Q, with M and N fixed, respectively.that the matrix elements of W l form a Heavy-Tailed distribution. Rather, the Heavy-Tailed distribution arises since we posit it as a model of the strongly correlated, highly non-random matrix W l. Third, we conjecture that this is more general, and that very well-trained DNNs will exhibit Heavy-Tailed behavior in their ESD for many the weight matrices (as we have observed so far with many pre-trained models). When entries of a random matrix are drawn from distributions in the Gaussian Universality class, and under typical assumptions, eigenvectors tend to be delocalized, i.e., the mass of the eigenvector tends to be spread out on most or all the components of that vector. For other models, eigenvectors can be localized. For example, spike eigenvectors in Spiked-Covariance models as well as extremal eigenvectors in Heavy-Tailed random matrix models tend to be more localized BID157 24]. Eigenvector delocalization, in traditional RMT, is modeled using the Thomas Porter Distribution BID165. Since a typical bulk eigenvector v should have maximum entropy, therefore it's components v i should be Gaussian distributed, according to: DISPLAYFORM0 Here, we normalize v such that the empirical variance of the elements is unity, σ 2 v i = 1. Based on this, we can define several related eigenvector localization metrics.• The Generalized Vector Entropy, S(v):= i P (v i) ln P (v i), is computed using a histogram estimator.• The Localization Ratio, L(v):= v 1 v ∞, measures the sum of the absolute values of the elements of v, relative to the largest absolute value of an element of v.• The Participation Ratio, DISPLAYFORM1, is a robust variant of the Localization Ratio. For all three metrics, the lower the value, the more localized the eigenvector v tends to be. 
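The three localization metrics are straightforward to compute directly from an eigenvector. The sketch below is a minimal implementation: the histogram bin count, the sign convention for the entropy (we report the usual -Σ P ln P so that more localized vectors give smaller values), and the function names are our choices, and the participation ratio uses the standard (Σ v_i²)² / Σ v_i⁴ form implied by the text.

import numpy as np

def vector_entropy(v, bins=50):
    # Generalized Vector Entropy S(v), via a histogram estimator of P(v_i).
    v = v / np.std(v)                          # normalize to unit empirical variance
    counts, _ = np.histogram(v, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

def localization_ratio(v):
    # L(v) = ||v||_1 / ||v||_inf
    return np.sum(np.abs(v)) / np.max(np.abs(v))

def participation_ratio(v):
    # P(v) = (sum_i v_i^2)^2 / sum_i v_i^4, a robust variant of L(v).
    v2 = v ** 2
    return (v2.sum() ** 2) / (v2 ** 2).sum()

# Delocalized (Gaussian / Porter-Thomas-like) versus localized (sparse) test vectors.
rng = np.random.default_rng(0)
v_deloc = rng.standard_normal(500)
v_loc = np.zeros(500); v_loc[:5] = rng.standard_normal(5)
for name, v in [("delocalized", v_deloc), ("localized", v_loc)]:
    print(name, vector_entropy(v), localization_ratio(v), participation_ratio(v))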
We use deviations from delocalization as a diagnostic that the corresponding eigenvector is more structured/regularized. In this section, we describe our main empirical for existing, pretrained DNNs. 14 Early on, we observed that small DNNs and large DNNs have very different ESDs. For smaller models, ESDs tend to fit the MP theory well, with well-understood deviations, e.g., low-rank perturbations. For larger models, the ESDs ρ N (λ) almost never fit the theoretical ρ mp (λ), and they frequently have a completely different functional form. We use RMT to compare and contrast the ESDs of a smaller, older NN and many larger, modern DNNs. For the small model, we retrain a modern variant of one of the very early and well-known Convolutional Nets-LeNet5. We use Keras, and we train LeNet5 on MNIST. For the larger, modern models, we examine selected layers from AlexNet, InceptionV3, and many other models (as distributed with pyTorch). Table 4 provides a summary of models we analyzed in detail. LeNet5 predates both the current Deep Learning revolution and the so-called AI Winter, dating back to the late 1990s BID126. It is the prototype early model for DNNs; it is the most widely-known example of a Convolutional Neural Network (CNN); and it was used in production systems for recognizing hand written digits BID126. The basic design consists of 2 Convolutional (Conv2D) and MaxPooling layers, followed by 2 Dense, or Fully Connected (FC), layers, FC1 and FC2. This design inspired modern DNNs for image classification, e.g., AlexNet, VGG16 and VGG19. All of these latter models consist of a few Conv2D and MaxPooling layers, followed by a few FC layers. Since LeNet5 is older, we actually recoded and retrained it. We used Keras 2.0, using 20 epochs of the AdaDelta optimizer, on the MNIST data set. This model has 100.00% training accuracy, and 99.25% test accuracy on the default MNIST split. We analyze the ESD of the FC1 Layer (but not the FC2 Layer since it has only 10 eigenvalues). The FC1 matrix W F C1 is a 2450 × 500 matrix, with Q = 4.9, and thus it yields 500 eigenvalues. FC1: MP Bulk+Spikes, with edge Bleeding-out. FIG21 presents the ESD for FC1 of LeNet5, with FIG21 (a) showing the full ESD and FIG21 showing the same ESD, zoomed-in along the X-axis to highlight smaller peaks outside the main bulk of our MP fit. In both cases, we show (red curve) our fit to the MP distribution ρ emp (λ). Several things are striking. First, the bulk of the density ρ emp (λ) has a large, MP-like shape for eigenvalues λ < λ + ≈ 3.5, and the MP distribution fits this part of the ESD very well, including the fact that the ESD just below the best fit λ + is concave. Second, some eigenvalue mass is bleeding out from the MP bulk for λ ∈ [3.5, 5], although it is quite small. Third, beyond the MP bulk and this bleeding out region, are several clear outliers, or spikes, ranging from ≈ 5 to λ max 25.Summary. The shape of ρ emp (λ), the quality of the global bulk fit, and the statistics and crisp shape of the local bulk edge all agree well with standard MP theory, or at least the variant of Exhibits all 5+1 Phases of Training by changing batch size Table 4: Description of main DNNs used in our analysis and the key observations about the ESDs of the specific layer weight matrices using RMT. Names in the "Key observation" column are defined in Section 5 and described in Table 7.MP theory augmented with a low-rank perturbation. In this sense, this model can be viewed as a real-world example of the Spiked-Covariance model BID115. 
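A sketch of the corresponding analysis pipeline is below: it pulls the Dense-layer weight matrices out of a trained Keras model (such as the retrained LeNet5 described above), computes their ESDs, and evaluates an MP density for a chosen σ² so the fit can be compared against the empirical histogram. The layer-selection logic and the choice to adjust σ² by visual inspection are simplifications of ours; this is not the exact code used to produce FIG21.

import numpy as np

def mp_density(lam, Q, sigma2=1.0):
    # Marchenko-Pastur density for X = W^T W / N with aspect ratio Q = N/M and variance sigma2.
    lam = np.asarray(lam, dtype=float)
    lm = sigma2 * (1.0 - 1.0 / np.sqrt(Q)) ** 2
    lp = sigma2 * (1.0 + 1.0 / np.sqrt(Q)) ** 2
    rho = np.zeros_like(lam)
    inside = (lam > lm) & (lam < lp)
    rho[inside] = Q / (2.0 * np.pi * sigma2 * lam[inside]) * np.sqrt(
        (lp - lam[inside]) * (lam[inside] - lm))
    return rho

def dense_layer_esds(model):
    # ESD (and aspect ratio Q >= 1) of every Dense / fully-connected layer of a Keras model.
    out = {}
    for layer in model.layers:
        if layer.__class__.__name__ == "Dense":
            W = layer.get_weights()[0]             # kernel, shape (inputs, units)
            if W.shape[0] < W.shape[1]:
                W = W.T                            # orient as N x M with N >= M
            N, M = W.shape
            out[layer.name] = (N / M, np.linalg.eigvalsh(W.T @ W / N))
    return out

# Usage (model construction and training not shown):
#   for name, (Q, eigs) in dense_layer_esds(model).items():
#       hist, edges = np.histogram(eigs, bins=100, density=True)
#       fit = mp_density(0.5 * (edges[:-1] + edges[1:]), Q, sigma2=1.0)  # adjust sigma2 by eye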
AlexNet was the first modern DNN, and its spectacular performance opened the door for today's revolution in Deep Learning. Specifically, it was top-5 on the ImageNet ILSVRC2012 classification task BID121, achieving an error of 16.4%, over 11% ahead of the first runner up. AlexNet resembles a scaled-up version of the LeNet5 architecture; it consists of 5 layers, 2 convolutional, followed by 3 FC layers (the last being a softmax classifier). 15 We will analyze the version of AlexNet currently distributed with pyTorch (version 0.4.1). In this version, FC1 has a 9216 × 4096 matrix, with Q = 2.25; FC2 has a 4096 × 4096 matrix, with Q = 1.0; and FC3 has a 4096 × 1000 matrix, with Q = 4.096 ≈ 4.1. Notice that FC3 is the final layer and connects AlexNet to the labels. FIG22, and 10 present the ESDs for weight matrices of AlexNet for Layers FC1, FC2, and FC3, with FIG1 showing the full ESD, and FIG1 showing the "zoomed-in" along the X-axis. In each cases, we present best MP fits, as determined by holding Q fixed, adjusting the σ parameter, and selecting the best bulk fit by visual inspection. Fitting σ fixes λ +, and the λ + estimates differ for different layers because the matrices have different aspect ratios Q. In each case, the ESDs exhibit moderate to strong deviations from the best standard MP fit. FC1: Bulk-decay into Heavy-Tailed. Consider first AlexNet FC1 (in FIG22). The eigenvalues range from near 0 up to ca. 30, just as with LeNet5. The full ESD, however, is shaped very differently than any theoretical ρ mp (λ), for any value of λ. The best MP fit (in red in FIG22) does capture a good part of the eigenvalue mass, but there are important differences: the peak is not filled in, there is substantial eigenvalue mass bleeding out from the bulk, and the shape of the ESD is convex in the region near to and just above the best fit for λ + of the bulk edge. Contrast this with the excellent MP fit for the ESD for FC1 of LeNet5 FIG21 ), where the red curve captures all of the bulk mass, and only a few outlying spikes appear. Moreover, and very importantly, in AlexNet FC1, the bulk edge is not crisp. In fact, it is not visible at all; and λ + is solely defined operationally by selecting the σ parameter. As such, the edge fluctuations, ∆λ, do not resemble a TW distribution, and the bulk itself appears to just decay into the heavy tail. Finally, a PL fit gives good fit α ≈ 2.29, suggesting (due to finite size effects) µ 2.5. FIG23 ). This ESD differs even more profoundly from standard MP theory. Here, we could find no good MP fit, even by adjusting σ and Q simultaneously. The best MP fit (in red) does not fit the Bulk part of ρ emp (λ) at all. The fit suggests there should be significantly more bulk eigenvalue mass (i.e., larger empirical variance) than actually observed. In addition, as with FC1, the bulk edge is indeterminate by inspection. It is only defined by the crude fit we present, and any edge statistics obviously do not exhibit TW behavior. In contrast with MP curves, which are convex near the bulk edge, the entire ESD is concave (nearly) everywhere. Here, a PL fit gives good fit α ≈ 2.25, smaller than FC1 and FC3, indicating a µ 3. FIG1 ). Here, too, the ESDs deviate strongly from predictions of MP theory, both for the global bulk properties and for the local edge properties. A PL fit gives good fit α ≈ 3.02, which is larger than FC1 and FC2. This suggests a µ 2.5 (which is also shown with a log-log histogram plot in FIG1 in Section 5 below). Summary. 
For all three layers, the shape of ρ emp (λ), the quality of the global bulk fit, and the statistics and shape of the local bulk edge are poorly-described by standard MP theory. Even when we may think we have moderately a good MP fit because the bulk shape is qualitatively captured with MP theory (at least visual inspection), we may see a complete breakdown RMT at the bulk edge, where we expect crisp TW statistics (or at least a concave envelope of support). In other cases, the MP theory may even be a poor estimator for even the bulk. In the few years after AlexNet, several new, deeper DNNs started to win the ILSVRC ImageNet completions, including BID202, BID177, GoogLeNet/ BID186, and BID108. We have observed that nearly all of these DNNs have properties that are similar to AlexNet. Rather than describe them all in detail, in Section 4.4, we perform power law fits on the Linear/FC layers in many of these models. Here, we want to look more deeply at the Inception model, since it displays some unique properties. 16 In 2014, the VGG BID177 and GoogLeNet BID186 models were close competitors in the ILSVRC2014 challenges. For example, GoogLeNet won the classification challenge, but VGG performed better on the localization challenge. These models were quite deep, with GoogLeNet having 22 layers, and VGG having 19 layers. The VGG model is ∼2X as deep as AlexNet, but it replaces each larger AlexNet filter with more, smaller filters. Presumably this deeper architecture, with more non-linearities, can capture the correlations in the network better. The VGG features of the second to last FC layer generalize well to other tasks. A downside of the VGG models is that they have a lot of parameters and that they use a lot of memory. The GoogleLeNet/Inception design resembles the VGG architecture, but it is even more computationally efficient, which (practically) means smaller matrices, fewer parameters (12X fewer than AlexNet), and a very different architecture, including no internal FC layers, except those connected to the labels. In particular, it was noted that most of the activations in these DNNs are redundant because they are so strongly correlated. So, a sparse architecture should perform just as well, but with much less computational cost-if implemented properly to take advantage of low level BLAS calculations on the GPU. So, an Inception module was designed. This module approximates a sparse Convolutional Net, but using many smaller, dense matrices, leading to many small filters of different sizes, concatenated together. The Inception modules are then stacked on top of each other to give the full DNN. GoogLeNet also replaces the later FC layers (i.e., in AlexNet-like architectures) with global average pooling, leaving only a single FC / Dense layer, which connects the DNN to the labels. Being so deep, it is necessary to include an Auxiliary block that also connects to the labels, similar to the final FC layer. From this, we can extract a single rectangular 768 × 1000 tensor. This gives 2 FC layers to analyze. For our analysis of InceptionV3 BID186, we select a layer (L226) from in the Auxiliary block, as well as the final (L302) FC layer. FIG1 presents the ESDs for InceptionV3 for Layer L226 and Layer L302, two large, fully-connected weight matrices with aspect ratios Q ≈ 1.3 and Q = 2.048, respectively. We also show typical MP fits for matrices with the same aspect ratios 16 Indeed, these suggest that Inception models do not truly account for all the correlations in the data. Q. 
As with AlexNet, the ESDs for both the L226 and L302 layers display distinct and strong deviations from the MP theory. L226: Bimodal ESDs. Consider first L226 of InceptionV3. FIG1 (a) displays the L226 ESD. (Recall this is not a true Dense layer, but it is part of the Inception Auxiliary module, and it looks very different from the other FC layers, both in AlexNet and below.) At first glance, we might hope to select the bulk edge at λ + ≈ 5 and treat the remaining eigenvalue mass as an extended spike; but this visually gives a terrible MP fit (not shown). Selecting λ + ≈ 10 produces an MP fit with a reasonable shape to the envelope of support of the bulk; but this fit strongly over-estimates the bulk variance / Frobenius mass (in particular near λ ≈ 5), and it strongly under-estimates the spike near 0. We expect this fit would fail any reasonable statistical confidence test for an MP distribution. As in all cases, numerous Spikes extend all the way out to λ max ≈ 30, showing a longer, heavier tail than any MP fit. It is unclear whether or not the edge statistics are TW. There is no good MP fit for the ESD of L226, but it is unclear whether this distribution is "truly" Heavy-Tailed or simply appears Heavy-Tailed as a of the bimodality. Visually, at least the envelope of the L226 ESD to resembles a Heavy-Tailed MP distribution. It is also possible that the DNN itself is also not fully optimized, and we hypothesize that further refinements could lead to a true Heavy-Tailed ESD.L302: Bimodal fat or Heavy-Tailed ESDs. Consider next L302 of InceptionV3 (in FIG1). The ESD for L302 is slightly bimodal (on a log-log plot), but nowhere near as strongly as L226, and we can not visually select any bulk edge λ +. The bulk barely fits any MP density; our best attempt is shown. Also, the global ESD the wrong shape; and the MP fit is concave near the edge, where the ESD is convex, illustrating that the edge decays into the tail. For any MP fit, significant eigenvalue mass extends out continuously, forming a long tail extending al the way to λ max ≈ 23. The ESD of L302 resembles that of the Heavy-Tailed FC2 layer of AlexNet, except for the small bimodal structure. These initial observations illustrate that we need a more rigorous approach to make strong statements about the specific kind of distribution (i.e., Pareto vs other Heavy-Tailed) and what Universality class it may lay in. We present an approach to resolve these technical details this in Section 5.5. In addition to the models from Table 4 that we analyzed in detail, we have also examined the properties of a wide range of other pre-trained models, including models from both Computer Vision as well as Natural Language Processing (NLP). This includes models trained on ImageNet, distributed with the pyTorch package, including VGG16, VGG19, ResNet50, InceptionV3, etc. See Table 5. This also includes different NLP models, distributed in AllenNLP BID98, including models for Machine Comprehension, Constituency Parsing, Semantic Role Labeling, Coreference Resolution, and Named Entity Recognition, giving a total of 84 linear layers. See TAB8. Rather remarkably, we have observed similar Heavy-Tailed properties, visually and in terms of Power Law fits, in all of these larger, state-of-the-art DNNs, leading to that are nearly universal across these widely different architectures and domains. We have also seen Hard Rank deficiency in layers in several of these models. We provide a brief summary of those here. Power Law Fits. 
We have performed Power Law (PL) fits for the ESD of selected (linear) layers from all of these pre-trained ImageNet and NLP models. 17 Table 5 summarizes the detailed for the ImageNet models. Several observations can be made. First, all of our fits, except for certain layers in InceptionV3, appear to be in the range 1.5 < α 3.5 (where the CSN method is known to perform well). Second, we also check to see whether PL is the best fit by comparing the distribution to a Truncated Power Law (TPL), as well as an exponential, stretchexponential, and log normal distributions. Column "Best Fit" reports the best distributional fit. In all cases, we find either a PL or TPL fits best (with a p-value ≤ 0.05), with TPL being more common for smaller values of α. Third, even when taking into account the large finite-size effects in the range 2 < α < 4, as illustrated in FIG20, nearly all of the ESDs appear to fall into the 2 < µ < 4 Universality class. FIG1 displays the distribution of PL exponents α for each set of models. FIG1 shows the fit power law exponents α for all of the linear layers in pre-trained ImageNet models available in PyTorch (in Table 5), with Q 1; and FIG1 (b) shows the same for the pre-trained models available in AllenNLP (in TAB8). Overall, there are 24 ImageNet layers with Q 1, and 82 AllenNet FC layers. More than 80% of all the layers have α ∈, and nearly all of the rest have α < 6. One of these, InceptionV3, was discussed above, precisely since it was unusual, leading to an anomalously large value of α due to the dip in its ESD.Rank Collapse. RMT also predicts that for matrices with Q > 1, the minimum singular value will be greater than zero, i.e., ν min > 0. We test this by again looking at all of the FC layers in the pre-trained ImageNet and AllenNLP models. See FIG1 for a summary of the . While the ImageNet models mostly follow this rule, 6 of the 24 of FC layers have ν min ∼ 0. In fact, for 4 layers, ν min < 0.00001, i.e., it is close to the numerical threshold for 0. In these few cases, the ESD still exhibits Heavy-Tailed properties, but the rank loss ranges from one eigenvalue equal to 0 up to 15% of the eigenvalue mass. For the NLP models, we see no rank collapse, i.e., all of the 82 AllenNLP layers have ν min > 0. In a few cases (e.g., LetNet5 in Section 4.1), MP theory appears to apply to the bulk of the ESD, with only a few outlying eigenvalues larger than the bulk edge. In other more realistic cases (e.g., AlexNet Table 5 : Fit of PL exponents for the ESD of selected (2D Linear) layer weight matrices W l in pre-trained models distributed with pyTorch. Layer is identified by the enumerated id of the pyTorch model; Q = N/M ≥ 1 is the aspect ratio; (M × N) is the shape of W T l; α is the PL exponent, fit using the numerical method described in the text; D is the Komologrov-Smirnov distance, measuring the goodness-of-fit of the numerical fitting; and "Best Fit" indicates whether the fit is better described as a PL (Power Law) or TPL (Truncated Power Law) (no fits were found to be better described by Exponential or LogNormal). we have examined, as summarized in Section 4.4), the ESDs do not resemble anything predicted by standard RMT/MP theory. This should not be unexpected-a well-trained DNN should have highly non-random, strongly-correlated weight matrices W, in which case MP theory would not seem to apply. 
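A hedged sketch of the per-layer analysis summarized above is given below, for a pre-trained model distributed with pyTorch. It extracts every Linear layer, fits the PL exponent with the powerlaw package, compares the PL fit against the alternative distributions named in the text, and records the minimum singular value ν_min as the Rank-collapse check. The candidate-distribution list and the p ≤ 0.05 decision rule mirror the text, but the helper itself is ours, and torchvision/powerlaw version details (e.g., the pretrained= keyword) may differ.

import numpy as np
import powerlaw
import torch
import torchvision.models as models

ALTERNATIVES = ["truncated_power_law", "lognormal", "exponential", "stretched_exponential"]

def analyze_linear_layer(W):
    # W: 2D weight matrix; orient as N x M with Q = N/M >= 1, compute the ESD, fit
    # rho_emp(lambda) ~ lambda^{-alpha}, compare against alternative fits, and record
    # nu_min (a value ~0 signals Rank-collapse / loss of Hard Rank).
    if W.shape[0] < W.shape[1]:
        W = W.T
    N, M = W.shape
    sv = np.linalg.svd(W, compute_uv=False)
    eigs = sv ** 2 / N
    fit = powerlaw.Fit(eigs[eigs > 0], verbose=False)
    best = "power_law"
    for cand in ALTERNATIVES:
        R, p = fit.distribution_compare("power_law", cand, normalized_ratio=True)
        if R < 0 and p <= 0.05:        # the alternative is a significantly better fit
            best = cand
    return {"Q": N / M, "alpha": fit.power_law.alpha, "KS_D": fit.power_law.D,
            "best_fit": best, "nu_min": sv.min()}

model = models.alexnet(pretrained=True)      # any pre-trained pyTorch / torchvision model
for name, module in model.named_modules():
    if isinstance(module, torch.nn.Linear):
        W = module.weight.detach().cpu().numpy()
        print(name, analyze_linear_layer(W))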
Moreover, except for InceptionV3, which was chosen to illustrate several unusual properties, nearly every DNN displays Heavy-Tailed properties such as those seen in AlexNet. These empirical suggest the following: first, that we can construct an operational and phenomenological theory (both to obtain fundamental insights into DNN regularization and to help guide the training of very large DNNs); and second, that we can build this theory by applying the full machinery of modern RMT to characterize the state of the DNN weight matrices. For older and/or smaller models, like LeNet5, the bulk of their ESDs (ρ N (λ); λ λ + ) can be well-fit to theoretical MP density ρ mp (λ), potentially with several distinct, outlying spikes (λ > λ +). This is consistent with the Spiked-Covariance model of Johnstone BID115, a simple perturbative extension of the standard MP theory. 18 This is also reminiscent of traditional Tikhonov regularization, in that there is a "size scale" (λ +) separating signal (spikes) from noise (bulk). In this sense, the small NNs of yesteryear-and smallish models used in many research studies-may in fact behave more like traditional ML models. In the context of disordered systems theory, as developed by Sornette BID139, this model is a form of Self-Organizaton. Putting this all together demonstrates that the DNN training process itself engineers a form of implicit Self-Regularization into the trained model. For large, deep, state-of-the-art DNNs, our observations suggest that there are profound deviations from traditional RMT. These networks are reminiscent of strongly-correlated disorderedsystems that exhibit Heavy-Tailed behavior. What is this regularization, and how is it related to our observations of implicit Tikhonov-like regularization on LeNet5?To answer this, recall that similar behavior arises in strongly-correlated physical systems, where it is known that strongly-correlated systems can be modeled by random matrices-with entries drawn from non-Gaussian Universality classes BID181, e.g., PL or other Heavy-Tailed distributions. Thus, when we observe that ρ N (λ) has Heavy-Tailed properties, we can hypothesize that W is strongly-correlated, 19 and we can model it with a Heavy-Tailed distribution. Then, upon closer inspection, we find that the ESDs of large, modern DNNs behave as expected-when using the lens of Heavy-Tailed variants of RMT. Importantly, unlike the Spiked-Covariance case, which has a scale cut-off (λ +), in these very strongly Heavy-Tailed cases, correlations appear on every size scale, and we can not find a clean separation between the MP bulk and the spikes. These observations demonstrate that modern, state-of-the-art DNNs exhibit a new form of Heavy-Tailed Self-Regularization. In the next few sections, we construct and test (on miniature AlexNet) our new theory. In this section, we develop an operational and phenomenological theory for DNN Self-Regularization that is designed to address questions such as the following. How does DNN Self-Regularization differ between older models like LetNet5 and newer models like AlexNet or Inception? What happens to the Self-Regularization when we adjust the numerous knobs and switches of the solver itself during SGD/Backprop training? How are knobs, e.g., early stopping, batch size, and learning rate, related to more familiar regularizers like Weight Norm constraints and Tikhonov regularization? 
Our theory builds on empirical from Section 4; and our theory has consequences and makes predictions that we test in Section 6.MP Soft Rank. We first define a metric, the MP Soft Rank (R mp), that is designed to capture the "size scale" of the noise part of the layer weight matrix W l, relative to the largest eigenvalue of W T l W l. Going beyond spectral methods, this metric exploits MP theory in an essential way. Let's first assume that MP theory fits at least a bulk of ρ N (λ). Then, we can identify a bulk edge λ + and a bulk variance σ 2 bulk, and define the MP Soft Rank as the ratio of λ + and λ max:MP Soft Rank: DISPLAYFORM0 Clearly, R mp ∈; R mp = 1 for a purely random matrix (as in Section 5.1); and for a matrix with an ESD with outlying spikes (as in Section 5.3), λ max > λ +, and R mp < 1. If there is no good MP fit because the entire ESD is well-approximated by a Heavy-Tailed distribution (as described in Section 5.5, e.g., for a strongly correlated weight matrix), then we can define λ + = 0 and still use Eqn. FORMULA1, in which case R mp = 0. The MP Soft Rank is interpreted differently than the Stable Rank (R s), which is proportional to the bulk MP variance σ 2 mp divided by λ max: DISPLAYFORM1 As opposed to the Stable Rank, the MP Soft Rank is defined in terms of the MP distribution, and it depends on how the bulk of the ESD is fit. While the Stable Rank R s (M) indicates how many eigencomponents are necessary for a relatively-good low-rank approximation of an arbitrary matrix, the MP Soft Rank R mp (W) describes how well MP theory fits part of the matrix ESD ρ N (λ). Empirically, R s and R mp often correlate and track similar changes. Importantly, though, there may be no good low-rank approximation of the layer weight matrices W l of a DNNespecially a well trained one. Visual Taxonomy. We characterize implicit Self-Regularization, both for DNNs during SGD training as well as for pre-trained DNNs, as a visual taxonomy of 5+1 Phases of Training (Random-like, Bleeding-out, Bulk+Spikes, Bulk-decay, Heavy-Tailed, and Rankcollapse). See Table 7 for a summary. The 5+1 phases can be ordered, with each successive phase corresponding to a smaller Stable Rank / MP Soft Rank and to progressively more SelfRegularization than previous phases. FIG1 depicts typical ESDs for each phase, with the MP fits (in red). Earlier phases of training correspond to the final state of older and/or smaller models like LeNet5 and MLP3. Later phases correspond to the final state of more modern models like AlexNet, Inception, etc. Thus, while we can describe this in terms of SGD training, this taxonomy does not just apply to the temporal ordering given by the training process. It also allows us to compare different architectures and/or amounts of regularization in a trained-or even pre-trained-DNN. Each phase is visually distinct, and each has a natural interpretation in terms of RMT. One consideration is the global properties of the ESD: how well all or part of the ESD is fit by an MP distribution, for some value of λ +, or how well all or part of the ESD is fit by a Heavy-Tailed or PL distribution, for some value of a PL parameter. A second consideration is local properties of the ESD: the form of fluctuations, in particular around the edge λ + or around the largest eigenvalue λ max. For example, the shape of the ESD near to and immediately above λ + is very different in FIG1 FIG1 and Sxn. 5.4Heavy-Tailed Table 7: The 5+1 phases of learning we identified in DNN training. 
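In code, both ranks defined above are one-liners once the ESD and a fitted bulk edge are in hand. The sketch below uses the standard ||W||_F² / ||W||_2² form of the Stable Rank, which is proportional to the bulk variance divided by λ_max as stated above; the function names and the Gaussian example are ours.

import numpy as np

def stable_rank(eigs):
    # R_s = (sum_i lambda_i) / lambda_max, from the ESD of X = W^T W / N
    # (equivalently ||W||_F^2 / ||W||_2^2).
    eigs = np.asarray(eigs)
    return eigs.sum() / eigs.max()

def mp_soft_rank(eigs, lam_plus):
    # R_mp = lambda_+ / lambda_max, with lambda_+ the fitted MP bulk edge; pass lam_plus = 0.0
    # when no MP fit exists (a fully Heavy-Tailed ESD), so that R_mp = 0.
    return lam_plus / np.asarray(eigs).max()

# Example: for a purely random Gaussian matrix, R_mp is close to 1.
N, M = 2000, 500
W = np.random.randn(N, M)
eigs = np.linalg.eigvalsh(W.T @ W / N)
lam_plus = (1.0 + 1.0 / np.sqrt(N / M)) ** 2
print(stable_rank(eigs), mp_soft_rank(eigs, lam_plus))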
We observed Bulk+Spikes and Heavy-Tailed in existing trained models (LeNet5 and AlexNet/InceptionV3, respectively; see Section 4); and we exhibited all 5+1 phases in a simple model (MiniAlexNet; see Section 7).As an illustration, FIG1 depicts the 5+1 phases for a typical (hypothetical) run of Backprop training for a modern DNN. FIG1 (a) illustrates that we can track the decrease in MP Soft Rank, as W e l changes from an initial random (Gaussian-like) matrix to its final W l = W f l form; and FIG1 (b) illustrates that (at least for the early phases) we can fit its ESD (or the bulk of its ESD) using MP theory, with ∆ corresponding to non-random signal eigendirections. Observe that there are eigendirections (below λ +) that fit very well the MP bulk, there are eigendirections (well above λ +) that correspond to a spike, and there are eigendirections (just slightly above λ +) with (convex) curvature more like FIG1 Theory of Each Phase. RMT provides more than simple visual insights, and we can use RMT to differentiate between the 5+1 Phases of Training using simple models that qualitatively describe the shape of each ESD. In each phase, we model the weight matrices W as "noise plus signal," where the "noise" is modeled by a random matrix W rand, with entries drawn from the Gaussian Universality class (well-described by traditional MP theory) and the "signal" is a (small or very large) correction ∆ sig: Table 7 summarizes the theoretical model for each phase. Each model uses RMT to describe the global shape of ρ N (λ), the local shape of the fluctuations at the bulk edge, and the statistics and information in the outlying spikes, including possible Heavy-Tailed behaviors. DISPLAYFORM2 In the first phase (Random-like), the ESD is well-described by traditional MP theory, in which a random matrix has entries drawn from the Gaussian Universality class. This does not mean that the weight matrix W is random, but it does mean that the signal in W is too weak to be seen when viewed via the lens of the ESD. In the next phases (Bleeding-out, Bulk+Spikes), and/or for small networks such as LetNet5, ∆ is a relatively-small perturbative correction to W rand, and vanilla MP theory (as reviewed in Section 3.1) can be applied, as least to the bulk of the ESD. In these phases, we will model the W rand matrix by a vanilla W mp matrix (for appropriate parameters), and the MP Soft Rank is relatively large (R mp (W) 0). In the Bulk+Spikes phase, the model resembles a Spiked-Covariance model, and the SelfRegularization resembles Tikhonov regularization. In later phases (Bulk-decay, Heavy-Tailed), and/or for modern DNNs such as AlexNet and InceptionV3, ∆ becomes more complex and increasingly dominates over W rand. For these more strongly-correlated phases, W rand is relatively much weaker, and the MP Soft Rank collapses (R mp (W) → 0). Consequently, vanilla MP theory is not appropriate, and instead the SelfRegularization becomes Heavy-Tailed. In these phases, we will treat the noise term W rand as small, and we will model the properties of ∆ with Heavy-Tailed extensions of vanilla MP theory (as reviewed in Section 3.2) to Heavy-Tailed non-Gaussian universality classes that are more appropriate to model strongly-correlated systems. In these phases, the strongly-correlated model is still regularized, but in a very non-traditional way. The final phase, the Rank-collapse phase, is a degenerate case that is a prediction of the theory. We now describe in more detail each phase in turn. 
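A small numerical sketch may help fix intuition for the descriptions that follow: adding a rank-one perturbation ∆ of increasing strength to a Gaussian W_rand moves λ_max from inside the bulk (Random-like), to just past λ+ (Bleeding-out), to well-separated spikes (Bulk+Spikes), and the associated eigenvector localizes. The perturbation strengths, the sparse choice of the perturbation direction (used only so that localization is visible), and the participation-ratio diagnostic are illustrative choices of ours, not the experiments of Section 7.

import numpy as np

def spiked_matrix(N, M, strength, support=10, seed=0):
    # W = W_rand + Delta, with Delta = strength * sqrt(N) * u v^T; v is sparse so that, once
    # a spike separates from the bulk, its eigenvector is visibly localized.
    rng = np.random.default_rng(seed)
    W_rand = rng.standard_normal((N, M))
    u = rng.standard_normal(N); u /= np.linalg.norm(u)
    v = np.zeros(M)
    idx = rng.choice(M, size=support, replace=False)
    v[idx] = rng.standard_normal(support); v /= np.linalg.norm(v)
    return W_rand + strength * np.sqrt(N) * np.outer(u, v)

def top_eigenpair(W):
    N = W.shape[0]
    vals, vecs = np.linalg.eigh(W.T @ W / N)
    return vals[-1], vecs[:, -1]

def participation_ratio(v):
    return (np.sum(v ** 2) ** 2) / np.sum(v ** 4)

N, M = 4000, 1000
lam_plus = (1.0 + 1.0 / np.sqrt(N / M)) ** 2        # MP bulk edge for sigma^2 = 1
for strength in (0.5, 1.0, 2.0, 4.0):
    lam_max, vec = top_eigenpair(spiked_matrix(N, M, strength))
    print(f"strength={strength}: lambda_max={lam_max:6.2f}  lambda_+={lam_plus:.2f}  "
          f"PR(top eigenvector)={participation_ratio(vec):7.1f} of {M}")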
In the first phase, the Random-like phase, shown in FIG1 (a), the DNN weight matrices W resemble a Gaussian random matrix. The ESDs are easily-fit to an MP distribution, with the same aspect ratio Q, by fitting the empirical variance σ 2 emp. Here, σ 2 emp is the element-wise variance (which depends on the normalization of W).Of course, an initial random weight matrix W 0 l will show a near perfect MP fit. Even in well trained DNNs, however, the empirical ESDs may be Random-like, even when the model has a non-zero, and even somewhat large, generalization accuracy. 20 That is, being fit well by an MP distribution does not imply that the weight matrix W is random. It simply implies that W, while having structure, can be modeled as the sum of a random "noise" matrix W rand, with the same Q and σ 2 emp, and some small-sized matrix ∆ small, as: DISPLAYFORM0 where ∆ small represents "signal" learned during the training process. In this case, λ max is sharply bounded, to within M DISPLAYFORM1, to the edge of the MP distribution. In the second phase, the Bleeding-out phase, shown in FIG1, the bulk of the ESD still looks reasonably random, except for one or a small number K min{N, M} of eigenvalues that extend at or just beyond the MP edge λ +. That is, for the given value of Q, we can choose a σ emp (or λ +) parameter so that: most of the ESD is well-fit; and the part of the ESD that is not well-fit consists of a "shelf" of mass, much more than expected by chance, just above λ +: DISPLAYFORM0 This corresponds to modeling W as the sum of a random "noise" matrix W rand and some medium-sized matrix ∆ medium, as: DISPLAYFORM1 where ∆ medium represents "signal" learned during the training process. As the spikes just begin to pull out from the bulk, i.e., when λ max − λ + is small, it may be difficult to determine unambiguously whether any particular eigenvalue is spike or bulk. The reason is that, since the matrix is of finite size, we expect the spike locations to be Gaussiandistributed, with fluctuations of order N − 1 2. One option is to try to estimate σ bulk precisely from a single run. Another option is to perform an ensemble of runs and plot ρ N R (λ) for the ensemble. Then, if the model is in the Bleeding-out phase, there will be a small bump of eigenvalue mass, shaped like a Gaussian, 21 which is very close to but bleeding-out from the bulk edge. When modeling DNN training in terms of RMT and MP theory, the transition from Randomlike to Bleeding-out corresponds to the so-called BPP phase transition. This transition represents a "condensation" of the eigenvector corresponding to the largest eigenvalue λ max onto the eigenvalue of the rank-one (or, more generally, rank-k, if the perturbation is higher rank) perturbation ∆. In the third phase, the Bulk+Spikes phase, shown in FIG1, the bulk of the ESD still looks reasonably random, except for one or a small number K min{N, M} of eigenvalues that extend well beyond the MP edge λ +. That is, for the given value of Q, we can choose a σ emp (or λ +) parameter so that: most of the ESD is well-fit; and the part of the ESD that is not well-fit consists of several (K) eigenvalues, or Spikes, that are much larger than λ +: DISPLAYFORM0 This corresponds to modeling W as the sum of a random "noise" matrix W rand and some moderately large-sized matrix ∆ large, as: DISPLAYFORM1 where ∆ large represents "signal" learned during the training process. For a single run, it may be challenging to identify the spike locations unambiguously. 
If we perform an ensemble of runs, however, then the Spike density is clearly visible, distinct, and separated from bulk, although it is much smaller in total mass. We can try to estimate σ bulk precisely, but in many cases we can select the edge of the bulk, λ +, by visual inspection. As in the Bleeding-out phase, the empirical bulk variance σ 2 bulk is smaller than both the full elementwise variance, σ 2 bulk < σ 2 f ull, and the shuffled variance (fit to the MP bulk), σ 2 bulk < σ 2 shuf, because we remove several large eigendirections from the bulk. (See Section 6.3 for more on this.)When modeling DNN training in terms of RMT and MP theory, the Bulk+Spikes phase corresponds to vanilla MP theory plus a large low-rank perturbation, and it is what we observe in the LeNet5 model. In statistics, this corresponds to the Spiked Covariance model BID115 BID156 BID116. Relatedly, in the Bulk+Spikes phase, we see clear evidence of Tikhonov-like Self-Regularization. MP theory with large low-rank perturbations. To understand, from the perspective of MP theory, the properties of the ESD as eigenvalues bleed out and start to form spikes, consider modeling W as W W rand + ∆ large. If ∆ is a rank-1 perturbation 22, but now larger, then one can show that the maximum eigenvalue λ max that bleeds out will extend beyond theoretical MP bulk edge λ + and is given by DISPLAYFORM2 Here, by σ 2, we mean the theoretical variance of the un-perturbed W rand. 23 Moreover, in an ensemble of runs, each of these Spikes will have Gaussian fluctuations on the order N −1/2.Eigenvector localization. Eigenvector localization on extreme eigenvalues can be a diagnostic for Spike eigenvectors (as well as for extreme eigenvectors in the Heavy-Tailed phase). The interpretation is that when the perturbation ∆ is large, "information" in W will concentrate on a small number of components of the eigenvectors associated with the outlier eigenvalues. The fourth phase, the Bulk-decay phase, is illustrated in FIG1 (d), and is characterized by the onset of Heavy-Tailed behavior, both in the very long tail, and at the Bulk edge. 24 The Bulkdecay phase is intermediate between having a large, low-rank perturbation ∆ to an MP Bulk (as in the Bulk+Spikes phase) and having strong correlations at all scales (as in the Heavy-Tailed phase). Viewed naïvely, the ESDs in Bulk-decay resemble a combination of the Bleeding-out and Bulk+Spikes phases: there is a large amount of mass above λ + (from any reasonable MP fit); and there are a large number of eigenvectors much larger than this value of λ +. However, quantitatively, the ESDs are quite different than either Bleeding-out or Bulk+Spikes: there is much more mass bleeding-out; there is much greater deterioration of the Bulk; and the Spikes lie much farther out. In Bulk-decay, the Bulk region is both hard to identify and difficult to fit with MP theory. Indeed, the properties of the Bulk start to look less and less consistent the an MP distribution (with elements drawn from the Universality class of Gaussian matrices), for any parameter values. This implies that λ max can be quite large, in which case the MP Soft Rank is much smaller. The best MP fit neglects a large part of the eigenvalue mass, and so we usually have to select λ + numerically. Most importantly, the mass at the bulk edge now starts to exhibit Heavy-Tailed, not Gaussian, properties; and the overall shape of the ESD is itself taking on a Heavy-Tailed form. 
Indeed, the ESDs are may be consistent with (weakly) Heavy-Tailed (4 < µ) Universality class, in which the local edge statistics exhibit Heavy-Tailed behavior due to finite-size effects. 25 22 More generally, ∆ may be a rank-k perturbation, for k M, and similar should hold. 23 In typical theory, this is scaled to unity (i.e., σ 2 = 1). In typical practice, we do not a priori know σ 2, and it may be non-trivial to estimate because the scale W may shift during Backprop training. As a rule-of-thumb, one can select the bulk edge λ + well to provide a good fit for the bulk variance σ 2 bulk. 24 We observe Bulk-decay in InceptionV3 FIG1 ). This may indicate that this model, while extremely good, might actually lend itself to more fine tuning and might not be fully optimized. 25 Making this connection more precise-e.g., measuring α in this regime, relating α to µ in this regime, having precise theory for finite-size effects in this regime, etc.-is nontrivial and left for future work. The final of the 5 main phases, the Heavy-Tailed phase, is illustrated in FIG1 (e). This phase is formally, and operationally, characterized by an ESD that resembles the ESD of a random matrix in which the entries are drawn i.i.d. from a Heavy-Tailed distribution. This phase corresponds to modeling W as the sum of a small "noise" matrix W rand and a large "stronglycorrelated" matrix ∆ str.corr., as: DISPLAYFORM0 where ∆ str.corr. represents strongly-correlated "signal" learned during the training process. 26 As usual, W rand can be modeled as a random matrix, with entries drawn i.i.d. from a distribution in the Gaussian Universality class. Importantly, the strongly-correlated signal matrix ∆ str.corr. can also be modeled as a random matrix, but (as described in Section 3.2) one with entries drawn i.i.d. from a distribution in a different, Heavy-Tailed, Universality class. In this phase, the ESD visually appears Heavy-Tailed, and it is very difficult if not impossible to get a reasonable MP fit of the layer weight matrices W (using standard Gaussian-based MP/RMT). Thus, the matrix W has zero (R mp (W) = 0) or near-zero (R mp (W) 0) MP Soft Rank; and it has intermediate Stable Rank (1 R s (W) min{N, M}). 27 When modeling DNN training in terms of RMT and MP theory, the Heavy-Tailed phase corresponds to the variant of MP theory in which elements are chosen from a non-Gaussian Universality class [38, 20, 19, BID158 7, 40, 8, 26, 22, 21]. In physics, this corresponds to modeling strongly-correlated systems with Heavy-Tailed random matrices BID181 23]. Relatedly, in the Heavy-Tailed phase, the implicit Self-Regularization is strongest. It is, however, very different than the Tikhonov-like regularization seen in the Bulk+Spikes phases. Although there is a decrease in the Stable Rank (for similar reasons to why it decreases in the Bulk+Spikes phases, i.e., Frobenius mass moves out of the bulk and into the spikes), Heavy-Tailed Self-Regularization does not exhibit a "size scale" in the eigenvalues that separates the signal from the noise. 28 Heavy-Tailed ESDs. Although FIG1 (e) is presented on the same linear-linear plot as the other subfigures in FIG1, the easiest way to compare Heavy-Tailed ESDs is with a log-log histogram and/or with PL fits. Consider FIG1 (a), which displays the ESD for FC3 of pretrained AlexNet, as a log-log histogram; and consider also FIG1 (b), which displays an overlay (in red) of a log-log histogram of the ESD of a random matrix M. 
This matrix M has the same aspect ratio as W F C3, but the elements M i,j are drawn from a Heavy-Tailed Pareto distribution, Eqn., with µ = 2.5. We call ESDs such as W F C3 of AlexNet Heavy-Tailed because they resemble the ESD of a random matrix with entries drawn from a Heavy-Tailed distribution, as observed with a log-log histogram. 29 We can also do a PL fit to estimate α and then try to estimate the Universality class we are in. Our PL estimator works well for µ ∈ [1.5, 3.5]; but, due to large finite-size effects, it is difficult to determine µ from α precisely. This is discussed in more detail in Section 3.2. As a rule of thumb, if α < 2, then we can say α ≈ 1 + µ/2, and we are in the (very) Heavy-Tailed Universality class; and if 2 < α < 4, but not too large, then α is well-modeled by α ≈ b + aµ, and we are mostly likely in the (moderately, or "fat") Heavy-Tailed Universality class. In addition to the 5 main phases, based on MP theory we also expect the existence of an additional "+1" phase, which we call the Rank-collapse Phase, and which is illustrated in FIG1 (f). For many parameter settings, the minimum singular value (i.e., λ − in Eqn. FORMULA18 for vanilla MP theory) is strictly positive. For certain parameter settings, the MP distribution has a spike at the origin, meaning that there is a non-negligible mass of eigenvalues equal to 0, i.e., the matrix is rank-deficient, i.e., Hard Rank is lost. 30 For vanilla Gaussian-based MP theory, this happens when Q > 1, and this phenomenon exists more generally for Heavy-Tailed MP theory. In this section, we validate and illustrate how to use our theory from Section 5. This involved extensive training and re-training, and thus we used the smaller MiniAlexNet model. Section 6.1 describes the basic setup; Section 6.2 presents several baseline ; Section 6.3 provides some important technical details; and Section 6.4 describes the effect of adding explicit regularization. We postpone discussing the effect of changing batch size until Section 7. Here, we describe the basic setup for our empirical evaluation. Model Deep Neural Network. We analyzed MiniAlexNet, 31 a simpler version of AlexNet, similar to the smaller models used in BID203, scaled down to prevent overtraining, and trained on CIFAR10. The basic architecture follows the same general design as older NNs such as LeNet5, VGG16, and VGG19. It is illustrated in FIG1. It consists of two 2D Convolutional layers, each with Max Pooling and Batch Normalization, giving 6 initial layers; it then has two Fully Connected (FC), or Dense, layers with ReLU activations; and it then has a final FC layer added, with 10 nodes and softmax activation. For the FC layers: DISPLAYFORM0 The W F C1 and W F C2 matrices are initialized with a Glorot normalization BID101. 32 We apply Batch Normalization to the Conv2D layers, but we leave it off the FC layer; do not change if remove all Batch Normalization. All models are trained using Keras 2.x, with TensorFlow as a backend. We use SGD with momentum, with a learning rate of 0.01, a momentum parameter of 0.9, and a baseline batch size of 32; and we train up to 100 epochs. To compare different batch sizes and other tunable knobs, we employed early stopping criteria on the total loss which causes termination at fewer than 100 epochs. We save the weight matrices at the end of every epoch, and we study the complexity of the trained model by analyzing the empirical properties of the W F C1 and W F C2 matrices. Experimental Runs. 
It is important to distinguish between several different types of analysis. First, analysis of ESDs (and related quantities) during Backprop training during 1 training run. In this case, we consider a single training run, and we monitor empirical properties of weight matrices as they change during the training process. Second, analysis of the final ESDs from 1 training run. In this case, we consider a single training run, and we analyze the empirical properties of the single weight matrix that is obtained after the training process terminates. This is similar to analyzing pre-trained models. Third, analysis of an ensemble of final ESDs from N R training runs. In this case, we rerun the model N R ∼ 10-100 times, using different initial random weight matrices W 0 l, and we form an ensemble of N R of final weight matrices [W DISPLAYFORM1]. We do this in order to compensate for finite-size effects, to provide a better visual interpretation of our claims, and to help clarify our scientific claims about the learning process. Of course, as 32 Here and elsewhere, most DNNs are initialized with random weight matrices, e.g., as with the Glorot Normal initialization BID101 (which involves a truncation step). If we naïvely fit the MP distribution to W 0 trunc, then the empirical variance will be larger than one, i.e., σ 2 emp > 1. That is because the Glorot normalization is 2/N + M, whereas the MP theory is presented with normalization √ N −1. To apply MP theory, we must rescale our empirically-fit σ an engineering matter, one wants exploit our on a single "production" run of the training process. In that case, we expect to observe (and do observe) a noisy version of what we present. 33 Empirically Measured Quantities. We compute several RMT-based quantities of interest for each layer weight matrices W l, for layers l = F C1, F C2, including the following: Matrix complexity metrics, such as the Matrix Entropy S(W e l), Hard Rank R(W e l), Stable Rank R e s (W l), and MP Soft Rank R e mp (W l); ESDs, ρ(λ) for a single run, both during Backprop training and for the final weight matrices, and/or ρ N R (λ) for the final states an ensemble of N R runs; and Eigenvector localization metrics, including the Generalized Vector Entropy S(x), Localization Ratio L(x), and Participation Ratio P(x), of the eigenvectors of X, for an ensemble of runs. Knobs and Switches of the Learning Process. We vary knobs and switches of the training process, including the following: number of epochs (typically ≈ 100, well past when entropies and measured training/test accuracies saturate); Weight Norm regularization (on the fully connected layers-in Keras, this is done with an L 2 -Weight Norm kernel regularizer, with value 0.0001); various values of Dropout; and batch size 34 (varied between 2 to 1024). Here, we present several baseline for our RMT-based analysis of MiniAlexNet. For our baseline, the batch size is 16; and Weight Norm regularization, Dropout, and other explicit forms of regularization are not employed. FIG1 shows the Matrix Entropy (S(W)) and Stable Rank (R s (W)) for layers FC1 and FC2, as well as of the training and test accuracies, for MiniAlexNet, as a function of the number of epochs. This is for an ensemble of N R = 10 runs. Both layers start off with an Entropy close to but slightly less than 1.0; and both retrain full rank during training. For each layer, the matrix Entropy gradually lowers; and the Stable Rank shrinks, but more prominently. 
These decreases parallel the increase in training and test accuracies, and both complexity metrics level off as the training/test accuracies do. The Matrix Entropy decreases relatively more for FC2, and the Stable Rank decreases relatively more for FC1; but they track the same gross changes. The large difference between training and test accuracy should not be surprising since-for these baseline -we have turned off regularization like removing Batch Norm, Dropout layers, and any Weight Norm constraints. Eigenvalue Spectrum: Comparisons with RMT. FIG1 show, for FC1 and FC2, respectively, the layer matrix ESD, ρ(λ), every few epochs during the training process. For layer FC1 (with Q ≈ 10.67), the initial weight matrix W 0 looks very much like an MP distribution (with Q ≈ 10.67), consistent with a Random-like phase. Within a very few epochs, however, eigenvalue mass shifts to larger values, and the ESD looks like the Bulk+Spikes phase. Once the Spike(s) appear(s), substantial changes are hard to see in FIG1, but minor changes do continue in the ESD. Most notably, λ max increases from roughly 3.0 to roughly 4.0 during training, indicating further Self-Regularization, even within the Bulk+Spikes phase. For layer FC2 (with Q = 2), the initial weight matrix also resembles an MP distribution, also consistent with a Random-like phase, but with a much smaller value of Q than FC1 (Q = 2 here). Here too, the ESD changes during the first few epochs, after which there are not substantial changes. The most prominent change is that eigenvalue mass pulls out slightly from the bulk and λ max increases from roughly 3.0 to slightly less than 4.0.Eigenvector localization. FIG1 plots three eigenvector localization metrics, for an ensemble N R = 10 runs, for eigenvectors in the bulk and spike of layer FC1 of MiniAlexNet, after training. 35 Spike eigenvectors tend to be more localized than bulk eigenvectors. This effect is less pronounced for FC2 (not shown) since the spike is less well-separated from the bulk. 35 More precisely, bulk here refers to eigenvectors associated with eigenvalues less than λ +, defined below and illustrated in FIG5, and spike here refers to those in the main part of the spike. There are several technical issues with applying RMT that we discuss here. 36 • Single run versus an ensemble of runs. FIG5 shows ESDs for Layer FC1 before and after training, for a single run, as well as after training for an ensemble of runs. FIG5 does the same for FC2. There are two distinct effects of doing an ensemble of runs: first, the histograms get smoother (which is expected); and second, there are fluctuations in λ max. These fluctuations are not due to finite-size effects; and they can exhibit Gaussian or TW or other Heavy-Tailed properties, depending on the phase of learning. Thus, they can be used as a diagnostic, e.g., to differentiate between Bulk+Spikes versus Bleeding-out. 36 While we discuss these in the context of our baseline, the same issues arise in all applications of our theory. • Finite-size effects. FIG5 (a) shows that we can estimate finite-size effects in RMT by shuffling the elements of a single weight matrices W l → W shuf l and recomputing the eigenvalue spectrum ρ shuf (λ) of X shuf. We expect ρ shuf (λ) to fit an MP distribution well, even for small sample sizes, and we see that it does. We also expect and see a very crisp edge in λ +. More generally, we can visually observe the quality of the fit at this sample size to gauge whether deviations are likely spurious. 
This is relatively-easy to do for Random-like, and also for Bulk+Spikes (since we can simply remove the spikes before shuffling). For Bleeding-out and Bulk-decay, it is somewhat more difficult due to the need to decide which eigenvalues to keep in the bulk. For Heavy-Tailed, it is much more complicated since finite-size effects are larger and more pronounced.• Fitting the bulk edge λ +, i.e., the bulk variance σ 2 bulk. Estimating λ + (or, equivalently, σ 2 bulk) can be tricky, even when the spike is well-separated from the bulk. We illustrate this in FIG5. In particular, compare FIG5, λ + is chosen to reproduce very well the bulk edge of the ESD, at the expense of having some "missing mass" in the ESD just below λ + (leading to a "Bulk + 9 Spikes" model). In FIG5 (b), λ + is chosen to reproduce very well the ESD just below λ +, at the expense of having a slight bleeding-out region just above λ + (leading to a "Bulk + 18 Spikes" or a "Bulk + 9 Bleeding-out + 9 Spikes" model). If we hypothesize that a MP distribution fits the bulk very well, then the fit in FIG5 (b) is more appropriate, but FIG5 (b) shows this can be challenging to identify in a single run. We recommend choosing the bulk maximum λ + and (from Eqn.) selecting σ 2 bulk as σ 2 bulk = λ + 1 + 1/ √ Q −2. In fitting σ 2 bulk, we expect to lose some variance due to the eigenvalue mass that "bleeds out" from the bulk (e.g., due to Bleeding-out or Bulkdecay), relative to a situation where the MP distribution provides a good fit for the entire ESD (as in Random-like). Rather than fitting σ 2 mp directly on the ESD of W l, without removing the outliers (which may thus lead to poor estimates since λ max is particularly large), we can always define a baseline variance for any weight matrix W by shuffling it elementwise W → W shuf, and then finding the MP σ 2 shuf from the ESD of W shuf. In doing so, the Frobenius norm is preserved W shuf l F = W l F, thus providing a way to (slightly over-) estimate the unperturbed variance of W l for comparison. 37 Since at least one eigenvalue bleeds out, σ 2 bulk < σ 2 shuf, i.e., the empirical bulk variance σ 2 bulk will always be (slightly) less that than shuffled bulk variance σ 2 shuf.The best way to automate these choices, e.g., with a kernel density estimator, remains open. We consider here how explicit regularization affects properties of learned DNN models, in light of baseline of Section 6.2. We focus on L 2 Weight Norm and Dropout regularization. 37 We suggest shuffling W l at least 100 times then fitting the ESD to obtain an σ 2. We can then estimate σ 2 bulk as σ 2 minus a contribution for each of the K bleeding-out eigenvalues, giving, as a rule of thumb, σ Transition in Layer Entropy and Stable Rank. See FIG5 for plots for FC1 and FC2 when L 2 norm weight regularization is included; and see FIG5 for plots when Dropout regularization is included. In both cases, baseline are provided, and compare with FIG1. In each case, we observe a greater decrease in the complexity metrics with explicit regularization than without, consistent with expectations; and we see that explicit regularization affects these metrics dramatically. Here too, the Layer Entropy decreases relatively more for FC2, and the Stable Rank decreases relatively more for FC1. DISPLAYFORM0 Eigenvalue Spectrum: Comparisons with RMT. See FIG5 for the ESD for layers FC1 and FC2 of MiniAlexNet, with explicit Dropout, including MP fits to a bulk when 9 or 10 spikes are removed. Compare with FIG5 (for FC1) and FIG5 (for FC2). 
Note, in particular, the differences in the scale of the X axis. FIG5 shows that when explicit Dropout regularization is added, the eigenvalues in the spike are pulled to much larger values (consistent with a much more implicitly-regularized model). A subtle but important consequence of this regularization 38 is the following: this leads to a smaller bulk MP variance parameter σ 2 mp, and thus smaller values for λ +, when there is a more prominent spike. See FIG5 for similar for the ESD for layers FC1 and FC2 of MiniAlexNet, with explicit L 2 norm weight regularization. Eigenvalue localization. We observe that eigenvector localization tends to be more prominent when the explicit regularization is stronger, presumably since explicit (L 2 Weight Norm or Dropout) regularization can make spikes more well-separated from the bulk. In this section, we demonstrate that we can exhibit all five of the main phases of learning by changing a single knob of the learning process. 39 We consider the batch size (used in the construction of mini-batches during SGD training) since it is not traditionally considered a regularization parameter and due to its its implications for the generalization gap phenomenon. The Generalization Gap refers to the peculiar phenomena that DNNs generalize significantly less well when trained with larger mini-batches (on the order of 10 3 − 10 4) BID127 BID111 BID119 BID104. Practically, this is of interest since smaller batch sizes makes training large DNNs on modern GPUs much less efficient. Theoretically, this is of interest since it contradicts simplistic stochastic optimization theory for convex problems. The latter suggests that larger batches should allow better gradient estimates with smaller variance and should therefore improve the SGD optimization process, thereby increasing, not decreasing, the generalization performance. For these reasons, there is interest in the question: what is the mechanism responsible for the drop in generalization in models trained with SGD methods in the large-batch regime?To address this question, we consider here using different batch sizes in the DNN training algorithm. We trained the MiniAlexNet model, just as in Section 6 for the Baseline model, except with batch sizes ranging from moderately large to very small (b ∈ {500, 250, 100, 50, 32, 16, 8, 4, 2}). 39 We can also exhibit the "+1" phase, but in this section we are interested in changing only the batch size.and then it begins to decrease; and test accuracy actually increases for extremely small b, and then it gradually decreases as b increases. • At batch size b = 250 (and larger), the ESD resembles a pure MP distribution with no outliers/spikes; it is Random-like.• As b decreases, there starts to appear an outlier region. For b = 100, the outlier region resembles Bleeding-out.• Then, for b = 32, these eigenvectors become well-separated from the bulk, and the ESD resembles Bulk+Spikes.• As batch size continues to decrease, the spikes grow larger and spread out more (observe the increasing scale of the X-axis), and the ESD exhibits Bulk-decay.• Finally, at the smallest size, b = 2, extra mass from the main part of the ESD plot almost touches the spike, and the curvature of the ESD changes, consistent with Heavy-Tailed. While the shape of the ESD is different for FC2 (since the aspect ratio of the matrix is less), very similar properties are observed. 
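The batch-size sweep itself is conceptually simple. The sketch below is a toy stand-in (random data, a small two-layer network) rather than the MiniAlexNet setup, and it only records λ_max of the first layer per batch size; in the setting described above one would also record the full ESD and the entropy and stable-rank metrics per layer.

```python
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset

def layer_lambda_max(linear):
    """lambda_max of X = W^T W / N for a trained nn.Linear layer,
    orienting W so that N >= M (i.e., Q = N/M >= 1)."""
    W = linear.weight.detach().cpu().numpy()
    if W.shape[0] < W.shape[1]:
        W = W.T
    N, _ = W.shape
    return float(np.linalg.eigvalsh(W.T @ W / N).max())

# Toy stand-in for the sweep: identical model, data and learning rate,
# only the batch size b changes between runs.
X = torch.randn(1024, 64)
y = (X[:, 0] > 0).long()
for b in [500, 250, 100, 32, 8, 2]:
    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 2))
    opt = torch.optim.SGD(model.parameters(), lr=0.05)
    loader = DataLoader(TensorDataset(X, y), batch_size=b, shuffle=True)
    for _ in range(3):                      # a few epochs per batch size
        for xb, yb in loader:
            opt.zero_grad()
            F.cross_entropy(model(xb), yb).backward()
            opt.step()
    print(f"batch size {b:4d}: layer-1 lambda_max = {layer_lambda_max(model[0]):.4f}")
```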
In addition, as b decreases, some of the extreme eigenvectors associated with eigenvalues that are not in the bulk tend to be more localized. Implications for the generalization gap. Our here (both that training/test accuracies decrease for larger batch sizes and that smaller batch sizes lead to more well-regularized models) demonstrate that the generalization gap phenomenon arises since, for smaller values of the batch size b, the DNN training process itself implicitly leads to stronger Self-Regularization.(Depending on the layer and the batch size, this Self-Regularization is either the more traditional Tikhonov-like regularization or the Heavy-Tailed Self-Regularization corresponding to strongly-correlated models.) That is, training with smaller batch sizes implicitly leads to more well-regularized models, and it is this regularization that leads to improved . The obvious mechanism is that, by training with smaller batches, the DNN training process is able to "squeeze out" more and more finer-scale correlations from the data, leading to more strongly-correlated models. Large batches, involving averages over many more data points, simply fail to see this very fine-scale structure, and thus they are less able to construct strongly-correlated models characteristic of the Heavy-Tailed phase. Our also suggest that, if one hopes to compensate for this by decreasing the learning rate, then one would have to decrease the learning rate by an extraordinary amount. There is a large body of related work, much of which either informed our approach or should be informed by our . This includes: work on large-batch learning and the generalization gap BID195 BID119 BID111 BID104 BID114 BID178 BID113 BID193 BID141 BID198 BID199; work on Energy Landscape approaches to NN training BID117 BID196 44, BID170 30, 29, 27, BID112 47, 12, BID151 BID197 BID97 BID130 BID129 BID142; work on using weight matrices or properties of weight matrices [15, BID149 BID150 4, 14, BID200 BID148 3, BID133 BID145 ; work on different Heavy-Tailed Universality classes [46, 32, 18, 25, 20, 5, BID158 7, 38, 17, BID137 8, BID146 ; other work on RMT approaches BID180 BID167 BID161 BID159 BID134 BID134 BID184 BID132 BID173 ; other work on statistical physics approaches BID179 BID99 BID164 BID172 BID184 BID166 BID160 ; work on fitting to noisy versus reliable signal BID183 BID203 BID122 BID169 6]; and several other related lines of work BID110 BID143 BID163 34, 1, BID153 BID154 BID144 BID131. We conclude by discussing several aspects of our in this broader context. Failures of VC theory. In light of our , we have a much better understanding of why VC theory does not apply to NNs. VC theory assumes, at its core, that a learning algorithm could sample a very large, potentially infinite, space of hypothesis functions; and it then seeks a uniform bound on this process to get a handle on the generalization error. It thus provides a very lose, data-independent bound. Our suggest a very different reason why VC theory would fail than is sometimes assumed: naïvely, the VC hypothesis space of a DNN would include all functions described by all possible values of the layer weight matrices (and biases). Our suggest, in contrast, that the actual space is in some sense "smaller" or more restricted than this, in that the FC layers (at least) cover only one Universality class-the class of Heavy (or Fat) Tailed matrices, with PL exponent µ ∈. 
During the course of training, the space becomes smaller-through Self-Regularization-since even if the initial matrices are random, the class of possible final matrices is very strongly correlated. The process of Self-Regularization and Heavy-Tailed Self-Regularization collapses the space of available functions that can be learned. Indeed, this also suggests why transfer learning is so effective-the initial weigh matrices are much closer to their final versions, and the space of functions need not shrink so much. The obvious conjecture is that what we have observed is characteristic of general NN/DNN learning systems. Since there is nothing like this in VC theory, our suggest revisiting more generally the recent suggestions of BID140.Information bottleneck. Recent empirical work on modern DNNs has shown two phases of training: an initial "fast" phase and a second "slower" phase. To explain this, Tishby et al. BID188 BID176 have suggested using the Information Bottleneck Theory for DNNs. See also BID187 BID175 BID171 BID201. While this theory may be controversial, the central concept embodies the old thinking that DNNs implicitly lose some capacity (or information/entropy) during training. This is also what we observe. Two important differences with our approach are the following: we provide a posteriori guarantees; and we provide an unsupervised theory. An a posteriori unsupervised theory provides a mechanism to minimize the risk of "label leakage," clearly a practical problem. The obvious hypothesis is that the initial fast phase corresponds to the initial drop in entropy that we observe (which often corresponds to a Spike pulling out of the Bulk), and that the second slower phase corresponds to "squeezing out" more and more correlations from the data (which, in particular, would be easier with smaller batches than larger batches, and which would gradually lead to a very strongly-correlated model that can then be modeled by Heavy-Tailed RMT).Energy landscapes and rugged convexity. Our observations about the onset of HeavyTailed or scale-free behavior in realistic models suggest that (relatively) simple (i.e., Gaussian) Spin-Glass models, used by many researchers, may lead to very misleading for realistic systems. Results derived from such models are very specific to the Gaussian Universality class; and other Spin-Glass models can show very different behaviors. In particular, if we select the elements of the Spin-Glass Hamiltonian from a Heavy-Tailed Levy distribution, then the local minima do not concentrate near the global minimum. See also BID196 BID96 48, 28, BID185. Based on this, as well as the we have presented, we expect that well-trained DNNs will exhibit a ruggedly convex global energy landscape, as opposed to a landscape with a large number of very different degenerate local minima. This would clearly provide a way to understand phenomena exhibited by DNN learning that are counterintuitive from the perspective of traditional ML BID140.Connections with glass theory. It has been suggested that the slow training phase arises because the DNN optimization landscape has properties that resemble a glassy system (in the statistical physics sense), meaning that the dynamics of the SGD is characterized by slow HeavyTailed or PL behavior. See [13, BID119 BID111 10] -and recall that, while this connection is sometimes not explicitly noted, glasses are defined in terms of their slow dynamics. 
Using the glass analogy, however, it can also shown that very large batch sizes can, in fact, be used-if one adjusts the learning rate (potentially by an extraordinary amount). For example, it is argued that, when training with larger batch sizes, one needs to change the learning rate adaptively in order to take effectively more times steps to reach a obtain good generalization performance. Our are consistent with the suggestion that DNNs operate near something like a finite size form of a spin-glass phase transition, again consistent with previous work BID140. This is likewise similar in spirit to how certain spin glass models are Bayes optimal in that their optimal state lies on the Nishimori Line BID152. Indeed, these ideas have been a great motivation in looking for our empirical and formulating our theory. Self-Organization in Natural (and Engineered) Phenomena. Typical implementations of Tikhonov regularization require setting a specific regularization parameter or regularization size scale, whereas Self-Regularization just arises as part of the DNN training process. A different mechanism for this has been described by Sornette, who suggests it can arise more generally in natural Self-Organizing systems, without needing to tune specific exogenous control parameters BID181. Such Self-Organization can manifest itself as Bulk+Spikes BID139, as true (infinite order) Power Laws, or as a finite-sized Heavy-Tailed (or Fat-Tailed) phenomena BID181. This corresponds to the three Heavy-Tailed Universality classes we described. To the best of our knowledge, ours is the first observation and suggestion that a Heavy-Tailed ESD could be a signature/diagnostic for such Self-Organization. That we are able to induce both Bulk+Spikes and Heavy-Tailed Self-Organization by adjusting a single internal control parameter (the batch size) suggests similarities between Self-Organized Criticality (SOC) (a very general phenomena also thought to be "a fundamental property of neural systems" more generally BID109 36] ) and modern DNN training. There are subtle issues that make RMT particularly appropriate for analyzing weight matrices. Taking the right limit. The matrix X is an empirical correlation matrix of the weight layer matrix W l, akin to an estimator of the true covariance of the weights. It is known, however, that this estimator is not good, unless the aspect ratio is very large (i.e., unless Q = N/M 1, in which case X l is very tall and thin). The limit Q → ∞ (e.g., N → ∞ for fixed M) is the case usually considered in mathematical statistics and traditional VC theory. For DNNs, however, M ∼ N, and so Q = O; and so a more appropriate limit to consider is (M → ∞, N → ∞) such that Q is a fixed constant BID140. This is the regime of MP theory, and this is why deviations from the limiting MP distribution provides the most significant insights here. Relation to the SMTOG. In recent work BID140, Martin and Mahoney examined DNNs using the Statistical Mechanics Theory of Generalization (SMTOG) BID174 BID194 BID107 43]. As with RMT, the STMOG also applies in the limit (M → ∞, N → ∞) such that Q = 1 or Q = O, i.e., in the so-called Thermodynamic Limit. Of course, RMT has a long history in theoretical physics, and, in particular, the statistical mechanics of the energy landscape of strongly-correlated disordered systems such as polymers. For this reason, we believe RMT will be very useful to study broader questions about the energy landscape of DNNs. 
Martin and Mahoney also suggested that overtrained DNNs-such as those trained on random labelings-may effectively be in a finite size analogue of the (mean field) spin glass phase of a neural network, as suggested by the SMTOG BID140. We should note that, in this phase, self-averaging may (or may not) break down. The importance of Self-Averaging. Early RMT made use of replica-style analysis from statistical physics BID174 BID194 BID107 43], and this assumes that the statistical ensemble of interest is Self-Averaging. This property implies that the theoretical ESD ρ(λ) is independent of the specific realization of the matrix W, provided W is a typical sample from the true ensemble. In this case, RMT makes statements about the empirical ESD ρ N (λ) of a large random matrix like X, which itself is drawn from this ensemble. To apply RMT, we would like to be able inspect a single realization of W, from one training run or even one epoch of our DNN. If our DNNs are indeed self-averaging, then we may confidently interpret the ESDs of the layer weight matrices of a single training run. As discussed by Martin and Mahoney, this may not be the case in certain situations, such as severe overtraining BID140. From the SMTOG perspective, NN overfitting, 40 which in NN overtraining, 41 is an example of non-self-averaging. When a NN generalizes well, it can presumably be trained, using the same architecture and parameters, on any large random subset of the training data, and it will still perform well on any test/holdout example. In this sense, the trained NN is a typical random draw from the implicit model class. In contrast, an overtrained model is when this random draw from this implicit model class is atypical, in the sense that it describes well the training data, but it describes poorly test data. A model can enter the spin glass phase when there is not enough training data and/or the model is too complicated BID174 BID194 BID107 43]. The spin glass phase is (frequently) non-self-averaging, and this is why overtraining was traditionally explained using spin glass models from statistical mechanics. 42 For this reason, it is not obvious that RMT can be applied to DNNs that are overtrained; we leave this important subtly for future work. Our practical theory opens the door to address very practical questions, including the following.• What are design principles for good models? Our approach might help to incorporate domain knowledge into DNN structure as well as provide finer metrics (beyond simply depth, width, etc.) to evaluate network quality.• What are ways in which adversarial training/learning or training/learning in new environments affects the weight matrices and thus the loss surface? Our approach might help characterize robust versus non-robust and interpretable versus non-interpretable models.• When should training be discontinued? Our approach might help to identify empirical properties of the trained models, e.g., of the weight matrices-without explicitly looking at labels-that will help to determine when to stop training. Finally, one might wonder whether our RMT-based theory is applicable to other types of layers such as convolutional layers and/or other types of data such as natural language data. Initial suggest yes, but the situation is more complex than the relatively simple picture we have described here. These and related directions are promising avenues to explore.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SJeFNoRcFQ
See the abstract. (For the revision, the paper is identical, except for a 59 page Supplementary Material, which can serve as a stand-along technical report version of the paper.)
We introduce an attention mechanism to improve feature extraction for deep active learning (AL) in the semi-supervised setting. The proposed attention mechanism is based on recent methods to visually explain predictions made by DNNs. We apply the proposed explanation-based attention to MNIST and SVHN classification. The conducted experiments show accuracy improvements for the original and class-imbalanced datasets with the same number of training examples and faster long-tail convergence compared to uncertainty-based methods. Deep active learning (AL) minimizes the number of expensive annotations needed to train DNNs by selecting a subset of relevant data points from a large unlabeled dataset BID7. This subset is annotated and added to the training dataset in a single pool of data points or, more often, in an iterative fashion. The goal is to maximize prediction accuracy while minimizing the product of pool size × number of iterations. A proxy for this goal could be the task of matching feature distributions between the validation and the AL-selected training datasets. In density-based AL approaches, data selection is typically performed using a simple L 2 -distance metric BID10. The image retrieval field BID17 has advanced much further in this area. For example, recent state-of-the-art image retrieval systems are based on DNNbased feature extraction BID0 with attention mechanisms BID8. The latter estimates an attention mask to weight importance of the extracted features and it is trained along with the feature extraction. Inspired by this, we employ image retrieval techniques and propose a novel attention mechanism for deep AL. Unlike supervised self-attention in BID8 BID14, our attention mechanism is not trained with the model. It relies on recent methods to generate visual explanations and to attribute feature importance values BID13. We show the effectiveness of such explanation-based attention (EBA) mechanism for AL when combined with multi-scale feature extraction on a number of image classification datasets. We also conduct experiments for distorted class-imbalanced training data which is a more realistic assumption for unlabeled data. AL is a well-studied approach to decrease annotation costs in a traditional machine learning pipelines BID11. Recently, AL has been applied to data-demanding DNN-based systems in semi-supervised or weakly-supervised settings. Though AL is an attractive direction, existing methods struggle to deal with high-dimensional data e.g. images. We believe this is related to the lack of class and instance-level feature importance information as well as the inability to capture spatially-localized features. To overcome these limitations, we are interested in estimating spatiallymultiscale features and using our EBA mechanism to select only the most discriminative features. BID16 proposed to augment the training dataset by labeling the least confident data points and heuristically pseudo-labeling high confidence predictions. We believe the softmax output is not a reliable proxy for the goals of AL i.e. for selecting images using feature distribution matching between validation and train data. Unlike BID16, we use pseudo labels only to estimate EBA vectors and find similarities between discriminative features. BID2 introduced a measure of uncertainty for approximate Bayesian inference that can be estimated using stochastic forward passes through a DNN with dropout layers. 
An acquisition function then selects data points with the highest uncertainty which is measured at the output of softmax using several metrics. Recent work BID1 extended this method by using an ensemble of networks for uncertainty estimation and achieved superior accuracy. formulated feature similarity-based selection as a geometric core-set approach which outperforms greedy k-center clustering. Though their method can complement our approach, we are focusing on the novel feature extraction. For instance, they employed a simple L 2 distance similarity measure for the activations of the last fully-connected layer. The most similar work to ours, by BID15, uses the gradients as a measure of importance for dataset subsampling and analysis. However, our approach formulates the problem as a multi-scale EBA for AL application and goes beyond a less robust single-step gradient attention. Other related works are online importance sampling methods BID9 and the influence functions approach in BID4. Online importance sampling upweights samples within the mini-batch during supervised training using gradient similarity while influence functions analyze data point importance using computationally challenging second-order gradient information. Pool-based AL. Let (X, y) be an input-label pair. There is a validation dataset {(X v i, y v i)} i∈M of size M and a collection of training pairs {(X i, y i)} i∈N of size N for which, initially, only a small random subset or pool of labels indexed by N 1 is known. The validation dataset approximates the distribution of test data. At every bth iteration the AL algorithm selects a pool of P new labels to be annotated and added to existing training pairs which creates a training dataset indexed by N b.A DNN Φ(X, θ) is optimized by minimizing a loss function (DISPLAYFORM0 to model parameters θ. However, the actual task is to minimize validation loss expressed by M DISPLAYFORM1 Therefore, an oracle AL algorithm achieves minimum of validation loss using the smallest b × P product. In this work, we are interested not in finding an oracle acquisition function, but in a method to extract relevant features for such function. We use a low-complexity greedy k-center algorithm to select the data points in the unlabeled training collection which are most similar to the misclassified entries in the validation dataset. Feature descriptors. Let F j i ∈ R C×H×W, where C, H, and W are the number of channels, the height, and the width, respectively be the output of the jth layer of DNN for input image X i . Then, a feature vector or descriptor of length L can be defined as DISPLAYFORM2, where function φ(·) is a conventional average pooling operation from BID0. In a multi-scale case, descriptor is a concatenation of multiple feature vectors DISPLAYFORM3 A descriptor matrix for the validation dataset V d ∈ R L×M and training dataset S d ∈ R L×N can be calculated using forward passes. Practically, descriptors can be compressed for storage efficiency reasons using PCA, quantization, etc. Then, a match kernel BID6, e.g. cosine similarity, can be used to match features in both datasets. Assuming that vectors d i are L 2 -normalized, the cosine-similarity matrix is simply DISPLAYFORM4 Explanation-based attention. Feature maps F i extracted by Φ(X, θ) and pooled by φ(·) contain features that: a) are not class and instance-level discriminative (in other words, not disentangled), b) spatially represent features for a plurality of objects in the input. 
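A minimal sketch of the descriptor pipeline described above, assuming feature maps have already been extracted from the network: spatial average pooling per layer, concatenation across scales, L2-normalization, and a cosine-similarity matrix obtained as the product of descriptor matrices. The helper names and toy shapes are ours, not the authors' code.

```python
import torch
import torch.nn.functional as F

def descriptor(feature_maps):
    """Multi-scale descriptor: spatially average-pool each C x H x W feature
    map, concatenate across layers, and L2-normalize."""
    pooled = [fm.mean(dim=(-2, -1)) for fm in feature_maps]   # each -> (C_j,)
    d = torch.cat(pooled, dim=-1)
    return F.normalize(d, p=2, dim=-1)

# Toy usage with two feature scales for one image (shapes are illustrative).
feats = [torch.randn(32, 14, 14), torch.randn(64, 7, 7)]
d = descriptor(feats)                                # length L = 32 + 64 = 96

# Cosine-similarity matrix between validation (L x M) and train (L x N)
# descriptor matrices is simply V_d^T @ S_d once the columns are L2-normalized.
V_d = torch.stack([descriptor([torch.randn(32, 14, 14), torch.randn(64, 7, 7)])
                   for _ in range(5)], dim=1)        # L x M
S_d = torch.stack([descriptor([torch.randn(32, 14, 14), torch.randn(64, 7, 7)])
                   for _ in range(8)], dim=1)        # L x N
similarity = V_d.T @ S_d                             # M x N cosine similarities
```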
We would like to upweight discriminative features that satisfy a) and b) using an attention mechanism. One approach would be to use self-attention BID14 at the cost of modifying the network architecture and intervening in the training process. Instead, we propose to use EBA that is generated only for feature selection. The EBA mechanism attributes feature importance values w.r.t. the output predictions. Unlike a visual explanation task, which estimates importance heatmaps in the input (image) space, we propose to estimate feature importance tensors A i of the internal DNN representations F i. Attention tensors A i can be efficiently calculated using a series of backpropagation passes. Using one of the backpropagation-based methods called integrated gradients (IG) from BID13, A j i can be estimated as DISPLAYFORM5 where K is the number of steps used to approximate the continuous integral by a linear path. Other forms of EBA are possible: from the simplest saliency method, for which K = 1 BID12, to more advanced methods with randomly sampled input features BID3. Due to the lack of labels y i, we use the common pseudo-labeling strategy y i = 1 arg maxŷi. It is shown schematically in FIG0. Unlike BID16, pseudo-labels are used only to calculate similarity, without additional hyperparameters, rather than to perform a threshold-selected greedy augmentation. The EBA A i can be converted to a multi-scale attention vector using the same processing a i = φ(A i) ∈ R L×1, which, by analogy, forms the validation V a ∈ R L×M and train attention matrices S a ∈ R L×N. The latter processing is implemented in most modern frameworks and, therefore, the complexity to generate A i is only K forward-backward passes.

Summary of the proposed method. A random subset of N 1 training data points is annotated and a DNN Φ(X, θ) is optimized for this subset. Then, the AL algorithm iteratively (b = 2, 3 . . .) performs the following steps: 1) generates descriptor-attention matrix pairs, 2) computes re-weighted descriptors DISPLAYFORM6, where ⊙ is the element-wise product, 3) selects P relevant data points from the remaining subset using the acquisition function arg max i∈N\N b−1 (R(X i), Φ), and 4) retrains Φ(X, θ) using the augmented subset N b.

Our method as well as the uncertainty-based methods from BID2 are applied to MNIST and SVHN classification. We evaluate AL with the original and distorted training data because an unlabeled collection of data points cannot be perfectly curated a priori. Hence, we introduce a class imbalance, defined as the ratio of {0 . . . 4} to {5 . . . 9} digits. The following methods have been employed: random sampling, uncertainty-based selection (uncert), and greedy selection using similarity matching without (top-P:none) and with EBA. The latter is estimated by saliency (top-P:grad) or IG (top-P:ig). We rerun experiments 10 times for MNIST and 5 times for SVHN with all-randomized initial parameters. Mean accuracy and standard deviation are reported. DNN parameters are trained from scratch initially and after each AL iteration. Mini-batch size is chosen by cross-validation.

MNIST. The dataset train/val/test split is 50K/10K/10K. LeNet is used with the following hyperparameters: epochs=50, batch-size=25, lr=0.05, lr-decay=0.1 every 15 epochs; uncert methods and IG EBA use K = 128 passes, and L is 20 for single-scale (before the fc1 layer) and 90 for multi-scale descriptors (all layers are concatenated). Figure 2(a) shows that feature-only matching (top-P:none L20) outperforms random selection by ≈ 1%, while EBA (top-P:ig L90) adds another 1% of accuracy when there is no class imbalance.
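A hedged sketch of the EBA computation: integrated gradients over an internal feature map with respect to the logit of the pseudo-label, followed by the same average pooling used for descriptors and an element-wise re-weighting. The split of the network into a feature map plus a `head`, and all names below, are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

def explanation_based_attention(head, F_map, n_steps=16):
    """Integrated-gradients attribution over an internal feature map F_map
    (C x H x W) w.r.t. the logit of the pseudo-label predicted by `head`
    (which maps the feature map to class logits). Baseline is the zero map."""
    with torch.no_grad():
        pseudo = int(head(F_map.unsqueeze(0)).argmax(dim=1))   # pseudo-label
    grads = torch.zeros_like(F_map)
    for k in range(1, n_steps + 1):
        scaled = ((k / n_steps) * F_map).clone().requires_grad_(True)
        logit = head(scaled.unsqueeze(0))[0, pseudo]
        logit.backward()
        grads += scaled.grad
    A = F_map * grads / n_steps           # attribution tensor A_i, C x H x W
    a = A.mean(dim=(-2, -1))              # pooled attention vector a_i = phi(A_i)
    return A, a

# Stand-in head and feature map; r is the attention re-weighted descriptor.
head = nn.Sequential(nn.Flatten(), nn.Linear(64 * 7 * 7, 10))
F_map = torch.randn(64, 7, 7)
A, a = explanation_based_attention(head, F_map)
r = F_map.mean(dim=(-2, -1)) * a          # element-wise product of descriptor and attention
```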
High class imbalance (Figure 2(c)) increases that gap: up to 20% for feature-only matching and 25% with EBA. The highest accuracy is achieved by multi-scale EBA estimated by IG.

Figure 2: MNIST test dataset accuracy for 3 class imbalance ratios: a) 1 (no imbalance), b) 10 and c) 100. A total of 9 AL iterations (b = 10) are performed, each with pool size P = 250.

Figure 3: SVHN test dataset accuracy for 3 class imbalance ratios: a) 1 (no imbalance), b) 10 and c) 100. A total of 9 AL iterations (b = 10) are performed, each with pool size P = 2,500.

EBA-based methods outperform the best uncertainty-based variation-ratio (uncert:varMC) approach for all class imbalance settings except the last one, where the accuracy of uncert:varMC is higher by less than 1% at b = 4. This might be related to small-scale MNIST and pseudo-label noise for EBA. To study the effects of pseudo-labeling, we also plot true-label configurations (marked by "Abl"). The accuracy gap between EBA using true- and pseudo-labels is small with no class imbalance, but much larger (up to 25%) during the first AL iterations when the class imbalance ratio is 100.

SVHN. The dataset train/validation/test split is 500K/104K/26K. A typical 8-layer CNN is used with the following hyperparameters: epochs=35, batch-size=25, lr=0.1, lr-decay=0.1 every 15 epochs; uncert methods and IG EBA use K = 128, and L is 256 for single-scale (before the fc1 layer) and 384 for two-scale descriptors (+ the layer before conv7). Figure 3 shows that the gap between random selection and the best EBA-based AL method grows from 2% to more than 12% when the unlabeled training collection has more class imbalance. The gap to the full-training-dataset accuracy also increases for the larger-scale SVHN. This results in even faster convergence for the proposed AL relative to random selection. Accuracies of the uncert methods are closer to each other than for MNIST, which may signal their declining effectiveness for large-scale data. The proposed EBA-based methods outperform all uncertainty-based methods for SVHN in the first AL iterations (up to +2.5%) and later arrive at approximately equal results.

We applied recent image-retrieval feature-extraction techniques to deep AL and introduced a novel EBA mechanism to improve feature-similarity matching. First feasibility experiments on the MNIST and SVHN datasets showed the advantages of EBA for improving density-based AL. Rather than performing AL on well-curated training datasets, we also considered more realistic and challenging scenarios with class-imbalanced training collections, where the proposed method emphasized the importance of additional feature supervision.
In future research, EBA could be evaluated with other types of data distortions and biases: within-class bias, adversarial examples, etc. Furthermore, such applications as object detection and image segmentation may benefit more from EBA because multiscale attention can focus on spatially-important features.
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SyxKiVmedV
We introduce an attention mechanism to improve feature extraction for deep active learning (AL) in the semi-supervised setting.
We apply canonical forms of gradient complexes (barcodes) to explore neural networks loss surfaces. We present an algorithm for calculations of the objective function's barcodes of minima. Our experiments confirm two principal observations: the barcodes of minima are located in a small lower part of the range of values of objective function and increase of the neural network's depth brings down the minima's barcodes. This has natural implications for the neural network learning and the ability to generalize. The learning via finding minima of objective functions is the principal strategy underlying majority of learning algorithms. For example, in Neural Network training, the objective function's input is model parameters (weights) and the objective function's output is the loss on training dataset. The graph of the loss function, often called loss surface, typically has complex structure (e.g. see loss surface visualisations by): non-convexity, many local minima, flat regions, steep slopes. These obstacles harm exploration of the loss surface and complicate searching for optimal network weights. The optimization of modern neural networks is based on the gradient descent algorithm. The global topological characteristics of the gradient vector field trajectories are captured by the Morse complex via decomposing the parameter space into cells of uniform flow, see; and references therein. The invariants of Morse complex called "canonical forms"(or barcodes) constitute the fundamental summary of the topology of the gradient vector field flow. The "canonical forms", or barcodes, in this context are decompositions of the change of topology of the sublevel sets of objective function into simple "birth-death" phenomena of topological feautures of different dimensions. The calculation of the barcodes for different functions constitutes the essence of the topological data analysis. The currently available software packages for the calculation of barcodes of functions, also called "sublevel persistence", are GUDHI, Dionysus, PHAT, and TDA package which incorporates all three previous packages B.T.. They are based on the algorithm, described in , see also appendix and e.g. and references therein. This algorithm which has complexity of O(n 3). These packages can currently handle calculations of barcodes for functions defined on a grid of up to 10 6 points, and in dimensions two and three. Thus all current packages have the scalability issues. We describe a new algorithm for computations of the barcodes of functions in lowest degree. Our algorithm works with functions defined on randomly sampled or specifically chosen point clouds. Point cloud based methods are known to work better than grid based methods in optimization related problems . We also use the fact that the definition of the barcode of lowest degree can be reformulated in geometrical terms (see definition 1 in section 2). The previously known algorithms were based on the more algebraic approach as in definition 3. Our algorithm has complexity of O(n log(n)). It was tested in dimensions up to 16 and with number of points of up to 10 8. In this work, we develop a methodology to describe the properties of the loss surface of the neural network via topological features of local minima. We emphasize that the value of the objective function at the minimum can be viewed as only a part of its topological characteristic from the "canonical form" (barcode). 
The second half can be described as the value of objective function at the index-one saddle, which can be naturally associated with each local minimum. The difference between the values of objective function at the associated index-one saddle and at the local minimum is a topological invariant of the minimum. For optimization algorithms this quantity measures, in particular, the obligatory penalty for moving from the given local minimum to a lower minimum. The main contributions of the paper are as follows: Applying the one-to-one correspondence between local minima and 1-saddles to exploration of loss surfaces. For each local minimum p there is canonically defined 1-saddle q (see Section 2). The 1-saddle associated with p can be described as follows. The 1-saddle q is precisely the point where the connected component of the sublevel set Θ f ≤c = {θ ∈ Θ | f (θ) ≤ c} containing the minimum p merges with another connected component of the sublevel set whose minimum is lower. This correspondence between the local minima and the 1-saddles, killing a connected component of Θ f ≤c, is one-to-one. The segment [f (p), f (q)] is then the "canonical form" invariant attached to the minimum p. The set of all such segments is the barcode ("canonical form") of minima invariant of f. It is a robust topological invariant of objective function. It is invariant in particular under the action of homeomorphisms of Θ. Full "canonical form" invariants give a concise summary of the topology of objective function and of the global structure of its gradient flow. Algorithm for calculations of the barcodes (canonical invariants) of minima. We describe an algorithm for calculation of the canonical invariants of minima. The algorithm works with function's values on a a randomly sampled or specifically chosen set of points. The local minima give birth to clusters of points in sublevel sets. The algorithm works by looking at neighbors of each point with lower value of the function and deciding if this point belongs to the existing clusters, gives birth to a new cluster (minimum), or merges two or more clusters (index one saddle). A variant of the algorithm has complexity of O(n log(n)), where n is the cardinality of the set of points. Calculations confirming observations on behaviour of neural networks loss functions barcodes. We calculate the canonical invariants (barcodes) of minima for small fully-connected neural networks of up to three hidden layers and verify that all segments of minima's barcode belong to a small lower part of the total range of loss function's values and that with the increase in the neural network depth the minima's barcodes descend lower. The usefulness of our approach and algorithms is clearly not limited to the optimization problems. Our algorithm permits really fast computation of the canonical form invariants (persistence barcodes) of many functions which were not accessible until now. These sublevel persistence barcodes have been successfully applied in different discipline, to mention just a few: cognitive science (M. K.), cosmology , see e.g. and references therein. Our viewpoint should also have applications in chemistry and material science where 1-saddle points on potential energy landscapes correspond to transition states and minima are stable states corresponding to different materials or protein foldings (see e.g. ,). The article is structured as follows. First we describe three definitions of barcodes of minima. After that our algorithm for their calculation is described. 
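To make the minimum/1-saddle pairing concrete, the following brute-force 1-D illustration (ours, not from the paper's code) processes grid points of a sampled function in order of increasing value and tracks connected components of the sublevel set; the shallower minimum is paired with the saddle at which its component merges into the component of the deeper minimum.

```python
import numpy as np

# f has two local minima; the shallower one should be paired with the
# 1-saddle (the local maximum) separating it from the deeper one.
xs = np.linspace(-2.0, 2.0, 2001)
f = xs**4 - xs**2 + 0.25 * xs

order = np.argsort(f)          # process grid points by increasing function value
comp = {}                      # grid index -> id of its connected component
comp_min = {}                  # component id -> value at the component's minimum
bars = []
for idx in order:
    nbrs = [comp[j] for j in (idx - 1, idx + 1) if j in comp]
    if not nbrs:                        # new connected component: a local minimum
        comp[idx] = idx
        comp_min[idx] = f[idx]
    elif len(set(nbrs)) == 1:           # extends an existing component
        comp[idx] = nbrs[0]
    else:                               # two components merge: a 1-saddle
        a, b = set(nbrs)
        survivor, killed = (a, b) if comp_min[a] <= comp_min[b] else (b, a)
        bars.append((comp_min[killed], f[idx]))     # (birth, death) pair
        for j, c in comp.items():
            if c == killed:
                comp[j] = survivor
        comp[idx] = survivor
print(bars)   # one finite bar: [value at the shallow minimum, value at the saddle]
```

The component of the global minimum is never killed, which corresponds to the essential bar drawn as a half-line in the barcode plots.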
In the last part we give examples of calculations, including the loss functions of simple neural nets. The "canonical form" invariants (barcodes) give a concise summary of topological features of functions (see , and references therein). These invariants describe a decomposition of the change of topology of the function into the finite sum of "birth"-"death" of elementary features. We propose to apply these invariants as a tool for exploring topology of loss surfaces. In this work we concentrate on the part of these canonical form invariants, describing the "birth"-"death" phenomena of connected components of sublevel sets of the function. However it should be stressed that this approach works similarly also for "almost minima", i.e. for the critical points (manifolds) of small indexes, which are often the terminal points of the optimization algorithms in very high dimensions. We give three definitions of the "canonical form" invariants of minima. The values of parameter c at which the topology of sublevel set Let p be one of minima of f. When c increases from f (p)− to f (p)+, a new connected component of the set Θ f ≤c is born (see fig 1a, the connected components S 1, S 2, S 3 of sublevel set are born at the blue, green and red minima correspondingly. If p is a minimum, which is not global, then, when c is increased, the connected component of Θ f ≤c born at p merges with a connected component born at a lower minimum. Let q is the merging point where this happens. The intersection of the set Θ f <f (q) with any small neighborhood of q has two connected components. This is the index-one saddle q associated with p. (a) "Death" of the connected component S3. The connected component S3 of sublevel set merges with connected component S2 at red saddle, red saddle is associated with the red minimum. (b) "Death" of the connected component S4. The connected component S4 of sublevel set merges with connected component S1 at violet saddle, violet saddle is associated with the violet minimum (c) "Death" of the connected component S2. The connected component S2 of sublevel set merges with connected component S1 at green saddle, green saddle is associated with the green minimum. Figure 1: Merging of connected components of sublevel sets at saddles. Note that the green saddle is associated with the green minimum which is separated by another minimum from the green saddle. Also these two subsets of small neighborhood of q belong to two different connected components of the whole set Θ f <f (q). The 1-saddles of this type are called "+" ("plus") or "death" type. The described correspondence between local minima and 1-saddles of this type is one-to-one. In a similar way, the 1-saddle q associated with p can be described also as follows. Proposition 2.1. Consider various paths γ starting from the local minimum p and going to a lower minimum. Let m γ ∈ Θ is the maximum of the restriction of f to such path γ. Then 1-saddle q which is paired with the local minimum p is the minimum over the set of all such paths γ of the maxima m γ: The correspondence in the opposite direction can be described analogously. Let q is a 1-saddle point of such type that the two branches of the set Θ f ≤f (q)− near q belong to two different connected components of Θ f ≤f (q)−. A new connected component of the set Θ f ≤c is formed when c decreases from f (q) + to f (q) −. The restriction of f to each of the two connected components has its global minimum. Proposition 2.2. 
Given a 1-saddle q, the minimum p which is paired with q is the new minimum of f on the connected component of the set Θ f ≤c which is formed when c decreases from f (q) + to f (q) −. The two branches of the set Θ f ≤f (q)− near q can also belong to the same connected components of this set. Then such saddle is of "birth" type and it is naturally paired with index-two saddle of "death" type (see theorem 2.3). Chain complex is the algebraic counterpart of intuitive idea representing complicated geometric objects as a decomposition into simple pieces. It converts such a decomposition into a collection of vector spaces and linear maps. A chain complex (C *, ∂ *) is a sequence of finite-dimensional k-vector spaces and linear operators The j−th homology of the chain complex (C *, ∂ *) is the quotient A chain complex C * is called R−filtered if C * is equipped with an increasing sequence of sub- by a finite set of real numbers s 1 < s 2 <... < s max. Theorem 2.3. Any R−filtered chain complex C * can be brought by a linear transformation preserving the filtration to "canonical form", a canonically defined direct sum of R−filtered complexes of two types: one-dimensional complexes with trivial differential ∂ j (e i) = 0 and two-dimensional complexes with trivial homology ∂ j (e i2) = e i1. The ing canonical form is uniquely determined. The full barcode is a visualization of the decomposition of an R−filtered complexes according to the theorem 2.3. Each filtered 2-dimensional complex with trivial homology ∂ j (e i2) = e i1, e i1 = F ≤s1, e i1, e i2 = F ≤s2 describes a topological feature in dimension j which is "born" at s 1 and which "dies" at s 2. It is represented by segment [s 1, s 2] in the degree-j barcode. And each filtered 1-dimensional complex with trivial differential, ∂ j e i = 0, e i = F ≤r describes a topological feature in dimension j which is "born" at r and never "dies". It is represented by the half-line [r, +∞[ in the degree-j barcode. The proof of the theorem is given in Appendix. Essentially, one can bring an R−filtered complex to the required canonical form by induction, starting from the lowest basis elements of degree one, in such a way that the manipulation of degree j basis elements does not destroy the canonical form in degree j − 1 and in lower filtration pieces in degree j. Let f : Θ → R is smooth, or more generally, piece-wise smooth continuous function such that the sublevel sets Θ f ≤c = {θ ∈ Θ | f (θ) ≤ c} are compact. One filtered complex naturally associated with function f and such that the subcomplexes F s C * compute the homology of sublevel sets Θ f ≤s is the gradient (Morse) complex, see e.g.; and references therein. Without loss of generality the function f can be assumed smooth here, otherwise one can always replace f by its smoothing. By adding a small perturbation such as a regularization term we can also assume that critical points of f are non-degenerate. The generators of the gradient (Morse) complex correspond to the critical points of f. The differential is defined by counting gradient trajectories between critical points when their number is finite. The canonical form of the gradient (Morse) complex describes a decomposition of the gradient flow associated with f into standard simple pieces. Let p be a minimum, which is not a global minimum. Then the generator corresponding to p represents trivial homology class in the canonical form, since the homology class of its connected component is already represented by the global minimum. 
Then p is the lower generator of a two-dimensional complex with trivial homology in the canonical form. I.e. p is paired with an index-one saddle q in the canonical form. The segment [f (p), f (q)] is then the canonical invariant (barcode) corresponding to the minimum p. The full canonical form of the gradient (Morse) complex of all indexes is a summary of global structure of the objective function's gradient flow. The total number of different topological features in sublevel sets Θ f ≤c of the objective function can be read immediately from the barcode. Namely the number of intersections of horizontal line at level c with segments in the index j barcode gives the number of independent topological features of dimension j in Θ f ≤c. The description of the barcode of minima on manifold Θ with nonempty boundary ∂Θ is modified in the following way. A connected component can be also born at a local minimum of restriction of f to the boundary f | ∂Θ, if gradf is pointed inside manifold Θ. The merging of two connected components can also happen at an index-one critical point of f | ∂Θ, if gradf is pointed inside Θ. In this section we describe the developed algorithm for calculation of the canonical form invariants of local minima. The computation exploits the first definition of barcodes (see Section 2), which is based on the evolution on the connected components of the sublevel sets. To analyse the surface of the given function f: Θ → R, we first build its approximation by finite graph-based construction. To do this, we consider a random subset of points {θ 1, . . ., θ N} ∈ Θ and build a graph with these points as vertices. The edges connect close points. Thus, for every vertex θ n, by comparing f (θ n) with f (θ n) for neighbors θ n of θ n, we are able to understand the local topology near the point θ n. At the same time, connected componenets of sublevel sets Θ f ≤c = {θ ∈ Θ | f (θ) ≤ c} will naturally correspond to connected components of the subgraph on point θ n, such that f (θ n) ≤ c. Two technical details here are the choice of points θ n and the definition of closeness, i.e. when to connect points by an edge. In our experiments, we sample points uniformly from some rectangular box of interest. To add edges, we compute the oriented k-Nearest Neighbor Graph on the given points and then drop the orientation of edges. Thus, every node in the obtained graph has a degree in [k, 2k]. In all our experiments we use k = 2D, where D is the dimension of f's input. Next we describe our algorithm, which computes barcodes of a function from its graph-based approximation described above. The key idea is to monitor the evolution of the connected components of the sublevel sets of the graph, i.e. sets Θ c = {θ n | f (θ n) ≤ c} for increasing c. For simplicity we assume that points θ are ordered w.r.t. the value of function f, i.e. for n < n we have f (θ n) < f (θ n). In this case we are interested in the evolution of connected components throughout the process sequential adding of vertices θ 1, θ 2,..., θ N to graph, starting from an empty graph. We denote the subgraph on vertices θ 1,..., θ n by Θ n. When we add new vertex θ n+1 to θ n, there are three possibilities for connected componenets to evolve: 1. Vertex θ n+1 has zero degree in Θ n+1. This means that θ n+1 is a local minimum of f and it forms a new connected component in the sublevel set. 2. All the neighbors of θ n+1 in Θ n+1 belong to one connected component in Θ n. 3. 
All the neighbors of θ n+1 in Θ n+1 belong to ≥ 2 connected components s 1, s 2,..., s K ⊂ Θ n. Thus, all these components will form a single connected component in Θ n+1. Algorithm 1: Barcodes of minima computation for function on a graph. Input: Connected undirected graph G = (V, E); function f on graph vertices. Output: Barcodes: a list of "birth"-"death" pairs. In the third case, according to definition 1 of Section 2 the point θ n+1 is a 1-saddle point. Thus, one of the components s k swallows the rest. This is the component which has the lowest minimal value. For other components, 2 this gives their barcodes: for s k the birth-death pair is min We summarize the procedure in the following algorithm 1. Note that we assume that the input graph is connected (otherwise the algorithm can be run on separate connected components). In the practical implementation of the algorithm, we precompute the values of function f at all the vertices of G. Besides that, we use the disjoint set data structure to store and merge connected components during the process. We also keep and update the global minima in each component. We did not include these tricks into the algorithm's pseuso-code in order to keep it simple. The ing complexity of the algorithm is O(N log N) in the number of points. Here it is important to note that the procedure of graph creation may be itself time-consuming. In our case, the most time consuming operation is nearest neighbor search. In our code, we used efficient HNSW Algorithm for aproximate NN search by. In this section we apply our algorithm to describing the surfaces of functions. In Subsection 4.1 we apply the algorithm to toy visual examples. In Subsection 4.2 we apply the algorithm to analyse the loss surfaces of small neural networks. In this subsection we demonstrate the application of the algorithm to simple toy functions f: R D → R. For D ∈ {1, 2} we consider three following functions: 1. Polynomial of a single variable of degree 4 with 2 local minima (see Fig. 2a): 2. Camel function with 3 humps, i.e. 3 local minima (see Fig. 2b): 3. Camel function with 6 humps, i.e. 6 local minima (see Fig. 2c): Function plots with their corresponding barcodes of minima are given in Figure 2. The barcode of the global minimum is represented by the dashed half-line which goes to infinity. In this section we compute and analyse barcodes of small fully connected neural networks with up to three hidden layers. For several architectures of the neural networks many on the loss surface and its local minima are known (see e.g. and references therein). Different geometrical and topological properties of loss surfaces were studied in;;;. There is no ground truth on how should the best loss surface of a neural network looks like. Nevertheless, there exists many common opinions on this topic. First of all, from practical optimization point of view, the desired local (or global) minima should be easily reached via basic training methods such as Stochastic Gradient Descent, see. Usually this requires more-or-less stable slopes of the surface to prevent instabilities such as gradient explosions or vanishing gradients. Secondly, the value of obtained minimum is typically desired to be close to global, i.e. attain smallest training error. Thirdly, from the generalization point of view, such minima are required to provide small loss on the testing set. 
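A compact sketch of Algorithm 1 with the disjoint-set bookkeeping described above, on a symmetrized k-nearest-neighbour graph (built here with an exact KD-tree rather than the approximate HNSW search used in the paper's implementation); function and variable names are ours.

```python
import numpy as np
from scipy.spatial import cKDTree

def barcodes_of_minima(points, values, k):
    """Birth-death pairs of minima for a function sampled at `points` (N x D)
    with values `values` (N,), on an undirected symmetrized kNN graph."""
    n = len(values)
    order = np.argsort(values)
    _, nn = cKDTree(points).query(points, k=k + 1)   # first neighbour is the point itself
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in nn[i, 1:]:
            adj[i].add(j); adj[j].add(i)

    parent = list(range(n))            # disjoint-set forest over processed points
    comp_min = {}                      # root -> value at the component's minimum
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]            # path halving
            x = parent[x]
        return x

    processed = np.zeros(n, dtype=bool)
    bars = []
    for idx in order:
        roots = {find(j) for j in adj[idx] if processed[j]}
        if not roots:                                # case 1: new local minimum
            comp_min[idx] = values[idx]
        else:                                        # cases 2-3: attach / merge
            roots = sorted(roots, key=lambda r: comp_min[r])
            survivor = roots[0]
            for r in roots[1:]:                      # each merged component dies here
                bars.append((comp_min[r], values[idx]))
                parent[r] = survivor
            parent[idx] = survivor
        processed[idx] = True
    return bars

# Usage sketch on the three-hump camel function sampled on a random point cloud
# (k = 2D as in the text); expect two prominent finite bars, possibly plus tiny
# spurious bars coming from the finite sampling.
rng = np.random.default_rng(0)
pts = rng.uniform(-2, 2, size=(20000, 2))
vals = (2 * pts[:, 0]**2 - 1.05 * pts[:, 0]**4 + pts[:, 0]**6 / 6
        + pts[:, 0] * pts[:, 1] + pts[:, 1]**2)
print(barcodes_of_minima(pts, vals, k=4))
```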
Although it is generally assumed that a good local optimum is one that is flat, some recent developments provide contrary arguments and examples, e.g. sharp minima that generalize well. Besides the optimization of the weights for a given architecture, neural network training also involves a choice of the network architecture, as well as of the loss function to be used for training. In fact, it is the choice of the architecture and the loss function that determines the shape of the loss surface. Thus, proper selection of the network's architecture may simplify the loss surface and lead to potential improvements in the weight optimization procedure. We have analyzed very tiny neural networks. However, our method permits full exploration of the loss surface, as opposed to stochastic exploration of higher-dimensional loss surfaces. Let us emphasize that, even from a practical point of view, it is important to first understand the behavior of barcodes in the simplest examples, where all hyper-parameter optimization schemes can be easily turned off. For every analysed neural network, the objective function is its mean squared error for predicting a (randomly selected) function g: [−π, π] → R, plus l2-regularization. The error is computed for predictions on uniformly distributed inputs x ∈ {−π + (2π/100)k | k = 0, 1, . . ., 100}. The neural networks considered were fully connected, with one hidden layer of 2, 3 and 4 neurons, two hidden layers of 2x2, 3x2 and 3x3 neurons, and three hidden layers of 2x2x2 and 3x2x2 neurons. We have calculated the barcodes of the loss functions on hyper-cubical sets Θ, which were chosen based on the typical range of parameters of minima. The results are as shown in Figure 3. We summarize our findings in two main observations: all barcodes of minima sit in a small lower part of the total range of the loss function's values, and with increasing depth of the neural network the barcodes descend lower.

In this work we have introduced a methodology for analysing the plots of functions, in particular, loss surfaces of neural networks. The methodology is based on computing topological invariants called canonical forms, or barcodes. To compute barcodes we used a graph-based construction which approximates the function plot. Then we apply the algorithm we developed to compute the barcodes of minima on the graph. Our experimental results from computing barcodes for small neural networks lead to two principal observations. First, all barcodes sit in a tiny lower part of the total function's range. Secondly, with increase of the depth of the neural network, the barcodes descend lower. From the practical point of view, this means that gradient descent optimization cannot get stuck in high local minima, and it is also not difficult to get from one local minimum to another (with smaller value) during learning. The method we developed has several further research directions. Although we tested the method on small neural networks, it is possible to apply it to large-scale modern neural networks such as convolutional networks (e.g. ResNet, VGG, AlexNet, U-Net) for image-processing tasks. However, in this case the graph-based approximation we use requires a wise choice of representative graph vertices, which is hard in high-dimensional spaces (densely filling a region with points is computationally intractable). Another direction is to study the connections between the barcode of local minima and the generalization properties of a given minimum and of the neural network. There are clearly also connections, deserving further investigation, between the barcodes of minima and results concerning the rate of convergence during the learning of neural networks.
S1gwC1StwS
We apply canonical forms of gradient complexes (barcodes) to explore neural networks loss surfaces.
New types of compute hardware in development and entering the market hold the promise of revolutionizing deep learning in a manner as profound as GPUs. However, existing software frameworks and training algorithms for deep learning have yet to evolve to fully leverage the capability of this new wave of silicon. In particular, models that exploit structured input via complex and instance-dependent control flow are difficult to accelerate using existing algorithms and hardware that typically rely on minibatching. We present an asynchronous model-parallel (AMP) training algorithm that is specifically motivated by training on networks of interconnected devices. Through an implementation on multi-core CPUs, we show that AMP training converges to the same accuracy as conventional synchronous training algorithms in a similar number of epochs, but utilizes the available hardware more efficiently even for small minibatch sizes, resulting in shorter overall training times. Our framework opens the door to scaling up a new class of deep learning models that cannot be efficiently trained today.

An emerging category of neural networks shares the common trait of reacting in dynamic and unique ways to properties of their input. Networks like tree-structured recursive neural networks BID35 BID36 and graph neural networks (GNNs) BID31 BID20 BID12 take structured data types as input and execute a computation that depends on the input structure. This defies the modern GPU-driven paradigm of minibatch-based processing, and we refer to this new class of models with dynamic control flow as dynamic neural networks. The development of dynamic neural network frameworks, Chainer BID37, DyNet BID24, and PyTorch (PyTorch core team), speaks to the importance of this class of models and highlights the challenge of how to make it easy for users to describe them. Yet there is another big challenge: how can we train these models efficiently?

Managing minibatches to keep GPUs fully utilized is typically considered a user's responsibility in these dynamic frameworks (with the exception of DyNet's autobatching feature; see Sec. 7). This means that users have to think about how to change their data feeding pipeline, or even the model itself, to run efficiently on GPUs, rather than spending time innovating to improve model accuracy. What if we had a hypothetical device with low memory overhead that allows perfect scaling without batching, i.e., processing 1 item is simply 100x faster than processing 100 items? Recent work on FPGAs and other specialized hardware BID10 BID4 BID17 for deep learning encourages us to investigate this question. Our premises are: 1. No batching is required for efficient processing. 2. Each device may not have enough memory to hold the entire model (a realistic constraint for current memory systems that approach the perfect scaling we require). Based on these premises, we propose an asynchronous model-parallel (AMP) training algorithm. Our idea is illustrated in FIG0. We need model parallelism because each device may be too small to hold the entire model (premise 2). However, if we perform synchronous parameter updates after the full forward and backward propagations, the only way to increase device utilization is by pipelining multiple instances into the system.
Pipeline parallelism with synchronous updates is at odds with convergence speed due to a decreased parameter update frequency; compare FIG0 (a) and (b). To overcome this problem, we propose asynchronous parameter updates that occur without global synchronization whenever a pre-specified number of gradients have been accumulated; see Fig. 1 (c). With this design we aim for both high device utilization and a high update frequency. In this setting, however, model parameters may be updated between the forward and the backward computation of an instance, introducing gradient "staleness". Despite staleness, we show that AMP training can converge quickly with good hardware utilization. Specifically, our contributions are:

• We present the asynchronous model-parallel training algorithm for efficient distributed training of dynamic networks.

• We present an intermediate representation (IR) with explicit constructs for branching and joining control flow that supports AMP training. Unlike previous work that considers static computation graphs for static control flow (e.g., Caffe) and dynamic computation graphs for dynamic control flow (e.g., Chainer), our IR encodes a static computation graph that executes dynamic control flow. This makes training easy to distribute and parallelize.

• We show that our IR can readily encode replicas, a form of data parallelism (see Sec. 5). In addition, our IR includes operators for data aggregation, which recover a form of batching, enabling our methods to be applied even on hardware where batching is beneficial.

• We implement AMP training on a multi-core CPU and empirically demonstrate that AMP training converges to similar accuracies as synchronous algorithms on a variety of dynamic neural network models, including Tree RNNs and gated graph neural networks (GGNNs).

In summary, our work demonstrates the benefits of AMP training and gives a novel way to design and deploy neural network libraries with dynamic control flow. In addition, we use our implementation to estimate the performance on a hypothetical device satisfying premises 1 and 2, with 1 TFLOPS compute capability (see Appendix C). Together, these contributions open up new ways to scale up dynamic networks on interconnected compute devices. Below we highlight three models with dynamic control flow that will be studied in depth in this paper.

Variable-length RNNs iterate over the tokens of variable-length sequences. Pseudo-code for a simple vanilla RNN is given in Figure 2. The linear (fully connected) layer and rectified linear unit (ReLU) can be substituted with a more sophisticated unit such as a gated recurrent unit BID8. Though each instance has a different length, it is possible to add padding to enable batching; however, this may lead to significant redundant compute due to variability in sequence lengths. (Figure 2 also lists the loop-counter node of the IR, an Isu node with state-update function inc(s) = s{t = s.t+1} and a matching dec that undoes the increment.)

Tree-structured neural networks are powerful models used for parsing of natural language and images, semantic representation, and sentiment analysis BID34 BID3 BID35 BID36. They require evaluation of (potentially multiple) trees with shared parameters but different topology for each instance. Each tree structure is instance-specific, and batching requires nontrivial planning BID21. A simple form of tree neural network performs a bottom-up traversal of the instance, starting from an embedding of the leaves. At each level the values from the child nodes are concatenated and sent through a specialized unit (e.g., an LSTM). The result is then propagated further up the tree.
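Since Figure 2 is not reproduced here, the following is our own minimal sketch of the kind of vanilla RNN the text describes: an embedding lookup and a linear-plus-ReLU cell applied once per token of a variable-length sequence, followed by a final linear layer. Dimensions and parameter names are illustrative.

```python
import numpy as np

def vanilla_rnn_forward(tokens, params):
    """Classify one variable-length token sequence; no padding or batching required."""
    W_emb, W_h, b_h, W_out, b_out = params
    h = np.zeros(W_h.shape[0])                                   # hidden state of dimension d_h
    for t in tokens:                                             # instance-dependent loop length
        x = W_emb[t]                                             # embedding lookup for token id t
        h = np.maximum(0.0, W_h @ np.concatenate([h, x]) + b_h)  # linear layer + ReLU
    return W_out @ h + b_out                                     # final linear (classification) layer

# Illustrative shapes only.
d_h, d_emb, vocab, n_classes = 128, 32, 10, 10
rng = np.random.default_rng(0)
params = (rng.normal(0, 0.1, (vocab, d_emb)),
          rng.normal(0, 0.1, (d_h, d_h + d_emb)), np.zeros(d_h),
          rng.normal(0, 0.1, (n_classes, d_h)), np.zeros(n_classes))
print(vanilla_rnn_forward([3, 1, 4, 1, 5], params).shape)        # -> (10,)
```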
Backpropagation over the tree structure is known as backpropagation through structure BID13. BID31 BID20 BID12 combine both the temporal recurrence and recurrence over the structure. GNNs can be seen as performing aggregation/distribution operations over a general graph structure with shared parameters. Apart from the models above, there exist many recently proposed models with flexible control flow (e.g. hierarchical memory networks BID5, neural programmer interpreters BID29, adaptive computation networks BID14 BID11, and sparsely-gated mixture of experts BID33, to which our framework can be applied. In AMP training, each node of a computation graph (including control flow nodes -see next section) is associated with a worker, which is an abstraction of a compute device. Neural network training and inference is carried out by message passing among workers following algorithm 1. All workers run in parallel without synchronization. Each message contains a payload (float or int tensor), as well as a state that includes the IDs of the source and sink nodes and a label indicating the type of message (forward, backward, or update). The state is used to keep track of algorithm and control flow information. For example, in a variable-length RNN the state also contains the instance identifier, the current position in the sequence, and the total sequence length for the instance. More generally, if the neural model use (possibly nested) loops, then the state for the messages that arrive to and are produced from nodes that logically belong in loop bodies will contain sets of loop counters that together with the instance id uniquely identify the messages throughout the course of the computation. When a worker receives a message labeled as forward (or backward) then it performs the operation of the node indicated by the sink node ID on the supplied payload. This produces one or more outgoing messages that are then enqueued into the queues of the workers hosting the next sink nodes in the computation graph. The final loss layer initiates backward propagation. If the message type is update, the worker will carry out weight updates on the sink node using gradients accumulated in the appropriate slot in the worker's local memory. Since both the operation weights and weight gradients can be stored locally on the worker then workers only need to communicate activations and activation gradients, which are typically an order of magnitude smaller than the weights. The update message is typically sent from the sink node to itself as part of the backward process but it can also be sent from a controller node to simulate synchronous pipelined training. There are two important details that are not fully spelled out in Algorithm 1. First, since the messages arrive asynchronously (and possibly out of order), any operation that has more than one parent nodes need to store the payload into its local cache until all the parents send the corresponding payloads. Thus output message(s) can only be produced when all the payloads become available. The cache needs to be able to distinguish payloads received from different parents and payloads with different instance ID, and different counters (all encoded in the message states). The same is true in the backward pass for a node with multiple child nodes. Second, op * denotes the adjoint operation of op and takes the backward message msg and potentially the forward message fwd msg stored in the cache. 
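A single-process sketch of the worker loop of Algorithm 1 may help make the message-passing semantics concrete. The Message fields and the forward/backward/apply_update methods assumed on node objects are our own naming, not the paper's API.

```python
import queue
from dataclasses import dataclass, field

@dataclass
class Message:
    kind: str                                   # "forward", "backward" or "update"
    src: int                                    # node that produced the message
    sink: int                                   # node that should process it
    state: dict = field(default_factory=dict)   # instance id, loop counters, ...
    payload: object = None                      # activation or gradient tensor

def worker_loop(my_queue, my_nodes, node_to_queue, stop):
    """One worker: owns a subset of IR nodes and serves its queue until `stop` is set."""
    while not stop.is_set():
        try:
            msg = my_queue.get(timeout=0.01)
        except queue.Empty:
            continue
        node = my_nodes[msg.sink]
        if msg.kind == "forward":
            out = node.forward(msg)             # may cache activations keyed on msg.state
        elif msg.kind == "backward":
            out = node.backward(msg)            # adjoint op; accumulates local weight gradients
        else:                                   # "update": apply locally accumulated gradients
            node.apply_update()
            out = []
        for m in out:                           # route outputs to the workers hosting the sinks
            node_to_queue[m.sink].put(m)
```

In a real deployment each worker would be a hardware thread or device, and node_to_queue would hide the physical interconnect.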
For a nonlinear activation node (e.g., ReLU), the node will not change the state of the message in the forward pass. Thus in the backward pass, the adjoint operation will just multiply the partial derivative of the activation function to the payload of the received backward message keeping the state unchanged. By contrast, an operation that only changes the state of the message in the forward pass (e.g., increment the loop counter) will reverse the change in the backward pass leaving the payload unchanged. In the experiments we vary two hyper parameters to control the effect of asynchrony: min update interval: determines the minimum number of gradients that a parameterized operation needs to accumulate before it can update its parameters (using the update message). The staleness of a gradient can be measured by the number of updates between the forward and backward computation that produces the gradient. Small min update interval may increase gradient staleness. On the other hand, large min update interval can reduce the variance of the gradient but can in very infrequent updates and also slow down convergence. max active keys: controls the maximum number of active instances that are in-flight at any point in time. By setting max active keys = 1 we restrict to single-instance processing, typically equivalent to synchronous training. More in-flight messages generally increase hardware utilization, but may also increase gradient staleness. Section 6 demonstrates the effects of these parameters in a multi-core CPU runtime. In our asynchronous execution model, the optimal assignment of N neural network computation graph nodes to W workers (referred to as affinitization) is in general a non-trivial scheduling problem. We investigated several heuristics for assigning affinities (for example k-hop coloring BID15 to ensure subsequent heavy operations were assigned different workers). However, we find that the following procedure achieves high performance in practice in our multi-core CPU implementation, and is adopted throughout our experiments for simplicity. We first partition the nodes into H'heavy' operations (namely matrix multiplies) and (N − H)'light' operations, and then balance the heavy nodes across the workers by affinitizing the h th with the (h mod W) th worker. Finally, the light operations are affinitized randomly among the rest of the workers. Note that in scenarios where communication is over a physical network affinitization will become more critical for high performance. Overview Computation graphs are expressed using a static intermediate representation (IR) that can be a compilation target for high-level libraries (e.g. TensorFlow or our own Python and C++ frontends), and can itself admit multiple backends (e.g. the multi-core CPU runtime in this paper, or a network of accelerators). Static means that the IR graph is instance-independent. Nevertheless, it can execute dynamic and instance-dependent control flow decisions, in a forward and backward manner, by storing instance-and iteration-dependent information as the computation evolves. Each IR node comes with a forward and a backward semantics. A model is specified by (i) an IR graph, and (ii) a specialized controller loop that pumps instances and other data (e.g. initial hidden states or labels), and is responsible for throttling asynchrony. In the rest of this section we discuss the most important IR nodes along with their operational semantics, and show how they are used in the example models. 
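The affinitization heuristic described above is simple enough to state in a few lines; is_heavy (true for matrix multiplies) is an assumed predicate, and the random placement of the light operations follows the description in the text.

```python
import random

def affinitize(node_ids, is_heavy, num_workers, seed=0):
    """Assign IR nodes to workers: round-robin the heavy ops, scatter the light ones."""
    rng = random.Random(seed)
    heavy = [n for n in node_ids if is_heavy(n)]
    light = [n for n in node_ids if not is_heavy(n)]
    affinity = {n: h % num_workers for h, n in enumerate(heavy)}   # h-th heavy op -> worker h mod W
    for n in light:
        affinity[n] = rng.randrange(num_workers)                   # light ops placed randomly
    return affinity
```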
Payload transformations Parameterized payload transform (PPT) nodes can be used to encode, for instance, fully connected layers. They apply a transform in the forward pass, but also record the activation in order to use it to compute gradients in the backward pass. An activation is recorded by keying on the state of the message, allowing thus to process forward and backwards messages completely asynchronously, and -in the extreme case -out of order. A PPT node requires specification of the forward and the backward transformation. It may decide to independently apply accumulated gradients to update its parameters. For transformations that do not involve parameters (e.g. ReLUs) our IR offers a simpler non-parameterized payload transform. Loops, state, and control flow A condition node (Cond f) is parameterized by a function f that queries the state (but not the payload) of the incoming message and, based on the response, routes the input to one of the successor nodes. A join node (Phi) propagates the messages it receives from each of its ancestor nodes but records the origin (using the state of the message as the key) so that in the backward pass it can backpropagate them to the correct origin. An invertible state update node (Isu f f −1) is parameterized by two functions f and f −1 that operate on the state of a message, and satisfy f −1 (f (x)) = x. Typically these are loop counter update functions. Figure 2 shows how to encode an RNN. The loop at the heart of the RNN is implemented with Cond, Phi and Isu nodes. The controller pumps sequences in a lookup table (just another PPT layer), and our Ungroup node (to be described in the next section) generates a stream of tensors each corresponding to a single token, tagged with the current time-step (loop counter). For each forward message, the Isu node increments the time-step, and the conditional node tests whether the end of the sequence has been reached. Depending on the answer it either propagates the hidden state back to Phi, or pushes the hidden state to the final linear and loss layers. In backward mode, messages pass through the Isu (which decrements the time-step), and reach the Phi node. The Phi node will (based on information from the forward phase) either back-propagate to the Cond node, or to the controller to terminate. Hence the loop is executed in both the forward and backward direction. Aggregation and disaggregation Our IR offers several constructs for aggregation and disagreggation; for example RNN requires us to concatenate (Concat) hidden states and embeddings with matching timesteps and instance ids (states). We offer a construct for broadcasting (Bcast) a message to multiple successors. We offer an ungrouping construct (Ungroup) that ungroups a matrix and emits all ing messages tagged with an extra user-provided increasing loop counter. This allows us to insert the stream of token embeddings in the middle of the RNN loop in Figure 2. In backward mode Ungroup groups back all the incoming gradients. A simpler variant of Ungroup is Replicate, which replicates a message with an extra loop counter in the state. In backwards mode Replicate sums up all incoming gradients that correspond to the state without the extra loop counter. Figure 4(a) describes a GNN that combines aggregation on the structure of a graph instance with an outer loop. The controller pumps data that contain the feature embeddings of all nodes of an input graph. In addition it pumps in a map specifiying graph topology. 
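The control-flow nodes can be sketched as small classes that only touch the message state and routing information. This reuses the Message convention from the worker-loop sketch above; the constructor arguments, the key function, and the omission of Cond's backward pass are our own assumptions about how successors and caches might be wired.

```python
def key(state):
    """Identify a message by its instance id and loop counters."""
    return tuple(sorted(state.items()))

class Isu:
    """Invertible state update: applies f going forward and f_inv going backward."""
    def __init__(self, f, f_inv, next_node, prev_node):
        self.f, self.f_inv = f, f_inv
        self.next_node, self.prev_node = next_node, prev_node
    def forward(self, msg):
        msg.state = self.f(msg.state)          # e.g. increment the loop counter t
        msg.sink = self.next_node
        return [msg]
    def backward(self, msg):
        msg.state = self.f_inv(msg.state)      # undo the update; payload is unchanged
        msg.sink = self.prev_node
        return [msg]

class Cond:
    """Routes a forward message to one of two successors based on its state only.
    (The backward pass, which simply forwards gradients to the predecessor, is omitted.)"""
    def __init__(self, predicate, if_true, if_false):
        self.predicate, self.if_true, self.if_false = predicate, if_true, if_false
    def forward(self, msg):
        msg.sink = self.if_true if self.predicate(msg.state) else self.if_false
        return [msg]

class Phi:
    """Join node: remembers where each forward message came from so that the
    corresponding backward message is routed to the correct origin."""
    def __init__(self, next_node):
        self.next_node = next_node
        self.origin = {}
    def forward(self, msg):
        self.origin[key(msg.state)] = msg.src
        msg.sink = self.next_node
        return [msg]
    def backward(self, msg):
        msg.sink = self.origin.pop(key(msg.state))
        return [msg]

# The loop counter of the RNN in Figure 2 would be such an Isu node, e.g.:
# Isu(lambda s: {**s, "t": s["t"] + 1}, lambda s: {**s, "t": s["t"] - 1}, next_node, prev_node)
```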
The Distribute node uses that information along with the graph itself to create sub-matrices (here each corresponding to edge types) and pass them through linear (fully connected) layers. The Collect node collects and re-groups the based on graph topology. These nodes correspond to a form of dynamic partition and merging. Schematically the Distribute behaviour is given in FIG1. In backward mode, based on the control information received during the forward pass (gray line) re-groups the gradients and sums together those that correspond to the same index. The Collect operator is essentially symmetric. Pipelined model parallelism can often be augmented with forms of data parallelism. Consider the RNN in Fig. 2. The only heavy operation (Linear-1) in the body of the loop will act as a bottleneck for computation. One solution is to split the linear layer into smaller tiles and compute them in parallel. This is expressible in our IR but the linear operation needs to be large enough to benefit from tiling in this way. Another approach is to replicate the linear layer in full. This requires only minimal new machinery -we can replicate the linear layer and place the replicas inside Cond and Phi nodes as in FIG2 (b). Different instances or messages from the same instance but with different position in the sequence can be processed in an (pipeline-)parallel fashion by being sent to one of the replicas chosen by a random or deterministic function of the message state. To enable parameters to be shared among the replicas, we use infrequent end-of-epoch replica synchronization (averaging) that incurs negligible communication cost. We also tried more elaborate message-passing protocols for group synchronization, but found that infrequent global synchronization was sufficient for fast convergence. We evaluate AMPNet using the dynamic models introduced in Section 2. For completeness, we additionally consider a multi-layer perceptron (MLP) as an example of a static, batched network that AMPNet is not specifically designed to tackle. Brief details of the models and data sets in the experiments are presented below, and further details are given in Appendix B. Results Our asynchronous runtime is motivated by the promise of emerging hardware (e.g. FPGA accelerators) that fulfill the premises in section 1 and are well suited to dynamic neural network execution. Here we are primarily interested in how the performance of our runtime improves as we increase the degree of asynchrony (by varying max active keys) while keeping other factors fixed. The aim is to answer the question of whether AMP training is a promising direction for novel distributed hardware that deviates from the CPU/GPU batching paradigm. To answer this question using resources available today we run the AMPNet training using a multi-core CPU runtime where each worker is a hardware thread (see Appendix A). Additionally we forecast the performance on a hypothetical 1TFLOPS device satisfying our premises by replacing all computation nodes by configurable sleep nodes. This allows us to estimate the performance on a new hardware keeping the control flow decisions dynamic. See Appendix C.It is also interesting to compare how the raw CPU performance of the AMP runtime compares with existing frameworks (TensorFlow, TensorFlow Fold and DyNet) to see that our implementation is already competitive with state of the art methods even on CPUs that do not conform to our target hardware model. We provide additional analysis for each experiment below. 
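A sketch of the two ingredients of the replica mechanism, again with our own naming: a deterministic routing function over the message state and the infrequent end-of-epoch synchronization by parameter averaging.

```python
import numpy as np

def route_to_replica(state, num_replicas):
    """Deterministically pick a replica from the message state (instance id and position)."""
    return hash((state.get("instance"), state.get("t"))) % num_replicas

def sync_replicas(replica_params):
    """End-of-epoch synchronization: average each parameter across replicas, in place."""
    for group in zip(*replica_params):          # one group per parameter tensor
        avg = np.mean(np.stack(group), axis=0)
        for p in group:
            p[...] = avg                        # every replica adopts the shared average
```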
On MNIST, TAB3 shows 3x speedup from synchrony (max active keys = 1) to asynchrony (max active keys = 4) in terms of throughput. This is almost ideal as the first three linear layers are the heaviest operations. The number of epochs to reach the target validation accuracy increases from 3 to 4 but the overall speedup in terms of the wall clock time is 2.2x. We have also compared AMP training against pipeline parallel training (FIG0). FIG3 (a) shows that while AMP training achieves 3x throughput gain already with max active keys = 4, pipeline parallelism can only achieve 2x (in fact from FIG0, 3m/(m + 2) is the best case for max active keys = m) and higher throughput at max active keys = 8 is achieved at the cost of the sharp increase in the number of epochs to convergence. The list reduction dataset demonstrates the power of replicas. As there is only one heavy operation (Linear-1, Figure 2), the speedup from asynchrony is mild (1.3x). However we get 2.5x and 3.5x speedup for 2 and 4 replicas, respectively, which is nearly ideal. Again, the # of epochs to convergence is not affected by increasing max active keys. The slowdown in convergence for 4 replicas is due to the increased effective minibatch size -also commonly observed in data parallel training. Next the sentiment tree-RNN dataset shows that our runtime is competitive without batching to TensorFlow Fold BID21 using dynamic batching of batch size 100. It is worth mentioning that our runtime allows us to specify different min update interval parameter for each parameterized operation. We set this parameter to 1000 for the embedding layer, which is initialized by Glove vectors, and 50 for all other layers. This reduced gradient staleness in the embedding layer. The QM9 dataset demonstrates that increasing asynchrony helps on real-world tasks with complex control flow, and our method outperforms an efficient TensorFlow implementation on CPUs. Finally, we have implemented the BiLSTM w/ char model in BID25 on Wikiner dataset BID26. Our preliminary implementation without any batching achieves 130 sentences/s at max active keys = 32 without any noticeable loss in accuracy (around 94 % after one epoch). This is competitive with DyNet's performance on the same machine (23 sentences/s without and 220 with autobatching, respectively); See Sec. B.5 for details. Asynchrony Finally we provide additional analysis on the effect of asynchrony. The degree of asynchrony is controlled by hyperparameters min update interval and max active keys. In FIG3 we use an 8-replica RNN model on the list reduction dataset to investigate how these parameters affect the data and time required to converge to 96% validation accuracy. We find, in analogy with minibatch size in traditional systems, that min update interval must neither be too large nor too small. Increasing max active keys (increasing asynchrony) monotonically increases performance when the number of keys is similar to the number of individually affinitized heavy operations in the model 8 in this case). Increasing max active keys significantly beyond this point produces diminishing returns. One approach to the task of training networks with instance dependent control flow is to define the computation graph dynamically per-instance. This approach is taken in Chainer BID37, DyNet BID24, and PyTorch (PyTorch core team). 
There are key challenges in accelerating this approach: Model parallelism would require the dynamically generated computation graph to be scheduled on-the-fly, and BLAS level parallelism would require operations to be batched on-the-fly. Automatic dynamic batching has been implemented in DyNet BID24, and is an interesting alternative to our asynchronous execution. Similar methods are used in TensorFlow Fold BID21. The basic idea is to inspect and merge together (by depth) the unrolled computation graphs of several instances to create batched BLAS operations. The effectiveness of automatic batching greatly depends on the model -for example, it would not perform well on random permutations of a sequence of operations. By contrast, our IR would very succinctly express and achieve pipeline parallelism using a static computation graph that is easy to distribute and optimize. Theano BID1 and TensorFlow (TF) can syntactically handle instance dependent control flow with abstractions for conditional execution (ifelse in Theano and cond in TF) and loops (scan and while loop, respectively); TF also provides higher-order functions, such as map, foldl, foldr, and scan. The main difference between AMPNet and the above frameworks is that AMPNet is streaming and asynchronous whereas Theano is non-streaming and synchronous. Although not designed for streaming, TF can support streaming programmatically as it exposes first-class queues, as well as data prefetching with so called input pipelines. In our IR, all the queuing is implicit and stream-based execution is the default. TF additionally does support static description of dynamic control flow and state update, but we depart from the classic dataflow architecture that TF follows BID2: First, instead of having nodes that represent mutable reference cells, we encapsulate the state with which a message should be processed through the graph in the message itself. Second, because we encapsulate algorithmic state in the messages, we do not introduce the notion of control dependencies (which can be used to impose a specific execution order on TF operations). Our choices complicate algorithmic state management from a programming point of view and make the task of designing a high-level compiler non-trivial, but allow every node to run asynchronously and independently without a scheduler and without the need for control messages: For example, nodes that dynamically take a control flow path or split the data simply consult the state of the incoming message, instead of having to accept additional control inputs. For "small" states (e.g. nested loop counters or edge and node ids) this might be preferable than out-of-band signaling. Our IR can implement loops by simply using state-update, conditional, and phi nodes, because the state accompanies the payload throughout its lifetime, whereas TF introduces specialized operators from timely dataflow BID23 ) to achieve the same effect. Asynchronous data parallel training BID28 BID9 BID7 ) is another popular approach to scale out optimization by removing synchronization (orthogonal to and combinable with model-parallel training). For example, convolutional layers are more amenable to data-parallel training than fully connected layers, because the weights are smaller than the activations. Moreover, when control flow differs per data instance, data parallelism is one way to get an effective minibatch size > 1, which may improve convergence by reducing variance. 
The impact of staleness on convergence BID28 and optimization dynamics BID22 have been studied for data parallelism. It would be interesting to extend those to our setting. BID16, like us, aim to to train different parts of a model in a decoupled or asynchronous manner. More precisely, their goal is to approximate a gradient with a synthetic gradient computed by a small neural network that is locally attached to each layer. Hence, the local gradient calculation becomes independent of other layers (except for the training of the gradient predictor network) and allows asynchronous parameter updates. This would be especially useful if the evaluation of the local network is cheaper than the computation of the real gradient; for example, if the computation of the real gradient required significant communication of forward/backward messages between devices. We have presented an asynchronous model-parallel SGD algorithm for distributed neural network training. We have described an IR and multi-core CPU runtime for models with irregular and/or instance-dependent control flow. Looking forward, we aim to deploy our system on specialized hardware. Equally importantly, we plan to build a compiler that automatically deduces the information to be placed in the states and generates state keying functions from a higher-level description of the models. By unlocking scalable distributed training of dynamic models, we hope to enable exploration of this class of models that are currently only on the horizon but may become more mainstream in the future. A AMPNET RUNTIME IMPLEMENTATIONWe have implemented an AMPNet runtime for multi-core CPUs. Our runtime spawns multiple workers each associated with a hardware thread and hosting one or more IR nodes -in a more general setting each worker corresponds to a compute device. To remain faithful to a distributed environment communication is only through message passing. Each worker is equipped with a multiple-producer single-consumer queue that can accept messages for any IR node hosted on that worker. The main worker loop periodically offloads messages from the concurrent queue to a worker-local priority queue that assigns higher priority to backward messages. Backward prioritization is designed for situations when multiple IR nodes with a dependency on the IR graph end up hosted on the same worker. As a consequence, backpropagation can complete faster and new instances can be pumped in by the controller. We dequeue the top message and invoke the forward or backward method of the target IR node. These methods may update internal IR node state (such as cache the state of the incoming message and wait for more messages) or post new forward or backward messages. How to update the parameters using the gradients is a configuration option that selects amongst a range of optimization algorithms. We have implemented runtime configuration options for selecting several well-known schemes such as (momentum-)SGD and Adam BID18, and for controlling the training hyper-parameters. We provide more details of the experiment and analysis in this section. All experiments were carried out on machines with 16 cores and 112 GB of RAM. The validation curves were averaged over at least 20 independent runs. The time/epoch to reach a target accuracy was calculated as median of the time an algorithm takes to reach the target accuracy over the repetitions. We found this approach to be more reliable than reporting the time/epoch when the averaged accuracy reaches the target. 
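The backward-prioritized, worker-local queue described in Appendix A can be sketched with a binary heap. The relative priority assigned to update messages is our assumption; the text only states that backward messages are prioritized.

```python
import heapq
import itertools
import queue

PRIORITY = {"backward": 0, "update": 1, "forward": 2}   # rank of "update" is our assumption
_tie = itertools.count()                                 # keeps FIFO order within a priority class

def drain_and_prioritize(concurrent_q, local_heap):
    """Offload all pending messages into a worker-local heap; backward messages come out first."""
    while True:
        try:
            msg = concurrent_q.get_nowait()
        except queue.Empty:
            return
        heapq.heappush(local_heap, (PRIORITY[msg.kind], next(_tie), msg))

def next_message(local_heap):
    return heapq.heappop(local_heap)[2] if local_heap else None
```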
TAB4 shows both the training and validation throughputs we obtained with AMPNet and our TensorFlow baselines. We train a 4-layer perceptron with ReLUs and 784-dimensional hidden units on MNIST BID19. Both the AMP runtime and a TensorFlow baseline use SGD with learning rate 0.1 and batch size of 100.Figure 6(a) shows the validation accuracy vs. time, validation accuracy vs. epochs, and throughputs of synchronous and asynchronous versions of AMPNet as well as TensorFlow. The throughput greatly increases from synchronous (max active keys = 1) to asynchronous (max active keys = 4) while the speed of convergence (middle panel) is hardly affected for mild amount of asynchrony. Taking higher max active keys = 8 increase throughput only very little (because there is no more work) and seems to rather make the convergence more unstable. This is due to the fact that our current scheduler is greedy and pumps in a forward message whenever the first layer is unoccupied, which leads to large gradient staleness. Clearly a better scheduling will remove this sensitivity. We train a vanilla RNN to perform reduction operations on variable length lists of digits. Each training instance is a sequence of at most 10 tokens: The first token indicates which of 4 reduction operations 2 is to be performed, and the remaining tokens represent the list of digits. The output is the of the calculation rounded modulo 10. The dataset consists of 10 5 training and 10 4 validation instances. We present this task as a classification problem to a vanilla RNN with ReLU activation and a hidden dimension of 128. All parameterized operations are affinitized on individual workers. We bucket training instances into batches of 100 sequences (in the baseline and in AMPNet). Figure 6(b) shows the validation accuracy vs. time and the number of epochs, and throughputs of the methods we discussed in the main text on the list reduction dataset. We first notice that increasing the asynchrony from synchronous (max active keys=1) to max active keys = 4 and max active keys = 16 affects the convergence very little at least in average. However, there (d) Sentiment Tree Bank (max active keys = 16) (e) QM9 dataset is also very little speedup without introducing replicas as we discussed in the main text. Increasing the number of replicas increases the throughput almost linearly from 13k sequences/s (synchronous) to 35k sequences/s (2 replicas) and over 70k sequences/s (4 replicas). Convergence is almost unaffected for 2 replicas. This was rather surprising because the parameters of the replicas are only synchronized after each epoch as we described in Sec. 5. A slight slow-down in convergence can be noticed for 4 replicas. Since even max active keys = 16 has almost no effect on the convergence without replicas, this is not due to asynchrony. We also tried to synchronize more frequently but this did not help. Thus we believe that the slow-down is due to the increase in the effective minibatch size ing in reduced number of updates per epoch, which is commonly observed in data parallel training. We consider the sentiment classification dataset from BID35 consisting of binarized constituency parse trees of English sentences with sentiment labels at each node. Following Tai et al. BID36, we use 8,544 trees for training, 1,101 trees for validation, and 2,210 trees for testing. We use a Tree LSTM for this classification task based on the TensorFlow Fold BID21 benchmark model. 
Both the AMP and Fold models are trained following BID36 with the additional architectural modifications proposed by BID21; BID32. Furthermore, we split our Tree-LSTM cell into Leaf LSTM and Branch LSTM cells. This does not affect the expressiveness of the model because the LSTM cell receives either zero input (on branch) or zero hidden states (on leaves); i.e., the two cells do not share weights except for the bias parameters, which are learned independently in our implementation. We compare the time to reach 82 % fine grained (5 classes) accuracy (averaged over all the nodes) on the validation set. Figure 6(c) shows the averaged fine grained validation accuracy for the tree RNN model with different max active keyson the Stanford Sentiment Tree Bank dataset. Interestingly although TensorFlow Fold achieves higher throughput, AMPNet converges faster (in terms of the number of epochs). This speedup is mainly due to the fact that we are not batching and updating whenever we have accumulated 50 gradients (except for the lookup table node that updates every 1000 gradients); 50 gradients correspond to roughly 2 trees. The reason for the lower throughput compared to TensorFlow Fold is that we are only grouping the leaf operations and not the branch operations. Grouping the branch operations is possible by extending our IR nodes and we are actively working on it. Figure 6(d) shows the same information for fixed max active keys = 16 and different min update interval. We can see that as we increase min update interval from the originally used 50 to larger values, the peak of the validation accuracy shifts later and lower becoming closer to the curve obtained by TensorFlow Fold. This is consistent with the parallels between min update interval and minibatch size we drew in Section 6. The min update interval parameter has marginal influence on the training throughput. We study a real-world application for GNNs: prediction of organic molecule properties from structural formulae in the QM9 dataset BID30; BID27. GNNs have previously been applied to this task in BID12.We concentrate on prediction of the norm of a molecule's dipole moment using a regression layer build on the propagation model from BID20 (corresponding to the simplest setting in BID12). We use a hidden dimension of 100 and 4 propagation steps, initializing the graph nodes (atoms) following BID12. The molecules contain up to 29 atoms and in a TensorFlow baseline we bucket molecules into batches of 20 with atom counts differing by at most 1 within a batch. Following BID12, we report regression accuracies in multiples of a target accuracy from the chemistry community. Figure 6(e) shows that GGNN can tolerate relatively large max active keys = 16, and increased the throughput significantly from 300 graphs/s (synchronous) to 1797 graphs/s (see TAB4). We compare the performance of AMP training with DyNet with and without autobatching using the BiLSTM tagger w/ char model from the DyNet benchmark suite 3. This model consists of both character-level and word-level bidirectional LSTMs. The model uses a learnable word embedding for frequent words (more than 5 times in the corpus) and character-level bidirectional LSTMs for infrequent words. We use our Distribute and Collect nodes to dynamically route the messages depending on the word frequencies. Wikiner dataset 4 is a named entity recognition dataset extracted from Wikipedia by BID26. 
We use the training/validation split provided by the DyNet benchmark suite, with 142,153 and 1,696 sentences, respectively. The result is shown in FIG5. We achieve more than a 3x gain in throughput from max active keys = 1 to max active keys = 32 without any noticeable loss in validation accuracy after 2 or 3 epochs. The slight decrease in validation accuracy after the third epoch is due to overfitting and is not related to asynchrony. In order to estimate the performance of AMPNet on a hypothetical device with 1 TFLOPS compute capability, we replaced all fully connected layers in the network with a dummy operation that simply waits for a specified time. The dummy operation waits for 2·k·d_in·d_out·10^{-12} seconds when the input is k × d_in and the weight matrix is d_in × d_out, for the forward, backward, and gradient accumulation operations; it waits for d_in·d_out·10^{-12} seconds for the weight update. In this way we keep all the data-dependent control decisions (e.g., sequence length) identical to the original network and also measure the real time spent on all other operations. To calculate the time to reach a target accuracy, we take the median number of epochs the original network required to reach the target accuracy and calculate the time as time = epochs · (n_train/throughput_train + n_val/throughput_val), where n_train and n_val are the numbers of training and validation instances, respectively, and throughput_train and throughput_val are the corresponding training and validation throughputs. The results are shown in TAB5. For the 4-way replicated RNN, we estimate roughly 260k instances/s, a 3.7x speedup compared to our CPU runtime. For the tree RNN and GGSNN we estimate milder 30-70% speedups, mainly because they contain more complicated operations such as Distribute and Collect, for which we did not attempt to extrapolate the computation time, because their implementation on new hardware may differ drastically from the current CPU runtime.
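As a worked example of this estimation procedure, the following sketch encodes the dummy-operation wait times and the time-to-accuracy formula; the numbers in the final call are illustrative, not the paper's measurements.

```python
def dummy_wait_seconds(k, d_in, d_out, flops=1e12):
    """Simulated time for a k x d_in input through a d_in x d_out layer on a 1 TFLOPS device."""
    return 2.0 * k * d_in * d_out / flops                 # forward, backward or gradient accumulation

def update_wait_seconds(d_in, d_out, flops=1e12):
    return d_in * d_out / flops                           # weight update

def time_to_accuracy(epochs, n_train, n_val, tput_train, tput_val):
    """time = epochs * (n_train / throughput_train + n_val / throughput_val)."""
    return epochs * (n_train / tput_train + n_val / tput_val)

# Illustrative numbers only: 3 epochs over 100k training and 10k validation instances
# at 260k and 300k instances/s comes out to roughly 1.25 seconds.
print(time_to_accuracy(3, 100_000, 10_000, 260_000, 300_000))
```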
HJnQJXbC-
Using asynchronous gradient updates to accelerate dynamic neural network training
In cooperative multi-agent reinforcement learning (MARL), how to design a suitable reward signal to accelerate learning and stabilize convergence is a critical problem. The global reward signal assigns the same global reward to all agents without distinguishing their contributions, while the local reward signal provides different local rewards to each agent based solely on individual behavior. Both of the two reward assignment approaches have some shortcomings: the former might encourage lazy agents, while the latter might produce selfish agents. In this paper, we study reward design problem in cooperative MARL based on packet routing environments. Firstly, we show that the above two reward signals are prone to produce suboptimal policies. Then, inspired by some observations and considerations, we design some mixed reward signals, which are off-the-shelf to learn better policies. Finally, we turn the mixed reward signals into the adaptive counterparts, which achieve best in our experiments. Other reward signals are also discussed in this paper. As reward design is a very fundamental problem in RL and especially in MARL, we hope that MARL researchers can rethink the rewards used in their systems. In reinforcement learning (RL), the goal of the agent is formalized in terms of a special signal, i.e., reward, coming from the environment. The agent tries to maximize the total amount of reward it receives in the long run. Formally, we express this idea as the Reward Hypothesis BID31: the goal of RL agent can be exactly described as the maximization of the expected value of the cumulative sum of a received scalar reward signal. It is thus critical that the rewards truly indicate what we want to accomplish. One reward design principle is that the rewards must reflect what the goal is, instead of how to achieve the goal 1. For example, in AlphaGo, the agent is only rewarded for actually winning. If we also reward the agent for achieving subgoals such as taking its opponent's pieces, the agent might find a way to achieve them even at the cost of losing the game. A similar example of faulty reward function is provided by BID26: if we reward the action of cleaning up dirt, the optimal policy causes the robot to repeatedly dump and clean up the same dirt. In fact, the how reward encodes human experience, which is heuristic in some extent. Based on the heuristic how reward, it is really easy to deviate from the ultimate goal. However, as BID35 point out, the exact what reward that encodes the performance objective might be awful to use as a training objective. It will in slow and unstable learning occasionally. At the same time, a training objective that differs from the performance objective can still do well with respect to it. For example, the Intrinsically Motivated Reinforcement Learning (IMRL) BID6 BID27 ) combines a domain-specific intrinsic reward with the reward coming from the environment to improve learning especially in sparse-reward domains. Although reward design problem in single-agent RL is relatively tractable, it becomes more thorny in multi-agent reinforcement learning (MARL), as MARL is naturally more complex than singleagent RL. As we know, the global reward and local reward have long been proved to be defective: the former might encourage lazy agents, while the latter might produce selfish agents. Inspired by the success of intrinsic reward in single-agent RL, we hypothesize that similar methods may be useful in MARL too. 
Naturally, in this paper, we ask and try to answer a question:Can we formulate some special rewards (such as intrinsic reward) based on the meta what rewards to accelerate learning and stabilize convergence of MARL systems? Specifically, in this paper, we propose several new MARL environments modified from the well known Packet Routing Domain. In those environments, the goal is to figure out some good flow splitting policies for all routers (i.e., agents) to minimize the maximum link utilization ratio in the whole network. We set the meta reward signals as 1 -max(U l). We argue that the meta reward signals are some kinds of what rewards because they tell the agents that we want to minimize max(U l), i.e., minimize the maximum of all link utilization ratios. For detailed discussions, we refer the readers to the proposed environments and rewards in Section 3 and 4.Based on those environments and the meta what rewards, we can focus on our reward design research purpose. Specifically, we firstly show that both of the widely adopted global and local reward signals are prone to produce suboptimal policies. Then, inspired by some observations and considerations, we design some mixed reward signals, which are off-the-shelf to learn better policies. Finally, we turn the mixed reward signals into the adaptive counterparts, which achieve best in our experiments. Besides, we also discuss other reward signals in this paper. In summary, our contributions are two-fold. We propose some new MARL environments to advance the study of MARL. As many applications in the real world can be modeled using similar methods, we expect that other fields can also benefit from this work. We propose and evaluate several reward signals in these MARL environments. Our studies generalize the following thesis BID6 BID35 in single-agent RL to MARL: agents can learn better policies even when the training objective differs from the performance objective. This remind us to be careful to design the rewards, as they are really important to guide the agent behavior. The rest of this paper is organized as follows. Section 2 introduces briefly, followed by the proposed environments and rewards in Section 3 and 4, respectively. We then present the experiments and discussions in Section 5 and 6, respectively. Section 7 concludes this work. As reward is the foundation of RL, there are many related studies. We only introduce the most relevant fields of this work. Topics such as Inverse RL BID23 and Average Reward RL BID19 are not included. The only way to talk with your agents might be the reward, as expressed by the well known Reward Hypothesis .When considering reward design for single objective RL problem, we should always be aware of whether the designed reward is a kind of what reward rather than how reward. For multiple objectives RL problem, researchers have to design sub-rewards for different objectives, and the final reward is a weighted sum of those sub-rewards. Unfortunately, the weights have to be adjusted manually even in recent studies BID12 BID17 BID8.For single-agent RL, there are many remarkable reward design studies. The most relevant field may be the IMRL. A recent Deep RL work based on IMRL is VIME . It uses a surprise reward as the intrinsic reward to balance exploitation and exploration, and achieves better performance than heuristic exploration methods. Another successful model is Categorical DQN BID1, which considers the long time run reward, or value, used in approximate RL. 
The authors model the value distribution rather than the traditional expected value. They obtain anecdotal evidence demonstrating the importance of the value distribution in approximate RL. BID18 use the temporal logic (TL) quantitative semantics to translate TL formulas into realvalued rewards. They propose a temporal logic policy search (TLPS) method to solve specify tasks and verify its usefulness. However, the reward design studies for MARL is so limited. To the best of our knowledge, the first (and may be the only) reward design study about Deep MARL is the well known Independent DQN BID32. By setting different rewarding schemes of Pong, the authors demonstrate how competitive and collaborative behaviors emerge. Although the rewards used are very elaborate, there are only two agents in Pong, which limits its generalization ability. Credit assignment usually has two meanings. In single-agent RL, it is the problem that how much credit should the current reward be apportioned to previous actions. In MARL, it is the problem that how much credit should the reward obtained at a team level be apportioned to individual agents. In fact, credit assignment is a subclass of reward design, and the two fields can often be seen as the same in MARL. As far as we know, credit assignment approaches in MARL can be divided into two categories: traditional global/local reward and other rewards. The global reward approach assigns the same global reward to all agents without distinguishing their contributions. On one hand, the lazy agents often receive higher rewards than what they really contribute to, which leaves lazy agents with no desire to optimize their policies. On the other hand, the diligent agents can only receive lower rewards even when they generate good actions, as the lazy agents often make the whole system worse. This makes the diligent agents confuse about what is the optimal policy. The local reward approach provides different rewards to each agent based solely on its individual behavior. It discourages laziness. However, agents do not have any rational incentive to help each other, and selfish agents often develop greedy behaviors. Other reward signals need more complex computation or depend on non-universal assumptions. After having considered several other options, we finally choose the Packet Routing Domain as our experimental environments. The reasons are as follows.• Firstly, they are classical MARL environments. There are many researchers studying packet routing problem BID2 BID7 BID29 BID37 BID4 BID33 BID36 BID9 BID25 BID20 ).• Besides, many real-world applications can be modeled using similar methods, for example, the internet packet routing, electric power supply, natural gas transportation and traffic flow allocation. We expect that these fields can also benefit from this work.• And most importantly, the what reward of these environments can be easily figured out, so we can focus on our reward design research purpose. Specifically, in these environments, the goals are high throughput and low link utilization ratio. In the real world, we also assume that the whole network capacity is bigger than the total flow demands 2, so the throughput is equal to the flow demands if we can find good packet routing policies. With this assumption, we set the meta reward signals as 1 -max(U l). 
We argue that the meta rewards are some kind of what rewards because they tell the agents that we want to minimize max(U_l), i.e., minimize the maximum of all U_l.

• Another attractive feature of those environments is that when we compute the global reward signal, the local reward signals of each agent can also be calculated without much additional computation. See the detailed discussion in Section 4.

More concretely, we consider the environments shown in Figures 1, 2 and 3. Currently, the Internet is made up of many ISP networks. In each ISP network, as shown in Figure 1, there are several edge routers. Two edge routers are combined into an ingress-egress router pair (IE-pair). The i-th IE-pair has an input flow demand F_i and K available paths that can be used to deliver the flow from the ingress router to the egress router. Each path P_i^k is made up of several links, and each link can belong to several paths. The link L_l has a flow transmission capacity C_l and a link utilization ratio U_l. As is well known, a high link utilization ratio is bad for dealing with bursty traffic, so we want to find good flow splitting policies jointly for all IE-pairs across their available paths to minimize the maximum link utilization ratio in the ISP network. More formally, the goal is the same as in BID16:

minimize max_l U_l

subject to the constraints

Σ_k y_i^k = 1 and y_i^k ≥ 0 for every IE-pair i, and U_l = Σ_i Σ_k F_i · y_i^k · 1[L_l ∈ P_i^k] / C_l for every link L_l,

where F_i · y_i^k is the amount of flow of the i-th IE-pair routed along its k-th path P_i^k.

Before defining the designed rewards, we first define some link utilization ratio sets.

• ALL set of any router. It contains the link utilization ratios of ALL links in the network, for example, of all links of the ISP network in Figure 1.

• DIRECT set of a given router. It only contains the link utilization ratios of the DIRECT links of this router. For example, in Figure 1, DIRECT(e) = {U_ec, U_ed, U_ef}.

• BASIN set of a given router. It only contains the link utilization ratios of the BASIN links of this router, where the BASIN links of a router are the links that this router can route flow on. For example, in Figure 1, BASIN(e) contains the utilization ratios of all links on which router e can route flow.

Now we are ready to define the designed reward signals as follows. A simplified illustration of these reward signals is shown in FIG3.

• Global Reward (gR). The reward is calculated based on the ALL set: gR = 1 − max(ALL).

• Direct Local Reward (dlR). The reward is calculated only based on the DIRECT set of a given router. For example, dlR(e) = 1 − max(DIRECT(e)).

• Basin Local Reward (blR). The reward is calculated only based on the BASIN set of a given router. For example, blR(e) = 1 − max(BASIN(e)).

• Direct Mixed Reward (dlgMixedR). The reward is the sum of gR and dlR. For example, dlgMixedR(e) = gR + dlR(e).

• Basin Mixed Reward (blgMixedR). The reward is the sum of gR and blR. For example, blgMixedR(e) = gR + blR(e).

• Direct Adaptive Reward (dlgAdaptR). The reward is the sum of gR and w·dlR, where w is the weight of dlR, which we decay gradually during training. For example, dlgAdaptR(e) = gR + w · dlR(e).

• Basin Adaptive Reward (blgAdaptR). The reward is the sum of gR and w·blR, where w is the weight of blR, which we decay gradually during training. For example, blgAdaptR(e) = gR + w · blR(e).

From the above definitions, we can see that gR is a what reward: it tells the agents that we want to minimize the maximum of all U_l. Besides, both dlR and blR can be seen as what rewards from the local perspective of an individual agent. In particular, both the mixed rewards and the adaptive rewards are simple combinations of those what rewards.
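The designed reward signals are straightforward to compute once the link utilization ratios are known; the following sketch uses our own function names and made-up utilization values for a few of the links of Figure 1.

```python
def global_reward(util):
    """gR = 1 - max over the ALL set of link utilization ratios."""
    return 1.0 - max(util.values())

def local_reward(util, links):
    """dlR / blR = 1 - max over the DIRECT / BASIN links of a given router."""
    return 1.0 - max(util[l] for l in links)

def mixed_reward(util, links):
    """dlgMixedR / blgMixedR = gR + local reward."""
    return global_reward(util) + local_reward(util, links)

def adaptive_reward(util, links, w):
    """dlgAdaptR / blgAdaptR = gR + w * local reward, with w decayed during training."""
    return global_reward(util) + w * local_reward(util, links)

# Example for router e in Figure 1 with made-up utilization ratios:
util = {"ec": 0.7, "ed": 0.4, "ef": 0.5, "fc": 0.9, "fd": 0.3}
print(global_reward(util))                       # gR          = 1 - 0.9 = 0.1 (approx)
print(local_reward(util, ["ec", "ed", "ef"]))    # dlR(e)      = 1 - 0.7 = 0.3 (approx)
print(mixed_reward(util, ["ec", "ed", "ef"]))    # dlgMixedR(e) = 0.4 (approx)
```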
Despite the simplicity of those designed rewards, we can focus on our research purpose: can those rewards accelerate learning and stabilize convergence of MARL systems? And as mentioned in previous sections, we can calculate these rewards at a low cost, because DIRECT set and BASIN set are subsets of ALL set. To make our contributions more clearly, please note that traditional packet routing studies often use gR and dlR, all other signal forms are firstly introduced in this paper as far as we know. In order to make the experimental environments consistent with the real-world systems, we highlight the following setting. More detailed information can be found in the Appendix Section A.1 and A.2.• The routers are partially observable as they are located in different places. We use the recent proposed ACCNet BID20 as the experimental method to handle this problem.• The time delay of router, link and reward signal cannot be neglected. That is to say, actions have long term effect on the environments, which makes the task more challenging.• Both synthetic flow and real flow trajectory from the American Abilene Network 3 are used to test the proposed rewards. For different rewards, we run 10 independent experiments on different topologies, and the averaged convergence rates (CR) and maximum link utilization ratios (U l *) are shown in TAB0. From the perspective of topology, we can see that the convergence rates of any reward signal are decreasing gradually as the topology becoming more complex, which is reasonable. From the perspective of reward, we draw the following . Firstly, both dlR and blR are better than gR, which means that the widely used gR in other MARL environments is not a good choice for the packet routing environments. The bad performance of gR motivates us to discover other more effective reward signals. Importantly, the proposed blR seems to have similar capacity with dlR, but when we consider mixed reward signals and adaptive reward signals, the differences between them are obvious. For example, blgM ixedR and blgAdaptR can achieve higher convergence rates than dlgM ixedR and dlgAdaptR on Simple Topology and Moderate Topology, while dlgM ixedR and dlgAdaptR has better performance on Complex Topology than blgM ixedR and blgAdaptR. In our opinion, dlR has low f actoredness but high learnability, while blR can better balance f actoredness and learnability 4, which makes dlR more suitable for symmetrical Complex Topology and blR more suitable for asymmetrical Simple Topology and Moderate Topology. Anyways, the proposed blR is very necessary for the packet routing environments. Besides, on the whole, the adaptive reward signals are better than the mixed reward signals, and both of them are better than the meta rewards (we refer to gR, dlR and blR). But please note that the mixed reward signals can achieve good without any change of the experimental setting 5, while the adaptive reward signals have to adjust the replay buffer size and the weight decay rate. Finally, we also notice that the proposed reward signals cannot effectively decrease U l * on Simple Topology. However, when we consider Moderate Topology and Complex Topology, the best reductions of U l * are bigger than 10%. The reason is that all rewards can approach the optimal policies on Simple Topology, which leaves no space for the proposed rewards to further improve. But when the topology becomes more complex, the proposed rewards begin to show their abilities. 
In short, our are: both global reward and local reward are prone to produce suboptimal policies; the mixed reward signals are off-the-shelf to learn better policies; and the adaptive reward signals can achieve best at the cost of careful experimental setting. Those also generalize the following thesis in single-agent RL to MARL: the training objective can differ from the performance objective, but still do well with respect to it. So we hope that MARL researchers can rethink the rewards used in their systems, especially when the agents cannot work as expected. In this section, we highlight some observations and considerations that inspire us to design the above reward signals, based on Simple Topology in Figure 1.The gR and dlR. We firstly try gR without any thinking, but only get a 30% convergence rate, so we try the widely used dlR, and we find the performance is slightly better. However, we notice that the maximum or minimum link utilization ratio of most divergence experiments is U f c or U f d, as shown in Figure 5. That is to say, the dlR cannot catch information of L f c or L f d, which is a little far from the nearest agent e (router f is not a ACCNet agent). We realise that the combination of gR and dlR might have the ability to catch both nearest link information and far away link information, so we propose the dlgM ixedR. Figure 5: link utilization ratios of one divergence environment using dlR.The dlgMixedR. As expected, the dlgM ixedR performs better than gR or dlR alone, not only for higher convergence rate but also lower U l *. At this time, we further ask ourselves two questions. The first question is that can we simplify dlgM ixedR but still keep its ability? This inspires us to propose the blR and subsequently blgM ixedR. The second one is that can we gradually decay the weight of dlR so that the received reward is progressively approaching gR, i.e., the performance objective of our environments? This inspires us to try the adaptive rewards. The blR and blgMixedR. Although blR is simpler than dlgM ixedR, it can incorporate both nearest link information and far away link information adaptively. And as expected, blR achieves similar convergence rates and maximum link utilization ratios as dlgM ixedR. But what surprises us is that blgM ixedR can boost the convergence rate up to 80%. One hindsight is that blR can better balance f actoredness and learnability. The dlgAdaptR and blgAdaptR. Although the adaptive rewards get the best , it is not until we actually implement them that we realize the difficulties. The are sensitive to replay buffer size and weight decay rate, because the reward in replay buffer is slightly inconsistent with current reward even thought the <s,a,s'> tuple is the same. Larger buffer size and higher weight decay rate usually mean greater inconsistency, while small buffer size and low weight decay rate in slow learning and data inefficiency. So we suggest to use the off-the-shelf mixed reward signals. Other reward forms. We also test min-max reward mmR = 1 + min(U l) − max(U l) and average reward aveR = 1−average(U l). The observation is that some links always have low link utilization ratios, which means that the agents have not learned to share flow on those links. So we try to encode those information into the reward using mmR and aveR. Although those reward forms take some effect indeed, we do not consider them as general methods to discuss as they are not what rewards. 
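For completeness, a small sketch of the two alternative reward forms mentioned above, together with a weight-decay schedule for the adaptive rewards, is given below; the linear schedule is an assumption, since the exact decay rule is not specified here.

```python
# Sketch of the min-max and average reward forms, plus a simple linear decay
# for the adaptive-reward weight w. The linear schedule is an assumed choice.

def mmR(U):
    """Min-max reward: 1 + min(U_l) - max(U_l), encouraging even link usage."""
    vals = list(U.values())
    return 1.0 + min(vals) - max(vals)


def aveR(U):
    """Average reward: 1 - mean link utilization ratio."""
    vals = list(U.values())
    return 1.0 - sum(vals) / len(vals)


def adaptive_weight(step, total_steps, w0=1.0):
    """Decay the local-reward weight from w0 towards 0 over training."""
    return max(0.0, w0 * (1.0 - step / total_steps))
```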
Finally, we give the link utilization ratios testing on real flow trajectory from the American Abilene Network, as shown in Figure 6. We see that all links have similar utilization ratios, and the trends of the curves are consistent. Those mean that all the links can properly share responsibility for the flow demand according to their respective capabilities. Figure 6: link utilization ratios testing on real Abilene Network flow using blgM ixedR. Why the convergence rate of Complex Topology is low? In this paper, we only focus on designing special reward signals, rather than applying other sophisticated technologies, to solve the packet routing problem. In fact, the convergence rate can be improved to almost 100% for all topologies if we combine the proposed rewards with other methods. To make this paper easy to read, we do not introduce irrelevant methods. Can the proposed rewards generalize successfully in other environments? In fact, not all environments can directly calculate the local reward or global reward, as BID24 point out. In such environments, the proposed rewards might be only useful at high computation cost. However, the calculation of the rewards is not the research purpose of this paper. We argue that although the proposed rewards have limitations, they can be easily applied to many real-world applications such as internet packet routing and traffic flow allocation, as mentioned in Section 3.Can the designed rewards be seen as a kind of auxiliary task? Yes, they are some auxiliary reward signals indeed. But please note that the auxiliary reward signals are different from the auxiliary task used in UNREAL BID15, where the auxiliary task is used for improving the feature extraction ability of neural networks, while our auxiliary reward signals directly guide the learned policies. In fact, the mixed rewards are similar with VIME as analyzed in Section 2.1. And the adaptive rewards are similar with curriculum learning BID38, as both of them train the agents progressively from easy to the final difficult environment. In this paper, we study reward design problem in cooperative MARL based on packet routing environments. Firstly, we show that both of the widely adopted global and local reward signals are prone to produce suboptimal policies. Then, inspired by some observations and considerations, we design some mixed reward signals, which are off-the-shelf to learn better policies. Finally, we turn the mixed reward signals into the adaptive counterparts, which achieve best in our experiments. Our study generalizes the following thesis BID6 BID35 in singleagent RL to MARL: the training objective that differs from the performance objective can still do well with respect to it. As reward design is a very fundamental problem in RL and especially in MARL, we hope that MARL researchers can rethink the rewards used in their systems. For future work, we would like to use Evolutionary Algorithm BID11 to search the best weight of local reward, and verify whether the learned weight has the same decay property. We also expect to test the proposed reward signals in other application domains. Real flow trajectories from the American Abilene Network are shown in FIG5. Note that we normalize the flow demands so that they can be consistent with link capacities. To test the learned policies, we randomly change the flow demands of each IE-pair. 
State is represented as the tuple <F, U, A, aveU>, where F stands for a two-tick history of flow demands; U stands for a five-tick history of direct link utilization ratios; A stands for the last action taken by the agent; and aveU stands for a ten-tick average of direct link utilization ratios. Specifically, for Simple Topology in Figure 1, the state dimensions of agents a, b and e are 28, 28 and 41, respectively; for Moderate Topology in Figure 2, the state dimensions of agents a, b and e are 30, 30 and 41, respectively; for Complex Topology in Figure 3, the state dimensions of agents a, b, 1, 2, 3, 4, and 5 are 30, 30, 28, 30, 30, 30 and 30, respectively. For the action, the ingress-router must generate splitting ratios y_i^k satisfying the constraint Σ_k y_i^k = 1 for the current traffic demand F_i, so the softmax activation is chosen as the final layer of the actor network. This design is natural for a continuous action with a sum-to-one constraint. Settings related to ACCNet are: buffer size is 6280; batch size is 64; learning rates of the actor and critic are 0.001 and 0.01, respectively; discount factor is 0.9; and the weight for updating the target network is 0.001. The link utilization ratios of AC-CNet and A-CCNet tested on Complex Topology using Abilene Network flow and dlgMixedR are shown in FIG6 and 9, respectively. A-CCNet achieves a smaller fluctuation range of link utilization ratio than AC-CNet, which means that A-CCNet handles this environment better than AC-CNet, as claimed in the original paper BID20. Besides, similar to Figure 6, all links in FIG6 and 9 have similar utilization ratios and the trends of the curves are consistent, which means that all the links can properly share responsibility for the flow demand according to their respective capabilities.
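A minimal PyTorch sketch of the actor head with the softmax output described above is shown below; the hidden layer sizes and the number of paths are illustrative choices, not the exact ACCNet architecture.

```python
# Sketch of the actor head: the splitting ratios y_i^k over K paths must be
# non-negative and sum to one, so a softmax is used as the final layer.
import torch
import torch.nn as nn


class SplittingActor(nn.Module):
    def __init__(self, state_dim, n_paths, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_paths),
        )

    def forward(self, state):
        logits = self.net(state)
        # Softmax enforces the sum-to-one constraint on the splitting ratios.
        return torch.softmax(logits, dim=-1)


# Example: agent e in the Simple Topology has state dimension 41; the number
# of available paths K is illustrative here.
actor = SplittingActor(state_dim=41, n_paths=3)
ratios = actor(torch.randn(1, 41))   # ratios.sum(dim=-1) == 1
```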
We study reward design problem in cooperative MARL based on packet routing environments. The experimental results remind us to be careful to design the rewards, as they are really important to guide the agent behavior.
Recent advances have illustrated that it is often possible to learn to solve linear inverse problems in imaging using training data that can outperform more traditional regularized least squares solutions. Along these lines, we present some extensions of the Neumann network, a recently introduced end-to-end learned architecture inspired by a truncated Neumann series expansion of the solution map to a regularized least squares problem. Here we summarize the Neumann network approach, and show that it has a form compatible with the optimal reconstruction function for a given inverse problem. We also investigate an extension of the Neumann network that incorporates a more sample efficient patch-based regularization approach. We consider solving linear inverse problems in imaging in which a p-pixel image, β ∈ R p (in vectorized form), is observed via m noisy linear projections as y = Xβ +, where X ∈ R m×p and ∈ R m is a noise vector. The problem of estimating β from y is referred to as image reconstruction, and a typical estimate is given by solving the regualarized least squares problem β = arg min where r(·) is a regularizer. Classical image reconstruction methods specify a choice of regularizer to promote piecewise smoothness of the reconstruction, sparsity in some dictionary or basis, or other geometric properties. However, an emerging body of research explores the idea that training data can be used to learn to solve inverse problems using neural networks. At a high level, existing learning-based approaches to solving inverse problems can be categorized as either decoupled or end-to-end. Decoupled approaches first learn a representation of the data that is independent of the forward model X, followed by a reconstruction phase that uses X explicitly. Existing methods in this vein include using training images to learn a low-dimensional image manifold captured by the range of a generative adversarial network (GAN) and constraining the estimate β to lie on this manifold, or learning a denoising autoencoder that can be treated as a regularization step (i.e., proximal operator) within an iterative reconstruction scheme. End-to-end approaches incorporate the forward model X directly into the network architecture during both training and testing, and are optimized for a specific X or class of X's. Many end-to-end approaches are based on "unrolling" finitely many iterations of an optimization algorithm for solving, where instances of the regularizer (or its gradient or proximal operator) are replaced by a neural network to be trained; see among others. The advantage of a decoupled approach is that the learned representation can be used for a wide variety of inverse problems without having to retrain. However, this flexibility comes with a high price in terms of sample complexity. Learning a generative model or a denoising autoencoder fundamentally amounts to estimating a probability distribution and its support over the space of images; let us denote this distribution as P (β). On the other hand, if X is known at training time, then we only need to learn the conditional distribution P (β|Xβ), which can require far fewer samples to estimate. To make this idea more precise, consider the problem of finding the MSE optimal reconstruction function in the noiseless setting: Then ρ * is characterized as follows. Proposition 1. Let X ∈ R m×p, m ≤ p, be full rank, and let X ⊥ ∈ R p−m×p be a matrix whose rows form an orthonormal basis for the nullspace of X. 
Then the MSE-optimal reconstruction function ρ * in is given by where X + is the pseudoinverse of X and We omit the proof of Proposition 1 for brevity, but the technique is similar to those used in to derive the expressions for the MSE optimal autoencoder for a given data distribution. This proposition shows that the optimal reconstruction function only requires estimating a conditional expectation of the component of the image in the nullspace of the linear forward model, or implicitly a conditional probability density rather than the full probability density over the space of all images. Therefore, in settings where training data is limited, end-to-end approaches are expected to outperform decoupled approaches due to their lower sample complexity. It also implies the end-to-end networks should have a structure compatible with if they are to well-approximate the MSE optimal reconstruction function. The focus of this work is on the recently-proposed Neumann network architecture as an end-to-end approach for learning to solve inverse problems. Here we summarize the Neumann network architecture, and give it a new interpretation in light of Proposition 1. We also introduce an extension of Neumann networks to the case of a patch-based regularization strategy, which further improves the sample complexity of the approach. Neumann networks are motivated by the regularized least squares optimization problem in the special case where the regularizer r is quadratic. In particular, assume r(β) = 1 2 β Rβ so that ∇r(β) = Rβ for some matrix R ∈ R p×p. A necessary condition for β to be a minimizer of in this case is (X X + R)β = X y If the matrix on the left-hand side of is invertible, the solution is given by To approximate the matrix inverse in, the authors of use a Neumann series expansion for the inverse of a linear operator, given by A −1 = η ∞ k=0 (I − ηA) k, which converges provided I − ηA < 1. Applying this series expansion to the matrix inverse appearing in, we have β = ∞ j=0 (I − ηX X − ηR) j (ηX y). Truncating this series to B + 1 terms, and replacing multiplication by the matrix R with a general learnable mapping R: R p → R p, motivates an estimator β of the form The estimator above becomes trainable by letting R = R θ be a neural network depending on a vector of parameters θ ∈ R q to be learned from training data, along with the scale parameter η. Neumann Net Figure 1: Neumann network architecture. Unlike other networks based on unrolling of iterative optimization algorithms, the series structure of Neumann networks lead naturally to additional "skip connections" (highlighted in red) that route the output of each dashed block to directly to the output layer. Any estimator β(y) = β(y; θ, η) specified with trainable network R = R θ is called a Neumann network in. Figure 1 shows a block diagram which graphically illustrates a Neumann network. The main architectural difference with Neumann networks over related unrolling approaches is the presence of additional "skip connections" that arise naturally due to the series structure. Empirical evidence in suggests these additional skip connections may improve the optimization landscape relative to other architectures, and make Neumann networks easier to train. Efficiently finding a solution to the linear system using an iterative method can be challenging when the matrix X X + R is ill-conditioned. This suggests that the Neumann network, which is derived from a Neumann series expansion of the system in, may benefit from preconditioning. 
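Before turning to the preconditioned variant, the following PyTorch sketch spells out the truncated-series forward pass of the plain Neumann network described above; a small fully connected network stands in for the learned mapping R, and keeping an explicit η scaling on R is our reading of the displayed estimator (in practice η can be absorbed into R).

```python
# Sketch of the Neumann network forward pass: beta(y) is the sum of B+1 terms
# z_j, with z_0 = eta * X^T y and z_{j+1} = z_j - eta * X^T X z_j - eta * R(z_j).
# The learned mapping R is a small MLP here purely for illustration.
import torch
import torch.nn as nn


class NeumannNet(nn.Module):
    def __init__(self, X, B=6, eta=0.1):
        super().__init__()
        p = X.shape[1]
        self.X = X                                   # forward operator, (m, p)
        self.B = B                                   # series truncated to B+1 terms
        self.eta = nn.Parameter(torch.tensor(eta))
        self.R = nn.Sequential(nn.Linear(p, p), nn.ReLU(), nn.Linear(p, p))

    def forward(self, y):                            # y: (batch, m)
        z = self.eta * (y @ self.X)                  # eta * X^T y, batched
        beta = z
        for _ in range(self.B):
            z = z - self.eta * ((z @ self.X.t()) @ self.X) - self.eta * self.R(z)
            beta = beta + z                          # skip connections sum the terms
        return beta
```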
Starting from, for any λ > 0 we have (X X + λI)β + (R − λI)β = X y. Applying T λ:= (X X + λI) −1 to both sides and rearranging terms gives (I − λT λ +R)β = T λ X y. Following the same steps used to derive the Neumann network gives the modified estimator which is called a preconditioned Neumann network in. HereR =R θ is a trainable mapping depending on parameters θ. The Neumann network estimators in and can be interpreted as approximating the MSE optimal reconstruction function in Proposition 1. To see this, observe that the pseudo-inverse X + y = (X X) −1 X y is given by the Neumann series The preconditioned Neumann network estimator β(y) has the form where β R (y) collects all terms that depend on R. The preconditioned Neumann network more directly approximates ρ * (y) since the initial iterateβ = T λ X y = (X X + λI) −1 X y already well-approximates X + y provided λ > 0 is small. Here we present an extension to the Neumann network which incorporates a learned patchwise regularizer. For large images, learning an accurate regularizer may require more samples than are practical to gather due to cost or time constraints, leading to inaccurate reconstructions or overfitting. However, empirical evidence suggests there is considerable low-rank and other subspace structure shared among small patches of natural images. Redundancy and subspace structure across image patches permits learning parameters of statistical models for image patches using training data, like Gaussian mixture models with low-rank covariance structure. We propose leveraging the highly structured nature of image patches in the learned component of the Neumann network. Specifically, the patchwise learned regularizer first divides the input image into overlapping patches, subtracting the mean from each patch (a standard preprocessing technique in patch-based methods ), and passing each mean-subtracted patch through the learned component (e.g., neural network). The original patch means are added to the regularizer outputs, which are recombined. Figure 2 compares the presented learning-based methods at different training set sizes. Methods that do not incorporate the forward model, like ResAuto and CSGM, appear not to perform well in the low-sample regime. We also demonstrate that patchwise regularization enables reconstruction of large images with very small training sets. In this experiment, the training set consists only of a single clean image, taken from the SpaceNet dataset.Test PSNR is 31.90 ± 1.42 dB for the 8x8 patchwise regularized NN, and 18.34 ± 1.31 for the full-image regularized NN across a test set of size 64. Fig. 3 contains some sample reconstructions of an image from the test set. This work explores the Neumann network architecture to solve linear inverse problems, which can be interpreted as an approximation of the MSE optimal reconstruction according to our Proposition 1. The Neumann network architecture also permits a learned patchwise regularizer, which learns the low-dimensional conditional distributions over image patches instead of the whole image. The Neumann network is empirically competitive with other state-of-the-art methods for inverse problems in imaging, and we demonstrate the ability to learn to regularize from a single training pair.
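The patchwise regularization step described above can be sketched as follows; the 8x8 patches with stride 4, the tiny patch network, and the assumption that the patches tile the image exactly are all illustrative choices rather than the exact implementation.

```python
# Rough sketch of the patchwise regularizer: extract overlapping patches,
# subtract each patch mean, apply a learned mapping, add the means back, and
# average overlapping outputs when reassembling. Assumes (H - patch) and
# (W - patch) are divisible by the stride so every pixel is covered.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PatchwiseRegularizer(nn.Module):
    def __init__(self, patch=8, stride=4, channels=1):
        super().__init__()
        self.patch, self.stride = patch, stride
        d = channels * patch * patch
        self.net = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))

    def forward(self, x):                                        # x: (batch, C, H, W)
        b, c, h, w = x.shape
        patches = F.unfold(x, self.patch, stride=self.stride)    # (b, d, L)
        means = patches.mean(dim=1, keepdim=True)                # per-patch mean
        out = self.net((patches - means).transpose(1, 2)).transpose(1, 2) + means
        recon = F.fold(out, (h, w), self.patch, stride=self.stride)
        count = F.fold(torch.ones_like(patches), (h, w), self.patch, stride=self.stride)
        return recon / count                                     # average the overlaps
```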
Neumann networks are an end-to-end, sample-efficient learning approach to solving linear inverse problems in imaging that are compatible with the MSE optimal approach and admit an extension to patch-based learning.
End-to-end task-oriented dialogue is challenging since knowledge bases are usually large, dynamic and hard to incorporate into a learning framework. We propose the global-to-local memory pointer (GLMP) networks to address this issue. In our model, a global memory encoder and a local memory decoder are proposed to share external knowledge. The encoder encodes dialogue history, modifies global contextual representation, and generates a global memory pointer. The decoder first generates a sketch response with unfilled slots. Next, it passes the global memory pointer to filter the external knowledge for relevant information, then instantiates the slots via the local memory pointers. We empirically show that our model can improve copy accuracy and mitigate the common out-of-vocabulary problem. As a , GLMP is able to improve over the previous state-of-the-art models in both simulated bAbI Dialogue dataset and human-human Stanford Multi-domain Dialogue dataset on automatic and human evaluation. Task-oriented dialogue systems aim to achieve specific user goals such as restaurant reservation or navigation inquiry within a limited dialogue turns via natural language. Traditional pipeline solutions are composed of natural language understanding, dialogue management and natural language generation BID32 BID28, where each module is designed separately and expensively. In order to reduce human effort and scale up between domains, end-to-end dialogue systems, which input plain text and directly output system responses, have shown promising based on recurrent neural networks BID10 BID14 and memory networks BID24. These approaches have the advantages that the dialogue states are latent without hand-crafted labels and eliminate the needs to model the dependencies between modules and interpret knowledge bases (KB) manually. However, despite the improvement by modeling KB with memory network BID0, end-to-end systems usually suffer from effectively incorporating external KB into the system response generation. The main reason is that a large, dynamic KB is equal to a noisy input and hard to encode and decode, which makes the generation unstable. Different from chit-chat scenario, this problem is especially harmful in task-oriented one, since the information in KB is usually the expected entities in the response. For example, in TAB0 the driver will expect to get the correct address to the gas station other than a random place such as a hospital. Therefore, pointer networks BID26 or copy mechanism BID8 ) is crucial to successfully generate system responses because directly copying essential words from the input source to the output not only reduces the generation difficulty, but it is also more like a human behavior. For example, in TAB0, when human want to reply others the Valero's address, they will need to "copy" the information from the table to their response as well. Therefore, in the paper, we propose the global-to-local memory pointer (GLMP) networks, which is composed of a global memory encoder, a local memory decoder, and a shared external knowledge. Unlike existing approaches with copy ability BID9 BID8, which the only information passed to decoder is the encoder hidden states, our model shares the external knowledge and leverages the encoder and the external knowledge to learn a global memory pointer and global contextual representation. Global memory pointer modifies the external knowledge by softly filtering words that are not necessary for copying. 
Afterward, instead of generating system responses directly, the local memory decoder first uses a sketch RNN to obtain sketch responses without slot values but sketch tags, which can be considered as learning a latent dialogue management to generate dialogue action template. Then the decoder generates local memory pointers to copy words from external knowledge and instantiate sketch tags. We empirically show that GLMP can achieve superior performance using the combination of global and local memory pointers. In simulated out-of-vocabulary (OOV) tasks in the bAbI dialogue dataset BID0, GLMP achieves 92.0% per-response accuracy and surpasses existing end-to-end approaches by 7.5% in full dialogue. In the human-human dialogue dataset, GLMP is able to surpass the previous state of the art on both automatic and human evaluation, which further confirms the effectiveness of our double pointers usage. Our model 1 is composed of three parts: global memory encoder, external knowledge, and local memory decoder, as shown in FIG0 (a). The dialogue history X = (x 1, . . ., x n) and the KB information B = (b 1, . . ., b l) are the input, and the system response Y = (y 1, . . ., y m) is the expected output, where n, l, m are the corresponding lengths. First, the global memory encoder uses a context RNN to encode dialogue history and writes its hidden states into the external knowledge. Then the last hidden state is used to read the external knowledge and generate the global memory pointer at the same time. On the other hand, during the decoding stage, the local memory decoder first generates sketch responses by a sketch RNN. Then the global memory pointer and the sketch RNN hidden state are passed to the external knowledge as a filter and a query. The local memory pointer returns from the external knowledge can copy text from the external knowledge to replace the sketch tags and obtain the final system response. Our external knowledge contains the global contextual representation that is shared with the encoder and the decoder. To incorporate external knowledge into a learning framework, end-to-end memory networks (MN) are used to store word-level information for both structural KB (KB memory) and temporal-dependent dialogue history (dialogue memory), as shown in FIG0. In addition, the MN is well-known for its multiple hop reasoning ability BID24, which is appealing to strengthen copy mechanism. Global contextual representation. In the KB memory module, each element b i ∈ B is represented in the triplet format as (Subject, Relation, Object) structure, which is a common format used to represent KB nodes BID18. For example, the KB in the TAB0 will be denoted as {(Tom's house, distance, 3 miles),..., (Starbucks, address, 792 Bedoin St)}. On the other hand, the dialogue context X is stored in the dialogue memory module, where the speaker and temporal encoding are included as in BID0 like a triplet format. For instance, the first utterance from the driver in the TAB0 will be denoted as {($user, turn1, I), ($user, turn1, need), ($user, turn1, gas)}. For the two memory modules, a bag-of-word representation is used as the memory embeddings. During the inference time, we copy the object word once a memory position is pointed to, for example, 3 miles will be copied if the triplet (Toms house, distance, 3 miles) is selected. We denote Object function as getting the object word from a triplet. Knowledge read and write. Our external knowledge is composed of a set of trainable embedding matrices C = (C 1, . . 
., C K+1), where C k ∈ R |V |×d emb, K is the maximum memory hop in the MN, |V | is the vocabulary size and d emb is the embedding dimension. We denote memory in the external knowledge as M = [B; X] = (m 1, . . ., m n+l), where m i is one of the triplet components mentioned. To read the memory, the external knowledge needs a initial query vector q 1. Moreover, it can loop over K hops and computes the attention weights at each hop k using DISPLAYFORM0 where DISPLAYFORM1 is the embedding in i th memory position using the embedding matrix C k, q k is the query vector for hop k, and B is the bag-of-word function. Note that p k ∈ R n+l is a soft memory attention that decides the memory relevance with respect to the query vector. Then, the model reads out the memory o k by the weighted sum over c k+1 and update the query vector q k+1. Formally, DISPLAYFORM2 In FIG1 (a), a context RNN is used to model the sequential dependency and encode the context X. Then the hidden states are written into the external knowledge as shown in FIG0 (b). Afterward, the last encoder hidden state serves as the query to read the external knowledge and get two outputs, the global memory pointer and the memory readout. Intuitively, since it is hard for MN architectures to model the dependencies between memories, which is a serious drawback especially in conversational related tasks, writing the hidden states to the external knowledge can provide sequential and contextualized information. With meaningful representation, our pointers can correctly copy out words from external knowledge, and the common OOV challenge can be mitigated. In addition, using the encoded dialogue context as a query can encourage our external knowledge to read out memory information related to the hidden dialogue states or user intention. Moreover, the global memory pointer that learns a global memory distribution is passed to the decoder along with the encoded dialogue history and KB information. Context RNN. A bi-directional gated recurrent unit (GRU) BID2 ) is used to encode dialogue history into the hidden states H = (h e 1, . . ., h e 1), and the last hidden state h e n is used to query the external knowledge as the encoded dialogue history. In addition, the hidden states H are written into the dialogue memory module in the external knowledge by summing up the original memory representation with the corresponding hidden states. In formula, c DISPLAYFORM0 Global memory pointer. Global memory pointer G = (g 1, . . ., g n+l) is a vector containing real values between 0 and 1. Unlike conventional attention mechanism that all the weights sum to one, each element in G is an independent probability. We first query the external knowledge using h e n until the last hop, and instead of applying the Softmax function as in FORMULA0, we perform an inner product followed by the Sigmoid function. The memory distribution we obtained is the global memory pointer G, which is passed to the decoder. To further strengthen the global pointing ability, we add an auxiliary loss to train the global memory pointer as a multi-label classification task. We show in the ablation study that adding this additional supervision does improve the performance. Lastly, the memory readout q K+1 is used as the encoded KB information. In the auxiliary task, we define the label DISPLAYFORM1 by checking whether the object words in the memory exists in the expected system response Y. Then the global memory pointer is trained using binary cross-entropy loss Loss g between G and G label. 
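The multi-hop read just described, including the sigmoid-based global memory pointer, can be condensed into the following PyTorch sketch; the bag-of-words triplet embedding is kept, while padding, masking and other implementation details are omitted for brevity.

```python
# Sketch of a K-hop read over the external knowledge M = [B; X]. `memory`
# holds token indices of shape (batch, n+l, triplet_len); the read returns
# the readout q^{K+1} and the global memory pointer G (sigmoid at the last
# hop instead of softmax).
import torch
import torch.nn as nn


class ExternalKnowledge(nn.Module):
    def __init__(self, vocab_size, d_emb, hops=3):
        super().__init__()
        self.hops = hops
        self.C = nn.ModuleList([nn.Embedding(vocab_size, d_emb)
                                for _ in range(hops + 1)])       # C^1 ... C^{K+1}

    def forward(self, memory, query):
        G = None
        for k in range(self.hops):
            c_k = self.C[k](memory).sum(2)                       # bag-of-words: (b, n+l, d)
            logits = torch.einsum("bld,bd->bl", c_k, query)
            p_k = torch.softmax(logits, dim=1)                   # soft memory attention
            if k == self.hops - 1:
                G = torch.sigmoid(logits)                        # global memory pointer
            o_k = torch.einsum("bl,bld->bd", p_k, self.C[k + 1](memory).sum(2))
            query = query + o_k                                  # q^{k+1} = q^k + o^k
        return query, G
```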
In formula, DISPLAYFORM2 Given the encoded dialogue history h e n, the encoded KB information q K+1, and the global memory pointer G, our local memory decoder first initializes its sketch RNN using the concatenation of h e n and q K+1, and generates a sketch response that excludes slot values but includes the sketch tags. For example, sketch RNN will generate "@poi is @distance away", instead of "Starbucks is 1 mile away." At each decoding time step, the hidden state of the sketch RNN is used for two purposes: 1) predict the next token in vocabulary, which is the same as standard sequence-to-sequence (S2S) learning; 2) serve as the vector to query the external knowledge. If a sketch tag is generated, the global memory pointer is passed to the external knowledge, and the expected output word will be picked up from the local memory pointer. Otherwise, the output word is the word that generated by the sketch RNN. For example in FIG1 (b), a @poi tag is generated at the first time step, therefore, the word Starbucks is picked up from the local memory pointer as the system output word. DISPLAYFORM0 We use the standard cross-entropy loss to train the sketch RNN, we define Loss v as. DISPLAYFORM1 We replace the slot values in Y into sketch tags based on the provided entity table. The sketch tags ST are all the possible slot types that start with a special token, for example, @address stands for all the addresses and @distance stands for all the distance information. Local memory pointer. Local memory pointer L = (L 1, . . ., L m) contains a sequence of pointers. At each time step t, the global memory pointer G first modify the global contextual representation using its attention weights, DISPLAYFORM2 and then the sketch RNN hidden state h d t queries the external knowledge. The memory attention in the last hop is the corresponding local memory pointer L t, which is represented as the memory distribution at time step t. To train the local memory pointer, a supervision on top of the last hop memory attention in the external knowledge is added. We first define the position label of local memory pointer L label at the decoding time step t as DISPLAYFORM3 The position n+l+1 is a null token in the memory that allows us to calculate loss function even if y t does not exist in the external knowledge. Then, the loss between L and L label is defined as DISPLAYFORM4 Furthermore, a record R ∈ R n+l is utilized to prevent from copying same entities multiple times. All the elements in R are initialized as 1 in the beginning. During the decoding stage, if a memory position has been pointed to, its corresponding position in R will be masked out. During the inference time,ŷ t is defined aŝ DISPLAYFORM5 where is the element-wise multiplication. Lastly, all the parameters are jointly trained by minimizing the weighted-sum of three losses (α, β, γ are hyper-parameters): DISPLAYFORM6 3 EXPERIMENTS We use two public multi-turn task-oriented dialogue datasets to evaluate our model: the bAbI dialogue BID0 and Stanford multi-domain dialogue (SMD). The bAbI dialogue includes five simulated tasks in the restaurant domain. Task 1 to 4 are about calling API calls, modifying API calls, recommending options, and providing additional information, respectively. Task 5 is the union of tasks 1-4. There are two test sets for each task: one follows the same distribution as the training set and the other has OOV entity values. On the other hand, SMD is a human-human, multi-domain dialogue dataset. 
It has three distinct domains: calendar scheduling, weather information retrieval, and point-of-interest navigation. The key difference between these two datasets is, the former has longer dialogue turns but the regular user and system behaviors, the latter has few conversational turns but variant responses, and the KB information is much more complicated. The model is trained end-to-end using Adam optimizer BID12, and learning rate annealing starts from 1e −3 to 1e −4. The number of hop K is set to 1,3,6 to compare the performance difference. The weights α, β, γ summing up the three losses are set to 1. All the embeddings are initialized randomly, and a simple greedy strategy is used without beam-search during the decoding stage. The hyper-parameters such as hidden size and dropout rate are tuned with grid-search over Table 2: Per-response accuracy and completion rate (in the parentheses) on bAbI dialogues. GLMP achieves the least out-of-vocabulary performance drop. Baselines are reported from Query Reduction Network BID20, End-to-end Memory Network BID0, Gated Memory Network BID15, Point to Unknown Word BID9, and Memory-to-Sequence. DISPLAYFORM0 Ptr-Unk Mem2Seq GLMP K1 GLMP K3 GLMP K6 T1 99.4 (-) 99.9 (99.6) 100 FORMULA0 100 FORMULA0 100 FORMULA0 100 FORMULA0 100 FORMULA0 100 FORMULA0 100 T2 99.5 (-) 100 FORMULA0 100 FORMULA0 100 FORMULA0 100 FORMULA0 100 FORMULA0 100 FORMULA0 100 FORMULA0 100 FORMULA0 the development set (per-response accuracy for bAbI Dialogue and BLEU score for the SMD). In addition, to increase model generalization and simulate OOV setting, we randomly mask a small number of input source tokens into an unknown token. The model is implemented in PyTorch and the hyper-parameters used for each task and the dataset statistics are reported in the Appendix. bAbI Dialogue. In Table 2, we follow BID0 to compare the performance based on per-response accuracy and task-completion rate. Note that for utterance retrieval methods, such as QRN, MN, and GMN, cannot correctly recommend options (T3) and provide additional information (T4), and a poor generalization ability is observed in OOV setting, which has around 30% performance difference in Task 5. Although previous generation-based approaches (Ptr-Unk, Mem2Seq) have mitigated the gap by incorporating copy mechanism, the simplest cases such as generating and modifying API calls (T1, T2) still face a 6-17% OOV performance drop. On the other hand, GLMP achieves a highest 92.0% task-completion rate in full dialogue task and surpasses other baselines by a big margin especially in the OOV setting. No per-response accuracy loss for T1, T2, T4 using only the single hop, and only decreases 7-9% in task 5.Stanford Multi-domain Dialogue. For human-human dialogue scenario, we follow previous dialogue works BID10 to evaluate our system on two automatic evaluation metrics, BLEU and entity F1 score 2. As shown in TAB3, GLMP achieves a highest 14.79 BLEU and 59.97% entity F1 score, which is a slight improvement in BLEU but a huge gain in entity F1. In fact, for unsupervised evaluation metrics in task-oriented dialogues, we argue that the entity F1 might be a more comprehensive evaluation metric than per-response accuracy or BLEU, as shown in that humans are able to choose the right entities but have very diversified responses. Note that the of rule-based and KVR are not directly comparable because they simplified the task by mapping the expression of entities to a canonical form using named entity recognition and linking 3. 
Moreover, human evaluation of the generated responses is reported. We compare our work with previous state-of-the-art model Mem2Seq 4 and the original dataset responses as well. We randomly select 200 different dialogue scenarios from the test set to evaluate three different responses. Amazon Mechanical Turk is used to evaluate system appropriateness and human-likeness on a scale from 1 to 5. As the shown in TAB3, we see that GLMP outperforms Mem2Seq in both measures, which is coherent to previous observation. We also see that human performance on this assessment sets the upper bound on scores, as expected. More details about the human evaluation are reported in the Appendix. Ablation Study. The contributions of the global memory pointer G and the memory writing of dialogue history H are shown in TAB4. We compare the using GLMP with K = 1 in bAbI OOV setting and SMD. GLMP without H means that the context RNN in the global memory encoder does not write the hidden states into the external knowledge. As one can observe, our model without H has 5.3% more loss in the full dialogue task. On the other hand, GLMP without G means that we do not use the global memory pointer to modify the external knowledge, and an 11.47% entity F1 drop can be observed in SMD dataset. Note that a 0.4% increase can be observed in task 5, it suggests that the use of global memory pointer may impose too strong prior entity probability. Even if we only report one experiment in the table, this OOV generalization problem can be mitigated by increasing the dropout ratio during training. Visualization and Qualitative Evaluation. Analyzing the attention weights has been frequently used to interpret deep learning models. In Figure 3, we show the attention vector in the last hop for each generation time step. Y-axis is the external knowledge that we can copy, including the KB information and the dialogue history. Based on the question "what is the address?" asked by the driver in the last turn, the gold answer and our generated response are on the top, and the global memory pointer G is shown in the left column. One can observe that in the right column, the final memory pointer successfully copy the entity chevron in step 0 and its address 783 Arcadia Pl in step 3 to fill in the sketch utterance. On the other hand, the memory attention without global weighting is reported in the middle column. One can find that even if the attention weights focus on several point of interests and addresses in step 0 and step 3, the global memory pointer can mitigate the issue as expected. More dialogue visualization and generated including several negative examples and error analysis are reported in the Appendix. Task-oriented dialogue systems. Machine learning based dialogue systems are mainly explored by following two different approaches: modularized and end-to-end. For the modularized systems BID29 BID28, a set of modules for natural language understanding BID32 BID1, dialogue state tracking BID13 BID35, dialogue management BID23, and natural language generation BID22 are used. These approaches achieve good stability via combining domain-specific knowledge and slot-filling techniques, but additional human labels are needed. On the other hand, end-to-end approaches have shown promising recently. 
Some works view the task as a next utterance retrieval problem, for examples, recurrent entity networks share parameters between RNN BID30, query reduction networks modify query between layers BID20, and memory networks BID0 BID15 perform multi-hop design to strengthen reasoning ability. In addition, some approaches treat the task as a sequence generation problem. BID14 Delexicalized Generation: @poi is at @address Final Generation: chevron is at 783_arcadia_pl Gold: 783_arcadia_pl is the address for chevron gas_station Figure 3: Memory attention visualization in the SMD navigation domain. Left column is the global memory pointer G, middle column is the memory pointer without global weighting, and the right column is the final memory pointer. these approaches can encourage more flexible and diverse system responses by generating utterances token-by-token. Pointer network. BID26 uses attention as a pointer to select a member of the input source as the output. Such copy mechanisms have also been used in other natural language processing tasks, such as question answering BID3 BID10, neural machine translation BID9 BID8, language modeling BID17, and text summarization BID19. In task-oriented dialogue tasks, first demonstrated the potential of the copy-augmented Seq2Seq model, which shows that generationbased methods with simple copy strategy can surpass retrieval-based ones. Later, augmented the vocabulary distribution by concatenating KB attention, which at the same time increases the output dimension. Recently, combines end-to-end memory network into sequence generation, which shows that the multi-hop mechanism in MN can be utilized to improve copy attention. These models outperform utterance retrieval methods by copying relevant entities from the KBs. Others. BID10 proposes entity indexing and introduces recorded delexicalization to simplify the problem by record entity tables manually. In addition, our approach utilized recurrent structures to query external memory can be viewed as the memory controller in Memory augmented neural networks (MANN) BID6. Similarly, memory encoders have been used in neural machine translation BID27 and meta-learning applications. However, different from other models that use a single matrix representation for reading and writing, GLMP leverages end-to-end memory networks to perform multiple hop attention, which is similar to the stacking self-attention strategy in the Transformer BID25. In the work, we present an end-to-end trainable model called global-to-local memory pointer networks for task-oriented dialogues. The global memory encoder and the local memory decoder are designed to incorporate the shared external knowledge into the learning framework. We empirically show that the global and the local memory pointer are able to effectively produce system responses even in the out-of-vocabulary scenario, and visualize how global memory pointer helps as well. As a , our model achieves state-of-the-art in both the simulated and the human-human dialogue datasets, and holds potential for extending to other tasks such as question answering and text summarization. A.1 TRAINING PARAMETERS Table 5: Selected hyper-parameters in each dataset for different hops. The values is the embedding dimension and the GRU hidden size, and the values between parenthesis is the dropout rate. For all the models we used learning rate equal to 0.001, with a decay rate of 0.5. 
A.2 DATASET STATISTICS For bAbI dialogues, the mistakes mainly come from task 3, which is recommending restaurants based on their rating from high to low. We found that the system will sometimes keep recommending restaurants with higher scores even if the user rejected them in previous turns. On the other hand, SMD is more challenging for response generation. First, we found that the model makes mistakes when the KB has several options corresponding to the user intention. For example, once the user has more than one doctor appointment in the table, the model can barely distinguish between them. In addition, since we do not include domain-specific and user-intention supervision, wrong delexicalized responses may be generated, which results in an incorrect entity copy. Lastly, we found that the copied entities may not match the generated sketch tags. For example, an address tag may result in a distance entity being copied. We leave this room for improvement to future work. One of the reviewers suggested that we compare our work to existing dialogue frameworks such as PyDial 5. To the best of our knowledge, the PyDial framework requires dialogue act labels for the NLU module and belief state labels for the belief tracker module. The biggest challenge is that we do not have such labels in the SMD and bAbI datasets. Moreover, the semantic tracker in PyDial is rule-based, which requires rewriting the rules whenever it encounters a new domain or a new dataset. Even though its dialogue management module could be a learned solution such as a policy network, the input of the policy network is still hand-crafted state features and labels. Therefore, without the rules and labels predefined in the NLU and belief tracker modules, PyDial could not learn a good policy network. Frankly speaking, based on the data we have (which is not very large) and the current state-of-the-art machine learning algorithms and models, we believe that a carefully constructed task-oriented dialogue system built with PyDial in a known domain, using human rules (in the NLU and belief tracker) together with policy networks, may outperform end-to-end systems and be more robust. However, in this paper we want to explore the potential and the advantages of end-to-end systems without additional human labels and human rules. Besides being easy to train, we believe end-to-end approaches will have better adaptability than rule-based systems in multi-domain or even zero-shot domain cases.
Delexicalized Generation: the nearest @poi_type is @poi, @distance away at @address Final Generation: the nearest grocery_store is willows_market, 3_miles away at 409_bollard_st Gold: we are 3_miles away from willows_market but there is a car_collision_nearby Figure 5: Memory attention visualization from the SMD navigation domain.
Delexicalized Generation: the nearest @poi_type is @poi, @distance away at @address Final Generation: the nearest grocery_store is sigona_farmers_market, 4_miles away at 819_alma_st Gold: there are whole_foods 2_miles away and sigona_farmers_market 4_miles away where do we go?
Delexicalized Generation: okay, setting a reminder for your @event with @party @date at @time Final Generation: okay, setting a reminder for your dinner with marie 7pm at the_6th Gold: i will set a reminder for dinner at 7pm, for the_6th of this month with marie Figure 9: Memory attention visualization from the SMD schedule domain.
Delexicalized Generation: your @event is on @date at @time Final Generation: your tennis_activity is on the_4th at 5pm Gold: your tennis_activity is on the_4th at 5pm and your sister will be attending Delexicalized Generation: your next @event is on @date at @time Final Generation: your next tennis_activity is on the_13th at 6pm Gold: your next tennis_activity is on the_4th at 5pm with your mother Delexicalized Generation: your @event is on @date at @time with @party Final Generation: your doctor is on the_5th at 6pm with alex Gold: are you talking about the doctor_appointment on wednesday or the one on the_5th? Delexicalized Generation: it will not be @weather_attribute in @location @weekly_time Final Generation: it will not be drizzle in redwood_city weekend Gold: there will be no drizzle in redwood_city this weekend Delexicalized Generation: there will be @weather_attribute in @location on @date Final Generation: there will be clear_skies in danville on thursday Gold: dew is predicted in danville on thursday FIG0: Memory attention visualization from the SMD weather domain.
GLMP: Global memory encoder (context RNN, global pointer) and local memory decoder (sketch RNN, local pointer) that share external knowledge (MemNN) are proposed to strengthen response generation in task-oriented dialogue.
The checkerboard phenomenon is one of the well-known visual artifacts in the computer vision field. The origins and solutions of checkerboard artifacts in the pixel space have been studied for a long time, but their effects on the gradient space have rarely been investigated. In this paper, we revisit the checkerboard artifacts in the gradient space which turn out to be the weak point of a network architecture. We explore image-agnostic property of gradient checkerboard artifacts and propose a simple yet effective defense method by utilizing the artifacts. We introduce our defense module, dubbed Artificial Checkerboard Enhancer (ACE), which induces adversarial attacks on designated pixels. This enables the model to deflect attacks by shifting only a single pixel in the image with a remarkable defense rate. We provide extensive experiments to support the effectiveness of our work for various attack scenarios using state-of-the-art attack methods. Furthermore, we show that ACE is even applicable to large-scale datasets including ImageNet dataset and can be easily transferred to various pretrained networks. The checkerboard phenomenon is one of the well-known artifacts that arise in various applications such as image super resolution, generation and segmentation BID29 BID27. In general, the checkerboard artifacts indicate an uneven pattern on the output of deep neural networks (DNNs) that occurs during the feed-forward step. BID21 have investigated in-depth the origin of the phenomenon that the artifacts come from the uneven overlap of the deconvolution operations (i.e., transposed convolution,) on pixels. Its solutions have been suggested in various studies BID35 BID0.Interestingly, a possible relationship between the checkerboard artifacts and the robustness of neural networks has been noted by BID21 but not has it been seriously investigated. Moreover, while the previous works BID16 BID20 BID21 BID8 have concentrated on the artifacts in the pixel space, studies have been rare on the artifacts in the gradient space that occur during the backward pass of the convolution operation. To show that the gradient checkerboard artifacts phenomenon is crucial for investigating the network robustness and is indeed a weak point of a neural network, we focus on analyzing its effects on the gradient space in terms of adversarial attack and defense. By explicitly visualizing the gradients, we demonstrate that the phenomenon is inherent in many contemporary network architectures such as ResNet BID12, which use strided convolutions with uneven overlap. It turns out that the gradient checkerboard artifacts substantially influence the shape of the loss surface, and the effect is image-agnostic. Based on the analysis, we propose an Artificial Checkerboard Enhancer module, dubbed ACE. This module further boosts or creates the checkerboard artifacts in the target network and manipulates the gradients to be caged in the designated area. Because ACE guides the attacks to the intended environment, the defender can easily dodge the attack by shifting a single pixel (Figure 1) with a negligible accuracy loss. Moreover, we demonstrate that our module is scalable to large-scale datasets such as ImageNet BID6 ) and also transferable to other models in a plug and play fashion without additional fine-tuning of pretrained networks. Therefore, our module is highly practical in general scenarios in that we can easily plug any pretrained ACE module into the target architecture. Base Network (e.g. 
ResNet, VGG) ACE Module The first layer of the ACE module has kernel size 1 and stride 2 Gradient Checkerboard Artifact (GCA) Generate adversarial image from the model's enhanced gradient checkerboard artifact Adv Image GCA Induce attack on GCA Zero-pad a single row/column Remove opposite row/column Adv Image Input ImageFigure 1: Defense procedure using the proposed Artificial Checkerboard Enhancer (ACE) module. ACE shapes the gradient into a checkerboard pattern, thus attracting adversarial attacks to the checkerboard artifacts. Since the defender is aware of the guided location in advance, adversarial attacks can be easily deflected during inference by padding the image with a single row/column and discarding the opposite row/column. Our contributions are summarized as three-fold:Analysis of gradient checkerboard artifacts. We investigate the gradient checkerboard artifacts in depth which are inherent in many of the contemporary network architectures with the uneven overlap of convolutions. To the best of our knowledge, this is the first attempt to analyze the artifacts of the gradient space in terms of network robustness. We empirically show that the gradient checkerboard artifacts incur vulnerability to the network. Artificial Checkerboard Enhancer (ACE). We introduce ACE module that strengthens gradient artifacts and induces adversarial attacks to our intended spaces. After guiding the attacks to the pre-specified area using ACE, we deflect adversarial attacks by one-pixel padding. Our extensive experimental support that our proposed defense mechanism using ACE module successfully defends various adversarial attacks on CIFAR-10 and ImageNet BID6 datasets. Scalability. We show that ACE is readily transferable to any pretrained model without fine-tuning, which makes the module scalable to a large-scale dataset. To the best of our knowledge, this is the first defense method that attempts and succeeds to defend the attacks with the projected gradient descent algorithm (PGD) BID17 BID1 on ImageNet dataset. Adversarial attacks can be conducted in various ways depending on how much the adversary, also known as the threat model, has access to the target model. If the attacker can acquire the gradients of the target, gradient-based attacks can be performed, which are usually very effective because iterative optimization becomes possible BID10 BID14; BID22 BID5. Score-based attacks can be valid when the adversary can use the logits or the predicted probabilities for generating adversarial examples BID32. If generated adversarial examples are not from the target model, we call this transfer-based attack. Recently, a new type of attack, called a decisionbased attack, has been introduced where the adversary only has knowledge about the final decision of the model (e.g., top-1 class label) BID3.According to, defense methods can be largely categorized into gradient masking and adversarial training. Gradient masking methods usually make adversaries difficult to compute exact gradients and make it challenging to fool the target. Gradient obfuscation, which is a recently introduced term of gradient masking by BID2, includes specific gradient categories such as stochastic gradients, shattered gradients and vanishing/exploding gradients. 
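As a concrete illustration of the deflection step in Figure 1, the following sketch pads a (possibly adversarial) input with one row and one column of zeros and drops the opposite row and column; which side is padded is an arbitrary choice here.

```python
# Sketch of the inference-time deflection: shift the whole image by one pixel
# so that perturbations caged on the checkerboard lattice no longer align
# with the vulnerable pixel locations.
import torch
import torch.nn.functional as F


def shift_one_pixel(x):
    """x: (batch, C, H, W) -> same shape, contents shifted by one pixel."""
    # Pad one column on the left and one row on the top ...
    padded = F.pad(x, (1, 0, 1, 0))            # (left, right, top, bottom)
    # ... and remove the rightmost column and bottom row.
    return padded[:, :, :x.shape[2], :x.shape[3]]
```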
Works related to our defense method can be considered as input transformations which focus on the input image BID36 BID25 BID4 BID37 BID11.On the other hand, adversarial training has been known to make models robust against adversarial perturbation BID17 BID19 BID33, but there remains an issue that the robustness comes with the cost of accuracy BID34 BID31. Moreover, according to BID28, restricting on l ∞ bounded perturbations during adversarial training has limited robustness to attacks with different distortion metrics. We would like to define the following terms here and will use them without further explanation throughout this paper. Gradient Overlap (Ω(x i)) represents the number of parameters associated with a single pixel in the input x. For more explanation on its calculation, see Appendix C. We define the set of pixels whose gradient overlap is in the top p fraction as G(p) (e.g., G(1.0) represents the entire pixel set). Gradient Checkerboard Artifacts (GCA) is a phenomenon which shows checkerboard patterns in gradients. The existence of GCA has been introduced in BID21, although not has it been thoroughly examined. GCA occurs when a model uses a convolution operation of kernel size k that is not divisible by its stride s. We first introduce a simple experiment that motivated the design of our proposed ACE module. We conduct an experiment to visualize the attack success rate of a given image set using a single pixel perturbation attack. For each pixel of an input image, we perturb the pixel to white (i.e., a pixel with RGB value of ). Next, we use our toy model based on LeNet (see Appendix A for the detailed architecture) and ResNet-18 BID12 to measure the attack success rate on the test images in CIFAR-10 dataset. Note that the attack success rate P attack in Figure 2 is defined as the average number of successful attacks per pixel on the entire set of test images. As we can see in Figure 2a and Figure 2c, checkerboard patterns in the attack success rate are clearly observable. This pattern can be considered as image-agnostic because it is the of the average over the entire set of the test images of CIFAR-10 dataset. Then a natural question arises: What is the cause of this image-agnostic phenomenon? We speculate that the uneven gradient overlap is the cause of this phenomenon, which is directly associated with the number of parameters that are connected to a single pixel. As depicted in Figure 2b and Figure 2d, we can observe checkerboard patterns in the gradient overlap. In fact, this uneven overlap turns out to be substantially susceptible to the adversarial attacks. We will provide the supporting on this in the following sections.(a) Pattack of Toy model DISPLAYFORM0 Figure 2: Illustration of the attack success rate P attack and the gradient overlap Ω(x i) of toy model and ResNet-18. The illustrated gradient overlap of ResNet-18 comes from the features after the fifth convolutional layer. Attack success rate (a) and (c) are computed by perturbing each pixel of an image to white (i.e., ), over the entire set of test images in CIFAR-10 dataset. Note that higher probability at a pixel denotes a higher success rate when it is attacked. We can observe patterns in P attack aligned to our gradient overlap on (b) toy model and (d) ResNet-18. Table 1: Top-1 test accuracy (%) after performing various adversarial attacks on every pixel G(p = 1.0), its subset G(p = 0.3), and their differences (i.e., diff) on CIFAR-10 dataset. 
The toy model (see Appendix A) and ResNet-18 achieved 81.4% and 94.6% top-1 test accuracy, respectively. Note that all the diffs are close to zero. ResNet-18 Attack Methods DISPLAYFORM0 OnePixel BID32 56.6 58.4 1.7 57.2 59.5 2.4 JSMA BID22 0.2 0.4 0.2 3.2 9.8 6.6 DeepFool BID18 18.5 18.6 0.1 7.2 11.5 4.3 CW BID5 0.0 0.0 0.0 0.0 0.0 0.0 PGD BID17 0.0 1.6 1.6 0.0 0.0 0.0 To show that the pixels with high gradient overlaps are indeed a weak point of network, we generate adversarial examples on G(p). We evaluate top-1 accuracy of our toy model (the model defined as in the previous subsection) and ResNet-18 on CIFAR-10 dataset after performing five adversarial attacks BID32 BID22 BID18 BID5 BID17 for p ∈ {1.0, 0.3} (Table 1). Interestingly, constraining the domain of the attacks to G(0.3) barely decreases the success rate compared to the attacks on G(1.0).We can observe that the pixels with the high gradient overlaps are more susceptible (i.e., likely to be in a vulnerable domain) to the adversarial attacks. Considering all the observations, we leverage the vulnerable domain of the pixels for adversarial defense. If we can intentionally impose the GCA onto a model input and let GCA occupy the vulnerable domain, we can fully induce the attacks on it so that the induced attacks can be dodged easily by a single padding operation. In this section, we propose the Artificial Checkerboard Enhancer (ACE) module, which artificially enhances the checkerboard pattern in the input gradients so that it induces the vulnerable domain to have the identical pattern. Figure 3a illustrates our proposed ACE module, which is based on a convolutional autoencoder. The encoder consists of convolutional layers where the first layer's k is not divisible by s (k ≡ 0 mod s), for example, when k = 1 and s = 2. In order to preserve the information of the input x, we add an identity skip connection that bypasses the input of ACE module to the output. The hyperparameter λ is to control the magnitude of checkerboard artifacts in the input gradients. We plug our ACE module in front of a base convolutional network to enhance the checkerboard artifacts on its input gradients. By increasing λ, we can artificially increase the gradient checkerboard artifacts of the network. Figure 3b and 3c show the heatmaps of the input gradients of ResNet-18 when λ = 10 and λ = 0 (i.e., without ACE module), respectively. The heatmap is generated by a channel-wise absolute sum of input gradients. Note that the checkerboard pattern is clearly observed in Figure 3b.By changing the value of λ, we report the top-1 test accuracy and the proportion of the pixels having checkerboard artifacts (C) in the top-30% gradient overlaps G(0.3) (TAB0). More precisely, we denote C as the GCA imposed by ACE module, which is identical to the set of pixels that are connected to its first convolutional layer of k = 1 and s = 2. In TAB0, we can observe that 1) there is only a small decrease in the accuracy even with a large λ and 2) the pixels with the large gradient overlaps gradually coincide with the GCA as the λ increases. Furthermore, according to the in Table 1, the existing adversarial attacks tend to be induced on the pixels with the high gradient overlaps. Therefore, we can conjecture that our ACE module which builds a high gradient overlap with a significant checkerboard pattern could cage the adversarial attacks into the checkerboard artifacts, and this will be empirically proven in Section 5. We now study the effects of λ. 
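Before turning to λ, a minimal PyTorch sketch of the module just described may help fix ideas. It follows the text above — a k = 1, s = 2 strided convolution, a k = 3, s = 2 transposed convolution, and an identity skip connection — but the hidden width, the padding needed to restore a 32×32 input, and the exact placement of λ (here scaling the autoencoder branch before the skip addition) are our assumptions rather than the authors' reference implementation.

```python
import torch
import torch.nn as nn

class ACE(nn.Module):
    """Sketch of an Artificial Checkerboard Enhancer sized for 32x32 inputs.

    The encoder uses kernel size 1 with stride 2 (kernel not divisible by the
    stride), the decoder a k=3, s=2 transposed convolution, and the branch is
    scaled by lam before being added to the identity skip connection.  Hidden
    width, padding and the placement of lam are illustrative assumptions; the
    ImageNet variant in Section 5 instead constrains lam to [0, 1] and scales
    the skip connection by (1 - lam)."""
    def __init__(self, in_ch=3, hidden=16, lam=10.0):
        super().__init__()
        self.lam = lam
        self.enc = nn.Conv2d(in_ch, hidden, kernel_size=1, stride=2)
        self.dec = nn.ConvTranspose2d(hidden, in_ch, kernel_size=3, stride=2,
                                      padding=1, output_padding=1)

    def forward(self, x):
        return x + self.lam * self.dec(torch.relu(self.enc(x)))

# usage sketch: model = nn.Sequential(ACE(lam=10.0), base_network)
```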
To this end, we first visualize the classified labels with respect to the magnitude of the perturbation on pixels in checkerboard artifacts C and pixels in non-checkerboard artifacts X\C. Let x be the input image andê C = DISPLAYFORM0 where M C denotes a mask (i.e., value of i-th element of M C equals to one if x i ∈ C and zero otherwise) and denotes the element-wise multiplication. We defineê X\C in a similar way. We plot the classified label map of x +īê X\C +jê C by varyingī andj. For the experiment, we first train ACE module as an autoencoder using ImageNet BID6 ) datasets and plug ACE module into pretrained ResNet-152 as described in the following experiment section. Next, we plot classified labels for a sample image from ImageNet by varyingī andj from −100 to 100 by interval 1 (i.e., we test 40k perturbed images per each image) in FIG1. The figure signifies the classified label map with respect to the perturbation on X\C and C. Without using our ACE module (when λ = 0), the perturbation through non-artifact pixels and artifact pixels similarly affect the label map. However, when λ > 0, we can observe that the artifact pixels are susceptible to change their labels with only a small perturbation while non-artifact pixels are robust to the same perturbation. Note that the asymmetry between the artifact pixels and non-artifacts becomes more clear as λ increases. Here, we propose a novel defense method using ACE module. First, we plug ACE module into the input of a given network which enhances gradient checkerboard artifacts. Next, we let the adversary generate adversarial images using the network with ACE module. Because ACE module is likely to expose the pixels in the vulnerable domain, which is empirically shown in Appendix I, we may consider the ACE module as an inducer to the adversarial perturbations generated by the adversary into the checkerboard. Interestingly, because the pixels in the vulnerable domain are going to be aligned to the repeated checkerboard pattern, by shifting a single pixel of the adversarial sample, we can move perturbations into the non-vulnerable domain (i.e., non-artifact pixels). The proposed defense mechanism is similar to the defense method introduced in BID36. However, thanks to our ACE module, the vulnerable pixels are induced to checkerboard artifacts so that only onepixel padding is enough to avoid several adversarial attacks aiming the pixels in the vulnerable domain. We also report the defense regarding the diverse padding-sizes in Appendix D. It is worthwhile to note that large λ induces significant gradient checkerboard artifacts hence leads to more robust defense. The detailed are reported in Section 5. For thorough evaluation, we evaluate our proposed defense method in the following three attack scenarios, which are vanilla, transfer, and adaptive attacks. First, in the vanilla attack scenario, the adversary has access to the target model but not our proposed defense method (i.e., single pixel padding defense). We remark that the vanilla attack scenario is similar to the scenario used by BID36. Second, in the transfer attack scenario, the adversary generates adversarial perturbations from a source model, which is different from the target model. Finally, in the adaptive attack scenario, the adversary knows every aspect of the model and the defense method so that it can directly exploit our defense. For our experiments, Expectation Over Transformation (EOT) BID2 ) is used for the adaptive attack scenario. 
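The one-pixel deflection itself is a few lines of tensor manipulation. The sketch below (PyTorch, batches assumed to have shape N×C×H×W) zero-pads one row and one column on one side and drops the opposite row and column before the image is passed to the classifier; the shift direction is a free choice of the defender.

```python
import torch.nn.functional as F

def shift_one_pixel(x, dx=1, dy=1):
    """Zero-pad one column on the left and one row on top, then drop the
    rightmost column and bottom row, so the whole image shifts by a single
    pixel and perturbations aligned to the artifact grid fall off the
    vulnerable pixels."""
    h, w = x.shape[-2:]
    x = F.pad(x, (dx, 0, dy, 0))     # padding order: (left, right, top, bottom)
    return x[:, :, :h, :w]           # crop back to the original spatial size

# At inference the defender simply classifies the shifted image:
# logits = model(shift_one_pixel(x_adv))
```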
For the evaluation of our defense method, we use the following five attack methods as our adversary. OnePixel BID32, JSMA BID22, DeepFool , CW BID5 and PGD BID17 1. In addition, we conduct experiments on CIFAR-10 and ImageNet BID6 ) datasets for the attack scenarios. For the models evaluated in CIFAR-10 dataset, we train the models with the two layered ACE module from scratch. Note that the first convolutional layer in ACE has k = 1 and s = 2, and the following deconvolutional layer has k = 3 and s = 2 so that it enhances gradient checkerboard artifacts. We would like to recall that the top-1 accuracy of VGG-11 BID30 and ResNet-18 BID12 with ACE module in respect of different λ are reported in TAB0. Meanwhile, for a large-scale dataset such as ImageNet, training an entire network is very expensive. Hence, we train the ACE module as autoencoder with UNet architecture BID26 and plug ACE into the input of a pretrained network without any additional training procedure. In order to retain the scale of the input image of the pretrained network, we slightly modify the ACE module by constraining λ ∈ and multiplying (1 − λ) to the identity skip connection. In this way, our ACE module becomes capable of handling large-scale datasets in a plug and play fashion on any model. We now introduce two evaluation metrics named attack survival rate and defense success rate which denote that top-1 accuracy after attack divided by the original top-1 accuracy of the model Table 3: Attack survival rate (%) and defense success rate (%) (larger is better) by one-pixel padding defense on CIFAR-10 dataset with varying λ. Note that λ = 0 is the equivalent setting to single padding pixel experiments in BID36 Table 4: Attack survival rate (%) and defense success rate (%) (larger is better) by one-pixel padding defense on ImageNet dataset with varying λ. ResNet-18-ACE 0.0 0.0 10.2 23.9 32.1 93.5 98.3 VGG-11-ACE 0.8 1.7 1.9 9.7 85.9 79.5 98.8 and top-1 accuracy after defending attack divided by the original accuracy, respectively. We note that all experimental reported in this section are reproduced by ourselves 2.Vanilla attack scenario. We evaluate our defense method in both CIFAR-10 and ImageNet dataset. For CIFAR-10 experiments, we train an entire network including ACE. For ImageNet experiments, we only train ACE module as conventional autoencoder by minimizing the mean squared error without training an entire network. Table 3 shows that our proposed method defends various attack methods successfully on CIFAR-10 dataset. We remark that by choosing a large λ (e.g., λ = 100), the top-1 accuracy after defense on performed attacks is barely dropped. In table 4, we report same experiments conducted on ImageNet dataset. We use the pretrained models of VGG-19 and ResNet-152 3 repository whose top-1 test accuracy on ImageNet dataset is 72.38% and 78.31%, respectively. We abbreviate the of OnePixel, JSMA and DeepFool due to the infeasibly high cost of time and memory limit for those algorithms to craft numerous adversarial images in ImageNet dataset. Comparison with other defense methods BID25 BID36 BID4 for CIFAR-10 and ImageNet datasets are reported in Appendix E. To investigate the effectiveness of λ regards to defense success rate, we evaluate PGD attack for λ ∈ {0, 1, 2, 5, 10, 20, 100} and report the defense success rates in Table 5. The shows that: Defense success rate (%) on CIFAR-10 test dataset after one-pixel padding defense on transfer attacks from the source model to the target model via JSMA, CW and PGD. 
The number followed by name of network denotes the intensity of λ, e.g., VGG-11-100 denotes VGG-11 + ACE with λ = 100. Note that adversarial examples generated by different λ are not transferable to other models.when λ increases, the defense success rate improves as well. To the best of our knowledge, this is the first work that defends PGD up to 98%.Transfer attack scenario. It has been demonstrated that conventional deep networks are vulnerable to transfer attacks proposed by BID23. To show that our method is robust to transfer attacks, we conduct transfer attacks for VGG-11 and ResNet-18 by choosing λ ∈ {0, 10, 100} on CIFAR-10 dataset. We report the defense success rate after one-pixel padding defense of transfer attacks by JSMA, CW and PGD in FIG2. More including OnePixel and DeepFool transfer attack experiments are reported in Appendix G. The show that generated adversarial samples are not transferable to other models with different λ. Adaptive attack scenario. A successful defense method should defend l 0, l 2, and even l ∞ bounded adversaries and also show robust on the adaptive white-box setting. We report our defense combined with adversarial training against Expectation Over Transformation (EOT) BID2 ) of PGD attack in Appendix H. From the , it turns out that our method is complementary with robust training defense methods (e.g., adversarial training). Therefore, if we combine our method with the existing robust training defense methods together, we can secure promising on the vanilla scenario and even perform well on the adaptive scenario. In this paper, we have investigated the gradient checkerboard artifacts (GCA) which turned out to be a potential threat to the network robustness. Based on our observations, we proposed a novel Artificial Checkerboard Enhancer (ACE) which attracts the attack to the pre-specified domain. ACE is a module that can be plugged in front of any pretrained model with a negligible performance drop. Exploiting its favorable characteristics, we provide a simple yet effective defense method that is even scalable to a large-scale dataset such as ImageNet. Our extensive experiments show that the proposed method can deflect various attack scenarios with remarkable defense rates compared with several existing adversarial defense methods. We present our toy model used in Section 3. The model consists of three convolutional layers followed by ReLUs and two fully-connected layers (Table 6). Table 6: Architecture detail of our toy model used on CIFAR-10 dataset. Input image x ∈ R 32×32×3 3×3, stride=2 conv16 ReLU 3×3, stride=2 conv32 ReLU 3×3, stride=1 conv64 ReLU dense 1600 → 512 dense 512 → 10 To visualize the existence of checkerboard artifacts in the gradients, here we present the average heatmaps over the test images. Let ∇x l i,j,k denote the feature gradient in layer l where i, j and k are the indices of xy-axis and the channel, respectively. In each layer, the heatmap h l is gathered by a channel-wise absolute sum of the feature gradients. More specifically, h We can observe checkerboard patterns before strided convolutional layers (layer 5, 9, 13) and they propagate to their front layers (layer 4, 8). To count the gradient overlaps per pixel (Ω(x i)), for simplicity, we only consider convolutional and fully-connected layers. We set each and every network parameter to 1 to evenly distribute the gradients from the loss layer to the previous layers. 
Therefore, the gradients of the modified network are aggregated in a certain location (or pixel) if and only if there is a linear overlap from the later layers at the backward pass. In TAB2, we report the defense success rate varying the padding-size from one to five. We can observe that the proposed defense mechanism almost prevents accuracy drop with the padding-sizes of one, three and five (odd numbers). The supports that ACE induces adversarial perturbations into checkerboard artifacts, which could be avoided by padding an image with one pixel. BID25 BID36 BID4 ) by following their papers. For fair comparison, we follow the suggested settings in their papers, and the are presented in TAB3. Specifically, for BID25, with R-CAM implemented, the number of deflection is set to 100 with window-size of 10 and sigma for denoiser of 0.04, respectively. For BID36, the scale factor is set to 0.9. Finally, for BID4, the number of the ensemble is set to 1000 and the radius of the region is set to 0.02. In this section, we plot the classified label map after pixel perturbation on artifact pixels and nonartifact pixels as described in Section 4.2. We follow the same setting of Section 4.2. The are reported in FIG5. Adaptive attack case A successful defense method should be able to defend various conditions including l 0, l 2 and l ∞ -bounded adversaries as well as an adaptive white-box setting where the adversary knows our defense method in every aspect. Under the adaptive white-box setting, we conducted experiments in Table 10. In order to avoid direct exploitation of our padding direction, we shift our images in the random direction around the known safe points near our checkerboard artifacts. By combining PGD adversarial training BID17 for robustness on l ∞ bounded attacks to our method, we can defend the corresponding adaptive attack for stochastic methods known as Expectation Over Transformation (EOT, BID1). This method was used to break BID36 in BID2. Although we have some loss in Top-1 accuracy when λ is high, we have advantages in that we can defend vanilla attack cases at the same time. Table 10: Top-1 accuracy (%) after EOT attack on our defense method together with adversarial training on CIFAR-10 dataset. All images were padded by one-pixel to X-axis. Conducted attacks are written in the format of PGD-norm-iterations. ACE module only shows training accuracy loss due to high λ.
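As a rough illustration of the gradient-overlap counting procedure of Appendix C, the sketch below sets every parameter of a throw-away copy of the network to 1 and reads off the per-pixel input-gradient magnitude. Keeping the nonlinearities and normalization buffers in place makes this an approximation of the exact structural count, and the CIFAR-10-sized input shape is an assumption.

```python
import copy
import torch

def gradient_overlap_map(model, input_shape=(1, 3, 32, 32)):
    """Approximate the gradient overlap per input pixel: with all parameters
    set to 1, the input-gradient magnitude at a pixel tracks how many
    parameters are linearly connected to it through the backward pass."""
    probe = copy.deepcopy(model).eval()      # never overwrite trained weights
    with torch.no_grad():
        for p in probe.parameters():
            p.fill_(1.0)
    x = torch.ones(input_shape, requires_grad=True)
    probe(x).sum().backward()
    return x.grad.abs().sum(dim=1)           # channel-wise absolute sum -> (N, H, W)
```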
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
BJlc6iA5YX
We propose a novel artificial checkerboard enhancer (ACE) module which guides attacks to a pre-specified pixel space and successfully defends against them with a simple padding operation.
Min-max formulations have attracted great attention in the ML community due to the rise of deep generative models and adversarial methods, and understanding the dynamics of (stochastic) gradient algorithms for solving such formulations has been a grand challenge. As a first step, we restrict to bilinear zero-sum games and give a systematic analysis of popular gradient updates, for both simultaneous and alternating versions. We provide exact conditions for their convergence and find the optimal parameter setup and convergence rates. In particular, our offer formal evidence that alternating updates converge "better" than simultaneous ones. Min-max optimization has received significant attention recently due to the popularity of generative adversarial networks (GANs) and adversarial training , just to name some examples. Formally, given a bivariate function f (x, y), we aim to find a saddle point (x *, y *) such that f (x *, y) ≤ f (x *, y *) ≤ f (x, y *), ∀x ∈ R n, ∀y ∈ R n. (1.1) Since the beginning of game theory, various algorithms have been proposed for finding saddle points (; ; ; ; ; ; ; ;). Due to its recent resurgence in ML, new algorithms specifically designed for training GANs were proposed (; ; b;). However, due to the inherent non-convexity in deep learning formulations, our current understanding of the convergence behaviour of new and classic gradient algorithms is still quite limited, and existing analysis mostly focused on bilinear games or strongly-convex-strongly-concave games (; ; b; ; b). Nonzero-sum bilinear games, on the other hand, are known to be PPAD-complete (for finding approximate Nash equilibria, see e.g.). In this work, we study bilinear zero-sum games as a first step towards understanding general min-max optimization, although our apply to some simple GAN settings (a). It is well-known that certain gradient algorithms converge linearly on bilinear zero-sum games (; b; ;). These iterative algorithms usually come with two versions: Jacobi style updates or Gauss-Seidel (GS) style. In a Jacobi style, we update the two sets of parameters (i.e., x and y) simultaneously whereas in a GS style we update them alternatingly (i.e., one after the other). Thus, Jacobi style updates are naturally amenable to parallelization while GS style updates have to be sequential, although the latter is usually found to converge faster (and more stable). In numerical linear algebra, the celebrated Stein-Rosenberg theorem formally proves that in solving certain linear systems, GS updates converge strictly faster than their Jacobi counterparts, and often with a larger set of convergent instances. However, this does not readily apply to bilinear zero-sum games. Our main goal here is to answer the following questions about solving bilinear zero-sum games: • When exactly does a gradient-type algorithm converge? • What is the optimal convergence rate by tuning the step size or other parameters? • Can we prove something similar to the Stein-Rosenberg theorem for Jacobi and GS updates? Table 2: Optimal convergence rates. In the second column, β * denotes a specific parameter that depends on σ 1 and σ n (see equation 4.2). In the third column, the linear rates are for large κ. The optimal parameters for both Jacobi and Gauss-Seidel EG algorithms are the same. α denotes the step size (α 1 = α 2 = α), and β 1 and β 2 are hyper-parameters for EG and OGD, as given in §2. 
Algorithm α β 1 β 2 rate exponent Comment Jacobi and Gauss-Seidel Jacobi OGD 2β 1 β * β 1 ∼ 1 − 1/(6κ 2) β 1 = β 2 = α/2 GS OGD √ 2/σ 1 √ 2σ 1 /(σ Contributions We summarize our main from §3 and §4 in Table 1 and 2 respectively, with supporting experiments given in §5. We use σ 1 and σ n to denote the largest and the smallest singular values of matrix E (see equation 2.1), and κ:= σ 1 /σ n denotes the condition number. The algorithms will be introduced in §2. Note that we generalize gradient-type algorithms but retain the same names. Table 1 shows that in most cases that we study, whenever Jacobi updates converge, the corresponding GS updates converge as well (usually with a faster rate), but the converse is not true (§3). This extends the well-known Stein-Rosenberg theorem to bilinear games. Furthermore, Table 2 tells us that by generalizing existing gradient algorithms, we can obtain faster convergence rates. In the study of GAN training, bilinear games are often regarded as an important simple example for theoretically analyzing and understanding new algorithms and techniques (e.g. ; a; b;). It captures the difficulty in GAN training and can represent some simple GAN formulations (; ; a;). Mathematically, bilinear zero-sum games can be formulated as the following min-max problem: min x∈R n max y∈R n x Ey + b x + c y. The set of all saddle points (see definition in eq. (1.1)) is: Throughout, for simplicity we assume E to be invertible, whereas the seemingly general case with non-invertible E is treated in Appendix G. The linear terms are not essential in our analysis and we take b = c = 0 throughout the paper 1. In this case, the only saddle point is. For bilinear games, it is well-known that simultaneous gradient descent ascent does not converge and other gradient-based algorithms tailored for min-max optimization have been proposed (; ; a;). These iterative algorithms all belong to the class of general linear dynamical systems (LDS, a.k.a. matrix iterative processes). Using state augmentation z (t):= (x (t), y (t) ) we define a general k-step LDS as follows: where the matrices A i and vector d depend on the gradient algorithm (examples can be found in Appendix C.1). Define the characteristic polynomial, with A 0 = −I: The following well-known decides when such a k-step LDS converges for any initialization: Theorem 2.1 (e.g.). The LDS in eq. (2.3) converges for any initialization (z,..., z (k−1) ) iff the spectral radius r:= max{|λ| : p(λ) = 0} < 1, in which case {z (t) } converges linearly with an (asymptotic) exponent r. Therefore, understanding the bilinear game dynamics reduces to spectral analysis. The (sufficient and necessary) convergence condition reduces to that all roots of p(λ) lie in the (open) unit disk, which can be conveniently analyzed through the celebrated Schur's theorem ). The roots of a real polynomial p(λ) = a 0 λ n + a 1 λ n−1 + · · · + a n are within the (open) unit disk of the complex plane iff ∀k ∈ {1, 2, . . ., n}, det(In the theorem above, we denoted 1 S as the indicator function of the event S, i.e. 1 S = 1 if S holds and 1 S = 0 otherwise. For a nice summary of related stability tests, see . We therefore define Schur stable polynomials to be those polynomials whose roots all lie within the (open) unit disk of the complex plane. Schur's theorem has the following corollary (proof included in Appendix B.2 for the sake of completeness): Corollary 2.1 (e.g.). 
A real quadratic polynomial λ 2 + aλ + b is Schur stable iff b < 1, |a| < 1 + b; A real cubic polynomial λ 3 + aλ 2 + bλ + c is Schur stable iff |c| < 1, Let us formally define Jacobi and GS updates: Jacobi updates take the form while Gauss-Seidel updates replace x (t−i) with the more recent x (t−i+1) in operator T 2, where T 1, T 2: R nk × R nk → R n can be any update functions. For LDS updates in eq. (2.3) we find a nice relation between the characteristic polynomials of Jacobi and GS updates in Theorem 2.3 (proof in Appendix B.1), which turns out to greatly simplify our subsequent analyses: and L i is strictly lower block triangular. Then, the characteristic polynomial of Jacobi updates is p(λ, 1) while that of Gauss-Seidel updates is p(λ, λ). Compared to the Jacobi update, in some sense the Gauss-Seidel update amounts to shifting the strictly lower block triangular matrices L i one step to the left, as p(λ, λ) can be rewritten as det This observation will significantly simplify our comparison between Jacobi and Gauss-Seidel updates. Next, we define some popular gradient algorithms for finding saddle points in the min-max problem We present the algorithms for a general (bivariate) function f although our main will specialize f to the bilinear case in eq. (2.1). Note that we introduced more "step sizes" for our refined analysis, as we find that the enlarged parameter space often contains choices for faster linear convergence (see §4). We only define the Jacobi updates, while the GS counterparts can be easily inferred. We always use α 1 and α 2 to define step sizes (or learning rates) which are positive. The generalized GD update has the following form: When α 1 = α 2, the convergence of averaged iterates (a.k.a. Cesari convergence) for convex-concave games is analyzed in (; ; Nedić &). We study a generalized version of EG, defined as follows: EG was first proposed in with the restriction α 1 = α 2 = γ 1 = γ 2, under which linear convergence was proved for bilinear games. A slightly more generalized version was analyzed in where α 1 = α 2, γ 1 = γ 2, again with linear convergence proved. For later convenience we define β 1 = α 2 γ 1 and β 2 = α 1 γ 2. Optimistic gradient descent (OGD) We study a generalized version of OGD, defined as follows: 10) The original version of OGD was given in with α 1 = α 2 = 2β 1 = 2β 2, and its linear convergence for bilinear games was proved in. A slightly more generalized version with α 1 = α 2 and β 1 = β 2 was analyzed in Mokhtari et al. (2019b), again with linear convergence proved. Momentum method Generalized heavy ball method was analyzed in Gidel et al. (2019b): 11) (2.12) This is a modification of Polyak's heavy ball (HB) , which also motivated Nesterov's accelerated gradient algorithm (NAG) . Note that for both x-update and the y-update, we add a scale multiple of the successive difference (e.g. proxy of the momentum). For this algorithm our below improves those obtained in Gidel et al. (2019b), as will be discussed in §3. EG and OGD as approximations of proximal point algorithm It has been observed recently in Mokhtari et al. (2019b) that for convex-concave games, EG (α 1 = α 2 = γ 1 = γ 2 = η) and OGD (α 1 /2 = α 2 /2 = β 1 = β 2 = η) can be treated as approximations of the proximal point algorithm when η is small. With this , one can show that EG and OGD converge to saddle points sublinearly for smooth convex-concave games (a). 
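To make the Jacobi/Gauss-Seidel distinction concrete, the following NumPy sketch runs plain gradient descent-ascent on f(x, y) = xᵀEy in both modes; the matrix, step size and horizon are arbitrary illustrative choices.

```python
import numpy as np

def gda(E, alpha=0.1, steps=200, gauss_seidel=False, seed=0):
    """Gradient descent-ascent on f(x, y) = x^T E y.  Jacobi mode updates x
    and y from the same old iterate; Gauss-Seidel mode lets the y-step see
    the freshly updated x.  Returns the distance to the saddle point (0, 0)."""
    rng = np.random.default_rng(seed)
    x, y = rng.standard_normal(E.shape[0]), rng.standard_normal(E.shape[1])
    dist = []
    for _ in range(steps):
        x_new = x - alpha * (E @ y)                 # descent step on x
        x_for_y = x_new if gauss_seidel else x      # GS reuses the new x
        y = y + alpha * (E.T @ x_for_y)             # ascent step on y
        x = x_new
        dist.append(np.hypot(np.linalg.norm(x), np.linalg.norm(y)))
    return np.array(dist)

E = np.eye(2)
print(gda(E, gauss_seidel=False)[-1])   # Jacobi: the distance keeps growing
print(gda(E, gauss_seidel=True)[-1])    # Gauss-Seidel: it stays bounded (a cycle)
```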
We give a brief introduction of the proximal point algorithm in Appendix A (including a linear convergence for the slightly generalized version). The above algorithms, when specialized to a bilinear function f (see eq. (2.1)), can be rewritten as a 1-step or 2-step LDS (see. eq. (2.3)). See Appendix C.1 for details. With tools from §2, we formulate necessary and sufficient conditions under which a gradient-based algorithm converges for bilinear games. We sometimes use "J" as a shorthand for Jacobi style updates and "GS" for Gauss-Seidel style updates. For each algorithm, we first write down the characteristic polynomials (see derivation in Appendix C.1) for both Jacobi and GS updates, and present the exact conditions for convergence. Specifically, we show that in many cases the GS convergence regions strictly include the Jacobi convergence regions. The proofs for Theorem 3.1, 3.2, 3.3 and 3.4 can be found in Appendix C.2, C.3, C.4, and C.5, respectively. GD The characteristic equations can be computed as: Scaling symmetry From section 3 we obtain a scaling symmetry (α 1, α 2) → (tα 1, α 2 /t), with t > 0. With this symmetry we can always fix α 1 = α 2 = α. This symmetry also holds for EG and momentum. For OGD, the scaling symmetry is slightly different with (α 1, β 1, α 2, β 2) → (tα 1, tβ 1, α 2 /t, β 2 /t), but we can still use this symmetry to fix α 1 = α 2 = α. Theorem 3.1 (GD). Jacobi GD and Gauss-Seidel GD do not converge. However, Gauss-Seidel GD can have a limit cycle while Jacobi GD always diverges. When α 1 = α 2, this theorem was proved by Gidel et al. (2019a). EG The characteristic equations can be computed as: Theorem 3.2 (EG). For generalized EG with α 1 = α 2 = α and γ i = β i /α, Jacobi and Gauss-Seidel updates achieve linear convergence iff for any singular value σ of E, we have:, the convergence region of GS updates strictly include that of Jacobi updates. OGD The characteristic equations can be computed as: Theorem 3.3 (OGD). For generalized OGD with α 1 = α 2 = α, Jacobi and Gauss-Seidel updates achieve linear convergence iff for any singular value σ of E, we have: The convergence region of GS updates strictly include that of Jacobi updates. Momentum The characteristic equations can be computed as: Theorem 3.4 (momentum). For the generalized momentum method with α 1 = α 2 = α, the Jacobi updates never converge, while the GS updates converge iff for any singular value σ of E, we have: This condition implies that at least one of β 1, β 2 is negative. Prior to our work, only sufficient conditions for linear convergence were given for the usual EG and OGD; see §2 above. For the momentum method, our improves upon Gidel et al. (2019b) where they only considered specific cases of parameters. For example, they only considered β 1 = β 2 ≥ −1/16 for Jacobi momentum (but with explicit rate of divergence), and β 1 = −1/2, β 2 = 0 for GS momentum (with convergence rate). Our Theorem 3.4 gives a more complete picture and formally justifies the necessity of negative momentum. In the theorems above, we used the term "convergence region" to denote a subset of the parameter space (with parameters α, β or γ) where the algorithm converges. Our shares similarity with the celebrated Stein-Rosenberg theorem , which only applies to solving linear systems with non-negative matrices (if one were to apply it to our case, the matrix S in eq. (F.1) in Appendix F needs to have non-zero diagonal entries, which is not possible). 
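All of these conditions can also be cross-checked numerically: Theorem 2.1 reduces convergence to a root-modulus computation, so a few lines of NumPy suffice to test Schur stability directly and to sanity-check, for instance, the quadratic case of Corollary 2.1 on random coefficients. The helper below is an illustrative check, not part of the proofs.

```python
import numpy as np

def is_schur_stable(coeffs):
    """True iff every root of a_0*l^n + a_1*l^(n-1) + ... + a_n lies strictly
    inside the unit disk, i.e. the associated LDS converges (Theorem 2.1)."""
    return float(np.max(np.abs(np.roots(coeffs)))) < 1.0

# Sanity-check the quadratic case of Corollary 2.1 (stable iff b < 1 and
# |a| < 1 + b) against the root computation on random coefficients.
rng = np.random.default_rng(0)
samples = rng.uniform(-3.0, 3.0, size=(10000, 2))
agree = sum(is_schur_stable([1.0, a, b]) == (b < 1.0 and abs(a) < 1.0 + b)
            for a, b in samples)
print(agree, "of", len(samples), "random quadratics agree")
```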
In this sense, our extend the Stein-Rosenberg theorem to cover nontrivial bilinear games. In this section we study the optimal convergence rates of EG and OGD. We define the exponent of linear convergence as r = lim t→∞ ||z (t) ||/||z (t−1) || which is the same as the spectral radius. For ease of presentation we fix α 1 = α 2 = α > 0 (using scaling symmetry) and we use r * to denote the optimal exponent of linear convergence (achieved by tuning the parameters α, β, γ). Our show that by generalizing gradient algorithms one can obtain better convergence rates. Theorem 4.1 (EG optimal). Both Jacobi and GS EG achieve the optimal exponent of linear conver- Note that we defined β i = γ i α in Section 2. In other words, we are taking very large extra-gradient steps (γ i → ∞) and very small gradient steps (α → 0).. For Jacobi OGD with β 1 = β 2 = β, to achieve the optimal exponent of linear convergence, we must have α ≤ 2β. For the original OGD with α = 2β, the optimal exponent of linear convergence r * satisfies. For GS OGD with β 2 = 0, the optimal exponent of convergence is r * = (κ 2 − 1)/(κ 2 + 1), at α = √ 2/σ 1 and Remark The original OGD with α = 2β may not always be optimal. For example, take one-dimensional bilinear game and σ = 1, and denote the spectral radius given α, β as r(α, β). If we fix α = 1/2, by numerically solving section 3 we have i.e, α = 1/2, β = 1/3 is a better choice than α = 2β = 1/2. Numerical method We provide a numerical method for finding the optimal exponent of linear convergence, by realizing that the unit disk in Theorem 2.2 is not special. Let us call a polynomial to be r-Schur stable if all of its roots lie within an (open) disk of radius r in the complex plane. We can scale the polynomial with the following lemma: With the lemma above, one can rescale the Schur conditions and find the convergence region where the exponent of linear convergence is at most r (r < 1). A simple binary search would allow one to find a better and better convergence region. See details in Appendix D.3. Bilinear game We run experiments on a simple bilinear game and choose the optimal parameters as suggested in Theorem 4.1 and 4.2. The are shown in the left panel of Figure 1, which confirms the predicted linear rates. Figure 2: Heat maps of the spectral radii of different algorithms. We take σ = 1 for convenience. The horizontal axis is α and the vertical axis is β. Top row: Jacobi updates; Bottom row: Gauss-Seidel updates. Columns (left to right): EG; OGD; momentum. If the spectral radius is strictly less than one, it means that our algorithm converges. In each column, the Jacobi convergence region is contained in the GS convergence region (for EG we need an additional assumption, see Theorem 3.2). Density plots We show the density plots (heat maps) of the spectral radii in Figure 2. We make plots for EG, OGD and momentum with both Jacobi and GS updates. These plots are made when β 1 = β 2 = β and they agree with our theorems in §3. Wasserstein GAN As in , we consider a WGAN that learns the mean of a Gaussian: where s(x) is the sigmoid function. It can be shown that near the saddle point (θ *, φ *) = (0, v) the min-max optimization can be treated as a bilinear game (Appendix E.1). With GS updates, we find that Adam diverges, SGD goes around a limit cycle, and EG converges, as shown in the middle panel of Figure 1. We can see that Adam does not behave well even in this simple task of learning a single two-dimensional Gaussian with GAN. 
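The numerical search sketched above (r-Schur stability plus a binary search over r, detailed in Appendix D.3) can be written in a few lines. One standard way to realize the rescaling is to multiply the coefficient of λ^(n−i) by r^(n−i), since p(λ) is r-Schur stable exactly when p(rλ) is Schur stable. The code below is an illustrative sketch and assumes the sought radius lies below the initial upper bound.

```python
import numpy as np

def is_r_schur_stable(coeffs, r):
    """p is r-Schur stable iff p(r*lambda) is Schur stable, so scale the
    coefficient of lambda^(n-i) by r^(n-i) and test the unit-disk condition."""
    n = len(coeffs) - 1
    scaled = [c * r ** (n - i) for i, c in enumerate(coeffs)]
    return float(np.max(np.abs(np.roots(scaled)))) < 1.0

def best_rate(coeffs, iters=60, hi=1.0):
    """Binary search for the smallest r making p r-Schur stable.  For a fixed
    parameter setting this recovers the spectral radius; sweeping it over a
    grid of parameters recovers the sharpest attainable linear rate.
    Assumes the true radius is below the initial upper bound `hi`."""
    lo = 0.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if is_r_schur_stable(coeffs, mid):
            hi = mid
        else:
            lo = mid
    return hi
```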
Our next experiment shows that generalized algorithms may have an advantage over traditional ones. Inspired by Theorem 4.1, we compare the convergence of two EGs with the same parameter β = αγ, and find that with scaling, EG has better convergence, as shown in the right panel of Figure 1. Finally, we compare Jacobi updates with GS updates. In Figure 3, we can see that GS updates converge even if the corresponding Jacobi updates do not. Mixtures of Gaussians (GMMs) Our last experiment is on learning GMMs with a vanilla GAN that does not directly fall into our analysis. We choose a 3-hidden layer ReLU network for both the generator and the discriminator, and each hidden layer has 256 units. We find that for GD and OGD, Jacobi style updates converge more slowly than GS updates, and whenever Jacobi updates converge, the corresponding GS updates converges as well. These comparisons can be found in Figure 4 and 5, which implies the possibility of extending our to non-bilinear games. Interestingly, we observe that even Jacobi GD converges on this example. We provide additional comparison between the Jacobi and GS updates of Adam in Appendix E.2. In this work we focus on the convergence behaviour of gradient-based algorithms for solving bilinear games. By drawing a connection to discrete linear dynamical systems (§2) and using Schur's theorem, we provide necessary and sufficient conditions for a variety of gradient algorithms, for both simultaneous (Jacobi) and alternating (Gauss-Seidel) updates. Our show that Gauss-Seidel updates converge more easily than Jacobi updates. Furthermore, we find the optimal exponents of linear convergence for EG and OGD, and provide a numerical method for searching that exponent. We performed a number of experiments to validate our theoretical findings and suggest further analysis. There are many future directions to explore. For example, our preliminary experiments on GANs suggest that similar (local) might be obtained for more general games. Indeed, the local convergence behaviour of min-max nonlinear optimization can be studied through analyzing the spectrum of the Jacobian matrix of the update operator (see, e.g., ; Gidel et al. (2019b) ). We believe our framework that draws the connection to linear discrete dynamic systems and Schur's theorem is a powerful machinery that can be applied in such problems and beyond. It would be interesting to generalize our to the constrained case (even for bilinear games), initiated in the recent work of. Extending our to account for stochastic noise (as empirically tested in our experiments) is another interesting direction, with some initial in Gidel et al. (2019a A PROXIMAL POINT (PP) ALGORITHM PP was originally proposed by with α 1 = α 2 and then carefully studied by. The linear convergence for bilinear games was also proved in the same reference. Note that we do not consider Gauss-Seidel PP since we do not get a meaningful solution after a shift of steps 2. where x (t+1) and y (t+1) are given implicitly by solving the equations above. For bilinear games, one can derive that: We can compute the exact form of the inverse matrix, but perhaps an easier way is just to compute the spectrum of the original matrix (the same as Jacobi GD except that we flip the signs of α i) and perform λ → 1/λ. Using the fact that the eigenvalues of a matrix are reciprocals of the eigenvalues of its inverse, the characteristic equation is: With the scaling symmetry (α 1, α 2) → (tα 1, α 2 /t), we can take α 1 = α 2 = α > 0. 
With the notations in Corollary 2.1, we have a = −2/(1 + α 2 σ 2) and b = 1/(1 + α 2 σ 2), and it is easy to check |a| < 1 + b and b < 1 are always satisfied, which means linear convergence is always guaranteed. Hence, we have the following theorem: Theorem A.1. For bilinear games, the proximal point algorithm always converges linearly. Although the proximal point algorithm behaves well, it is rarely used in practice since it is an implicit method, i.e., one needs to solve (x (t+1), y In this section we apply Theorem 2.1 to prove Theorem 2.3, an interesting connection between Jacobi and Gauss-Seidel updates: and L i is strictly lower block triangular. Then, the characteristic polynomial of Jacobi updates is p(λ, 1) while that of Gauss-Seidel updates is p(λ, λ). Let us first consider the block linear iterative process in the sense of Jacobi (i.e., all blocks are updated simultaneously):...... where A i,j is the j-th column block of A i. For each matrix A i, we decompose it into the sum where L i is the strictly lower block triangular part and U i is the upper (including diagonal) block triangular part. Theorem 2.1 indicates that the convergence behaviour of equation B.1 is governed by the largest modulus of the roots of the characteristic polynomial: Alternatively, we can also consider the updates in the sense of Gauss-Seidel (i.e., blocks are updated sequentially): We can rewrite the Gauss-Seidel update elegantly 3 as: i.e., where L k+1:= 0. Applying Theorem 2.1 again we know the convergence behaviour of the GaussSeidel update is governed by the largest modulus of roots of the characteristic polynomial: Note that A 0 = −I and the factor det(I − L 1) −1 can be discarded since multiplying a characteristic polynomial by a non-zero constant factor does not change its roots. B.2 PROOF OF COROLLARY 2.1 Corollary 2.1 (e.g.). A real quadratic polynomial λ 2 + aλ + b is Schur stable iff b < 1, |a| < 1 + b; A real cubic polynomial λ 3 + aλ 2 + bλ + c is Schur stable iff |c| < 1, Proof. It suffices to prove the for quartic polynomials. We write down the matrices: We require det(2 and thus |c − ad| < 1 − d 2 due to the first condition. δ 4 > 0 simplifies to: 14) which yields |a + c| < |b + d + 1|. Finally, δ 3 > 0 reduces to: Denote p(λ):= λ 4 + aλ 3 + bλ 2 + cλ + d, we must have p > 0 and p(−1) > 0, as otherwise there is a real root λ 0 with |λ 0 | ≥ 1. Hence we obtain b + d + 1 > |a + c| > 0. Also, from |c − ad| < 1 − d 2, we know that: So, the second factor in B.15 is negative and the positivity of the first factor reduces to: To obtain the Schur condition for cubic polynomials, we take d = 0, and the quartic Schur condition becomes: To obtain the Schur condition for quadratic polynomials, we take c = 0 in the above and write: The proof is now complete. Some of the following proofs in Appendix C.4 and C.5 rely on Mathematica code (mostly with the built-in function Reduce) but in principle the code can be verified manually using cylindrical algebraic decomposition. In this appendix, we derive the exact forms of LDSs (eq. (2.3)) and the characteristic polynomials for all gradient-based methods introduced in §2, with eq. (2.4). The following lemma is well-known and easy to verify using Schur's complement: Gradient descent From equation 2.6 the update equation of Jacobi GD can be derived as: and with Lemma C.1, we compute the characteristic polynomial as in eq. (2.4): With spectral decomposition we obtain equation 3.1. Taking α 2 → λα 2 and with Theorem 2.3 we obtain the corresponding GS updates. 
Therefore, the characteristic polynomials for GD are: Extra-gradient From eq. (2.7) and eq. (2.8), the update of Jacobi EG is: the characteristic polynomial is: Since we assumed α 2 > 0, we can left multiply the second row by β 2 E/α 2 and add it to the first row. Hence, we obtain: With Lemma C.1 the equation above becomes: which simplifies to equation 3.2 with spectral decomposition. Note that to obtain the GS polynomial, we simply take α 2 → λα 2 in the Jacobi polynomial as shown in Theorem 2.3. For the ease of reading we copy the characteristic equations for generalized EG: Optimistic gradient descent We can compute the LDS for OGD with eq. (2.9) and eq. (2.10): With eq. (2.4), the characteristic polynomial for Jacobi OGD is Taking the determinant and with Lemma C.1 we obtain equation 3.6. The characteristic polynomial for GS updates in equation 3.7 can be subsequently derived with Theorem 2.3, by taking (α 2, β 2) → (λα 2, λβ 2). For the ease of reading we copy the characteristic polynomials from the main text as: Momentum method With eq. (2.11) and eq. (2.12), the LDS for the momentum method is: From eq. (2.4), the characteristic polynomial for Jacobi momentum is Taking the determinant and with Lemma C.1 we obtain equation 3.10, while equation 3.11 can be derived with Theorem 2.3, by taking α 2 → λα 2. For the ease of reading we copy the characteristic polynomials from the main text as: C.2 PROOF OF THEOREM 3.1: SCHUR CONDITIONS OF GD Theorem 3.1 (GD). Jacobi GD and Gauss-Seidel GD do not converge. However, Gauss-Seidel GD can have a limit cycle while Jacobi GD always diverges. Proof. With the notations in Corollary 2.1, for Jacobi GD, b = 1 + α 2 σ 2 > 1. For Gauss-Seidel GD, b = 1. The Schur conditions are violated. For generalized EG with α 1 = α 2 = α and γ i = β i /α, Jacobi and Gauss-Seidel updates achieve linear convergence iff for any singular value σ of E, we have: If β 1 + β 2 + α 2 < 2/σ 2 1, the convergence region of GS updates strictly include that of Jacobi updates. Both characteristic polynomials can be written as a quadratic polynomial λ 2 + aλ + b, where: Compared to Jacobi EG, the only difference between Gauss-Seidel and Jacobi updates is that the α 2 σ 2 in b is now in a, which agrees with Theorem 2.3. Using Corollary 2.1, we can derive the Schur conditions equation 3.4 and equation 3.5. More can be said if β 1 + β 2 is small. For instance, if β 1 + β 2 + α 2 < 2/σ More precisely, to show that the GS convergence region strictly contains that of the Jacobi convergence region, simply take β 1 = β 2 = β. The Schur condition for Jacobi EG and Gauss-Seidel EG are separately: It can be shown that if β = α 2 /3 and α → 0, equation C.21 is always violated whereas equation C.22 is always satisfied. Conversely, we give an example when Jacobi EG converges while GS EG does not. Let β 1 σ 2 = β 2 σ 2 ≡ 3 2, then Jacobi EG converges iff α 2 σ 2 < 3 4 while GS EG converges iff α 2 σ 2 < 1 4. In this subsection, we fill in the details of the proof of Theorem 3.3, by first deriving the Schur conditions of OGD, and then studying the relation between Jacobi OGD and GS OGD. Theorem 3.3 (OGD). For generalized OGD with α 1 = α 2 = α, Jacobi and Gauss-Seidel updates achieve linear convergence iff for any singular value σ of E, we have: The convergence region of GS updates strictly include that of Jacobi updates. 
The Jacobi characteristic polynomial is now quartic in the form λ 4 + aλ 3 + bλ 2 + cλ + d, with Comparably, the GS polynomial equation 3.7 can be reduced to a cubic one λ 3 + aλ 2 + bλ + c with First we derive the Schur conditions equation 3.8 and equation 3.9. Note that other than Corollary 2.1, an equivalent Schur condition can be read from Cheng & Chiou (2007, Theorem 1) as: Theorem C.1 (With equation C.23 and Theorem C.1, it is straightforward to derive equation 3.8. With equation C.24 and Corollary 2.1, we can derive equation 3.9 without much effort. Now, let us study the relation between the convergence region of Jacobi OGD and GS OGD, as given in equation 3.8 and equation 3.9. Namely, we want to prove the last sentence of Theorem 3.3. The outline of our proof is as follows. We first show that each region of (α, β 1, β 2) described in equation 3.8 (the Jacobi region) is contained in the region described in equation 3.9 (the GS region). Since we are only studying one singular value, we slightly abuse the notations and rewrite β i σ as β i (i = 1, 2) and ασ as α. From equation 3.6 and equation 3.7, β 1 and β 2 can switch. WLOG, we assume β 1 ≥ β 2. There are four cases to consider: The third Jacobi condition in equation 3.8 now is redundant, and we have α > β 1 or α < β 2 for both methods. Solving the quadratic feasibility condition for α gives: where u = (β 1 β 2 + 1)(. On the other hand, assume α > β 1, the first and third GS conditions are automatic. Solving the second gives: 2 /2 and g(β 2):= (β 2 + 4 + 5β 2 2)/(2(1 + β 2 2)), and one can show that (C.28) Furthermore, it can also be shown that given 0 < β 2 < 1 and β 2 ≤ β 1 < g(β 2), we have • β 1 ≥ β 2 = 0. The Schur condition for Jacobi and Gauss-Seidel updates reduces to: One can show that given β 1 ∈, we have Reducing the first, second and fourth conditions of equation 3.8 yields: This region contains the Jacobi region. It can be similarly proved that even within this larger region, GS Schur condition equation 3.9 is always satisfied. • β 2 ≤ β 1 < 0. We have u < 0, tv < 0 and thus α < (u + √ u 2 + tv)/t < 0. This contradicts our assumption that α > 0. Combining the four cases above, we know that the Jacobi region is contained in the GS region. To show the strict inclusion, take β 1 = β 2 = α/5 and α → 0. One can show that as long as α is small enough, all the Jacobi regions do not contain this point, each of which is described with a singular value in equation 3.8. However, all the GS regions described in equation 3.9 contain this point. The proof above is still missing some details. We provide the proofs of equation C.26, equation C.28, equation C.29 and equation C.32 in the sub-sub-sections below, with the help of Mathematica, although one can also verify these claims manually. Moreover, a one line proof of the inclusion can be given with Mathematica code, as shown in Section C.4.5. The fourth condition of equation 3.8 can be rewritten as: where we used |β 1 β 2 | < 1 in both cases. So, equation C.33 becomes: Combining with α > β 1 or α < β 2 obtained from the second condition, we have: The first case is not possible, with the following code: u = (b1 b2 + 1) (b1 + b2); v = b1 b2 (b1 b2 + 1) (b1 b2 -3); t = (b1^2 + 1) (b2^2 + 1); Reduce[b2 t > u -Sqrt[u^2 + t v] && b1 >= b2 > 0 && Abs[b1 b2] < 1], and we have: Therefore, the only possible case is β 1 < α < (u + √ u 2 + tv)/t. 
Where the feasibility region can be solved with: What we get is: 1 + b2^2) ), {b2, b1}], we can remove the first constraint and get: 0 < b2 < 1 && b2 <= b1 < b2/(2 (1 + b2^2)) + 1/2 Sqrt[(4 + 5 b2^2)/(1 + b2^2)^2]. The second Jacobi condition simplifies to α > β 1 and the fourth simplifies to equation C.34. Combining with the first Jacobi condition: we have: This can be further simplified to achieve equation C.32. In fact, there is another very simple proof: Reduce[ForAll[{b1, b2, a}, (a -b1) (a -b2) > 0 && (a + b1) (a + b2) > -4 && Abs[b1 b2] < 1 && a^2 (b1^2 + 1) (b2^2 + 1) < (b1 b2 + 1) (2 a (b1 + b2) + b1 b2 (b1 b2 -3)), (a -b1) (a -b2) > 0 && (a + b1) (a + b2) < 4 && (a b1 + 1) (a b2 + 1) > (1 + b1 b2)^2], {b2, b1, a}] True. However, this proof does not tell us much information about the range of our variables. Theorem 3.4 (momentum). For the generalized momentum method with α 1 = α 2 = α, the Jacobi updates never converge, while the GS updates converge iff for any singular value σ of E, we have: This condition implies that at least one of β 1, β 2 is negative. Jacobi condition We first rename ασ as al and β 1, β 2 as b1, b2. With Theorem C.1: We obtain: {Abs[b1 b2] < 1, Abs[2 + b1 + b2] < 3 + b1 b2, al^2 > 0, al^2 + 4 (1 + b1) (1 + b2) > 0, al^2 (-1 + b1 b2)^2 < 0}. The last condition is never satisfied and thus Jacobi momentum never converges. Gauss-Seidel condition With Theorem C.1, we compute: The is: {Abs[b1 b2] < 1, Abs[2 -al^2 + b1 + b2] < 3 + b1 b2, al^2 > 0, 4 (1 + b1) (1 + b2) > al^2, al^2 (b1 + b2 + (-2 + al^2 -b1) b1 b2 + b1 (-1 + 2 b1) b2^2) < 0}, which can be further simplified to equation??. With Theorem 3.4, we can actually show that in general at least one of β 1 and β 2 must be negative. There are three cases to consider, and in each case we simplify equation??: 1. β 1 β 2 = 0. WLOG, let β 2 = 0, and we obtain −1 < β 1 < 0 and α 2 σ 2 < 4(1 + β 1). (C.36) 2. β 1 β 2 > 0. We have 3. β 1 β 2 < 0. WLOG, we assume β 1 ≥ β 2. We obtain: The constraints for α are α > 0 and: These conditions can be further simplified by analyzing all singular values. They only depend on σ 1 and σ n, the largest and the smallest singular values. Now, let us derive equation C.37, equation C.38 and equation C.39 more carefully. Note that we use a for ασ. Reduce[Abs[b1 b2] < 1 && Abs[-a^2 + b1 + b2 + 2] < b1 b2 + 3 && 4 (b1 + 1) (b2 + 1) > a^2 && a^2 b1 b2 < (1 -b1 b2) (2 b1 b2 -b1 -b2) && b1 b2 > 0 && a > 0, {b2, b1, a}] -1 < b2 < 0 && -1 < b1 < 0 && 0 < a < Sqrt[4 + 4 b1 + 4 b2 + 4 b1 b2] C.5.4 PROOF OF EQUATIONS C.38 AND C.39 Reduce[Abs[b1 b2] < 1 && Abs[-a^2 + b1 + b2 + 2] < b1 b2 + 3 && 4 (b1 + 1) (b2 + 1) > a^2 && a^2 b1 b2 < (1 -b1 b2) (2 b1 b2 -b1 -b2) && b1 b2 < 0 && b1 >= b2 && a > 0, {b2, b1, a}] (-1 < b2 <= -(1/3) && ((0 < b1 <= b2/(-1 + 2 b2) && 0 < a < Sqrt[4 + 4 b1 + 4 b2 + 4 b1 b2]) || (b2/(-1 + 2 b2) < b1 < -(1/(3 b2)) && Sqrt[(-b1 -b2 + 2 b1 b2 + b1^2 b2 + b1 b2^2 -2 b1^2 b2^2)/(b1 b2)] < a < Sqrt[4 + 4 b1 + 4 b2 + 4 b1 b2]))) || (-(1/3) < b2 < 0 && ((0 < b1 <= b2/(-1 + 2 b2) && 0 < a < Sqrt[4 + 4 b1 + 4 b2 + 4 b1 b2]) || (b2/(-1 + 2 b2) < b1 < -(b2/(1 + 2 b2)) && Sqrt[(-b1 -b2 + 2 b1 b2 + b1^2 b2 + b1 b2^2 -2 b1^2 b2^2)/(b1 b2)] < a < Sqrt[4 + 4 b1 + 4 b2 + 4 b1 b2]))) Some further simplication yields equation C.38 and equation C.39. For bilinear games and gradient-based methods, a Schur condition defines the region of convergence in the parameter space, as we have seen in Section 3. 
However, it is unknown which setting of parameters has the best convergence rate in a Schur stable region. We explore this problem now. Due to Theorem 3.1, we do not need to study GD. The remaining cases are EG, OGD and GS momentum (Jacobi momentum does not converge due to Theorem 3.4). Analytically (Section D.1 and D.2), we study the optimal linear rates for EG and special cases of generalized OGD (Jacobi OGD with β 1 = β 2 and Gauss-Seidel OGD with β 2 = 0). The special cases include the original form of OGD. We also provide details for the numerical method described at the end of Section 4. The optimal spectral radius is obtained by solving another min-max optimization problem: where θ denotes the collection of all hyper-parameters, and r(θ, σ) is defined as the spectral radius function that relies on the choice of parameters and the singular value σ. We also use Sv(E) to denote the set of singular values of E. In general, the function r(θ, σ) is non-convex and thus difficult to analyze. However, in the special case of quadratic characteristic polynomials, it is possible to solve equation D.1. This is how we will analyze EG and special cases of OGD, as r(θ, σ) can be expressed using root functions of quadratic polynomials. For cubic and quartic polynomials, it is in principle also doable as we have analytic formulas for the roots. However, these formulas are extremely complicated and difficult to optimize and we leave it for future work. For EG and OGD, we will show that the optimal linear rates depend only on the conditional number κ:= σ 1 /σ n. For simplicity, we always fix α 1 = α 2 = α > 0 using the scaling symmetry studied in Section 3. D.1 PROOF OF THEOREM 4.1: OPTIMAL CONVERGENCE RATE OF EG Theorem 4.1 (EG optimal). Both Jacobi and GS EG achieve the optimal exponent of linear convergence r * = (κ 2 − 1)/(κ 2 + 1) at α → 0 and β 1 = β 2 = 2/(σ 2 1 + σ 2 n). As κ → ∞, r * → 1 − 2/κ 2. For Jacobi updates, if β 1 = β 2 = β, by solving the roots of equation 3.2, the min-max problem is: If σ 1 = σ n = σ, we can simply take α → 0 and β = 1/σ 2 to obtain a super-linear convergence rate. Otherwise, let us assume σ 1 > σ n. We obtain a lower bound by taking α → 0 and equation D.2 reduces to: The optimal solution is given at 1 − βσ 2 n = βσ From general β 1, β 2, it can be verified that the optimal radius is achieved at β 1 = β 2 and the problem reduces to the previous case. The optimization problem is: In the first case, a lower bound is obtained at α 2 = (β 1 − β 2) 2 σ 2 /4 and thus the objective only depends on β 1 + β 2. In the second case, the lower bound is obtained at α → 0 and β 1 → β 2. Therefore, the function is optimized at β 1 = β 2 and α → 0. Our analysis above does not mean that α → 0 and β 1 = β 2 = 2/(σ 2 1 + σ 2 n) is the only optimal choice. For example, when σ 1 = σ n = 1, we can take β 1 = 1 + α and β 2 = 1 − α to obtain a super-linear convergence rate. For Gauss-Seidel updates and β 1 = β 2 = β, we do the following optimization: where by solving equation 3.3: r(σ, β, σ 2) is quasi-convex in σ 2, so we just need to minimize over α, β at both end points. Hence, equation D.5 reduces to: min α,β max{r(α, β, σ 1), r(α, β, σ n)}. By arguing over three cases: n, we find that the minimum (κ 2 − 1)/(κ 2 + 1) can be achieved at α → 0 and β = 2/(σ 2 1 + σ 2 n), the same as Jacobi EG. This is because α → 0 decouples x and y and it does not matter whether the update is Jacobi or GS. For general β 1, β 2, it can be verified that the optimal radius is achieved at β 1 = β 2. 
We do the following transformation: β i → ξ i − α 2 /2, so that the characteristic polynomial becomes: Denote ξ 1 + ξ 2 = φ, and (ξ 1 − α 2 /2)(ξ 2 − α 2 /2) = ν, we have: The discriminant is ∆:= σ 2 (σ 2 (φ 2 − 4ν) − 4α 2 ). We discuss two cases: 1. φ 2 − 4ν < 0. We are minimizing: with a ∨ b:= max{a, b} a shorthand. A minimizer is at α → 0 and ν → φ 2 /4 (since φ 2 < 4ν), where β 1 = β 2 = 2/(σ 2 1 + σ 2 n) and α → 0. 2. φ 2 − 4ν ≥ 0. A lower bound is:. This is only possible if α → 0 and φ 2 → 4ν, which yields β 1 = β 2 = 2/(σ From what has been discussed, the optimal radius is (κ 2 − 1)/(κ 2 + 1) which can be achieved at β 1 = β 2 = 2/(σ 2 1 + σ 2 n) and α → 0. Again, this might not be the only choice. For instance, take σ 1 = σ 2 n = 1, from equation 3.3, a super-linear convergence rate can be achieved at β 1 = 1 and D.2 PROOF OF THEOREM 4.2: OPTIMAL CONVERGENCE RATE OF OGD Theorem 4.2 (OGD optimal). For Jacobi OGD with β 1 = β 2 = β, to achieve the optimal linear rate, we must have α ≤ 2β. For the original OGD with α = 2β, the optimal linear rate r * satisfies. For Gauss-Seidel OGD with β 2 = 0, the optimal linear rate is r * = (κ 2 − 1)/(κ 2 + 1), at α = √ 2/σ 1 and For OGD, the characteristic polynomials equation 3.6 and equation 3.7 are quartic and cubic separately, and thus optimizing the spectral radii for generalized OGD is difficult. However, we can study two special cases: for Jacobi OGD, we take β 1 = β 2; for Gauss-Seidel OGD, we take β 2 = 0. In both cases, the spectral radius functions can be obtained by solving quadratic polynomials. We assume β 1 = β 2 = β in this subsection. The characteristic polynomial for Jacobi OGD equation 3.6 can be written as: (D.10) Factorizing it gives two equations which are conjugate to each other: 11) The roots of one equation are the conjugates of the other equation. WLOG, we solve λ(λ − 1) + i(λα − β)σ = 0 which gives (1/2)(u ± v), where (D.12), v can be expressed as: therefore, the spectral radius r(α, β, σ) satisfies: (D.14) and the minimum is achieved at α = 2β. From now on, we assume α ≤ 2β, and thus v = a + ib. We write: E SUPPLEMENTARY MATERIAL FOR SECTIONS 5 AND 6 We provide supplementary material for Sections 5 and 6. We first prove that when learning the mean of a Gaussian, WGAN is locally a bilinear game in Appendix E.1. For mixtures of Gaussians, we provide supplementary experiments about Adam in Appendix E.2. This implies that in some cases, Jacobi updates are better than GS updates. We further verify this claim in Appendix E.3 by showing an example of OGD on bilinear games. Optimizing the spectral radius given a certain singular value is possible numerically, as in Appendix E.4. Inspired by , we consider the following WGAN : with s(x):= 1/(1 + e −x) the sigmoid function. We study the local behavior near the saddle point (v, 0), which depends on the Hessian: with E v a shorthand for E x∼N (v,σ 2 I) and E φ for E z∼N (φ,σ 2 I). At the saddle point, the Hessian is simplified as: Therefore, this WGAN is locally a bilinear game. Given the same parameter settings as in Section 5, we train the vanilla GAN using Adam, with the step size α = 0.0002, and β 1 = 0.9, β 2 = 0.999. As shown in Figure 6, Jacobi updates converge faster than the corresponding GS updates.: Contour plot of spectral radius equal to 0.8. The red curve is for the Jacobi polynomial and the blue curve is for the GS polynomial. The GS region is larger but for some parameter settings, Jacobi OGD achieves a faster convergence rate. 
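The quadratic factor in equation D.11 is explicit enough to evaluate numerically. The sketch below solves λ(λ − 1) + i(λα − β)σ = 0 for Jacobi OGD with β1 = β2 = β and prints the resulting spectral radius for a few step sizes; with β = 0.5722 and α = 0.9625 it reproduces the Jacobi radius of roughly 0.790 quoted in the comparison below.

```python
import numpy as np

def jacobi_ogd_radius(alpha, beta, sigma=1.0):
    # Roots of one of the two conjugate quadratic factors from equation D.11:
    #   lambda * (lambda - 1) + 1j * (lambda * alpha - beta) * sigma = 0
    # The other factor has conjugate roots, hence the same moduli.
    roots = np.roots([1.0, 1j * alpha * sigma - 1.0, -1j * beta * sigma])
    return np.abs(roots).max()

beta = 0.5722
for alpha in [0.8, 0.9625, 1.05, 2 * beta]:
    print(f"alpha = {alpha:.4f}  radius = {jacobi_ogd_radius(alpha, beta):.6f}")
```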
Take α = 0.9625, β 1 = β 2 = β = 0.5722, and σ = 1, the Jacobi and GS OGD radii are separately 0.790283 and 0.816572 (by solving equation 3.6 and equation 3.7), which means that Jacobi OGD has better performance for this setting of parameters. A more intuitive picture is given as Figure 7, where we take β 1 = β 2 = β. We minimize r(θ, σ) for a given singular value numerically. WLOG, we take σ = 1, since we can rescale parameters to obtain other values of σ. We implement grid search for all the parameters within the range [−2, 2] and step size 0.05. For the step size α, we take it to be positive. We use {a, b, s} as a shorthand for {a, a + s, a + 2s, . . ., b}. • We first numerically solve the characteristic polynomial for Jacobi OGD equation 3.6, fixing α 1 = α 2 = α with scaling symmetry. With α ∈ {0, 2, 0.05}, β i ∈ {−2, 2, 0.05}, the best parameter setting is α = 0.7, β 1 = 0.1 and β 2 = 0.6. β 1 and β 2 can be switched. The optimal radius is 0.6. • We also numerically solve the characteristic polynomial for Gauss-Seidel OGD equation 3.7, fixing α 1 = α 2 = α with scaling symmetry. With α ∈ {0, 2, 0.05}, β i ∈ {−2, 2, 0.05}, the best parameter setting is α = 1.4, β 1 = 0.7 and β 2 = 0. β 1 and β 2 can be switched. The optimal rate is 1/(5 √ 2). This rate can be further improved to be zero where α = √ 2, β 1 = 1/ √ 2 and β 2 = 0. • Finally, we numerically solve the polynomial for Gauss-Seidel momentum equation 3.11, with the same grid. The optimal parameter choice is α = 1.8, β 1 = −0.1 and β 2 = −0.05. β 1 and β 2 can be switched. The optimal rate is 0.5. In this appendix, we interpret the gradient-based algorithms (except PP) we have studied in this paper as splitting methods , for both Jacobi and Gauss-Seidel updates. By doing this, one can understand our algorithms better in the context of numerical linear algebra and compare our in Section 3 with the Stein-Rosenberg theorem. For EG, we need to compute an inverse: Given det(α 1 α 2 I + β 1 β 2 EE) = 0, the inverse always exists. The splitting method can also work for second-step methods, such as OGD and momentum. We split S = M − N − P and solve: For OGD, we have: For EG, we need to compute an inverse: The splitting method can also work for second-step methods, such as OGD and momentum. We split S = M − N − P and solve: z t+1 = M −1 N z t + M In this paper we considered the bilinear game when E is a non-singular square matrix for simplicity. Now let us study the general case where E ∈ R m×n. As stated in Section 2, saddle points exist iff b ∈ R(E), c ∈ R(E). The set of saddle points is: {(x, y)|y ∈ N (E), x ∈ N (E)}. (G.3)
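Returning to the numerical search described in Appendix E.4, the grid search is straightforward to reproduce. The sketch below scores Gauss-Seidel momentum on a scalar bilinear game (σ = 1) by the spectral radius of its augmented linear system and searches a coarse grid over (α, β1, β2); the grid is narrower than the paper's [−2, 2] range with step 0.05 to keep the run short, and the update convention is the same assumption as in the earlier momentum sketch. If that convention matches the paper's, the search should land near the reported optimum (α = 1.8, β1 = −0.1, β2 = −0.05 with radius ≈ 0.5).

```python
import itertools
import numpy as np

def gs_momentum_radius(alpha, b1, b2, sigma=1.0):
    # Augmented system on [x_t, x_{t-1}, y_t, y_{t-1}] for the scalar game sigma * x * y.
    M = np.array([
        [1 + b1,                   -b1,                  -alpha * sigma,               0.0],
        [1.0,                       0.0,                  0.0,                         0.0],
        [alpha * sigma * (1 + b1), -alpha * sigma * b1,   1 + b2 - (alpha * sigma)**2, -b2],
        [0.0,                       0.0,                  1.0,                         0.0],
    ])
    return np.abs(np.linalg.eigvals(M)).max()

best = (np.inf, None)
alphas = np.arange(0.1, 2.01, 0.1)        # positive step sizes only
betas = np.arange(-0.5, 0.51, 0.05)
for a, b1, b2 in itertools.product(alphas, betas, betas):
    r = gs_momentum_radius(a, b1, b2)
    if r < best[0]:
        best = (r, (round(a, 2), round(b1, 2), round(b2, 2)))
print("best radius and (alpha, beta1, beta2):", best)
```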
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SJlVY04FwH
We systematically analyze the convergence behaviour of popular gradient algorithms for solving bilinear games, with both simultaneous and alternating updates.
Most approaches in generalized zero-shot learning rely on cross-modal mapping between an image feature space and a class embedding space or on generating artificial image features. However, learning a shared cross-modal embedding by aligning the latent spaces of modality-specific autoencoders is shown to be promising in (generalized) zero-shot learning. While following the same direction, we also take artificial feature generation one step further and propose a model where a shared latent space of image features and class embeddings is learned by aligned variational autoencoders, for the purpose of generating latent features to train a softmax classifier. We evaluate our learned latent features on conventional benchmark datasets and establish a new state of the art on generalized zero-shot as well as on few-shot learning. Moreover, our on ImageNet with various zero-shot splits show that our latent features generalize well in large-scale settings. Generalized zero-shot learning (GZSL) is a classification task where no labeled training examples are available from some of the classes. Many approaches learn a mapping between images and their class embeddings BID11 BID19 BID0. For instance, ALE maps CNN features of images to a perclass attribute space. An orthogonal approach to GZSL is to augment data by generating artificial image features, such as BID21 who proposed to generate image features via a conditional WGAN. As a third approach, BID16 proposed to learn a latent space embedding by transforming both modalities to the latent spaces of autoencoders and match the corresponding distributions by minimizing the Maximum Mean Discrepancy (MMD). Learning such cross-modal embeddings can be beneficial for potential downstream tasks that require multimodal fusion. In this regard, BID13 recently used a cross-modal autoencoder to extend visual question answering to previously unseen objects. Although recent cross-modal autoencoder architectures represent class prototypes in a latent space BID10 BID16, better generalization can be achieved if the shared representation space is more amenable to interpolation between different classes. Variational Autoencoders (VAEs) are known for their capability in accurate interpolation between representations in their latent space, i.e. as demonstrated for sentence interpolation BID2 and image interpolation BID6. Hence, in this work, we train VAEs to encode and decode features from different modalities, and align their latent spaces by matching the parametrized latent distributions and by enforcing a cross-modal reconstruction criterion. Since we learn representations that are oblivious to their origin, a zero-shot visual classifier can be trained using latent space features from semantic data. Our contributions in this work are as follows. FORMULA1 Generalized Zero-shot Learning Let S = {(x, y, c(y))| x ∈ X, y ∈ Y S, c(y) ∈ C} be a set of training examples, consisting of image-features x, e.g. extracted by a CNN, class labels y available during training and class-embeddings c(y). Typical class-embeddings are vectors of continuous attributes or Word2Vec features. In addition, an set U = {(u, c(u))| u ∈ Y u, c(u) ∈ C} is used, where u denote unseen classes from a set Y u, which is disjoint from Y S. Here, C(U) = {c(u 1),..., c(u L)} is the set of class-embeddings of unseen classes. In the legacy challenge of ZSL, the task is to learn a classifier f ZSL: X → Y U. 
However, in this work, we focus on the more realistic and challenging setup of generalized zero-shot learning (GZSL) where the aim is to learn a classifier DISPLAYFORM0 The Objective Function CADA-VAE is trained with pairs of image features and attribute vectors of seen classes. The data of each pair has to belong to the same class. In this process, an image feature encoder and an attribute encoder learn to transform the training data into the shared latent space. The encoders belong to two VAEs with a common latent space. Once the VAEs are trained, a softmax classifier is trained on both seen image data and unseen attributes, after they are transformed into the latent representation. As the VAE encoding is non-deterministic, many latent features are sampled for each datapoint. Since we only have one attribute vector per class, we oversample latent-space encoded features of unseen classes. To test the classifier, the visual test data is first transformed into the latent space, using only the predicted means µ of the latent representation. The Objective function for training the VAEs is derived as follows. For every modality i (image features, attributes), a VAE is trained. The basic VAE loss for a feature x of modality i ∈ 1, 2,..M is: DISPLAYFORM1 where D KL represents the Kullback-Leibler Divergence, β is a weight, q(z|x (i) ) = N (µ, Σ) is the VAE encoder consisting of a multilayer perceptron, and p(z) is a Gaussian prior. Additionally, each encoded datapoint is decoded into every available modality, e.g. encoded image features are decoded into attributes and vice versa. Consequently, we minimize the L1 cross-reconstruction loss: DISPLAYFORM2 where γ is a weight. The L1 loss empirically proved to provide slighthly better than L2. Furthermore, the 2-Wasserstein W distance between the multivariate Gaussian latent distribution of image features and attributes is minimized: DISPLAYFORM3 The VAE is trained using the final objective L = L basic +L CA +L DA. We refer to the Cross-Aligned and Distribution-Aligned VAE as CADA-VAE. In addition, we test the variant L = L basic + L CA, termed CA-VAE, and the variant L = L basic + L DA, referred to as DA-VAE. A latent size of 64 is used for all experiments, except 128 for ImageNet. We evaluate our framework on zero-shot learning benchmark datasets CUB-200-2011 BID18, SUN attribute BID12 BID7 BID20 for the GZSL setting. All image features used for training the VAEs are extracted from the 2048-dimensional final pooling layer of a ResNet-101. To avoid violating the zero-shot assumption, i.e. test classes need to be disjoint from the classes that ResNet-101 was trained with, we use the proposed training splits in BID20. As class embeddings, attribute vectors were utilized if available. For ImageNet we used Word2Vec embeddings provided by BID3. All hyperparameters were chosen on a validation set provided by BID20. We report the harmonic mean (H) between seen (S) and unseen (U) average per-class accuracy, i.e. the Top-1 accuracy is averaged on a per-class basis., AwA1 and 2 (Generalized Zero-Shot Learning We compare our model with 11 state-of-the-art models. Among those, CVAE, SE , and f-CLSWGAN BID21 learn to generate artificial visual data and thereby treat the zero-shot problem as a data-augmentation problem. 
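Returning to the objective above, the following is a minimal PyTorch sketch of L_basic, L_CA and L_DA for an image-feature VAE and an attribute VAE sharing a 64-dimensional latent space. The hidden-layer sizes, the attribute dimensionality, the L1 reconstruction inside L_basic, and the closed-form 2-Wasserstein distance between diagonal Gaussians are assumptions for illustration, since the corresponding formulas are not fully reproduced in this excerpt; loss-weight warm-up schedules are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT = 64

class Encoder(nn.Module):
    def __init__(self, d_in):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, 512), nn.ReLU())
        self.mu, self.logvar = nn.Linear(512, LATENT), nn.Linear(512, LATENT)
    def forward(self, x):
        h = self.net(x)
        return self.mu(h), self.logvar(h)

def decoder(d_out):
    return nn.Sequential(nn.Linear(LATENT, 512), nn.ReLU(), nn.Linear(512, d_out))

def reparameterize(mu, logvar):
    return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

def kl_to_prior(mu, logvar):
    return -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())

def wasserstein2(mu1, logvar1, mu2, logvar2):
    # 2-Wasserstein distance between two diagonal Gaussians.
    s1, s2 = (0.5 * logvar1).exp(), (0.5 * logvar2).exp()
    return ((mu1 - mu2).pow(2).sum(1) + (s1 - s2).pow(2).sum(1)).sqrt().mean()

enc_img, enc_att = Encoder(2048), Encoder(312)      # e.g. ResNet features and CUB attributes
dec_img, dec_att = decoder(2048), decoder(312)
beta, gamma, delta = 1.0, 1.0, 1.0                  # loss weights

x_img, x_att = torch.randn(8, 2048), torch.randn(8, 312)   # a dummy matched batch
mu_i, lv_i = enc_img(x_img)
mu_a, lv_a = enc_att(x_att)
z_i, z_a = reparameterize(mu_i, lv_i), reparameterize(mu_a, lv_a)

l_basic = (F.l1_loss(dec_img(z_i), x_img) + beta * kl_to_prior(mu_i, lv_i)
           + F.l1_loss(dec_att(z_a), x_att) + beta * kl_to_prior(mu_a, lv_a))
l_ca = gamma * (F.l1_loss(dec_img(z_a), x_img) + F.l1_loss(dec_att(z_i), x_att))
l_da = delta * wasserstein2(mu_i, lv_i, mu_a, lv_a)
loss = l_basic + l_ca + l_da
print(float(loss))
```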
On the other hand, the classic ZSL methods DeViSE, SJE BID0, ALE, EZSL BID14 and LATEM BID19 use a linear compatibility function or other similarity metrics to compare embedded visual and semantic features; CMT BID15 and LATEM BID19 utilize multiple neural networks to learn a non-linear embedding; and SYNC BID3 learns by aligning a class embedding space and a weighted bipartite graph. ReViSE BID16 proposes a shared latent manifold learning using an autoencoder between the image features and class attributes. The in TAB3 show that our CADA-VAE outperforms all other methods on all datasets. Moreover, our model achieves significant improvements over feature generating models most notably on CUB. Compared to the classic ZSL methods, our method leads to at least 100% improvement in harmonic mean accuracies. In the legacy challenge of ZSL setting, which is hardly realistic, our CADA-VAE provides competitive performance, i.e. 60.4 on CUB, 61.8 on SUN, 62.3 on AWA1, 64.0 on AWA2. However, in this work, we focus on the more practical and challenging GZSL setting. We believe the obtained increase in performance by our model can be explained as follows. CADA-VAE learns a shared representation in a weakly supervised fashion, through a crossreconstruction objective. Since the latent features have to be decoded into every involved modality, and since every modality encodes complementary information, the model is encouraged to learn an encoding that retains the information contained in all used modalities. In doing so, our method is less biased towards learning the distribution of the seen class image features, which is known as the projection domain shift problem BID5. As we generate a certain number of latent features per class using non-deterministic encoders, our method is also akin to data-generating approaches. However, the learned representations lie in a lower dimensional space, i.e. only 64, and therefore, are less prone to bias towards the training set of image features. In effect, our training is more stable than the adversarial training schemes used for data generation BID21. BID20 several evaluation splits were proposed with increasing granularity and size both in terms of the number of classes and the number of images. Note that since all the images of 1K classes are used to train ResNet-101, measuring seen class accuracies would be biased. However, we can still evaluate the accuracy of unseen class images in the GZSL search space that contains both seen and unseen classes. Hence, at test time the 1K seen classes BID15 49.8 7.2 12.6 21.8 8.1 11.8 87.6 0.9 1.8 90.0 0.5 1.0 SJE BID0 59 BID14 63.8 12.6 21.0 27.9 11.0 15.8 75.6 6.6 12.1 77.8 5.9 11.0 SYNC BID3 70.9 11.5 19.8 43.3 7.9 13.4 87.3 8.9 16.2 90.5 10.0 18.0 DeViSE 53.0 23.8 32.8 27.4 16.9 20.9 68.7 13.4 22.4 74.7 17.1 27.8 f- CLSWGAN Xian et al. (2018b) 57 act as distractors. For ImageNet, as attributes are not available, we use Word2Vec features as class embeddings provided by BID3. We compare our model with f-CLSWGAN BID21, i.e. an image feature generating framework which currently achieves the state of the art on ImageNet. We use the same evaluation protocol on all the splits. Among the splits, 2H and 3H are the classes 2 or 3 hops away from the 1K seen training classes of ImageNet according to the ImageNet hierarchy. M 500, M 1K and M 5K are the 500, 1000 and 5000 most populated classes, while L500, L1K and L5K are the 500, 1000 and 5000 least populated classes that come from the rest of the 21K classes. 
Finally,'All' denotes the remaining 20K classes of ImageNet. As shown in FIG1, our model significantly improves the state of the art in all the available splits. Note that the test time search space in the'All' split is 22K dimensional. Hence even a small improvement in accuracy on this split is considered to be compelling. The achieved substantial increase in performance by CADA-VAE shows that our 128-dim latent feature space constitutes a robust generalizable representation, surpassing the current state-of-the-art image feature generating framework f-CLSWGAN. In this work, we propose CADA-VAE, a cross-modal embedding framework for generalized zeroshot learning in which the modality-specific latent distributions are aligned by minimizing their Wasserstein distance and by using cross-reconstruction. This procedure leaves us with encoders that can encode features from different modalities into one cross-modal embedding space, in which a linear softmax classifier can be trained. We present different variants of cross-aligned and distribution aligned VAEs and establish new state-of-the-art in generalized zero-shot learning for four medium-scale benchmark datasets as well as the large-scale ImageNet. We further show that a cross-modal embedding model for generalized zero-shot learning achieves better performance than data-generating methods, establishing the new state of the art.
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
BkghJoRNO4
We use VAEs to learn a shared latent space embedding between image features and attributes and thereby achieve state-of-the-art results in generalized zero-shot learning.
Intuitively, image classification should profit from using spatial information. Recent work, however, suggests that this might be overrated in standard CNNs. In this paper, we are pushing the envelope and aim to further investigate the reliance on and necessity of spatial information. We propose and analyze three methods, namely Shuffle Conv, GAP+FC and 1x1 Conv, that destroy spatial information during both training and testing phases. We extensively evaluate these methods on several object recognition datasets (CIFAR100, Small-ImageNet, ImageNet) with a wide range of CNN architectures (VGG16, ResNet50, ResNet152, MobileNet, SqueezeNet). Interestingly, we consistently observe that spatial information can be completely deleted from a significant number of layers with no or only small performance drops. Despite the fantastic performances of convolutional neural networks (CNNs) on computer vision tasks, their inner workings remain mostly obfuscated to us and analyzing them often in surprising . Generally, the majority of modern CNNs for image classification learn spatial information across all the convolutional layers: every layer in AlexNet, VGG, Inception, and ResNet applies 3×3 or larger filters. Such design choice is based on the assumption that spatial information remains important at every convolutional layer to consecutively increase the access to a larger spatial context. This is based on the observations that single local features can be ambiguous and should be related to other features in the same scene to make accurate predictions;. Recent work on restricting the receptive field of CNN architectures, scrambling the inputs or using wavelet feature networks ing in networks with shallow depth have all found it to be possible to acquire competitive performances on the respective classification tasks. This raises doubts on whether common CNNs learn representations of global context as small local features appear to be sufficient for classification. We add to the list of surprising findings surrounding the inner workings of CNNs and present a rigorous investigation on the necessity of spatial information in standard CNNs by avoiding learning spatial information at multiple layers. To this end, we propose three methods i.e., shuffle conv, GAP+FC and 1x1Conv, to eliminate the spatial information. Surprisingly, we find that the modified CNNs i.e., without the ability to access any spatial information at last layers, can still achieve competitive on several object recognition datasets. This indicates that the spatial information is overrated for standard CNNs and not necessary to reach competitive performances. In our experiments, the last layers of standard CNNs can be simplified by substituting them with our proposed GAP+FC or 1x1Conv layers which ignore spatial information, leading to a smaller model with less parameters. Moreover, our novel simplifications can be adapted to a wide range of CNN architectures and maintain state-of-the-art performance on various image classification datasets. The detail of the shuffle conv. Each feature map from the input tensor will be randomly and independently shuffled before being fed into an ordinary convolution. Training models for the task of object recognition, our intuitive understanding would be that global image context is beneficial for making accurate predictions. 
For that reason extensive efforts have been made to enhance the aggregation of spatial information in the decision-making progress of CNNs.; have made attempts to generalize the strict spatial sampling of convolutional kernels to allow for globally spread out sampling and have spurred a range of follow-up work on embedding global context layers with the help of spatial down-sampling. While all of these works have improved on a related classification metric in some way, it is not entirely evident whether the architectural changes alone can be credited, as there is an increasing number of work on questioning the importance of the extent of spatial information for common CNNs. One of the most recent observations by for example indicate that the VGG-16 architecture trained on ImageNet is invariant to scrambled images to a large extent, e.g. they reported only a drop of slightly over 10% points top-5 accuracy for a pre-trained VGG-16. Furthermore, they were also able to construct a modified ResNet architecture with a limited receptive field as small as 33 × 33 and were able to reach competitive on ImageNet, similar to the style of the traditional Bag-of-Visual-Words. The latter was also explicitly incorporated into the training of CNNs in the works by;; , the effect of neglecting global spatial information by design had surprisingly little effect on performance values. On a related note, has indicated with constructing object-texture mismatched images that models trained solely on ImageNet do not learn shape sensitive representations, which would be expected to require global spatial information, but instead are mostly sensitive to local texture features. Our work is motivated to push the envelope further in order to investigate the necessity of spatial information in the process pipeline of CNNs. While the related work has put the attention mainly on altering the input, we are interested in taking measures that remove the spatial information in intermediate layers to shed light on how CNNs process spatial information, thus evaluating its importance and make suggestions for architectural design choices. In order to test how spatial information is processed in the CNN processing pipeline, we propose three approaches: shuffle convolution, GAP+FC and 1x1Conv that neglect spatial information in different ways in intermediate layers and apply these to well established architectures. The evaluation is primarily done with comparing the classification accuracy for models that have been increasingly constrained with respect to how much spatial information can be propagated throughout the network. Section 3.1 elaborates details on our approaches and the experimental setup is discussed in section 3.2. Shuffle Convolution extends the ordinary convolution operation by prepending a random spatial shuffle operation, so that the input to the convolution is permuted. As illustrated in Fig. 1 right: Assume an input tensor of size c×h×w with c being the number of feature maps for a convolutional layer. We first take one feature map from the input tensor and flatten it into a 1-d vector with h × w elements, whose ordering is then permuted randomly. The ing vector is finally reshaped back into h × w and substitute the original feature map. This procedure is independently repeated c times for each feature map so that activations from the same location in the previous layer are misaligned, thereby preventing the information from being encoded by the spatial arrangement of the activations. 
The shuffled output becomes the input of an ordinary convolutional layer in the end. Even though shuffling itself is not differentiable, gradients can still be propagated through in the same way as Max Pooling. Therefore it can be embedded into the model directly for end-to-end training. As the indices are recomputed within each forward pass, the shuffled output is also independent across training and testing steps. Images within the same batch are shuffled in the same way for the sake of simplicity since we find empirically that it doesn't make a difference whether the images inside the same batch are shuffled in different ways. Instead of shuffling a single layer, we shuffle all layers from the last to the specific depth (last 2 convolutional layers are shuffled in Fig. 1) in order to prevent the model from remembering encountered permutations. Memorization of random patterns is something that deep networks have been shown to be powerful at. Global Average Pooling and Fully Connected Layers: Shuffle convolution is an intuitive way of destroying spatial information but it also makes it impossible to learn correlations across channels for a particular spatial location. Furthermore, shuffling introduces undesirable randomness into the model so that during evaluation multiple forward passes are needed to acquire an estimate of the mean of the output. A simple deterministic alternative achieving a similar goal is what we call GAP+FC. The deployment of Global Average Pooling (GAP) after an intermediate layer, and substitute all the subsequent ones by fully connected layers. Compared to shuffle conv, it is a much more efficient way to avoid learning spatial information at intermediate layers because it shrinks the spatial size of feature maps to one. Fig. 1 demonstrates a toy example of a CNN with the last two convolutional layers modified by GAP+FC. 1x1 Convolution: GAP+FC collapses the spatial information to a size of 1. However, reducing the spatial size potentially influences the expressive ability of the model. For example, the point-wise difference of two consecutive 7 × 7 feature maps lies in the 49 dimension space while the difference of two 1 × 1 feature maps is just a single value, so if the information would be conveyed by the order of the feature maps, larger feature map size tends to be more expressive. In order to retain the information kept in the spatial dimensions but restrict the model to be invariant to the relationships between spatial locations, we propose as an alternative the use of 1x1 convolutions, which replaces the 3x3 convolutions at last layers in a network. It differs from shuffle conv in that the activation at the same spatial location is aligned. Fig. 1 gives a small demonstration where the last 2 layers from a toy CNN are modified. It is worth noting that ResNets use stride-two convolution to downsample the feature maps at the end of bottleneck. Such down-sampling strategy is not ideal for 1x1 convolution because it ignores more than 3/4 of the input. So we use max or average pooling with 2x2 windows as our down-sampling method instead. We test different architectures with shuffle conv, GAP+FC and 1x1Conv on 3 datasets: CIFAR100, Small-ImageNet-32x32 and ImageNet. We measure in each experiment the top-1 accuracy and the number of model parameters. We will take an existing model and apply the modification to layers from the last layer on. The rest of the setup and hyper-parameters remain the same as the baseline model. 
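For concreteness, the shuffle conv operation described above can be sketched as a small PyTorch module: every channel receives its own random spatial permutation, shared across the batch and redrawn at each forward pass, before an ordinary convolution is applied. This is an illustrative reimplementation, not the authors' code.

```python
import torch
import torch.nn as nn

class ShuffleConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, **kwargs):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, **kwargs)

    def forward(self, x):
        b, c, h, w = x.shape
        # one independent permutation of the h*w positions per channel,
        # shared across the batch and redrawn every forward pass
        perm = torch.argsort(torch.rand(c, h * w, device=x.device), dim=1)
        flat = x.reshape(b, c, h * w)
        shuffled = torch.gather(flat, 2, perm.unsqueeze(0).expand(b, c, h * w))
        # gather is differentiable w.r.t. its input, so gradients flow through,
        # much like max pooling
        return self.conv(shuffled.reshape(b, c, h, w))

layer = ShuffleConv2d(64, 128, kernel_size=3, padding=1)
out = layer(torch.randn(4, 64, 16, 16))
print(out.shape)   # torch.Size([4, 128, 16, 16])
```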
By shuffle conv or GAP+FC or 1x1Conv, our modification on the baseline model always starts from the last layer and is consecutively extended to the first layer. We denote as K the number of modified convolutional layers or sub-modules counting from the last layer on. The rest of the operations, like skip connections, and modules remain the same. 2 × 2 average pooling with stride 2 is used for down-sampling in all experiments due to the ablation of down-sampling methods in section 4.4. For the VGG-16 architecture, the modification is only performed on the convolutional layers as illustrated in Fig. 1. K varies from 0 (representing the baseline) to 13 since 13 out of the 16 layers are convolutional. For the ResNet-50 architecture with 16 bottleneck sub-modules, one bottleneck is considered as one layer and the modification is only applied onto the 3 × 3 convolutions inside since they are the only operation with spatial extent, the rest of the configuration remains the same as in the baseline model (see Appendix foran example of modified ResNet-50 architecture). For CIFAR100 and Small-ImageNet-32x32 experiments, the first convolution in ResNet is set to 3 × 3 with stride 1 and the first max pooling layer is removed so that the final feature map size is 4 × 4. For each architecture, we first reproduce the original on the benchmark as our baseline, and then the same training scheme is directly used to train our models. All models in the same set of experiments are trained with the same setup from scratch and they are initialized by the same random seed. During testing, we make sure to use a different random seed than during training. We first present an in-depth study of our main observations on CIFAR100 for VGG-16 and ResNet-50 in section 4.1 and then verify them on other datasets and architectures in section 4.3. Finally, the influence of the depth and receptive field size is discussed in section 4.4. In this section, we first investigate the invariance of pre-trained models to the absence of the spatial information at test time, then we impose this invariance at training time with methods in section 3.1. Contradicting to the common sense, recent works suggest a less important role of spatial information in image classification task. Here we take a further step to study the robustness of the model against the absence of the spatial information at test time by applying Shuffle Conv. More specifically, we substitute the last 3 convolutional layers (see Appendix A.4 for more on other layers) of a pre-trained VGG-16 with shuffle conv at test time on CIFAR100 such that the spatial information is neglected in those layers. Because random shuffle is independent at each forward pass, the final test accuracy will be the average of 200 evaluations and the standard deviation is also present. The left table in 2 clean → shuffle shows the model from the clean training scheme gives around 1% test accuracy, which is the same as random guess on CIFAR100, when evaluated with random shuffle. However, if the shuffle conv is infused into the model at training time, then the baseline performance can be achieved no matter whether random shuffle appears at test time as shown in the left table of 2 (73.67% for shuffle → shuffle and 73.57% for shuffle → clean). 
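As a concrete illustration of the 1x1Conv variant, the sketch below swaps the last K 3×3 convolutions of torchvision's VGG-16 for 1×1 convolutions with unchanged channel counts, leaving the rest of the network intact. The CIFAR-specific classifier, input size and training schedule described above are not reproduced, so this is an illustrative sketch rather than the exact experimental code.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

def to_1x1_last_k(model, k):
    conv_idx = [i for i, m in enumerate(model.features) if isinstance(m, nn.Conv2d)]
    for i in conv_idx[len(conv_idx) - k:]:
        old = model.features[i]
        # same channel counts, but no spatial extent
        model.features[i] = nn.Conv2d(old.in_channels, old.out_channels, kernel_size=1)
    return model

model = to_1x1_last_k(vgg16(), k=5)
out = model(torch.randn(2, 3, 224, 224))
print(out.shape)   # torch.Size([2, 1000])
```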
We thus design the following experiment: we modify the last K convolutional or bottleneck layers of VGG-16 or ResNet-50 on CIFAR100 by Shuffle Conv (both at training and test time), GAP+FC, and 1x1Conv such that the spatial information is removed in different ways. Our modification on the baseline model always starts from the last layer and is consecutively extended to the first layer. The modified networks with different K are then trained on the training set with the same setup and evaluated on the hold-out validation set of CIFAR100. Table 1: Table summarizes the top-1 accuracy and the number of parameters of different K on CIFAR100 for VGG-16 and ResNet-50 with GAP+FC and 1x1Conv. K is defined as the number of modified layers counting from the last layer. The first column for each modification method shows the most compressed model within 1% accuracy difference to the corresponding baseline model and the second column presents the best performed model for each modification method. We can see that 1x1Conv gives even a slightly higher test accuracy while having fewer parameters. The on CIFAR100 for VGG-16 and ResNet-50 are shown in Fig. 2. The x-axis is the number of modified layers K, ranging from 0 to the total number of convolutional or bottleneck layers. K = 0 is the baseline model performance without modifying any layer. As we can see in the right of Fig. 2, with the increasing number of modified layers, the performance of ResNet-50 drops surprisingly slowly for our three methods consistently. For example, Shuffle conv can modify up to the last 5 layers of ResNet-50 while maintaining similar baseline performance i.e., Shuffle conv(K=5) achieves 77.89% accuracy v.s. 78.06% accuracy of the baseline (K=0). 1x1Conv and GAP+FC can preserve the baseline performance until K = 5 and K = 9, where the feature map size is 8 and 16, respectively. For VGG-16, as shown in the left of Fig. 2, a similar trend can be observed. Shuffle conv, GAP+FC, and 1x1Conv are able to tolerate modification of the last 5 layers without losing any accuracy. This is in strong contrast to the common belief that the spatial information is essential for object recognition tasks. One obvious advantage of our methods is that 1x1Conv and GAP+FC can reduce the number of model parameters without affecting the performance. As a side effect, we find that GAP+FC and 1x1Conv have a regularization effect that can lead to improved generalization performance when data augmentation is not applied. Fig. 2 shows the test accuracy of modified ResNet-50 via GAP+FC and 1x1Conv trained with and without data augmentation. While the models trained with data augmentation show similar test accuracy, we observe a significant performance improvement over the baseline on ResNet-50 trained without data augmentation, e.g 1x1Conv outperforms the baseline by 8.01% on CIFAR100 when several last layers are modified. Unfortunately, this effect doesn't hold across other architectures and datasets. Table 2: Left: Top-1 accuracy of VGG-16 with random shuffle enabled at either training and test time for the last 3 layers. Shuffled model is robust to the standard test scheme while the test accuracy of a standard VGG-16 drops to the random guess level if evaluated with shuffling. Right: Effect of data augmentation on classification for ResNet-50 on CIFAR100. The data augmentation here is the random flipping and the random cropping. We present here the best performed model for each method. 
We can see that modified models reach higher test accuracy when data augmentation is not applied. ResNet-50 with 1x1Conv trained without data augmentation shows a significant performance improvement over the baseline from 65.64% to 73.65% on CIFAR100. Table 3: Image classification on Small-ImageNet for VGG16 and ResNet50 with GAP+FC and 1x1Conv. K is defined as the number of modified layers counting from the last layer. Our experiments in Table 2 left clearly show that ordinary models by default don't possess the invariance to the absence of the spatial information. In contrast to the common wisdom, we find that spatial information can be neglected from a significant number of last layers without any performance drop if the invariance is imposed at training, which suggests that spatial information at last layers is not necessary for a good performance. We should however notice that it doesn't indicate that models whose prediction is based on the spatial information can't generalize well. Besides, unlike the common design manner that layers at different depth inside the network are normally treated equally, e.g. the same module is always used throughout the architecture, our observation implies it is beneficial to have different designs for different layers since there is no necessity to encode spatial information in the last layers (see Appendix A.3 for discussion on first layers), therefore reducing the model complexity. Comparing our three methods, we observe that 1x1Conv is more robust to the absence of the spatial information while Shuffle Conv and GAP+FC perform similarly for both VGG-16 and ResNet-50. This implies that CNNs can still benefit from the larger size of activation maps even though its spatial information is not presented. Since CIFAR100 is a relatively easy dataset with centered objects belonging to only 100 classes, we conduct in the following experiments on more complex inputs: small-ImageNet and ImageNet, whereas small-ImageNet is a down-sampled version of the latter (from 256 × 256 to 32 × 32). The on Small-ImageNet are summarized in the Table 3 (see more details in the Appendix). GAP+FC and 1x1Conv present a similar behavior as on CIFAR100 dataset. And the gap between the performance of GAP+FC and 1x1Conv increases, the maximal number of layers that can be modified on ResNet50 for GAP+FC and 1x1Conv are 3 and 6. This implies that spatial information at last layers of CNNs are not necessary for good performance on the datasets with enough complexity. Furthermore, we conduct experiments for different architectures on full ImageNet with an input image size of 224 × 224. We first reproduce baselines as in the original papers and then apply the same training scheme directly to train our models. Here we only evaluate 1x1Conv due to its superiority over GAP+FC and due to its excessive computational overhead training on the full ImageNet dataset. In Table 4, we observe that spatial information can be ignored at last layers So far, we have evaluated our methods with large models that have been shown to have incredible capacity to learn even from random labels. A hypothesis could be that the models we test are very complex to begin with such that it is of no surprise that they learn the relevant representations in earlier layers and can encode the information necessary to classify in very few dimensions. 
To approach this question, we deploy our experiments on architectures that have been specifically designed to be of minimal complexity in order to save memory and reduce the number of floating point operations. Hence, we evaluate MobileNetV2 with 3.5M parameters and with 1.25M parameters, both of which are able to reach competitive performance on ImageNet. MobileNetV2 uses the inverted residual bottleneck as their building block where the input tensor is first expanded along the channel dimension and then a 3 × 3 depth-wise convolution is performed before the number of channels is reduced to the output dimension by the final 1 × 1 convolution. In our modification we simply remove the 3 × 3 depthwise convolution together with its ReLU and batch normalization. SqueezeNet is composed of fire modules, which leverage the strategies from to reduce the model parameters. It first squeezes the number of channels by a 1 × 1 convolution and then expands by a mixture of 1 × 1 convolutions and 3 × 3 convolutions. In our modification, we replace all 3 × 3 convolutions in the expand phase by 1 × 1 convolutions. The in Table 4 show that the last two conv layers of both MobileNetV2 and SqueezeNet are also spatial invariant i.e., neglecting the spatial information at those 2 last layers does not affect the performance at all, despite the minimal model complexity. The experiments on Small-ImageNet and ImageNet confirm again the claim in section 4.2 that the spatial information at last layers is not necessary for a good performance and its generalizability across architectures can lead to a further reduction of the number of model parameters even on models that are already very efficient, e.g. MobileNetV2 and SqueezeNet. In the previous section, we observed that 1x1Conv gives the best performance in the sense that spatial information of more layers can be neglected without affect the test accuracy. Here we investigate whether these modified layers are of importance at all or whether they can be stripped of the architecture entirely. The relationship between the receptive field size of a layer and whether it can be modified without performance impact is evaluated subsequently. Importance of the Depth. We saw previously that 1x1Conv gives the best performance in the sense that spatial information at more layers can be neglected without affect the overall test accuracy. Here we ask whether those modified layers can be neglected altogether, effectively reducing the depth. We first pick the most compressed ResNet-50 with the same test accuracy as the baseline on Small-ImageNet, last 6 sub-modules are modified by 1x1Conv. We then strip off one modified layer at a time from the last layer on, ing in 6 different models which are trained with the same configuration. The is shown in Fig. 3 left. With the increase of the number of 1x1 convolutional layers, the test accuracy also increases. So even though the spatial information at last layers is not necessary, those last layers are still essential for good performance. The relation between the receptive field size and the test accuracy difference to the baseline for different image size on VGG-16 over CIFAR100 shows that the test accuracy saturates with the increase of the receptive field size for a given image size. The minimal required receptive field tends to be larger for larger image size and this minimum is normally larger than the actual image size. The exact relation is however unclear. 
that the spatial information is marginalized out at some particular depth and the ing non-linear transformations are solely used to disentangle the depth wise information. Relationship to the Receptive Field. A reason for that marginalization of spatial information could be hypothesized to be related to the receptive field size of a particular layer. If the receptive field size of a layer is greater or equal to the size of the image, does that tell us whether all following layers can be manipulated? We choose to ablate VGG-16 because the receptive field for a multibranch network is not properly defined as it can only state a theoretical upper bound and do so on CIFAR100 as each object normally occupies the entire image. We replace the 3 × 3 convolutional layers in VGG-16 by 1×1 convolutional layers from the last layer on and until the first layer, thereby varying the receptive field size of the last convolutional layer in our model. Results are shown in Fig. 3 right. Y-axis is the test accuracy difference between the modified model and the baseline model. We can see that the test accuracy saturates with the increase of the receptive field size for a given image size. In order to reach the saturation, it seems that the minimal required receptive field size has to exceeds the actual image size by a relatively large margin and this margin increases for larger image size. For example, the model reaches approximately the same test accuracy as a vanilla VGG-16 with receptive field being 50 for 32 × 32 input image, and the same number becomes around 120 for 64 × 64 input image. This is maybe because the effective receptive field is normally smaller than the theoretical receptive field. However, it is still not really possible to tell a quantitative relation between the required receptive field size and the image size since there are too few data points and it is hard to confirm if an architecture with a specific final receptive field size is sufficient to obtain the baseline performance. To conclude, we empirically show that last layers of CNNs are robust to the absence of the spatial information, which is commonly assumed to be important for object recognition tasks. Our proposed methods, without accessing any spatial information at last layers of modern CNNs, are able to achieve competitive on several object recognition datasets incuding CIFAR100, SmallImageNet and ImageNet. We suggest a good rule of thumb for CNN architectures: using 1x1 convolution or fully connected layers at last layers reduces the number of parameters without affecting the performance. An interesting future direction is to study whether our methods can generalize to other computer vision tasks, e.g., object detection and pose estimation where the spatial relationships are vital for localizing objects. A APPENDIX Convolution with stride 2 was suggested by for 3 × 3 filters as a replacement for pooling layers as the down-sampling method. For example, ResNets use 1 × 1 convolution with stride 2 to reduce the feature map size. However, a direct adaptation leads to a failure for our 1x1Conv. In figure 4, we observe a more rapid decrease of the test accuracy for stride 2 downsampling than average pooling and max pooling on ResNet50 over Small-ImageNet. With the same test accuracy as the baseline, the number of modifiable layers is 3 for convolution with stride 2 and 6 for average pooling. 
The reason for the failure of the stride 2 case may lie in the fact that 1x1 convolution doesn't have the spatial extent, so a down-sampling will ignore 75% of the activations even they may convey the majority of the information. In an ordinary bottleneck that performs down-sampling, the lost information in the main branch can be replenished from the skip connection where 3 × 3 convolution is deployed to ensure the information at each location is processed. In our modification, however, the skip connection branch will suffer from the loss of the information as well due to 1x1 convolution. Average pooling or max pooling on the other hand doesn't have this problem and their performance according to the plot doesn't have significant difference to each other. We test the necessity of spatial information by GAP+FC and 1x1Conv for VGG-16 and ResNet-50 on Small-ImageNet. Experimental setup is the same as the CIFAR100 experiment. Results are shown in Fig. 5. Within 1% test accuracy difference, GAP+FC manages to replace the last 4 layers in VGG-16 and 1x1Conv can replace the last 7 layers (46.05% and 45.44% compared to the baseline performance 46.59%, respectively). Similarly, the test accuracy can be preserved until K = 3 and K = 6 for GAP+FC and 1x1Conv, which confirms the better performance of 1x1Conv over GAP+FC. This indicates spatial information at last layers is not necessary for a good performance. Previous experiments always apply shuffle conv from one specific layer until the last layer in a network. We test here the impact of random shuffle at different depth by applying shuffle conv at one single layer at a time. The of VGG-16 on CIFAR100 is summarized in Fig. 6 where the x-axis is the layer index (VGG-16 has 13 convolutional layers). We plot the baseline performance with an horizontal line alongside the modified models in order to show a clearer comparison. We can see an overall similar trend as multiple layer shuffle in Fig. 2, the test accuracy drops slowly shuffle conv vanilla Figure 6: The orange curve is the test accuracy of the vanilla VGG-16 i.e. the baseline. The blue curve shows the test accuracy of the VGG-16 with a single convolutional layer modified by shuffle conv at different depth. The x-axis is the layer index with 13 being the last convolutional layer in VGG-16. Random shuffle is applied both at training and test time. The implies random shuffle has a larger impact at first layers than the last layers. with the decrease of the layer index. The baseline performance is maintained for the last 4 layers, which implies random shuffle has a larger impact at first layers than the last layers. In Table. 2 left, we presented the test accuracy of a specific model whose last 3 layers are replaced by shuffle conv under mismatched training and test schemes. We show here the complete of models with different K in Fig. 7. The green curve which is obtained by evaluating the baseline with different K at test time falls to random guess on CIFAR100, compared to the red curve which represents the baseline with clean training and test schemes. And the shuffled models which maintain the baseline accuracy have very similar behavior (the overlapped part of orange curve and blue curve) no matter whether random shuffle appears during evaluation. However, there is a gradually increasing gap between these 2 curve when the shuffled model can't preserve the baseline performance, that consistent schemes gives significant higher accuracy than the inconsistent one. 
Unfortunately, the reason is not fully understood. A.5 ON IMAGENET Fig. 8 and 9 show the complete of test accuracy of ResNet-152 and ResNet-50 being modified by 1x1Conv on ImageNet. All models are trained with the same scheme as in from scratch. The claim that spatial information is not necessary at last layers generalizes well on ImageNet. We test here another type of random shuffle along the depth of feature maps. It randomly swaps the order of the feature maps along the channel dimension in each forward pass and is denoted as channel shuffle. The experiments are run for VGG-16 on CIFAR100. Fig. 10 shows the change of the test accuracy with the number of layers K that is modified by channel shuffle increasing. Besides an overall decreasing trend, the test accuracy drops much faster than that from random spatial shuffle (74.10% to 71.49% with only the last layer being shuffled), which implies a much weaker robustness of the model against channel shuffle. We therefore assume a more important role of the order of the feature maps in encoding the information at last layers. A.7 RESNET50 ARCHITECTURE Fig. 11 shows an example of ResNet-50 with the last 3 bottlenecks being modified by shuffle conv, GAP+FC and 1x1Conv. Our modification is applied only the 3 × 3 convolution in side each bottleneck since it is the only operation that has the spatial extent.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
H1l7AkrFPS
Spatial information in the last layers is not necessary for good classification accuracy.
Disentangling underlying generative factors of a data distribution is important for interpretability and generalizable representations. In this paper, we introduce two novel disentangling methods. Our first method, Unlabeled Disentangling GAN (UD-GAN, unsupervised), decomposes the latent noise by generating similar/dissimilar image pairs and it learns a distance metric on these pairs with siamese networks and a contrastive loss. This pairwise approach provides consistent representations for similar data points. Our second method (UD-GAN-G, weakly supervised) modifies the UD-GAN with user-defined guidance functions, which restrict the information that goes into the siamese networks. This constraint helps UD-GAN-G to focus on the desired semantic variations in the data. We show that both our methods outperform existing unsupervised approaches in quantitative metrics that measure semantic accuracy of the learned representations. In addition, we illustrate that simple guidance functions we use in UD-GAN-G allow us to directly capture the desired variations in the data. Generative Adversarial Networks (GANs) are generative model estimators, where two neural networks (generator and discriminator) are trained in an adversarial setting, so that likelihood-based probabilistic modeling is not necessary. This works particularly well for sampling from a complex probability distribution, such as images. Although GANs yield realistic looking images BID18, the original formulation in only allows for randomly sampling from the data distribution without disentangled structural or semantic control over the generated data points. One way to disentangle the generation process is to use conditional GANs BID16 BID17. These models modify the generator by conditioning it with supervised labels. Then, they either take the same labels as input in the discriminator BID16 and measure the image-label compatibility, or classify the correct label at the output, given the generated image BID17. Conditional GANs rely on a dataset with labels, which might not always be available or might be time-consuming to collect. In this paper, we propose two GAN-based methods that learns disentangled representations without using labeled data. Our first method, Unlabeled Disentangling GAN (UD-GAN), generates image pairs, then embeds them with Siamese Networks BID2, and finally learns a distance metric on a disentangled representation space. Whereas our second method, UD-GAN-G, uses guidance functions to restrict the input to our siamese networks, so that they capture desired semantic variations. There have been many studies on learning disentangled representations in generative models, which can be grouped into the level of supervision/labeled data they require. In BID28 BID26, the identity and the viewpoint of an object are disentangled via reconstructing the same object from a different viewpoint and minimizing a reconstruction loss. Whereas in BID15, the style and category of an object is separated via autoencoders, where an encoder embeds the style of an input image to a latent representation, and a decoder takes the category and style input to reconstruct the input image. In BID24, autoencoders and GANs are combined to decompose identity and attribute of an object, where the disentangled representation is obtained at the encoder outputs, and image labels are used at the output of the discriminator. Disentangled representations (semi-supervised). 
In BID20, they clamp the hidden units for a pair of images with the same identity but with different pose or expression to have the same identity representation. Whereas in BID11, synthesized images are used to disentangle pose, light, and shape of an object by passing a batch of images where only one attribute varies and the rest of the representation is clamped to be the same. These techniques only require a batch of samples with one attribute different at a time. Disentangled representations (unsupervised). InfoGAN BID1 is an unsupervised technique that discovers categorical and continuous factors by maximizing the mutual information between a GAN's noise variables and the generated image. β-VAE BID7 and DIP-VAE BID12 are unsupervised autoencoder-based techniques that disentangle different factors in the latent representation of an encoded image. In β-VAE, the KL-divergence between the latent and a prior distribution is weighted with a factor β > 1 to encourage disentanglement in the posterior latent distributions. Wheres in DIP-VAE, the covariance matrix of the latent distribution is encouraged to be an identity matrix, thus leading to uncorrelated latent representations. For all of the unsupervised methods, after a model is trained, a human needs to investigate which factors map to which semantic property. In addition, as the methods are unsupervised, not all desirable factors might be represented. In contrast, our method builds on existing approaches with two important modifications: (i) We operate on pairs of similar/dissimilar image pairs. (ii) We compute the image embeddings using separate networks, which allows us to guide the disentangling process with information restriction. In GANs, the generator, G, maps a latent variable z, which has an easy-to-sample distribution, into a more complex and unknown distribution, such as images. On the other hand, the discriminator D tries to distinguish real images from the ones that are generated by G. In, the training is performed as a minimax game as follows: DISPLAYFORM0 where P R and P Z are the probability distributions of real images and the latent variable z, respectively. We train our GAN by using the loss in equation 1. In order to increase stability, we modify the generator loss by maximzing log(D(G(z))), instead of minimizing the second term in equation 1. In a standard GAN setting, all of the variation in the distribution of real images is captured by the latent variable z. However, a single dimension or a slice of z does not necessarily have a semantic meaning. In this paper, our target is to slice the latent variable into multiple vectors, where each vector controls a different semantic variation. Our network architecture is visualized in Figure 1. In our method, the latent vector DISPLAYFORM0, which represent different attributes we aim to disentangle. One can add a final variable that captures the variation (and the noise) that is not picked up by the knobs. In our experiments, this additional variable did not have a notable effect. In our notation, q i refers to all of the knobs, except q i. In order to train our model, first, for each q i, we sample two different vectors, q The image pairs that are generated with the same q i vectors, {x 11, x 12} or {x 21, x 22}, should have the same i th attribute, regardless of the values of q i. 
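As an illustration of this pairing, the sketch below samples two values of the knob q_i and two values of the remaining knobs, generates the four images x11, x12, x21, x22 with a toy generator, and scores them with the pull/push contrastive objective introduced in the next paragraph (equation 2). The toy generator and embedding network, the exact adaptive-margin form (taken here as proportional to |q_i^1 − q_i^2|), and the use of a single cross pair for the push term are assumptions for illustration.

```python
import torch
import torch.nn as nn

n_knobs, knob_dim = 4, 1
G = nn.Sequential(nn.Linear(n_knobs * knob_dim, 64), nn.ReLU(), nn.Linear(64, 3 * 8 * 8))
phi_i = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 16))   # siamese embedder for knob i
i, gamma0 = 0, 1.0

def sample_knob():
    return torch.rand(1, knob_dim) * 2 - 1          # Unif(-1, 1), as in the experiments

def make_latent(q_i, q_rest):
    z = q_rest.clone()
    z[:, i * knob_dim:(i + 1) * knob_dim] = q_i      # overwrite slot i with the chosen knob
    return z

q1, q2 = sample_knob(), sample_knob()                # two samples of knob i
rest1 = torch.rand(1, n_knobs * knob_dim) * 2 - 1    # two samples of the remaining knobs
rest2 = torch.rand(1, n_knobs * knob_dim) * 2 - 1

x11, x12 = G(make_latent(q1, rest1)), G(make_latent(q1, rest2))   # share q_i^1
x21, x22 = G(make_latent(q2, rest1)), G(make_latent(q2, rest2))   # share q_i^2

e11, e12, e21, e22 = phi_i(x11), phi_i(x12), phi_i(x21), phi_i(x22)
margin = gamma0 * (q1 - q2).abs().mean()             # adaptive margin (assumed form)

# one possible instantiation of the pull/push terms of the contrastive loss
pull = (e11 - e12).pow(2).sum() + (e21 - e22).pow(2).sum()        # same q_i -> close
d_cross = (e11 - e21).pow(2).sum().sqrt()
push = torch.clamp(margin - d_cross, min=0).pow(2)                # different q_i -> apart
loss_phi_i = pull + push
print(float(loss_phi_i))
```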
We can ensure this via embedding the generated image pairs into a representation space with Siamese Networks BID2, which are denoted as φ i , and then learning a distance metric on the embedding vectors by employing Contrastive Loss BID6 ). An optional guidance function is used to restrict the information that goes into a siamese network, thus letting us approximate a desired representation space. The guidance is disabled for our unsupervised UD-GAN approach. Whereas for UD-GAN-G, the guidance is a simple, user-defined function, which is discussed in Section 3.3.We use a Contrastive Loss function to pull similar image pairs together, and push dissimilar pairs apart as follows: DISPLAYFORM1 where, L φ i is the Contrastive Loss for the i th Siamese Network φ i , the function DISPLAYFORM2 ) 2 is a shorthand for embedding distance between x ni1 and x ni2, and γ DISPLAYFORM3 is an adaptive margin of the form γ DISPLAYFORM4. Using an adaptive margin makes the distance between two latent samples semantically meaningful and we empirically found that it improves the training stability. The discriminator network D is not modified and is trained to separate real and generated image distributions. BID3 use a similar latent variable slicing for capturing illumination and pose variations of a face with a fixed identity. Their discriminator needs image pairs, which must be labeled for real images, to judge the quality and identity of the faces. Our method does not require any labels for the real images. Instead, we create similar and dissimilar image pairs via concatenating latent variables and generating image batches. Our final loss function is: DISPLAYFORM5 where, L W GAN is the GAN loss described in equation 1, λ φ i is the weight of the embedding loss, and the sampling of the latent variables depends on i and is performed as described above. A guidance function reduces the information content that flows into a siamese network and causes the corresponding embedding space to capture only the variations in the restricted input. For example, consider we want to capture the hair-related attributes in the CelebA dataset BID14, which contains aligned images of human faces. By cropping every region but the top part of a generated image, we are able to guide φ top to learn only the variations in the "Hair Color" as shown in the first row of FIG1. Note that, the knob q top (that corresponds to φ top ) changes the hair color not only at the cropped part of the image but as a whole. This is due to the interplay between the adversarial part of our loss (see equation 3), which enforces global realism in images, and the contrastive loss, which administers disentangled representations. As shown in FIG1, different guidance functions leads to capturing different variations in the CelebA dataset. Crop Guidance We can gain a probabilistic interpretation of our method on a toy example. Let us assume a problem, where we want to generate images of colored polygons (see Figure 1), where there are two independent factors of variation: shape and color, which we want to capture using two knobs q i and q j, respectively. When we set q j to a certain value and vary q i, we want to generate polygons with the same color, but different shapes, and vice versa. Let P be the probability distribution of colored polygons. 
For each attribute, P can be decomposed into a mixture distribution as follows: DISPLAYFORM0 is a mixture component and π DISPLAYFORM1 is its corresponding probability of choosing it, and N i is the number of different values an attribute (in our example, i corresponds to shape) can take. A similar explanation can be made for attribute j, i.e. color. For the sake of this analysis, we accept that for each attribute, P can be decomposed into different discrete mixture distributions as shown in Figure 3. For this specific case, Q i and Q i are the distributions of colored squares and colored diamonds, respectively. For the color attribute, which is indexed by j, each Q (k) j corresponds to a distribution of polygons with a single color (i.e., green polygons).Our contrastive loss in equation 2 has two terms. The first term is minimizing the spread of each mixture component Q (k) i. This spread is inversely related to disentanglement. If all samples from DISPLAYFORM2 Figure 3: Illustration of the embedding spaces and separated probability distributions after training our model. DISPLAYFORM3 are mapped to the same embedding vector, the effect of j (and any other attribute) on the representation φ i disappears and disentangling is achieved. During training, we stochastically go through all embedding spaces and minimize their spread, thus ing in a disentangled representation in TAB6 in Appendix G.The second term in equation 2 separates all Q (k) i from each other using an adaptive margin γ i. This margin depends on the difference between input latent pairs, so that the ing embedding space is smooth. In other words, we separate rectangles, circles, and ovals from each other, but circles should be closer to ovals than squares, due to their relative similarity. In the following, we focus on the shape attribute that is represented by i, however, derivations carry over to the color attribute j. In order to separate the probability distributions over image embeddings, one can maximize a divergence between all pairs from Q (k) i. One way to measure the distance between these distributions is to use the unbiased estimator of the energy distance BID23: DISPLAYFORM4 The energy distance in equation 5 can be interpreted as an instance of Maximum Mean Discrepancy BID0 and resembles the Contrastive Loss BID6. We can rewrite equation 5 using the Contrastive Loss in equation 2 as follows: DISPLAYFORM5 Each element in the second sum is quadratic function and has its minimum at DISPLAYFORM6 /2 and the value of the minimum is γ i 2 /2. So, we can rewrite equation 6 as follows: DISPLAYFORM7 Therefore, as the margin γ i depends only on the input latent variables and is not trainable, minimizing our embedding loss L φ i maximizes the lower bound for the energy distance D E. This corresponds to learning a Siamese Network φ i that separates two probability distributions Q We perform our experiments on a server with Intel Xeon Gold 6134 CPU, 256GB system memory, and an NVIDIA V100 GPU with 16GB of graphics memory. Our generator and discriminator architectures are outlined in our Appendix A. Each knob is a 1-dimensional slice of the latent variable and is sampled from Unif (−1, 1). We use ADAM BID9 as an optimizer for our training with the following parameters: learning rate=0.0002 and β 1 = 0.5. We will release our code after the review process. Datasets. We evaluate our method on two image datasets: (i) the CelebA dataset BID14, which consists of over 200,000 images of aligned faces. 
We cropped the images to 64 × 64 pixels in size. (ii) the 2D Shapes dataset BID7, which is synthetically created with known properties, such as shape, scale, orientation, and x-y location. Both datasets are divided into training and test sets with a 90%-10% ratio. The weight of the contrastive loss is λ_φ = 1 for the CelebA dataset and λ_φ = 5 for the 2D Shapes dataset. We use 32- and 10-dimensional latent variables for the CelebA and the 2D Shapes datasets, respectively. Baselines. We have two versions of our algorithm. UD-GAN refers to the results obtained without any guidance at the input of our siamese networks, whereas UD-GAN-G represents guided training. We compare our method against β-VAE, DIP-VAE BID12, and InfoGAN BID1, covering both autoencoder-based and GAN-based approaches. We take the quantitative and visual results for β-VAE and DIP-VAE from BID7 and BID12, and use our own implementation of InfoGAN for training and testing. The same generator/discriminator architecture is used for InfoGAN and our method. Guidance. For the CelebA dataset, the first 28 of 32 latent knobs are unguided and are therefore processed by the same siamese network, which outputs a 28-dimensional embedding vector. The remaining four knobs correspond to four siamese networks (φ_top, φ_miu, φ_mil, φ_bot) that are guided with the cropped images shown in FIG1. For the 2D Shapes dataset, we have 10 knobs, of which the first 7 are unguided. In order to guide the remaining three networks, we estimate the center of mass (M_x, M_y) and the size Ŝ of the generated object and feed them to our siamese networks, φ_X(M_x), φ_Y(M_y), and φ_S(Ŝ). More information on this computation can be found in Appendix D. Disentanglement Metric. This metric was proposed by BID7 and measures whether learned disentangled representations can capture separate semantic variations in a dataset. In β-VAE and DIP-VAE, this representation is the output of the encoder, i.e., the inferred latent variable. For InfoGAN, we use the representation learned by the discriminator. In our method, we use the concatenated outputs of our siamese networks, which we denote as φ. The disentanglement metric scores for different methods are shown in TAB0. Here, we can see that both of our methods outperform the baselines on the CelebA dataset. All of the baseline approaches relate the latent variables to generated images on a per-image basis. Our approach, in contrast, relates similarities/differences of latent variable pairs to image pairs, which provides a discriminative image embedding where each dimension is invariant to unwanted factors BID6. For both datasets, our guided network (UD-GAN-G) performs better than our unguided approach, especially on the CelebA dataset. This might be due to correlations between irrelevant attributes. For example, the correlation coefficient between the "Wearing Lipstick" and "Wavy Hair" attributes is 0.36, although they are not necessarily dependent. One of our guided networks receives the cropped image around the mouth of a person, which prevents its embedding from being cluttered by hairstyle. Therefore, this guidance provides better disentanglement and results in an improved score, as shown in TAB0. Because the 2D Shapes dataset contains simple synthetic images, our disentanglement scores for it are very high. Our guided method reaches a score of 100.0 because the guidances we choose are highly correlated with the ground truth labels, as shown in TAB4 in Appendix D.
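For reference, a simplified sketch of how such a disentanglement score can be computed is given below. It assumes access to a routine that produces image batches sharing one fixed generative factor, and a representation function (here, the concatenated siamese outputs); the number of votes, the pair count, and the use of logistic regression as the low-capacity classifier are our simplifications of the protocol in BID7.

import numpy as np
from sklearn.linear_model import LogisticRegression

def disentanglement_score(sample_with_fixed_factor, embed, num_factors,
                          num_votes=5000, pairs_per_vote=64):
    """Simplified disentanglement metric in the spirit of BID7.

    sample_with_fixed_factor(k, n) -> (batch1, batch2): two batches of n images
        that share the value of generative factor k but vary in all others.
    embed(images) -> (n, d) array of representations.
    A low-capacity classifier has to recover k from the averaged absolute
    difference of the two representations; its accuracy is the score.
    """
    features, labels = [], []
    for _ in range(num_votes):
        k = np.random.randint(num_factors)
        batch1, batch2 = sample_with_fixed_factor(k, pairs_per_vote)
        diff = np.abs(embed(batch1) - embed(batch2)).mean(axis=0)
        features.append(diff)
        labels.append(k)
    features, labels = np.array(features), np.array(labels)
    split = int(0.8 * num_votes)
    clf = LogisticRegression(max_iter=1000).fit(features[:split], labels[:split])
    return clf.score(features[split:], labels[split:])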
In TAB1, we compare our method against baseline approaches on CelebA attribute classification accuracy using the aforementioned projection vector. Similar to the results in TAB0, our guided approach slightly outperforms our unguided method and the other completely unsupervised techniques. This is because some attributes in the CelebA dataset can be spatially isolated via cropping, which leads to better classification performance. For example, the attributes related to hair (Black Hair, Blond Hair, Wavy Hair) and mouth (Mouth Slightly Open, Wearing Lipstick) are captured better by the guided approach, because our top and bottom crops (see FIG1) detach the effects of other variations and make the attributes less correlated. The accuracy on the attribute "Bangs" is worse for the guided approach. This might be due to the heuristic cropping we perform, which divides the relevant image region into two slits. In Table 3, we illustrate images generated by different methods on the CelebA dataset. Each of the three rows captures the change in a semantic property: smile, azimuth, and hair color, respectively. Within each image group, a latent dimension is varied (from top to bottom) to visualize the semantic change in that property. Compared to adversarial methods, such as InfoGAN and UD-GAN-G, the DIP-VAE method generates blurrier images, due to the data likelihood term in VAE-based approaches, which is usually implemented as a pixel-wise image reconstruction loss. In GAN-based approaches, this is handled via a learnable discriminator in an adversarial setting. In TAB0 and 2, we quantitatively show the advantage of using our guided approach. Another advantage is better control over the captured attributes. For example, in all unsupervised approaches (including UD-GAN), we need to check which latent dimension corresponds to which visual attribute. In some cases, a semantic attribute might not be captured at all due to the correlated nature of a dataset. In UD-GAN-G, on the other hand, we directly obtain the variations in smile, azimuth, and hair color by cropping the bottom, middle, and top parts of our images, respectively. Thanks to our guidance in FIG1, we can directly manipulate these three attributes using the knobs q_bot, q_mil, and q_top, as shown in Table 3. The same trend holds for the 2D Shapes dataset in Table 4. Although the X and Y positions and the scale of the synthetic object are captured by both our unsupervised and guided approaches, the guidance we choose directly maps each desired feature to a knob chosen in advance: q_X, q_Y, and q_S, respectively. Table 3: Images generated by varying a latent dimension, which corresponds to a semantic property. Table 4: Generated images for the 2D Shapes dataset by varying a latent dimension, which corresponds to a semantic property (first row: UD-GAN, second row: UD-GAN-G). In completely unsupervised approaches, there is no guarantee that all of the desired semantic variations are captured. The main premise behind UD-GAN-G is to find very simple, yet effective ways to capture some of the variation in the data. This weak supervision helps us obtain proxies for certain semantic properties, so that we get the desired features without training the model multiple times with different hyperparameters or initializations. In the aligned CelebA dataset, each face is roughly centered around the nose. This reduces the variation and simplifies the problem of guidance design, as we show in FIG1.
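A minimal sketch of such crop-based guidance functions is shown below. The exact crop boundaries used in the paper are only given visually in FIG1, so the row ranges here are rough assumptions for 64 × 64 inputs in (N, C, H, W) layout; the important property is that each crop is plain tensor slicing, hence differentiable, so gradients still reach the generator.

import torch

def crop_guidances(images):
    """Crop-based guidance for 64x64 CelebA images (layout assumed, see FIG1).

    Each crop is fed to its own siamese network (phi_top, phi_miu, phi_mil,
    phi_bot), so that the corresponding knob only 'sees' that region:
    roughly hair, eyes, nose/cheeks, and mouth/chin.
    """
    return {
        "top": images[:, :, 0:16, :],    # hair region       -> phi_top
        "miu": images[:, :, 16:32, :],   # eye region        -> phi_miu
        "mil": images[:, :, 32:48, :],   # nose/cheek region -> phi_mil
        "bot": images[:, :, 48:64, :],   # mouth/chin region -> phi_bot
    }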
In more complex scenarios, where objects can appear at a large variety of scales, translations, and viewpoints, one can use a pre-trained object detection and localization method, such as YOLO BID19, as a guidance network. This enables us to use the knowledge obtained from a labeled dataset, such as ImageNet BID21, to disentangle a new unlabeled dataset. Note that backpropagating the gradients of a deep network into an image might produce adversarial samples BID22. However, the discriminator can alleviate this by rejecting problematic images. In order to backpropagate the gradients from the siamese networks to the generator, the guidance function we use needs to be differentiable. This might pose a limitation to our method; however, differentiable relaxations can be used instead to guide our network. For example, one can employ the differentiable relaxation of superpixel segmentation in BID8 to disentangle a low-level image segmentation. Our latent variables are sampled from a uniform distribution. In addition, image similarity is measured using the L2 distance between a pair of image embeddings. We experimented with modeling some latent dimensions as categorical variables. However, we encountered training stability issues, due to computing the softmax loss between two learnable categorical image embeddings, instead of between one embedding and one fixed label vector as is usually done. We plan to tackle this problem in future work. In this paper we introduced UD-GAN and UD-GAN-G, novel GAN formulations which employ Siamese networks with contrastive losses in order to make slices of the latent noise space disentangled and more semantically meaningful. Our experiments encompassed guided and unguided approaches for the embedding networks, and illustrated how our methods can be used for semantically meaningful image manipulation. Our qualitative and quantitative results confirm that our method can adjust well to the intrinsic factors of variation of the data and outperform the current state-of-the-art methods on the CelebA and 2D Shapes datasets. In future work, we plan to investigate more powerful forms of embedders, e.g., extracting information from pre-trained networks for semantic segmentation and landmark detection. This allows for even more powerful novel image manipulation techniques. In TAB2, we show the neural network layers we use in our generator for different datasets. Our discriminator and siamese network architectures are inverted versions of our generator. Each fully connected and Conv2D layer is followed by a Leaky ReLU non-linearity, except the last layer. The Siamese Networks φ_i are intended to map images into embedding spaces where they can be grouped within a distinct semantic context. For the example shown in FIG3, where we disentangle the shape and the color, this might not be directly achievable in a completely unsupervised setting, because the separation in equation 4 is not unique. However, we can still benefit from the disentangling capability of our method via small assumptions and domain knowledge, without collecting labeled data. Consider a toy example, where we extend the MNIST dataset BID13 to have a random color, sampled from a uniform RGB color distribution. We define our problem as independently capturing the shape of a digit with q_1 and its color with q_2. In FIG3(a), we show images created by a generator which is trained along with two networks, φ_1 and φ_2, without any guidance in an unsupervised setting.
We can see that the knobs, q 1 and q 2, capture the variations in the data, however, these variations are coupled with multiple semantic properties. Each knob modifies a complicated combination of shape and color. However, if we design a network architecture in a slightly smarter way, we should be able to separate the shape and the color attributes. This is exemplified in FIG3 (b), where instead of feeding the whole image to φ 2, we feed the average color of some randomly sampled pixels from a generated image. This choice prevents φ 2 to capture the spatial structure of the generated digit and to focus only on color. After the training our method with a modified φ 2, the first network captures shape of a digit, and the second one captures the color variations. This can also be observed in FIG3 (c) and 4(d), where we use t-SNE (van der BID25 to visualize embedding spaces for shape and color, respectively. In order to show the effect of the guided siamese networks, we perform three experiments on the MS-Celeb dataset BID5 by using different guiding proxies. In the first experiment, only one of the two networks is guided with an edge detector at the input. Results of this experiment are shown in TAB3 . We can see that the first knob, which is connected to edges, captures the overall outline and roughly controls the identity of the generated face. On the other hand, the unguided second knob modifies the image with minimal changes to image edges. This change, in this case, corresponds to the lighting of the face. DISPLAYFORM0 We perform a second experiment with the edge detector, where in this case, the second knob is guided with the average color of the generated image. In TAB3, we can observe the of our disentangled image manipulation. The first knob with the edge detector again captures the outline of the face, and the second average color knob modifies a combination of the light and the skin color, similar to the in TAB3 .In our third experiment, we employ the cropped guidance networks. The two knobs receive the cropped top and bottom part of the image for training. Although these image crops are not independent, we still get acceptable that are shown in TAB3 . Adjusting the first knob only modifies the upper part of the face; the hair and the eyes. Similarly, the second knob is responsible for determining the chin and mouth shape. In order to guide our siamese networks for the 2D shapes dataset, we estimate the center of mass of the generated image, and the size of the generated object as follows: DISPLAYFORM1 where, x is a generated image, x[c x, c y] is the pixel intensity at image coordinates [c x, c y], (M x,M y) are the coordinates of the center of mass of x, andŜ is the size estimate for the generated object. As the 2D shapes dataset is relatively simple and contain only one object, these guidances are highly correlated with the ground truth attributes as shown in TAB4. FIG5, we illustrate additional semantic properties that are captured by UD-GAN-G. In TAB5, we compare the classification perfromance of our method to InfoGAN on all attributes in the CelebA dataset. G ATTRIBUTE CORRELATIONS.In TAB6, we compare the correlation between different embedding (or latent) dimensions and the correlation between embedding dimensions and the CelebA attributes. Although DIP-VAE encodes a more un-correlated representation, due to the correlated nature of CelebA attributes, it does not necessarily transfer to a disentangled semantic representation, as illustrated by the quantitative in TAB0 and 2.
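As a final illustration, the center-of-mass and size guidance described in Appendix D for the 2D Shapes dataset can be written in a few differentiable operations. The extracted text does not preserve the exact formulas, so the normalization by total intensity below is an assumption; any differentiable variant would serve the same purpose.

import torch

def spatial_guidances(images):
    """Center-of-mass and size guidance for the 2D Shapes dataset (Appendix D).

    images: tensor of shape (N, 1, H, W) with pixel intensities in [0, 1].
    Returns per-image estimates (M_x, M_y) of the object's center of mass and
    a size estimate S, all differentiable so gradients reach the generator.
    """
    n, _, h, w = images.shape
    x = images.squeeze(1)                                  # (N, H, W)
    mass = x.sum(dim=(1, 2)) + 1e-8                        # total intensity per image
    cols = torch.arange(w, dtype=x.dtype).view(1, 1, w)    # column index c_x
    rows = torch.arange(h, dtype=x.dtype).view(1, h, 1)    # row index c_y
    m_x = (x * cols).sum(dim=(1, 2)) / mass                # horizontal center of mass
    m_y = (x * rows).sum(dim=(1, 2)) / mass                # vertical center of mass
    size = mass / (h * w)                                  # object size as mean intensity
    return m_x, m_y, size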
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
H1e0-30qKm
We use Siamese Networks to guide and disentangle the generation process in GANs without labeled data.
We present Predicted Variables, an approach to making machine learning (ML) a first class citizen in programming languages. There is a growing divide in approaches to building systems: using human experts (e.g. programming) on the one hand, and using behavior learned from data (e.g. ML) on the other hand. PVars aim to make using ML in programming easier by hybridizing the two. We leverage the existing concept of variables and create a new type, a predicted variable. PVars are akin to native variables with one important distinction: PVars determine their value using ML when evaluated. We describe PVars and their interface, how they can be used in programming, and demonstrate the feasibility of our approach on three algorithmic problems: binary search, QuickSort, and caches. We show experimentally that PVars are able to improve over the commonly used heuristics and lead to a better performance than the original algorithms. As opposed to previous work applying ML to algorithmic problems, PVars have the advantage that they can be used within the existing frameworks and do not require the existing domain knowledge to be replaced. PVars allow for a seamless integration of ML into existing systems and algorithms. Our PVars implementation currently relies on standard Reinforcement Learning (RL) methods. To learn faster, PVars use the heuristic function, which they are replacing, as an initial function. We show that PVars quickly pick up the behavior of the initial function and then improve performance beyond that without ever performing substantially worse -- allowing for a safe deployment in critical applications. Machine Learning (ML) has had many successes in the past decade in terms of techniques and systems as well as in the number of areas in which it is successfully applied. However, using ML has some cost that comes from the additional complexity added to software systems BID24. There is a fundamental impedance mismatch between the approaches to system building. Software systems have evolved from the idea that experts have full control over the behavior of the system and specify the exact steps to be followed. ML on the other hand has evolved from learning behavior by observing data. It allows for learning more complex but implicit programs leading to a loss of control for programmers since the behavior is now controlled by data. We believe it is very difficult to move from one to another of these approaches but that a hybrid between them needs to exist which allows to leverage both the developer's domain-specific knowledge and the adaptability of ML systems. We present Predicted Variables (PVars) as an approach to hybridize ML with programming. We leverage the existing concept of a variable which is universal across all programming modalities and add a new type, a predicted variable. PVars are akin to native variables with one important distinction: a PVar determines its value using ML when evaluated. A developer will be able to use a PVar just like any other variable, combine it with heuristics, domain specific knowledge, problem constraints, etc. in ways that are fully under the programmer's control. This represents an inversion of control compared to how ML systems are usually built. PVars allow to integrate ML tightly into algorithms whereas traditional ML systems are build around the model. 
PVars aim to make using ML in software development easier by avoiding the overhead of going through the traditional steps of building an ML system: collecting and preparing training data, defining a training loss, training an initial model, tweaking and optimizing the model, integrating the model into their system, and continuously updating and improving the model to adjust for drift in the distribution of the data processed. We show how these properties of PVars allow for applying ML in domains that have traditionally not been using ML. We demonstrate that ML can help improve the performance of "classical" algorithms that typically rely on a heuristic. The concrete implementation of PVars in this paper is based on standard deep reinforcement learning (RL). We emphasize that this is just one possible implementation. Other types of machine learning are in scope for PVars: supervised learning can be used when ground truth is available, or active learning is applicable when humans are in the loop to provide feedback. While in this paper we show PVars in the context of the Python programming language and use concepts from object oriented programming, everything described here applies directly to functional or procedural programming languages as well. We describe the framework around the central concept of a predicted variable but depending on the language the notion of a predicted function can be used interchangeably. We also introduce the notion of an initial function which can be the current heuristic that a PVar is replacing. It allows the PVar to minimize regret in critical applications and allow for safe deployment. This is a key strengths of our hybrid approach: it allows for better solutions while also providing better guarantees to the programmer. We demonstrate the feasibility of our approach on three algorithmic problems: binary search, QuickSort, and caches where we replace and enrich commonly used heuristics. We show improvements over common heuristics by injecting a predicted variable into an existing program, leaving much of the algorithm (including the domain-specific knowledge) untouched. We consider these problems the first applications of our newly defined interface and see the main contribution of this paper in the general applicability of the framework. The problem selection in this paper was driven by the desire for a self-contained setup and ease of reproducibility. PVars are applicable to more general problems across a large variety of domains from system optimization to user modelling. In our experiments we do not focus on the actual run time but rather on the effectiveness of the ML models. While for the algorithmic examples in this paper, in a practical scenario speed is the key metric, we see PVars as a more general interface that can be applied across a more diverse set of problems including user modelling, predicting user preference, or content recommendations. In many applications, speed is not a meaningful metric. Further, we believe that advances in specialized hardware will enable running machine learning models at insignificant cost .Our main contributions are:• we introduce the PVar API to smoothly integrate ML into software development;• we show how standard RL methods can be leveraged through the PVars interface;• we propose an approach to learn using the initial function, leveraging off-policy learning;• we demonstrate the feasibility of our approach on 3 standard algorithmic problems. 
The remainder of this paper is structured as follows: We describe how PVars can be used in software development in sec. 2 and how we make use of the heuristics that we are replacing to guide the training and avoid unstable behavior in sec. 3. Sec. 4 describes our implementation and the application of PVars to three algorithmic applications. We also describe the experiments that we performed to demonstrate that PVars are an intuitive approach to bring ML closer to software development and are applicable to different problems. We describe related work in sec. 5. A PVar has a simple API that allows the developer to provide enough information about its context, predict its value, and provide feedback about the quality of its predictions. PVars invert the control compared to common ML approaches that are model centric. Here, the developer has full control over how data and feedback are provided to the model, how inference is called, and how its are used. To create a PVar, the developer chooses its output type (float, int, category, ...), shape, and range; defines which data the PVar is able to observe (type, shape, range); and optionally provides an initial function. In the following example we instantiate a scalar float PVar taking on values between 0 and 1, which can observe three scalar floats (each in the range between 0 and 10), and which uses a simple initial function: The PVar can then be used like a normal variable. It determines its value at read time by using inference in the underlying ML model, e.g. value = pvar. Predict Specifically, developers should be able to use a PVar instead of a heuristic or an arbitrarily chosen constant. PVars can also take the form of a stochastic variable, shielding the developer from the underlying complexity of inference, sampling, and explore/exploit strategies. The PVar determines its value on the basis of observations about the context that the developer passes in: A developer might provide additional side-information into the PVar that an engineered heuristic would not be using but which a powerful model is able to use in order to improve performance. The developer provides feedback about the quality of previous predictions once it becomes available: DISPLAYFORM0 In this example we provide numerical feedback. Following common RL practice a PVar aims to maximize the sum of reward values received over time (possibly discounted). In other setups, we might become aware of the correct value in hindsight and provide the "ground truth" answer as feedback, turning the learning task into a supervised learning problem. Some problems might have multiple metrics to optimize for (run time, memory, network bandwidth) and the developer might want to give feedback for each dimension. This API allows for integrating PVars easily and transparently into existing applications with little overhead. See listing 1 for how to use the PVar created above in binary search. In addition to the API calls described above, model hyperparameters can be specified through additional configuration, which can be tuned independently. The definition of the PVar only determines its interface (i.e. the types and shapes of inputs and outputs). We allow for the developer to pass an initial function to the PVar. We anticipate that in many cases the initial function will be the heuristic that the PVar is replacing. Ideally it is a reasonable guess at what values would be good for the PVar to return. 
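Since the code listings are not reproduced here, the following sketch illustrates the interface described above on binary search (cf. listing 1). The PVar class shown is only a stub that defers to the initial function; the constructor, the observation keys, and the per-step reward of -1 follow the description in the text, but their exact form in the real library may differ.

class PVarStub:
    """Minimal stand-in for the PVar interface (Observe / Predict / Feedback).
    It simply defers to the initial function; a real implementation would back
    Predict with a learned model and use Feedback for training."""

    def __init__(self, initial_function):
        self._initial_function = initial_function
        self._last_observation = None

    def Observe(self, observation):
        self._last_observation = observation

    def Predict(self):
        return self._initial_function(self._last_observation)

    def Feedback(self, reward):
        pass  # a real PVar would log (observation, prediction, reward) here


def binary_search(array, target, pvar):
    """Binary search where the split position is read from a PVar (cf. listing 1)."""
    left, right = 0, len(array) - 1
    while left <= right:
        pvar.Observe({"left_value": array[left], "right_value": array[right],
                      "target": target})
        q = pvar.Predict()                       # relative split position in [0, 1]
        mid = int(q * left + (1 - q) * right)    # m = qL + (1 - q)R
        pvar.Feedback(reward=-1.0)               # constant penalty per search step
        if array[mid] == target:
            return mid
        if array[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    return -1


# With an initial function that always returns 0.5, this reduces to plain binary search:
pvar = PVarStub(initial_function=lambda obs: 0.5)
print(binary_search(list(range(0, 100, 3)), 42, pvar))

With the initial function returning 0.5 the behavior is exactly ordinary binary search, which is the safety-net property discussed next.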
The PVar will use this initial function to avoid very bad performance in the initial predictions and observe the behavior of the initial function to guide its own learning process, similar to imitation learning BID13. The existence of the initial function should strictly improve the performance of a PVar. In the worst case, the PVar could choose to ignore it completely, but ideally it will allow the PVar to explore solutions which are not easily reachable from a random starting point. Further, the initial function plays the role of a heuristic policy which explores the state and action space generating initial trajectories which are then used for learning. Even though such exploration is biased, off-policy RL can train on this data. In contrast to imitation learning where an agent tries to become as good as the expert, we explicitly aim to outperform the initial function as quickly as possible, similar to BID23.For a PVar to make use of the initial heuristic, and to balance between learning a good policy and the safety of the initial function, it relies on a policy selection strategy. This strategy switches between exploiting the learned policy, exploring alternative values, and using the initial function. It can be applied at the action or episode level depending on the requirements. Finally, the initial function provides a safety net: in case the learned policy starts to misbehave, the PVar can always fallback to the initial function with little cost. In the following we describe how PVars can be used in three different algorithmic problems and how a developer can leverage the power of machine learning easily with just a few lines of code. We show experimentally how using PVars helps improving the algorithm performance. The interface described above naturally translates into an RL setting: the inputs to Observe calls are combined into the state, the output of the Predict call is the action, and Feedback is the reward. To evaluate the impact of PVars we measure cumulative regret over training episodes. Regret measures how much worse (or better when it is negative) a method performs compared to another method. Cumulative regret captures whether a method is better than another method over all previous decisions. For practical use cases we are interested in two properties: Regret should never be very high to guarantee acceptable performance of the PVar under all circumstances. Cumulative regret should become permanently negative as early as possible. This corresponds to the desire to have better performance than the baseline model as soon as possible. Unlike the usual setting which distinguishes a training and evaluation mode, we perform evaluation from the point of view of the developer without this distinction. The developer just plugs in the PVar and starts running the program as usual. Due to the online learning setup in which PVars are operating, overfitting does not pose a concern BID3. The (cumulative) regret numbers thus do contain potential performance regressions due to exploration noise. This effect could be mitigated by performing only a fraction of the runs with exploration. For our feasibility study we do not account for the computational costs of inference in the model. PVars would be applicable to a wide variety of problems even if these costs were high, particularly for problems relying on expensive approximation heuristics or working with inherently slow hardware, such as filesystems. 
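One plausible episode-level implementation of such a policy selection strategy is sketched below; the warm-up length, the linear schedule, and the back-off rule are illustrative choices rather than the exact mechanism used in the experiments.

import random

class PolicySelector:
    """Episode-level policy selection: start with the initial function, gradually
    hand control to the learned policy, and back off when the learned policy's
    recent rewards lag behind the heuristic's."""

    def __init__(self, warmup_episodes=100, window=50):
        self.episode = 0
        self.warmup = warmup_episodes
        self.window = window
        self.initial_rewards = []
        self.learned_rewards = []

    def choose(self):
        """Return 'initial' or 'learned' for the next episode."""
        self.episode += 1
        if self.episode <= self.warmup:
            return "initial"  # first only gather off-policy data from the heuristic
        p_learned = min(1.0, (self.episode - self.warmup) / float(self.warmup))
        recent_l = self.learned_rewards[-self.window:]
        recent_i = self.initial_rewards[-self.window:]
        if recent_l and recent_i and \
           sum(recent_l) / len(recent_l) < sum(recent_i) / len(recent_i):
            p_learned *= 0.5  # learned policy underperforms: rely more on the heuristic
        return "learned" if random.random() < p_learned else "initial"

    def report(self, policy, episode_reward):
        """Record the reward obtained by whichever policy ran the episode."""
        if policy == "learned":
            self.learned_rewards.append(episode_reward)
        else:
            self.initial_rewards.append(episode_reward)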
Our implementation currently is a small library exposing the PVar interface to client applications (FIG1 . A PVar assembles observations, actions, and feedback into episode logs that are passed to a replay buffer. The models are trained asynchronously. When a new checkpoint becomes available the PVar loads it for use in consecutive steps. To enable PVars we leverage recent progress in RL for modelling and training. It allows to apply PVars to the most general use cases. While we are only looking at RL methods here, PVars can be used with other learning methods embedded such as supervised learning or multi-armed bandit methods. We are building our models on DDQN BID9 for categorical outputs and on TD3 BID4 for continuous outputs. DDQN is a de facto standard in RL since its success in AlphaGo . TD3 is a recent modification to DDPG BID19 using a second critic network to avoid overestimating the expected reward. We summarize the hyperparameters used in our experiments in the appendix TAB5 . While these hyperparameters are now new parameters that the developer can tweak, we hypothesize that on the one hand, tuning hyperparameters is often simpler than manually defining new problem-specific heuristics and on the other hand that improvements on automatic model tuning from the general machine learning community will be easily applicable here too. Our policy selection strategy starts by only evaluating the initial function and then gradually starts to increase the use of the learned policy. It keeps track of the received rewards of these policies adjusts the use of the learned policy depending on its performance. We show the usage rate of the initial function when we use it ( FIG3 demonstrating the effectiveness of this strategy. Similar to many works that build on RL technology we are faced with the reproducibility issues described by BID10 . Among multiple runs of any experiment, only some runs exhibit the desired behavior, which we report. However, in the "failing" runs we observe baseline performance because the initial function acts as a safety net. Thus, our experiments show that we can outperfom the baseline heuristics without a high risk to fail badly. We do not claim to have a solution to these reproducibility issues but any solution developed by the community will be applicable here. To quantify the reproducibility of our for the different problems, we provide the performance of the learned policies in the appendix when re-running the same experiments multiple times. Binary search BID26) is a standard algorithm for finding the location l x of a target value x in a sorted array A = {a 0, a 1, . . ., a N −1} of size N. Binary search has a worst case runtime complexity of log 2 (N) steps when no further knowledge about the distribution of data is available. Knowing more about the distribution of the data can help to reduce expected runtime. For example, if the array values follow a uniform distribution, the location of x can be approximated using linear interpolation l x ≈ (N − 1)(x − a 0)/(a N −1 − a 0). We show how PVars can be used to speed up binary search by learning to estimate the position l x for a more general case. The simplest way of using a PVar is to directly estimate the location l x and incentivize the search to do so in as few steps as possible by penalizing each step by the same negative reward (listing 1). At each step, the PVar observes the values a L, a R at both ends of the search interval and the target x. 
The PVar output q is used as the relative position of the next read index m, such that m = qL + (1 − q)R.In order to give a stronger learning signal to the model, the developer can incorporate problemspecific knowledge into the reward function or into how the PVar is used. One way to shape the reward is to account for problem reduction. For binary search, reducing the size of the remaining search space will speed up the search proportionally and should be rewarded accordingly. By replacing the step-counting reward in listing 1 (line 9) with the search range reduction DISPLAYFORM0 we directly reward reducing the size of the search space. By shaping the reward like this, we are able to attribute the feedback signal to the current prediction and to reduce the problem from RL to contextual bandit (which we implement by using a discount factor of 0).Alternatively we can change the way the prediction is used to cast the problem in a way that the PVar learns faster and is unable to predict very bad values. For many algorithms (including binary search) it is possible to predict a combination of (or choice among) several existing heuristics rather than predicting the value directly. We use two heuristics: (a) vanilla binary search which splits the search range {a L, . . ., a R} into two equally large parts using the split location l v = (L + R)/2, and (b) interpolation search which interpolates the split location as DISPLAYFORM1 We then use the value q of the PVar to mix between these heuristics to get the predicted split position l q = ql v + (1 − q)l i. Since in practice both of these heuristics work well on many distributions, any point in between will also work well. This reduces the risk for the PVar to pick a value that is really bad which in turn helps learning. A disadvantage is that it's impossible to find the optimal strategy with values outside of the interval between l v and l i. To evaluate our approaches we are using a test environment where in each episode, we sample an array of 5000 elements from a randomly chosen distribution (uniform, triangular, normal, pareto, power, gamma and chisquare), sort it, scale to [−10 4, 10 4] and search for a random element. FIG3 shows the for the different variants of binary search using a PVar and compares them to the vanilla binary search baseline. The show that the simplest case (pink line) where we directly predict the relative position with the simple reward and without using an initial function performs poorly initially but then becomes nearly as good as the baseline (cumulative regret becomes nearly constant after an initial bad period). The next case (yellow line) has an identical setup but we are using the initial function and we see that the initial regret is substantially smaller. By using the shaped reward (blue line), the PVar is able to learn the behavior of the baseline quickly. Both approaches that are mixing the heuristics (green and red lines) significantly outperform the baselines. In the appendix TAB1 we give details about when each of the different variants of using a PVar in binary search reaches break-even. QuickSort BID12 sorts an array in-place by partitioning it into two sets (smaller/larger than the pivot) recursively until the array is fully sorted. QuickSort is one of the most commonly used sorting algorithms where many heuristics have been proposed to choose the pivot element. 
While the average time complexity of QuickSort is θ(N log(N)), a worst case time complexity of O(N 2) can happen when the pivot elements are badly chosen. The optimal choice for a pivot is the median of the range, which splits it into two parts of equal size. To improve QuickSort using a PVar we aim at tuning the pivot selection heuristic. To allow for sorting arbitrary types, we decided to use the PVar to determine the number of elements that are sampled from the array to be sorted and then pick the median from these samples as the pivot (listing 2). As feedback signal for a recursion step we use an estimate of its impact on the computational cost ∆c. DISPLAYFORM0 where n is the size of the array, a and b are the sizes of the partitions with n = a + b and c pivot = c median +c partition is the cost to compute the median of the samples and to partition the array. ∆c recursive takes into account how close the current partition is to the ideal case (median). The cost is a weighted sum of number of reads, writes, and comparisons. Similar to the shaped reward in binary search, this reward allows us to reduce the RL problem to a contextual bandit problem and we use a discount of 0.For evaluation we are using a test environment where we sort randomly shuffled arrays. Results of the experiments are presented in fig. 3. It shows that the learned method outperforms all baseline heuristics within less than 100 episodes.' Vanilla' corresponds to a standard QuickSort implementation that picks one pivot at random in each step.'Random3' and'Random9' sample 3 and 9 random elements respectively and use the median of these as pivots.' Adaptive' uses the median of max(1, log 2 (n) − 1 ) randomly sampled elements as pivot when partitioning a range of size n. It uses more samples at for larger arrays, leading to a better approximation of the median, and thus to faster problem size reduction. Fig. 4 shows that the PVar learns a non-trivial policy. The PVar learns to select more samples at larger array sizes which is similar to the behavior that we hand-coded in the adaptive baseline but in this case no manual heuristic engineering was necessary and a better policy was learned. Also, note that a PVar-based method is able to adapt to changing environments which is not the case for engineered heuristics. One surprising is that the PVar prefers 13 over 15 samples at large array sizes. We hypothesize this happens because relatively few examples of large arrays are seen during training (one per episode, while arrays of smaller sizes are seen multiple times per episode). Caches are a commonly used component to speed up computing systems. They use a cache replacement policy (CRP) to determine which element to evict when the cache is full and a new element needs to be stored. Probably the most popular CRP is the least recently used (LRU) heuristic which evicts the element with the oldest access timestamp. A number of approaches have been proposed to improve cache performance using machine learning (see sec. 5). We propose two different approaches how PVars can be used in a CRP to improve cache performance. Discrete (listing 3): A PVar directly predicts which element to evict or chooses not to evict at all (by predicting an invalid index). That is, the PVar learns to become a CRP itself. 
While this is the simplest way to use a PVar, it makes it more difficult to learn a CRP better than LRU (in fact, even learning to be on par with LRU is non-trivial in this setting).Continuous (listing 4): A PVar is used to enhance LRU by predicting an offset to the last access timestamp. Here, the PVar learns which items to keep in the cache longer and which items to evict sooner. In this case it becomes trivial to be as good as LRU by predicting a zero offset. The PVar value in (−1, 1) is scaled to get a reasonable value range for the offsets. It is also possible to choose not to store the element by predicting a sufficiently negative score. In both approaches the feedback given to the PVar is whether an item was found in the cache (+1) or not (−1). In the discrete approach we also give a reward of −1 if the eviction actually takes place. In our implementation the observations are the history of accesses, memory contents, and evicted elements. The PVar can observe keys as a categorical input or features of the keys. Observing keys as categorical input allows to avoid feature engineering and enables directly learning the properties of particular keys (e.g. which keys are accessed the most) but makes it difficult to deal with rare and unseen keys. To handle keys as input we train an embedding layer shared between the actor and critic networks (FIG6).As features of the keys we observe historical frequencies computed over a window of fixed size. This approach requires more effort from the developer to implement such features, but pays off with better performance and the fact that the model does not rely on particular key values. We experiment with three combinations of these options: discrete caches observing keys, continuous caches observing keys, continuous caches observing frequencies. For evaluation we use a cache with size 10 and integer keys from 1 to 100. We use two synthetic access patterns of length 1000, sampled i.i.d. from a power law distribution with α = 0.1 and α = 0.5. FIG7 shows for the three variants of predicted caches, a standard LRU cache, and an oracle cache to give a theoretical, non-achievable, upper bound on the performance. We look at the hit ratio without exploration to understand the potential performance of the model once learning has converged. However, cumulative regret is still reported under exploration noise. Both implementations that work directly on key embeddings learn to behave similar to the LRU baseline without exploration (comparable hit ratio). However, the continuous variant pays a higher penalty for exploration (higher cumulative regret). Note that this means that the continuous variant learned to predict constant offsets (which is trivial), however the discrete implementation actually learned to become an LRU CRP which is non-trivial. The continuous implementation with frequencies quickly outperforms the LRU baseline, making the cost/benefit worthwhile long-term (negative cumulative regret after a few hundred episodes). Similar to our proposed interface, probabilistic programming BID5 introduces interfaces which simplify the developer complexity when working with statistical models and conditioning variable values on run-time observations. In contrast to PVars, the introduced interfaces are specialized on working with distributions and graphical models. In the space of approximate computing, BID22 propose a programming interface for approximate computation. While similar in spirit, this work does not explicitly target machine learning models. 
Similar in spirit to our approach is which proposes to incorporate neural models into database systems by replacing existing index structures with neural models that can be both faster and smaller. PVars in contrast aim not to replace existing data structures or algorithms but transparently integrate with standard algorithms and systems. PVars are general enough to be used to improve the heuristics in algorithms (as done here), to optimize database systems (similar to), or to simply replace an arbitrarily chosen constant. Another approach that is similar to PVars is Spiral BID2 but it is far more limited in scope than PVars in that it aims to predict boolean values only and relies on ground truth data for model building. Similarly, a number of papers apply machine learning to algorithmic problems, e.g. Neural Turing Machines BID7 aims to build a full neural model for program execution. BID14; BID15 BID1 propose end-to-end ML approaches to combinatorial optimization problems. In contrast to PVars these approaches replace the existing methods with an ML-system. These are a good demonstration of the inversion of control mentioned above: using ML requires to give full control to the ML system. There are a few approaches that are related to our use of the initial function, however most common problems where RL is applied do not have a good initial function. Generally related is the idea of imitation learning BID13 where the agent aims to replicate the behavior of an expert. Typically the amount of training data created by an expert is very limited. Based on imitation learning is the idea to use previously trained agents to kickstart the learning of a new model BID23 where the authors concurrently use a teacher and a student model and encourage the student model to learn from the teacher through an auxiliary loss that is decreased over time as the student becomes better. In some applications it may be possible to obtain additional training data from experts from other sources, e.g. BID11 BID0 leverage YouTube videos of gameplay to increase training speed of their agents. These approaches work well in cases where it is possible to leverage external data sources. Caches are an interesting application area where multiple teams have shown in the past that ML can improve cache performance BID27 BID20 BID8 BID21 BID6. In contrast to our approach, all ML models work on caches specifically and build task-dependent models that do not generalize to other tasks. Algorithm selection has been an approach to apply RL for improving sorting algorithms BID17. Search algorithms have also been improved using genetic algorithms to tweak code optimization BID18. We have introduced a new programming concept called a predicted variable (PVar) aiming to make it easier for developers to use machine learning from their existing code. Contrary to other approaches, PVars can easily be integrated and hand full control to the developer over how ML models are used and trained. PVars bridge the chasm between the traditional approaches of software systems building and machine learning modeling and thus allow for the developer to focus on refining their algorithm and metrics rather than working on building pipelines to incorporate machine learning. PVars achieve this by reusing the existing concept of variables in programming in a novel way where the value of the variable is determined using machine learning. 
PVar observes information about its context and receives feedback about the quality of predictions instead of being assigned a value directly. We have studied the feasibility of PVars in three algorithmic problems. For each we show how easy PVars can be incorporated, how performance improves in comparison to not using a PVar at all. Specifically, through our experiments we highlight both advantages and disadvantages that reinforcement learning brings when used as a solution for a generic interface as PVars. Note that we do not claim to have the best possible machine learning model for each of these problems but our contribution lies in building a framework that allows for using ML easily, spreading its use, and improving the performance in places where machine learning would not have been used otherwise. PVars are applicable to more general problems across a large variety of domains from system optimization to user modelling. Our current implementation of PVars is built on standard RL methods but other ML methods such as supervised learning are in scope as well if the problem is appropriate. In this paper we barely scratch the surface of the new opportunities created with PVars. The current rate of progress in ML will enable better and wider applicability of PVars to new applications. We hope that PVars will inspire the use of ML in places where it has not been considered before. We plan to release the code to reproduce the in this paper. Further, we hope to make PVars a standard feature in C++29, Python 4, and Java 12.;) TAB1 gives details about when each of the different variants of using a PVar in binary search reaches break-even. The numbers indicate how many episodes it takes for the cumulative regret to become permanently negative, which means that for any additional evaluations after that point the user has a net benefit from using a PVar compared to not using ML at all. The table shows that reward shaping and using the predictions smartly improve performance but it also shows that even simple methods are able to give improvements. Note, that no model outperforms interpolation search on a uniform distribution as it is the best approximation for this distribution. We do not claim to have solved reinforcement learning reproducibility and throughout our experiments we are facing the same issues as the larger community. The core aspect of the PVars framework is the ability to rely on the initial function or algorithmic formulation to limit the costs of a poorly performing learned policy. We illustrate this by looking at metrics over 100 restarts of the different experiments and highlight that, while some experiments for some problems are more reproducible than others, we do not perform worse than the initial function provided by the developer. The design construct specific to PVars and what distinguishes it from standard Reinforcement Learning is that it is applied in software control where often developers are able to provide safe initial functions or write the algorithm in a way that limits the cost of a poorly performing policy. To quantify the reproducibility, we ran the experiment from Sec. 4.2 120× and report the cumulative regret per episode (average number of extra steps per search episode) compared to vanilla binary search. On average, the cumulative regret is: -1.59 @5K (-2.20 @50K). The break-even point is reached in 85% of the cases, and within an average of 1965 episodes. 
The performance breakdown by percentile and the number of steps at which the break-even point is reached are given in table 2. To quantify the reproducibility, we ran the experiment described in Sec. 4.3 115× and report the cumulative regret per episode (average number of extra operations per sort, weighting read=write=1 and compare=0.5) compared to vanilla QuickSort. On average, the cumulative regret per episode is -913 @1K (-1064 @10K) on a total operation cost of 25.1K per sort. The break-even point is reached in 94% of the cases, after an average of 368 episodes. The performance breakdown by percentile and the number of steps at which the break-even point is reached are given in table 3. To quantify the reproducibility of our experiments, we ran the experiment illustrated in sec. 4.4 100 times and report the performance of the learned cache policy using predicted variables compared with the LRU heuristic, looking at the cumulative regret metric after 20000 episodes. We break down the cumulative regret by percentiles in table 4. Counting the runs for which the cumulative regret becomes strictly negative at some episode and stays negative until the end, we note that this happens for 26% of the runs. For 60% of the runs the cumulative regret does not become positive, meaning that using the learned cache policy is at least as good as using the LRU heuristic. This leaves 14% of the runs resulting in strictly worse performance than relying on the LRU heuristic. In table 5 we provide the hyperparameters used for the different experiments in order to ease reproducibility of our work. Together with Sec. B, this details our entire experimental setup.
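To make the continuous cache variant of sec. 4.4 concrete, the sketch below shows an LRU cache whose eviction score is the last access time plus an offset predicted by a PVar (a zero offset recovers plain LRU). The scale factor, the observation contents, and the method names on the cache are placeholders; the +1/-1 hit/miss feedback follows the description in the text, and the pvar object is assumed to expose the Observe/Predict/Feedback interface sketched earlier.

class PredictedLRUCache:
    """Continuous cache variant: LRU where the eviction score is the last access
    time plus an offset predicted by a PVar in (-1, 1)."""

    def __init__(self, capacity, pvar, scale=100.0):
        self.capacity = capacity
        self.pvar = pvar
        self.scale = scale
        self.clock = 0
        self.store = {}   # key -> value
        self.score = {}   # key -> last access time + predicted offset

    def get(self, key):
        self.clock += 1
        if key in self.store:
            self.score[key] = self.clock + self._offset(key)
            self.pvar.Feedback(reward=1.0)              # cache hit
            return self.store[key]
        self.pvar.Feedback(reward=-1.0)                 # cache miss
        return None

    def put(self, key, value):
        self.clock += 1
        if key not in self.store and len(self.store) >= self.capacity:
            victim = min(self.score, key=self.score.get)  # evict the lowest score
            del self.store[victim]
            del self.score[victim]
        self.store[key] = value
        self.score[key] = self.clock + self._offset(key)

    def _offset(self, key):
        self.pvar.Observe({"key": key, "time": self.clock})
        return self.scale * self.pvar.Predict()         # offset to the access timestamp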
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
B1epooR5FX
We present Predicted Variables, an approach to making machine learning a first class citizen in programming languages.
Much recent research has been devoted to video prediction and generation, but mostly for short-scale time horizons. The hierarchical video prediction method by is an example of a state of the art method for long term video prediction. However, their method has limited applicability in practical settings as it requires a ground truth pose (e.g., poses of joints of a human) at training time. This paper presents a long term hierarchical video prediction model that does not have such a restriction. We show that the network learns its own higher-level structure (e.g., pose equivalent hidden variables) that works better in cases where the ground truth pose does not fully capture all of the information needed to predict the next frame. This method gives sharper than other video prediction methods which do not require a ground truth pose, and its efficiency is shown on the Humans 3.6M and Robot Pushing datasets. It is hypothesized that learning to predict the future and the effect of their actions is an important quality for intelligent agents that interact with their environment. This is a complicated task, as typical use cases require predicting the outcome of interactions between the agent and objects over multiple timesteps. In this work we are looking at the task of predicting the pixels of future video frames given the first few observed frames. We also consider the action conditional setting, in which we are given the action that the agent is taking and are tasked to predict the pixel level outcome of that action in the future. The method of BID20 is a novel way to generate long term video predictions, but requires ground truth human pose annotations. In this work we explore ways to generate videos using a hierarchical model without requiring a ground truth pose or other high level structure annotations for each frame. The method is hierarchical in the sense that it learns to generate a high level structure, then makes next frame predictions based on that structure. Patch level prediction The video prediction problem was initially studied at the patch level BID19 BID12 BID13 BID18. This work showed promising on synthetic data (e.g. bouncing balls), but did not scale to predicting higher resolution videos. Frame level prediction on realistic videos. More recently, the video prediction problem has been formulated at the entire frame level. Most of the recent work is based on the convolutional encoder/decoder framework. BID3 proposed a network that can perform next level video frame prediction by explicitly predicting movement. For each pixel in the previous frame, the network outputs a distribution over locations that pixel is predicted to move. The movements are averaged to get the final prediction. The network is trained end to end to minimize L2 loss. BID11 proposed adversarial training with multiscale convolutional networks to generate sharper pixel level predictions in comparison to conventional L2 loss. BID20 proposed a network that decomposes motion and content in video prediction and showed improved performance over BID11. BID10 proposed a deep predictive coding network in which each layer learns to predict the lower-level difference between the future frame and current frame. As an alternative approach to convolutional encoder/decoder networks, BID7 proposed an autoregressive generation scheme for improved prediction performance. Despite their promise, these work have not been demonstrated for long term prediction on high resolution natural videos beyond ≈ 20 frames. 
Long-term prediction. BID14 proposed action conditional convolutional encoderdecoder architecture that has demonstrated impressive long-term prediction performance on video games (e.g., Atari games), but it has not been applied for predicting challenging real-world videos.2.1 HIERARCHICAL VIDEO PREDICTION (VILLEGAS ET AL., 2017) BID20 demonstrated a long-term prediction method using hierarchical prediction where the ground truth human pose is assumed to be given as supervision. Our method is based off of that work, so we describe it in detail in the following section. To generate the image at timestep t, the following procedure is used. First, a convolutional neural network encoder generates an embedding vector from the previous ground truth image: e t−1 = CN N (img t−1). This encoding represents the pose of a person. Next, a multilayer LSTM predictor network predicts what the encoding will be in a future timestep. For some number of context frames, the predictor makes its prediction based off of the encoding from the ground truth image. After the predictor network has enough context, it makes its predictions based off of its previous predictions (Fig. 1 provides a helpful visual). For example, if there are C context frames, the following is used to generate the encoding at step t. DISPLAYFORM0 H t is the hidden state of the LSTM at timestep t. Note that only the encoding of the context frames are used, not the subsequent frames. Similar to e t in the above, p t represents the predicted pose. Once p t is obtained, a visual analogy network (VAN) BID15 is used to generate the corresponding image at time t. The VAN applies the transformation that occurred between two images to a given query image. In this case the first frame of the video should be transformed in the same way as the encoding was transformed from the first to t-th timestep. The VAN does this by mapping images to a space where analogies can be represented by additions and subtractions, and then mapping the back to image space. To obtain the predicted image at timestep t using the VAN one needs to use img t = V AN (e 1, p t, img 1), where the VAN is defined as DISPLAYFORM1 Where g is a hardcoded function to transform the pose into a 2 dimensional representation of the pose. The weights of f enc and f img are shared. The disadvantage of this method is that the training relies on ground truth pose annotations. The encoder is trained to produce the pose given the image, the predictor is trained to predict that pose into the future and the VAN is trained to generate the image given the pose. Our method uses a similar network architecture to BID20 but we present ways of training the network that do not require a ground truth pose. In our method, e t and p t have the same dimensionality and represent the network's own higher level structure (e.g., pose equivalent hidden variables) which the network learns as it is trained. In our case, there is no straightforward way to transform the encoding into a 2 dimensional representation of the pose. Therefore, the part of the VAN that maps the encoding is a fully connected network instead of a convolutional neural network. As a , the weights are not shared between the fully connected network which processes the encoding, and the ConvNet which processes the image. The equation for the VAN becomes: DISPLAYFORM0 Note that f enc is a fully connected network, and f img is a conv net. f dec is a deconv network. There are several ways these networks can be trained. 
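For concreteness, the following is a minimal PyTorch sketch of the architecture just described, independent of how it is trained: a convolutional encoder, an LSTM predictor that conditions on ground-truth encodings for the first C context frames and on its own predictions afterwards, and a VAN-style generator combining f_img(img_1), f_enc(p_t), and f_enc(e_1) by addition and subtraction. The layer sizes and the simplified fully connected f_img and f_dec are illustrative assumptions, not the exact networks used in the experiments.

```python
# Minimal PyTorch sketch of the hierarchical rollout described above: a CNN
# encoder e_t = CNN(img_t), an LSTM predictor that conditions on ground-truth
# encodings for the first C context frames and on its own predictions
# afterwards, and a VAN-style generator. Layer sizes and the simplified f_img
# and f_dec are illustrative assumptions, not the exact networks of the paper.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, enc_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Flatten(), nn.Linear(32 * 16 * 16, enc_dim))

    def forward(self, img):                       # img: (B, 3, 64, 64)
        return self.net(img)                      # e_t: (B, enc_dim)

class Predictor(nn.Module):
    def __init__(self, enc_dim=32, hidden=256):
        super().__init__()
        self.cell = nn.LSTMCell(enc_dim, hidden)
        self.out = nn.Linear(hidden, enc_dim)

    def forward(self, prev_encoding, state=None):
        h, c = self.cell(prev_encoding, state)
        return self.out(h), (h, c)                # p_t and updated LSTM state H_t

class VAN(nn.Module):
    """Analogy-style generator: f_dec(f_img(img_1) + f_enc(p_t) - f_enc(e_1));
    f_enc is fully connected since the learned encoding has no 2-D pose form."""
    def __init__(self, enc_dim=32, hidden=128):
        super().__init__()
        self.f_enc = nn.Linear(enc_dim, hidden)
        self.f_img = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, hidden))
        self.f_dec = nn.Sequential(nn.Linear(hidden, 3 * 64 * 64), nn.Sigmoid())

    def forward(self, e_first, p_t, img_first):
        h = self.f_img(img_first) + self.f_enc(p_t) - self.f_enc(e_first)
        return self.f_dec(h).view(-1, 3, 64, 64)

def rollout(encoder, predictor, van, frames, num_context, horizon):
    """frames: ground-truth context images; returns predicted frames 1..horizon-1."""
    state, p_t, preds = None, None, []
    e_first = encoder(frames[0])
    for t in range(1, horizon):
        inp = encoder(frames[t - 1]) if t <= num_context else p_t
        p_t, state = predictor(inp, state)
        preds.append(van(e_first, p_t, frames[0]))
    return preds
```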
In BID20, they are each trained separately with the ground truth human pose. In this work, we explore alternative ways of training these networks in the absence of any ground truth pose or other high level structure annotations. We use the same procedure as BID20 at inference time. One option is to connect the networks the same way as at inference time and train them end to end (E2E). In this method, the L2 loss of the predicted image is optimized: min Σ_{t=1}^{T} L2(îmg_t, img_t). There are no constraints on what kind of encoding the encoder produces, or what kind of predictions the predictor makes. Because of how the networks are connected, the encoder will produce an encoding whose future state is easily predicted by the predictor. Likewise, the predictor will make predictions which the VAN can use to produce images which are similar to the ground truth. The encoder and predictor will not have to represent information that is present in the first ground truth frame, since the VAN will have access to the first frame. The size of e_t and p_t is a hyperparameter of this approach. Figure 1 represents a diagram of this method. Figure 1: The E2E method. The first few frames are encoded and fed into the predictor as context. The predictor predicts the subsequent encodings, which the VAN uses to produce the pixel level predictions. The average of the losses is minimized. This is also the configuration of every method at inference time, even if the predictor and VAN are trained separately. An alternative way to train the combined network is to explicitly train the encoder so that e_t is easy to predict into the future, and so that the VAN can use e_t to produce the next frame. We call this method Encoder Predictor with Encoder VAN, or EPEV. The encoder and predictor are trained together so that e_t is easy to predict and the predictor predicts that encoding into the future. To accomplish this, the difference between e_t and p_t, L2(e_t, p_t), is minimized. The encoder is also trained with the VAN so the VAN can use e_t to produce the image and so that the encoder generates an informative encoding. This is done by minimizing the loss of the VAN given the encoder output: L2(img_{e_t}, img_t), where img_{e_t} = VAN(e_1, e_t, img_1). The network is trained to minimize the sum of these two losses: min Σ_{t=1}^{T} [α · L2(e_t, p_t) + L2(img_{e_t}, img_t)], where α is a hyper-parameter that controls the degree to which e_t will be easy to predict vs. informative enough so the VAN can produce a good image. See Figure 2 for a diagram of the encoder and predictor trained together, and Figure 3 for the encoder and VAN trained together. Figure 2: The segment of the EPEV method in which the encoder and predictor are trained together. The encoder is trained to produce an encoding that is easy to predict, and the predictor is trained to predict that encoding into the future. The average of the losses is minimized. Figure 3: The segment of the EPEV method in which the encoder and VAN are trained together. The encoder is trained to produce an encoding that is informative to the VAN, while the VAN is trained to output the image given the encoding. The average of the losses is minimized. This method is similar to an autoencoder. Separate gradient descent procedures (or optimizers, in TensorFlow parlance) could be used to minimize L2(img_{e_t}, img_t) and L2(e_t, p_t), but we find that minimizing the sum works better experimentally.
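As a compact illustration of the two objectives, the sketch below writes the E2E pixel loss and the EPEV loss (α-weighted encoder-predictor agreement plus encoder-VAN reconstruction) for sequences produced by modules like those in the previous sketch; the per-sequence averaging and the default α value are assumptions, and α is annealed during training as described in the experiments section.

```python
# Compact sketch of the two training objectives, assuming sequences produced by
# modules like those in the previous sketch. The averaging over time steps and
# the default alpha are assumptions; in practice alpha is annealed upward.
import torch.nn.functional as F

def e2e_loss(pred_imgs, gt_imgs):
    # End-to-end: only the pixel-level L2 of the predicted frames is minimized.
    return sum(F.mse_loss(p, g) for p, g in zip(pred_imgs, gt_imgs)) / len(gt_imgs)

def epev_loss(encodings, predictions, van_imgs_from_enc, gt_imgs, alpha=0.1):
    # L2(e_t, p_t): the encoding should be easy to predict.
    predict_term = sum(F.mse_loss(e, p) for e, p in zip(encodings, predictions))
    # L2(img_{e_t}, img_t): the encoding should let the VAN reconstruct the frame.
    recon_term = sum(F.mse_loss(i, g) for i, g in zip(van_imgs_from_enc, gt_imgs))
    return (alpha * predict_term + recon_term) / len(gt_imgs)
```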
With this method, the predictor will predict the encoder outputs in future timesteps, and the VAN will use the encoder output to produce the frame. The end to end approach can also be augmented if the dataset has information about the ground truth pose or any other high level frame annotations. In this method, the e t and p t vectors would be split into two: the first path is optimized to represent the pose, and the rest of e t and p t is trained the same way as the E2E approach. At each training step a separate optimizer minimizes each loss. In this method, we can think of e t and p t as the concatenation of two vectors, one representing the pose, and the other containing additional information the network can represent. If e t = [e poset, e remainingt] and p t = [p poset, p remainingt], the following losses are minimized:The loss representing how well the encoder infers the pose: min(T t=1 L 2 (e poset, pose t)). The loss representing how well the predictor predicts the pose: min(T t=1 L 2 (p poset, pose t)). The end to end loss: min(T t=1 L 2 ( img t, img t)). These losses are minimized with separate optimizers in this method. Minimizing the end to end loss ensures that the VAN will learn to use the pose provided by the predictor network, and that the encoder and predictor will learn to produce additional information besides the pose that is useful to the VAN. In order to compare to a baseline, we also implemented the method where each of the networks are trained individually, as in BID20. The main difference between this method and Villegas et al. FORMULA0 is that we do not use an adversarial loss BID4. See section 5 for a discussion of how an adversarial loss could be added to our method. These methods were tested on two different datasets, the Robot Push dataset BID3 and the Humans 3.6M dataset BID6 BID1. Videos of the of our method are available by visiting the following URL: https://goo.gl/WA8uxc.The EPEV method works best experimentally if α starts small, around 1e-7, and is gradually increased to around.1 during training. As a , the encoder will first be optimized to produce an informative encoding, then gradually optimized to also make that encoding easy to predict. This dataset contains videos of a robot arm pushing objects on a table. The current joint angles and the location of the end effector are given, and we use these as the pose for the methods which require it. The action the robot arm is taking is fed into the predictor. Each of the methods considered was given two frames of context, and then trained to predict 17 subsequent frames. An encoding size of 16 was used for the E2E method. The size of the pose is 12, so the encoding size of the INDIVIDUAL method is 12. The other methods used an encoding size of 32.Additionaly, we randomly split the dataset into training, validation and test. We used 64x64 images, and the same frame rate as the original dataset. Results from our test set are shown in this section. Note that our experimental protocol is different from BID3, where the test set is composed of novel objects. We hypothesized that the methods where the network could learn its own pose equivalent would predict the movement of the objects the robot arm pushes more accurately than the INDIVIDUAL method. To test this, we manually compared the E2E and EPEV methods to the INDIVIDUAL method and evaluated where the movement of predicted objects most closely matched the ground truth. We evaluated 40 videos in which objects move. The are in TAB0. 
method, the predictor network can only produce the pose, so the VAN has to infer how the objects will move based on the start and end state of the arm. We were surprised by how well the VAN could infer this. However, from examining the videos, the EPEV method had better object predictions than the INDIVIDUAL method, which supports our hypothesis. The magnified part of ground truth frame 19 in FIG0 shows that the robot arm pushed the yellow object. The EPEV and E2E methods correctly predict this, but in the INDIVIDUAL method, the robot arm covers up the yellow object instead of moving it. Additional analysis is in appendix section F. The average Peak Signal to Noise Ratio (PSNR) of the different methods we introduce are similar on this dataset. In this data set, the model from BID3 gets a better PSNR than our model. The movement in this dataset can easily be represented by the movement of pixels, and it is relatively deterministic. So the model from BID3 which explicitly predicts the movement of objects and directly minimizes the L2 loss works well here. To confirm our claim that our method works well for long term predictions, we trained our method on a toy task with known factors of variation. We used a dataset with a generated shape that bounces around the image and changes size deterministically. We trained the EPEV method and the CDNA method in BID3 to predict 16 frames, given the first 3 frames as context. We do not show the E2E method since it usually predicts blurrier images than the EPEV method. Both methods are evaluated on predicting approximately 1k frames. We added noise to the LSTM states of the predictor network during training to help predict reasonable motion further into the future. Results form a held out test set are described in the following. After visually inspecting the of both methods, we found that when the CDNA fails, the shape disappears entirely, however when the EPEV method fails, the shape changes color. To quantitatively evaluate both methods, we used a script to measure whether a shape was present frames 1012 to 1022, and if that shape has the appropriate color. See table 2 for the averaged over 1k runs. The CDNA method predicts a shape with the correct color about 25% of the time, and the EPEV method predicts a shape with the correct color about 97% of the time. The EPEV method sometimes fails by predicting the shape in the same location from frame to frame. This does not happen very often, as the reader can confirm by examining the randomly sampled predictions in appendix section E. It is unrealistic to expect the methods to predict the location of the shape accurately in frame 1000, since small errors propagate in each prediction step. Our method was also tested on the Humans 3.6M Dataset. Only the E2E and EPEV methods were tested here, since BID20 has already shown the using the ground truth pose. We used subjects 1, 5, 6, 7 and 8 for training, subject 9 for validation. Subject 11 are reported in this paper for testing. We used 64 by 64 images. We subsampled the dataset to 6.25 frames per second. We trained the methods to predict 32 frames and the in this paper show predicting 64 frames. Each method is given the first 5 frames as context frames. So the in these images, the model predicts about 10 seconds into the future from.8 seconds of context. We used an encoding size of 32 for the E2E method and a encoding size of 64 for the EPEV method on this dataset. We compare our method to the CDNA method in BID3 in Fig. 6. 
Figure 6: A visual comparison of the EPEV method and CDNA from BID3 as the baseline. This example is cherry picked to show when there is significant movement in the ground truth. See appendix section G for non cherry picked . The contrast of these images was increased to make the humans easier to see. In CDNA from BID3, the person disappeared part way through the prediction. The EPEV method, produced relatively sharp predictions up until frame 42, and a blurry human prediction at frame 63.From visually inspecting the images we found that in images where there is not significant movement in the first 5 frames of the ground truth it is hard to tell the difference between our method and CDNA since both methods predict an image similar to the early ground truth frames. However, when there is significant movement in the first 5 ground truth frames, the predictions from EPEV are sharper further into the future than CDNA. See the appendix section G for images where there is significant movement in the first 5 ground truth frames so the methods can be compared. We also collected from the E2E method, but those blur out very quickly and are shown in appendix section G.The CDNA method from BID3 produces blurry images since it is trained to minimize L2 loss directly BID3. In the EPEV method, the predictor and VAN are trained separately. This prevents the VAN from learning to produce blurry images when the predictor is not confident. The predictions will be sharp as long as the predictor network predicts a valid encoding. We also compare our method to BID20. This method gives sharper than ours. We think that this is because BID20 uses a adversarial loss BID4 and since nothing besides the human is moving in this dataset, the pose works well as a high level structure. We propose to compare the methods quantitatively by considering whether the generated videos contain a recognizable person. To do this in an automated fashion, for each of the generated frames, we ran a MobileNet BID5 object detection model pretrained on the MS-COCO BID9 dataset. We recorded how confident the detector was that a person (one of the MS-COCO labels) is in the image. We call this the "person score" (its value ranges from 0 to 1, with a higher score corresponding to a higher confidence level). The on each frame averaged over 1k runs are shown in FIG2. The person score on the ground truth frames is about 0.4. This is likely due to the mismatch between the training set images of the model (the MS-COCO dataset images are very different in terms of image statistics compared to the Humans 3.6M data). The person score is 0.26 on average for the images generated by the EPEV method, and 0.18 for CDNA from BID3. The person score degrades very rapidly in the first 8 frames of CDNA, but degrades more slowly in the EPEV method. The person score of the EPEV method on frame 63 is about the same as on frame 8 of CDNA. This confirms our visual analysis that the EPEV method produces clearer predictions further into the future. The EPEV method was only trained to predict 32 frames into the future but there is no significant drop in the person score at frame 32, showing that the EPEV method generalizes well to predicting longer sequences. We also used a service similar to Mechanical Turk to collect comparisons of 1,000 generated videos from our EPEV method and the CDNA baseline. The task showed videos generated by the two methods side by side and asked raters to confirm whether one of the videos is more realistic. 
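The person score can be computed as in the following sketch; torchvision's COCO-pretrained Faster R-CNN is used here purely as a stand-in for the MobileNet detector, and the score is simply the detector's highest "person" confidence per frame. The results of the side-by-side human evaluation follow.

```python
# Sketch of the "person score": run a COCO-pretrained detector on each
# generated frame and record its confidence that a person is present.
# torchvision's Faster R-CNN stands in for the MobileNet detector used here;
# in COCO, label id 1 is "person".
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

@torch.no_grad()
def person_score(frame):
    """frame: float tensor (3, H, W) in [0, 1]; returns a confidence in [0, 1]."""
    out = detector([frame])[0]
    person_scores = out["scores"][out["labels"] == 1]
    return person_scores.max().item() if len(person_scores) else 0.0

def person_score_curve(frames):
    # One score per predicted frame; averaging these curves over many rollouts
    # gives the per-frame degradation plot described above.
    return [person_score(f) for f in frames]
```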
The workers rated the EPEV method as more realistic 53.6% of the time, the CDNA method as more realistic 11.1% of the time, and the videos as being about the same 35.3% of the time. The high number of "same" responses could be because it is difficult to tell the difference between the methods when there is little movement. On datasets where the pose does not capture all of the information needed to predict future frames, letting the network define its own high level structure in addition to the pose is an improvement upon BID20. The EPEV method generates sharper images than BID3 on non-deterministic datasets, and can generate further into the future on a toy dataset that we introduced. We posit that an adversarial loss between the predictor and encoder would likely help with potentially uncertain scenarios and would fix the problem of the EPEV method sometimes generating blurry images. To see what the encoder has learned in the EPEV method, we can obtain results from the visual analogy network given the input from the encoder. The encoder is given the ground truth image. The results are shown in Figure 9 (Figure 9: Results from the EPEV approach when the VAN is given the output of the encoder on the ground truth frame). The results show that the encoder encodes where the person is in the image, as well as the orientation of the arms, legs and head to some extent. The results are not as good as one would expect from an autoencoder, since the encoder has the constraint that the encoding also has to be easy to predict. We trained all of the methods, including BID3, for 3 million steps using async SGD across 32 worker machines. We used a minibatch size of 8 sequences in each step. The minibatch size could be so small because there were multiple frames per sequence. In methods with multiple optimizers, a step is defined as running each optimizer once. The hyperparameters are optimized separately for both datasets on a validation set. We used the best learning rates for each method.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rkmtTJZCb
We show ways to train a hierarchical video prediction model without needing pose labels.
Combining information from different sensory modalities to execute goal directed actions is a key aspect of human intelligence. Specifically, human agents are very easily able to translate the task communicated in one sensory domain (say vision) into a representation that enables them to complete this task when they can only sense their environment using a separate sensory modality (say touch). In order to build agents with similar capabilities, in this work we consider the problem of a retrieving a target object from a drawer. The agent is provided with an image of a previously unseen object and it explores objects in the drawer using only tactile sensing to retrieve the object that was shown in the image without receiving any visual feedback. Success at this task requires close integration of visual and tactile sensing. We present a method for performing this task in a simulated environment using an anthropomorphic hand. We hope that future research in the direction of combining sensory signals for acting will find the object retrieval from a drawer to be a useful benchmark problem A core aspect of human intelligence is the ability to integrate and translate information between multiple sensory modalities to achieve an end goal. For example, we have no trouble discriminating between a set of keys, a wallet or a watch kept in our pocket by simply feeling them with our hands. Similarly, we can easily retrieve a desired object present inside a dark drawer even if we can't see the objects using touch sensation from our hands. Not only can we retrieve a previously known object, but if we are shown an image of a previously unseen object, we would still have no trouble retrieving this object using only tactile exploration inside the drawer even in absence of any visual feedback. Such translation of information between sensory modalities is not specific to tactile and vision, but is noticed between other modalities as well. For instance, it is easy to imagine someone walking down the stairs and opening the door by simply hearing the sound that was generated. These examples demonstrate how easily humans can translate information between sensory modalities. Different sensory modalities provide a different view of the same underlying reality. The ability to transform between sensory modalities therefore provides an interesting way to learn useful representations of sensory inputs. Recent work in self-supervised learning has made extensive use of this observation and shown that useful visual features can be learned by predicting, from images, corresponding sounds BID18, ego-motion BID1 BID11, depth or even predicting color values from grayscale images BID23.In addition to learning feature representations, another and possibly more critical use of sensing from multiple modalities is performing goal directed actions in partially observable settings. In the running example of retrieving objects from a drawer, the agent receives only the image of the object as input and in absence of any light source in the drawer, the agent solely relies on its tactile sensing to find the object. Other examples are a pedestrian getting alerted when she hears the sound of a car coming from the back or animals in the jungle being alerted of a tiger behind the bushes by the sound of the movement. 
Yet another example showing close integration of two modalities (vision and touch) is a study that found it became almost impossible for human participants to perform the seemingly trivial task of picking up a matchstick and lighting it when their hands were anesthetized BID12 ). Figure 1: (Left) Shows our experimental setup. p objects are in a drawer and a dexterous hand equipped with tactile sensing can explore novel objects using deterministic routines. In this case, p = 3 but we compared performance by varying the number of objects (Middle) We are presented with a query image as seen by the inset in the top right of the image. We explore the objects in the drawer using tactile sensing only to identify the object (Right) We then retrieve the object by applying a grasping routineIn this work we use the task of retrieving objects from a drawer as an experimental setup to investigate joint learning from two sensory modalities of vision and touch. Because the agent is provided only with a visual image of the object to be retrieved, it must translate into the representation space of tactile sensing to retrieve objects only by touching them. In the general case of retrieving the object, the agent must first explore spatially to locate where the objects are. Once it finds the object, it must move its fingers in an exploratory manner to collect information required to determine if the object is the one that needs to be retrieved. Solving this problem in its full generality requires not only good goal directed exploration strategies and also a method for translating between different sensory signals. We therefore think that object retrieval from a drawer is a good challenge problem for investigating different models that combine visual and tactile information for a target end task. In our setup the agent learns a mapping from visual to tactile signals using unsupervised exploration. This mapping enables the agent to determine the representation of the image in the representation space of tactile sensing (i.e. expected tactile response). The agent explores each object present in the drawer by touching it and compares the of its exploration with the expected tactile response. Performing this comparisons requires a good representation of raw tactile signals. For learning such a representation, we leverage the in image classification, where it was found that a network pre-trained to classify images from the Imagenet dataset into one thousand image categories learns features useful for many other visual tasks. Similar to image classification, we pose the task of classifying objects from tactile signals collected by touching eleven objects. We show that tactile representation learned by performing the task of classification, generalize and can be used to retrieve novel objects. We present in a simulated environment and the agent explores the objects using an anthropomorphic hand. One of the earliest works presenting haptics as a sensory modality to explore the world was by BID8 ).Gibson showed that object recognition dramatically decreased when one could not actively interact with an object. Lederman and colleagues BID15 ]. They describe the various exploratory prcoedures (EP) that humans can perform to understand various object properties such as volume, temperature, friction, etc. Multi-modal learning is a key component for how biological agents learn and build models of objects. 
It can be argued by looking at failure modes to modern day robotics (rob) that it is exactly this lack in multi-modal learning that requires further study. Earlier work in haptic exploration includes BID3, BID9 ) who employed various hand engineered features to recognized objects using haptics. The challenges faced were largely due to robust sensors and the ability to control these sensors to explore objects effectively. BID5 ) measure various physical properties of objects using the bio-tac sensor using five different exploration procedures (EP). In addition, they also collect adjectives for each object and the corresponding They then compute precision, recall scores using a static hand-engineered feature and dynamic feature model employing Hidden Markov Models and compute precision, recall scores on a held out dataset. Similarly, Schneider et al. FORMULA1 et al. also classify objects using a bag-of-words appraoch. Romano et al. BID19 ) mimic human tactile sensing for grasping by hand engineering features that can appropriately measure slippage. They then design a control strategy that can grasp and place that employs the tactile responses. They show that in cases where objects are crushable, a naive controller crushes 100% of the time as compared to a controller that effectivel leverages tactile sensing. Others, such as Sinapov et al. BID21 ) have considered building object representations using vibrotactile sensation. They show that they can classify surface properties using data collected from five different EPs. Similarly, BID6 classify textures using the bio-tac sensor using a Bayesian exploration strategy. While, BID10 employ a palpatation sequence that is not learnt to effectively explore objects in a multi-fingered robot. Our work relates to work by BID7 who show that combining visual and haptic information can lead to better classification of haptic properties. More recently, Calandra et al. BID2 ) show that employing tactile inputs into a learnt model can help improve predictions of graspability. (OpenAI:) have shown that tactile features may not be required for certain constrained in-hand manipulation tasks. While this may seem contrary, this in fact is not a representative task. Further, the setup employed by the authors substitutes tactile sensing with a very rich 3D data along with a powerful learning method thus navigating around tactile sensing requirements. Task Setup: Figure 1 presents our task setup. A subset of objects from Figure 3 are placed in a drawer. An image of the object from a fixed pose is presented to the agent. The agent explores each object using a set of pre-determined routines combining palpation and grasping-like movements. The agent then identifies the object it needs to grasp and executes a grasping routine. In our setup the movement between the objects and grasping, is done using a pre-determined routine. The object is held translationally fixed in space but can rotate about its axes. The hand is initialized close to the object. The hand is translationally fixed, in that it cannot slide but it can rotate around the wrist joint. The fingers can explore their movements with only the restrictions imposed by the joints themselves. That is the fingers, say, cannot bend backwards towards the wrist. For each episode of 500 time steps long, the haptic forces H t and the corresponding Images I t are collected. Each object is presented in multiple random poses. 
The dataset consists of 500 samples per object, each sample is 500 timesteps long and has 19 dimensions. We use a simulated model of the anthropomorphic hand used as part of SouthHampton Hand Assesment Procedure (SHAP) test suite built by BID16 (see Figure 3). The SHAP procedure was established for evaluating prosthetic hands and arms. With this idea in mind, prior work built a prosthetic arm which could theoretically perform all useful human hand movements. Based on this hand and the DARPA Haptix challenge, a simulated model of a similar hand (but with fewer sensors) BID14 ) was built using the Mujoco physics engine BID22 ). This model was made publicly available and we use this for all our experiments. The hand has five fingers and 22 joints out of which many are tendon coupled. For example, curling the tip of a finger automatically actuates the other two joints on the finger so that the finger moves towards the palm. Because of these couplings, the ant dynamics can be quite complex and articulated. Out of the 22 joints, thirteen are actuated. Out of these thirteen, ten joints control the motion of fingers and the other three control the rotation of the hand. Additionally, there are three degrees of motion along the (x, y, z) axis and therefore overall 16 degrees of actuation. The hand is controlled by setting the position of these 16 actuators. In addition, the hand is equipped with 19 contact sensors (as seen in 3) that measure normal contact forces that are computed by Mujoco. These sensors form the basis of our tactile sensing. In our setup, we have two sets of networks. Network f 1 accepts as inputs, images at time t, defined by I t. It then learns to predict the haptic responses for the object being explored defined by H t. This network is optimized by minimizing the following objective function DISPLAYFORM0 Given, tactile responses can we discriminate a set of objects effectively? To do this, we train a separate network f 2. This network accepts, as inputs, H t and learns to predict object identities Y t. We then minimze the cross entropy loss as in 2. DISPLAYFORM1 To simulate how an agent would be able to identify an object during test time we present an image I to the model. Network f 1 predicts the haptic responses to this object -Ĥ. The predicted haptic responses,Ĥ are then used to compute the predicted object categoryŶ. We can then apply a learnt grasping routine to grasp the object and retrieve it. To train the haptics predictor network, F 1 the inputs were gray scaled, 64x64 images I t with the focus object centered in the image as seen in Figure 3. The network consisted of three convolutional layers of filters 32, 32, 32. The kernel size was 5,5 for each layer. The output of the convolutional layer was then fed into three sequential fully connected layers of sizes [..., 1024],, and to predict the 19 dimensional haptic forces. The groundtruth predictions were per-channel averaged haptic forces across an entire episode length of time T. We then trained the network using ADAM (Kingma & Ba FORMULA1) with an initial learning rate set to 1e-4.To train the object discriminator network, F 2 the inputs were average haptic forces over an entire episode which have 19 dimensions. These inputs were then passed into a network of fully connected layers of sizes,, and [250, K]. 
We then minimize the cross entropy loss between the ground truth and predicted object categories using the ADAM with an initial learning rate set to 1e-4.In both cases performing normalization of the input images and haptic forces was critical to training the network. For the images, a simple mean subtraction and division by the standard deviation sufficed. For the haptic cases since the forces were different across different dimensions and doing a simple normalization that ed in small values that were outside the range of tanh function ed in difficulties in good predictions. We introduced a scale term to the normalized output so that distribution of the target data was inline with the output range of the network. Figure 3: Displays the objects used in our experiments. We used a set of 25 objects. These were imported from the ShapeNet dataset BID4 ). Each object was presented in various different poses. The hand was initialized at the same location for each sample while the object was randomized in each trial. We present three sets of experiments in this section. First, we study how hard it is to identify an object using tactile sensing. We do this on novel poses that the model has not seen during training. Next, we explore the question of effective exploration length for these experiments. Finally, we study the problem of identifying novel objects in the dark. Before identifying novel objects in the dark, we wished to understand how challenging the problem of identifying an object through tactile sensing was. The inputs in this case were average haptic forces over the entire sampling routine. The training consisted of 400 training samples per object category. Each sample presents the object a random rotation about the z-axis. In total, 4400 were used in training. We used 50 samples per object to evaluate the model. During test time, another 50 samples from each object class but unseen random poses were provided. The model was asked to correctly identify the objects, this classification accuracy is reported in the table 4.1.For the object identification problem, we compare the classification accuracy of two networks. First, the pretrained f 2 network on ground truth haptics. Second, we provide the query in image space, we then employ f 1 to compute predicted haptics. We then used the predicted haptics to identify the object as seen in 4.1. We find that the network was able to predict the object identity using haptic forces on the known objects samples per category with near 100% accuracy. When employing the predicted means, this accuracy was a bit lower. Inputs 11 Object Accuracy Ground Truth Haptics.99 Predicted Haptics.54 Table 1: Table showing object identification test accuracies for training objects given ground truth and predicted haptic forces. There are 11 objects in the training set, many of which were imported from ShapeNet (Chang et al. FORMULA1). In our current setup the agent employs a predetermined sampling routine to explore the object using tactile sensors. A natural question is how much exploration is required to accurately identify the object just using tactile information. To answer this, we trained separate tactile classification networks (i.e. f 2) using different number of samples obtained by exploration. Results presented in table 4.2 show that about 100 samples are sufficient for accurate classification and performance improves only marginally when 500 samples are collected. 
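For concreteness, a minimal PyTorch sketch of the two networks and their training losses described above is given below, before we turn to the individual experiments. The 5x5 convolutions with 32 filters, the 1024-unit first fully connected layer, the 19-dimensional haptic output, the 250-unit classifier layers, and the Adam learning rate follow the text; the remaining hidden sizes are assumptions since they are not fully specified.

```python
# Minimal PyTorch sketch of the two networks described above: F1 maps a
# grayscale 64x64 image to the 19-dimensional averaged haptic response and F2
# classifies the object from (ground-truth or predicted) haptics. Sizes not
# given in the text are assumed.
import torch
import torch.nn as nn

class HapticsPredictor(nn.Module):                # F1: image -> haptics
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Flatten())
        self.fc = nn.Sequential(
            nn.Linear(32 * 8 * 8, 1024), nn.ReLU(),
            nn.Linear(1024, 512), nn.ReLU(),
            nn.Linear(512, 19), nn.Tanh())        # matches the scaled/normalized targets

    def forward(self, img):                       # img: (B, 1, 64, 64)
        return self.fc(self.conv(img))

class ObjectClassifier(nn.Module):                # F2: haptics -> object identity
    def __init__(self, num_objects=11):
        super().__init__()
        self.embed = nn.Sequential(
            nn.Linear(19, 250), nn.ReLU(),
            nn.Linear(250, 250), nn.ReLU())       # "embedded haptics" layer
        self.head = nn.Linear(250, num_objects)

    def forward(self, haptics, return_embedding=False):
        z = self.embed(haptics)
        return z if return_embedding else self.head(z)

# F1 minimizes the L2 loss against the per-channel averaged haptic forces
# (Eq. 1); F2 minimizes cross entropy against object identities (Eq. 2).
f1, f2 = HapticsPredictor(), ObjectClassifier()
l2_loss, ce_loss = nn.MSELoss(), nn.CrossEntropyLoss()
opt_f1 = torch.optim.Adam(f1.parameters(), lr=1e-4)
opt_f2 = torch.optim.Adam(f2.parameters(), lr=1e-4)
```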
The inputs to the network still consisted of averaged haptic forces but they were computed over different episode lengths. Every object from Figure 3 were presented in various poses. During test time, held-out samples were presented. Episode Length Accuracy 1 9% 10 14% 100 93% 500 95% Table 2: Table showing object identification accuracies given different episode lengths on the 11 objects imported from Shape-Net. From Table 4.2 we see that when only one time step is used the classification accuracy is just over chance (0.04%). This number increases significantly even with a few time steps and saturates fairly quickly after that. While the above experiments demonstrate that we are able to classify objects based on tactile sensing alone, they donot show if it possible to retrieve the object from interest when only an image of the object is presented (please see our experimental setup Figure 1).An input image I of the object to be retrieved is presented to the agent. A set of p objects where p ∈ K are presented to the hand. All the objects presented are novel. The model predicts the haptic responseĤ t of the image using F 1. We then use the predicted haptic response as an input to our classification network F 2. Since F 2 was not trained on the objects that we are trying to identify, we then identify the object using a nearest-neighbor classifier in the latent space of the network. We call this space embedded haptics. We compare the performance of this sampling classification by fitting the k-NN model with both raw haptic predictions and embedded haptic predictions for the p objects presented. We train F 1 and F 2 networks on 11 objects. During test time from the 14 novel objects, a set of p objects are are out in the drawer (see Figure 1). The higher p is, the harder the task is. The of our method for p = 2, 3, 5 are presented in Table 4.3.We generate a subset of held-out objects as well as three haptics templates per object. We then classify 1 query image from the set of objects with a kNN classifier with k = 3.To compute the mean precision and standard deviation in this precision we run 20 classification trials. For each trial, we classify 5000 query images following the above procedure. Mean precision and standard deviations are then computed across these trials. # objects Raw Hptx Embedded Hptx 2 58.5 % ±0.7% 65.9 % ±0.8% 3 42.5 % ±0.8% 50.0 % ±0.7% 5 26.6 % ±0.5% 34.2% ±0.6% Table 3: Table showing object identification accuracies given 2, 3 and 5 held-out objects to be discriminated from. We see that the haptic embedding yields a meaningful increase in classification accuracy. We present a model that when presented with a query image can identify a novel object from a set of p objects using tactile sensing only. As the number of novel objects presented in a single trial increased, this task quickly became more challenging. We show that a model with pre-determined exploration routine can identify the object but a richer exploration of the objects could allow us to answer more challenging inferences such as pose, texture, etc. One could imagine doing this by training a RL agent that uses the performance of {2 as a reward while generating trajectories that explore an object for identification. Presently, we train our two networks F 1 and F 2 independently. Since, our larger goal is to identify objects it would be interesting to jointly optimize the network to maximize object identification. 
This can help smooth prediction errors that are not consequential to classification or identification. While the MuJoCo simulator provides fast physics solvers that can compute realistic contact forces, it only computes normal contact forces. Research in neuroscience (BID15) has shown that a variety of forces are applied and measured during tactile sensing. It would be interesting to perform similar experiments on robot grippers equipped with tactile sensors. Our current setup still trains network F2 in a supervised fashion. This is biologically implausible and tedious for practical scalability on real robots. It would be interesting to pose this problem as a self-supervised one and explore learning to identify novel objects using tactile sensing.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1 ]
B1lXGnRctX
In this work, we study the problem of learning representations to identify novel objects by exploring objects using tactile sensing. The key point is that the query is provided in the image domain.
Locality sensitive hashing schemes such as \simhash provide compact representations of multisets from which similarity can be estimated. However, in certain applications, we need to estimate the similarity of dynamically changing sets. In this case, we need the representation to be a homomorphism so that the hash of unions and differences of sets can be computed directly from the hashes of operands. We propose two representations that have this property for cosine similarity (an extension of \simhash and angle-preserving random projections), and make substantial progress on a third representation for Jaccard similarity (an extension of \minhash). We employ these hashes to compress the sufficient statistics of a conditional random field (CRF) coreference model and study how this compression affects our ability to compute similarities as entities are split and merged during inference. \cut{We study these hashes in a conditional random field (CRF) hierarchical coreference model in order to compute the similarity of entities as they are merged and split during inference.} We also provide novel statistical analysis of \simhash to help justify it as an estimator inside a CRF, showing that the bias and variance reduce quickly with the number of bits. On a problem of author coreference, we find that our \simhash scheme allows scaling the hierarchical coreference algorithm by an order of magnitude without degrading its statistical performance or the model's coreference accuracy, as long as we employ at least 128 or 256 bits. Angle-preserving random projections further improve the coreference quality, potentially allowing even fewer dimensions to be used. Probabilistic models in machine learning, such as conditional random fields (CRFs), are widely successful at modeling many problems at the heart of knowledge base construction, including those in natural language processing, information extraction and data integration. However, when dealing with natural language data, the underlying feature representations are often sparse, high-dimensional and dynamic (i.e., they change during inference). In this paper we consider the task of coreference resolution, in which the goal is to partition a set of mentions into the entities to which they refer. We might represent each mention with a feature vector in which each dimension corresponds to a word or n-gram. Since only a small subset of the vocabulary is observed per mention, most elements of the vector are zero. Given the model and these representations, inference entails making decisions about whether two entities should be coreferent. To make such decisions, the model estimates a probability that involves computing the similarities between the aggregate feature representations of the two entities' mentions. Since all the vectors are both sparse and high-dimensional, these similarity operations are computationally expensive because the sparse data structures supplant the dense arrays that would otherwise support fast look-ups. Moreover, as the inference algorithm makes decisions about whether or not two entities are coreferent, we may have to split or merge the entities and thus we must update the feature vector to reflect these changes. Maintaining such sparse-vector representations in the inner-loop of probabilistic inference is expensive, especially as the entities grow in size. 
In order to cope with the computational problems associated with sparse, high dimensional dynamic feature representations, we propose using homomorphic compression, in which the compressed representations of intermediate inference can be computed directly from their operands, allowing inference to run directly on the compressed representations of the data even as they change. In this paper, we consider several such schemes to scale hierarchical coreference. First, we propose a novel homomorphic cosine-preserving hashing scheme based on simhash BID7 that also supports addition and subtraction to more efficiently represent the data and the evolving intermediate of probabilistic inference. Second, because various linear norm-preserving random projections also preserve angles BID24, we can directly compute cosine similarity on projected data -linearity of the projections means that they too can be updated dynamically. The ing angle estimates are superior to the homomorphic simhash representation, at the cost of reduced efficiency for certain operations. Third, we develop a homomorphic version of minhash BID5 to support Jaccard similarity. Our current algorithm is biased, but the bias appears small in practice for the situations we have considered. Although the minhash based set representations are not currently employed in hierarchical coreference, they might be useful in other models or applications that involve binary features over changing sets BID9.We provide error analysis for all three schemes, collating and extending known , and in the case of simhash, we provide novel statistical analysis for its use as a direct estimator for cos(θ) that shows the bias and variance decrease rapidly with the number of bits, helping to justify its use in a CRF for a task like coreference. On a hierarchical model for coreference resolution, the proposed simhash scheme improves the speed of probabilistic inference by an order of magnitude while having little effect on model quality. Moreover, we find that the underlying random projection representation can provide even better cosine estimates than simhash, at the cost of not being able to use certain fast bitwise-operations to compute similarities. Finally, we briefly evaluate homomorphic minhash and show that even though there are possible pathological cases, as long as we employ enough hash functions, the estimates are reasonably close to the true Jaccard, albeit, biased. Coreference resolution Coreference resolution is the problem of determining whether different mentions refer to the same underlying entity BID12. BID34. Coreference resolution arises in many other situations; for example, when merging two or more databases together it is desirable to remove duplicates that from the merge, a problem sometimes termed record linkage or deduplication BID32. Coreference is also foundational to knowledge base construction which requires that we combine information about entities of interest from multiple sources that might mention them in different contexts. For example, if we were to build a knowledge base of all scientists in the world -similar to Google scholar -we would need to perform the task of author coreference to determine who authored what papers BID16 BID8. Are the following two mentions of "J Smith" the same author? 
Although generally this is a difficult problem, it can be solved with machine learning since features of the mentions such as the words in the title (both have "Boson" in common), the topic of the title (both are about a similar subfield of physics), the journal (both are physics journals) and the co-authors (there appears to be a co-author in common) provide some evidence about whether or not the two "J Smith's" might be the same person. In order to solve the problem, it is thus common to extract such contextual features about each mention, such as in the above example, features from the title, co-author list, venue, year and author-name and employ them in a probabilistic model. These features are typically the raw words, character-ngrams and normalized variants thereof, sometimes with positive real-valued weights to indicate the importance (e.g., via TFIDF) of each feature. Then, given such features, a coreference model measures the similarities between mentions via functions such as cosine-similarity. In contrast to within document coreference discussed earlier, this type of coreference resolution problem is better suited for similarity based models, such as the ones we will use in the following. Moreover, since coreference decisions are not restricted by document-boundaries, the entities can grow unbounded in size, making compact representations of their growing feature sets especially important. Typically, the model is a discriminative conditional random field (CRF) that measure the probability of an assignment of mentions to entities conditioned on the observed features BID26. The model factorizes into potentials that score local coreference decisions. Local search procedures such as greedy-agglomerative clustering or Markov-chain Monte Carlo (MCMC) find the most likely assignment of mentions to entities BID26 BID8 BID42.In pairwise models, potential functions measure the compatibility of two mentions being in the same cluster. An issue with such models is that the possible pairwise comparisons scales quadratically with the number of mentions. An alternative class of models that avoids this quadratic blow-up are entity-based, in which entities are treated as first-class variables with their own set of inferred features, and potentials measure compatibility between mentions and entities. However, entity-based models come with their own scalability challenges. To illustrate the problem (and our solution), we focus on an entity-based model called hierarchical coreference, which recently won a challenge to disambiguate inventor names for the USPTO, due to its accuracy and scalability [, Monath and BID31 .Hierarchical Coreference In the hierarchical coreference model, mentions are organized into latent tree structures BID42 . There is one tree per entity with mentions at the leaves and intermediate nodes as "subentities" that organize subsets of the entity's mentions. Rather than modeling interactions between mention-pairs, the potential functions measure compatibility between child and parent nodes in the tree. The score of a given assignment of mentions into latent trees is the product of all model potentials which includes these child-parent compatibility scores as well as some additional priors on tree-shape and entities. These compatibility scores are parametrized cosine functions. Each mention is endowed with multiple feature variables that each capture a subset of the total features. 
For example, in author coreference, one feature variable might capture features of the author's name and another might capture features of the title and venue of the paper. Colloquially, we refer to these feature variables as "bags" since they inherit the usual bags-of-words assumption. We distinguish between different "bag-types" which each capture different types of features (e.g., the author name bag, title words bag, co-author bag, etc). The other nodes in the tree also contain feature variables (bags), but the values of these variables are determined by the current assignment of children to that parent node. In particular, a parent's bag is the sum of all its children's bags. In order to maintain this invariant, the bag representations must be updated to reflect the current state of inference, and hence for efficiency reasons, representations must be homomorphic with respect to operations that will be performed on the bags. Interpreting bags as vectors, the cosine distance between them is used to calculate their compatibility. The primary potential functions measure the compatibility between each child's bag and its parent's bag. There is one potential for each bag-type. For example, to measure the compatibility between a node z_i and a node z_j, let y_ij be the binary variable that is 1 if and only if z_j is the parent of z_i, and let b_i and b_j be the bags for z_i and z_j, respectively. Then the potential ψ for "bag 0" scores a coreference decision as: ψ(z_i, z_j, y_ij) = y_ij · w · (cos(b_i, b_j) − t), where w is a real-valued weight and t is a real-valued translation parameter for potential ψ. The potentials for each bag-type have parameters w, t that we can fit to data. Because only a small handful of features are ever observed for a given mention, typical implementations of hierarchical coreference employ sparse-vector representations for bags (e.g., the implementation found in FACTORIE BID27 BID28). However, a key disadvantage of sparse vector representations is that they must store the indices and weights of the non-zero elements, which means that the data structures must dynamically change in size as MCMC splits and merges entities. As the sizes of the entities grow, these operations become increasingly expensive. Similar issues arise in other entity-based models where features of entities are aggregated from those of mentions. Thus, while entity-based models avoid the quadratic comparison issue of pairwise models, a straightforward sparse representation of their feature vectors is no longer efficient. Is there an alternative representation of feature vectors which (a) allows fast evaluation of cosine similarity, and (b) can be efficiently dynamically updated? As we describe in the next section, the simhash hashing function BID7 provides a representation with property (a). However, the standard simhash cannot be updated as vectors are modified. We therefore develop a variant which we call homomorphic simhash, which has both properties (a) and (b). We also identify two other schemes that support these properties.
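To illustrate why the naive representation becomes expensive, the sketch below scores a child-parent pair with the cosine potential and maintains the parent's bag as the sum of its children's bags using a sparse counter; both the score and the attach/detach updates touch every non-zero feature, so their cost grows with entity size. The weight and translation values are placeholders for the learned per-bag-type parameters.

```python
# Sketch of the child-parent compatibility score on sparse "bags" and of the
# bookkeeping inference must do as entities are merged and split: the parent's
# bag is kept equal to the sum of its children's bags. The weight w and
# translation t stand in for the learned per-bag-type parameters above.
import math
from collections import Counter

def cosine(bag_a, bag_b):
    dot = sum(v * bag_b.get(k, 0.0) for k, v in bag_a.items())
    norm_a = math.sqrt(sum(v * v for v in bag_a.values()))
    norm_b = math.sqrt(sum(v * v for v in bag_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def potential(child_bag, parent_bag, w=1.0, t=0.5):
    # The y_ij = 1 case of the potential: w * (cos(b_i, b_j) - t).
    return w * (cosine(child_bag, parent_bag) - t)

def attach(parent_bag, child_bag):     # merge: parent gains the child's features
    parent_bag.update(child_bag)

def detach(parent_bag, child_bag):     # split: undo the merge during MCMC
    parent_bag.subtract(child_bag)

# Both the cosine and the attach/detach updates touch every non-zero feature,
# so their cost grows with entity size -- the motivation for replacing sparse
# bags with fixed-size homomorphic hashes.
child = Counter({"title:boson": 1.0, "name:jsmith": 1.0})
parent = Counter({"title:boson": 3.0, "name:jsmith": 2.0, "venue:physrev": 1.0})
print(potential(child, parent))
```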
Background: simhash A locality-sensitive hash function for a distance metric d on a set of objects S is a function H such that given x, y ∈ S, we can estimate d(x, y) from the hashed representations H(x) and H(y). Simhash BID7 is a locality-sensitive hash function for cosine similarity. To understand simhash, it is first helpful to consider the following randomized process: Imagine that we have two vectors a and b on the unit hypersphere in the Euclidean space R d with angle θ between them, and we want to produce an estimate of cos(θ). Select a random d-dimensional vector u by sampling each of its coordinates independently from N(0, 1). Let the random variable X have value 1 if a and b are on the same side of the hyperplane orthogonal to u, and 0 otherwise. Then X is a Bernoulli random variable with expectation E[X] = 1 − θ/π. Let X 1,..., X n be the results of independently repeating this process n times, and set X̄ n = (1/n) ∑ i X i. The idea behind simhash is to come up with a hash representation which lets us reproduce this randomized estimator: to construct a function H which produces n-bit hashes, we first randomly sample n vectors u 1,..., u n as above. Then, given a vector a, the hash H(a) is the length-n bit sequence in which the ith bit is 1 if the sign of a · u i is positive and 0 otherwise. (Charikar BID7 notes that for some applications, 1 − θ/π may be a good enough approximation of cos(θ), so that one can use X̄ n directly as an estimator of cos(θ).) Now, from two hashed representations H(a) and H(b), if the ith bits of the two hashes agree, this is equivalent to X i in the randomized process above being equal to 1. Thus, by counting the number of positions where the hashes agree and dividing by n, we get X̄ n, thereby producing an estimate of cos(θ). Rather than constructing the hash function H by sampling the u 1,..., u n vectors from the d-dimensional unit sphere uniformly, a common optimization is to instead sample them from {−1, 1} d. This has two advantages. First, it is faster to compute the dot product since no floating point multiplication is involved. Second, rather than having to explicitly sample and store each u i as a vector, we can replace each u i by a 1-bit feature hash function h i: the "value" of the vector represented by h i is 1 at coordinate j if h i (j) = 1 and is −1 if h i (j) = 0. We write a · h i for the dot product of a with the vector corresponding to h i. By restricting only to test vectors with coordinates of the form 1 and −1, the corresponding expected value of π(1 − X̄ n) is no longer exactly θ (see [, Section 3.7.3] for an example), but for high-dimensional spaces, this approximation is known to be effective in practice BID17. If we want to use simhash as a representation of feature vectors for entities in coreference resolution, then we need to be able to update the simhash representation as entities are merged and split. In particular, if we join nodes with feature vectors a and b, then the vector of their new parent will be a + b. However, if we only store H(a) and H(b), rather than the vectors a and b themselves, we cannot compute H(a + b): the ith bit of H(a) and H(b) just records the sign of a · h i and b · h i, and if these are different, we do not know what the sign of (a + b) · h i should be. A similar problem occurs when we split a child with vector b from a parent with vector a, since the updated parent's hash should be H(a − b). Our solution is instead to store the actual dot product of a · h i in the hash of a, rather than just the sign.
That is, H(a) is now an array of length n instead of an n-bit sequence. And since DISPLAYFORM0 we can compute H(a + b) by adding component-wise the arrays for H(a) and H(b), and similarly for H(a − b). Finally, we can still efficiently compute the cosine distance between two vectors a and b by examining the signs of the entries of H(a) and H(b). We call this representation homomorphic because H is a homomorphism with respect to the additive group structure on vectors. Of course, storing each dot product instead of just the signs increases the size of our hashes. However, they are still small compared to the feature vectors and, more importantly, their sizes are fixed. We can also store both the dot product and the signs as a bit vector, making sure to update the sign vector after each operation based on the dot product. By storing the sign vector separately we can quickly count the signs in common between two vectors using bitwise operations. Statistical Properties Recall that since E[π(1 − X n)] = θ, we can derive a plausible estimate of cos(θ) from X n. In particular, let g(x) = cos(π(1−x)) and consider the estimator C n = g(X n). We now describe some statistical properties of C n. Our emphasis here is somewhat different from the standard analyses found in related work. The reason is that LSHs like simhash are most commonly used for duplicate detection BID17 and for approximate nearest neighbor search BID22, which is quite different from our use case. In those settings, one wants to show that if two items x and y are very similar, then the distance estimated from h(x) and h(y) will very likely be quite small, and conversely if x and y are very different, then their estimated distances will be large. In such cases, the linear approximation to cosine X n is sufficient. However, since we want to use the cosine distance estimates C n as part of the potential function in our CRF, we are interested in additional statistical properties of the estimator. Lemma 3.1. C n is consistent. In particular, C n a.s. Proof. By the strong law of large numbers, we have that X n a.s. DISPLAYFORM0 Since g is continuous, by the continuous mapping theorem BID40 ], DISPLAYFORM1 The first degree Taylor series for g(x) about µ is: DISPLAYFORM2 where R is the remainder term. We have then that: DISPLAYFORM3 Thus it suffices to bound |E[R(X n)]|, which can be done using Lagrange's remainder formula (see appendix). DISPLAYFORM4 For intuition, note that the Taylor series above for g shows us that DISPLAYFORM5, and plugging in the approximation we get: DISPLAYFORM6 To obtain the actual error bound, we carry out the same process but without dropping R(x) from the Taylor approximation for g, and then once again use Lagrange's remainder formula to bound the remainder (see appendix).Finally, since C n is equal to a Lipschitz continuous function of n independent Bernoulli random variables, we can use the the method of bounded differences [, Corollary 5.2], to derive the following Chernoff-like tail bound: DISPLAYFORM7 The statistics underlying simhash, whether the hyperplanes are drawn from Gaussians or the d-dimensional hypercube (i.e., Rademacher distributions), are actually random projections for which the Johnson-Lidenstrauss lemma applies BID19 BID18 BID0 ]. 
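As a concrete sketch of this representation and of the estimator C n, the toy code below stores the n real-valued dot products a · h i, combines them component-wise when entities are merged or split, and recovers a cosine estimate from the signs; simulating the 1-bit feature hash with a seeded cryptographic hash is an implementation convenience assumed here, not the paper's choice.

```python
import hashlib
import math

def bit_hash(i, feature):
    """1-bit feature hash h_i standing in for a random {-1, +1} test vector."""
    return 1 if hashlib.md5(f"{i}:{feature}".encode()).digest()[0] & 1 else -1

class HomomorphicSimhash:
    """Stores the n dot products a . h_i rather than just their signs."""

    def __init__(self, dots):
        self.dots = dots

    @classmethod
    def from_bag(cls, bag, n=128):
        return cls([sum(w * bit_hash(i, f) for f, w in bag.items()) for i in range(n)])

    def __add__(self, other):   # H(a + b) = H(a) + H(b), component-wise
        return HomomorphicSimhash([x + y for x, y in zip(self.dots, other.dots)])

    def __sub__(self, other):   # H(a - b) = H(a) - H(b)
        return HomomorphicSimhash([x - y for x, y in zip(self.dots, other.dots)])

    def cosine(self, other):
        """Estimate cos(theta) from the fraction of agreeing signs: C_n = cos(pi(1 - X_n))."""
        agree = sum((x > 0) == (y > 0) for x, y in zip(self.dots, other.dots))
        return math.cos(math.pi * (1.0 - agree / len(self.dots)))

# Merging or splitting entities only touches these fixed-size arrays.
c1 = HomomorphicSimhash.from_bag({"title:boson": 1.0, "coauthor:xu": 1.0})
c2 = HomomorphicSimhash.from_bag({"title:boson": 1.0, "title:decay": 1.0})
parent = c1 + c2
print(parent.cosine(c1), (parent - c2).cosine(c1))   # the second recovers c1, so it is 1.0
```

The Johnson-Lindenstrauss view discussed next amounts to computing cosine directly on the stored dots arrays rather than on their signs, which is the JL variant compared against simhash in Experiment 3.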
The lemma states that any set of m points in d-dimensional Euclidean space can be embedded into n-dimensional Euclidean space, where n is logarithmic in m and independent of d, n = O(ε −2 log m), such that all pairwise distances are maintained within an arbitrarily small factor 1 ± ε, where 0 < ε < 1. Since the projections are linear, they are homomorphic with respect to addition and subtraction. Moreover, previous work shows that the norm-preserving property of a linear random projection A implies an angle-preserving property BID24; that is, for vectors u, v, let θ = ∠(v, u) and θ̂ = ∠(Av, Au). The result is the following: DISPLAYFORM0 where DISPLAYFORM1 and n ≥ 60 ε −2 log m. Therefore, we are justified in using the statistics underlying simhash to directly estimate the cosine similarity; viz., cos θ ≈ cos θ̂. More precisely, using a Taylor expansion, DISPLAYFORM2 Although we lose the bit-representation that supports the fast bit-operations that make simhash so efficient in applications such as webpage deduplication that employ the linear estimator, we potentially gain an improvement in the quality of the cosine estimate. Certainly, the estimate will be smoother since simhash essentially quantizes the angles into a finite number of possibilities equal to the number of bits. Since the same representation supports both types of estimates, we could even alternate between the estimation strategies as dictated by the particular situation: if a large number of similarity comparisons are performed for each entity then simhash makes more sense, otherwise a direct computation of cosine makes more sense. Regardless, as we will later see, both schemes provide sufficiently fast and accurate cosine estimates for coreference resolution. Minhash BID4 BID6 is a locality-sensitive hash function for Jaccard similarity, defined for binary vectors (encoding sets). The Jaccard similarity between two sets S 1, S 2 ⊆ Ω is J = |S 1 ∩ S 2 | / |S 1 ∪ S 2 |. Minhash applies a random permutation (which in practice is accomplished using a random hash function, such as a seeded 64-bit murmur hash) π: Ω → Ω to a given set S ⊂ Ω and then extracts the minimum value h π (S) = min(π(S)). The probability that the minhash function for a random permutation computes the same value for two sets is equal to the Jaccard similarity of the two sets BID37. In practice multiple (n) hash functions are employed to reduce the variance, resulting in a vector v S = (h π 0 (S), · · ·, h π n−1 (S)) of n minimum values, one per hash function. There are three methods we need to design for homomorphic minhash: union, difference and score (to compute the Jaccard estimate). However, unlike simhash for cosine similarity, the operations underlying the statistics for minhash are non-linear due to the elementwise minimum operation that produces the underlying vector of hash values. Moreover, the set semantics on which the minhash score method relies are problematic because the union and difference methods need to maintain the invariant that a parent set is equal to the union of a collection of child sets: when we perform a difference operation between a parent set and a child set we cannot simply remove all of the child's elements from the parent since a sibling might also (redundantly) contribute some of those elements to the parent. We sketch a solution to these problems and include details in Appendix E. However, we note that our solution is not complete because there are several corner cases we do not yet handle.
Nevertheless, we find that the representation works well in practice. First, we handle the problem of set-semantics by augmenting the n-dimensional minhash representation v S of each set S with an n-dimensional vector of counts c S, in which each dimension corresponds to a minimum hash value (essentially embracing multiset semantics, but in the hash space). The count indicates the number of child sets that contribute the associated hash value. For the union of two sets S = S 1 ∪ S 2, we either keep the count associated with the smaller hash value if the two hash values are different (that is, for each coordinate i, since we set v S i = v S* i we also set c S i = c S* i, where S* = argmin S j ∈{S 1,S 2} v S j i), or sum the counts if they are the same (that is, c S i = c S 1 i + c S 2 i). For difference we employ the same strategy except we subtract the counts instead of adding them. The counts are appropriately ignored when estimating the Jaccard since we want the estimate to reflect sets rather than multisets. Second, we must also address the fact that the minimum is a non-linear operator. The problem arises when the count associated with a hash value becomes zero. Then, how do we recompute what the new minimum value should be? Recomputing the minimum from scratch is a nonstarter because it would require keeping around the original sparse vector representations that we are trying to supplant. Rewriting the difference in terms of unions is also computationally expensive since it would require traversing the entire tree to check what the new minimum value should be. Instead, noting that minhash typically has hundreds of hash functions (e.g., n = 256) for each set, we propose to ignore the hash values associated with zero counts, and employ the remaining hashes to compute the Jaccard. This strategy has consequences for both bias and variance. First, since fewer hashes are employed, the variance increases, and if left unchecked this may culminate in a worst case in which all counts are zero and the Jaccard can no longer be estimated. However, we can periodically refresh the counts by traversing the trees, and hopefully we do not have to do such refresh operations too often in practice. Second, since the hashes associated with zero counts are correlated, the estimate is no longer unbiased. Therefore, as described in Appendix E, we modify the Jaccard estimate to make better use of the zero-counts rather than simply ignore them, and this eliminates the bias in some cases. However, we do not have a solution that works in every case. Finally, there is also the question of how to perform union and difference for cases in which the count is zero. For now, we ignore the zero counts during these operations, but are currently exploring strategies for incorporating them to further reduce bias and variance. Hierarchical coreference involves other features, such as entropy and complexity-based penalties on the bag-of-words representations of context and topics associated with the entities. Although not a focus of this current study, we note that some of these representations depend on ratios of p-norms that can be estimated with Johnson-Lindenstrauss-style representations. Moreover, there exist hashing schemes to enable fast estimation of entropy. We save the homomorphic versions of these hashes for future work.
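The following minimal sketch summarizes the counted minhash representation described above; the seeded hash, the leaf construction, and the way zero-count slots are skipped in the Jaccard estimate are illustrative simplifications of the scheme detailed in Appendix E, not a faithful reimplementation.

```python
import hashlib

def minhash_value(seed, item):
    """Hash an item under 'permutation' `seed` (stand-in for a seeded 64-bit hash)."""
    return int.from_bytes(hashlib.md5(f"{seed}:{item}".encode()).digest()[:8], "big")

def leaf_sketch(items, n=128):
    """(min-values, counts) for a leaf set; each minimum is contributed once."""
    vals = [min(minhash_value(s, x) for x in items) for s in range(n)]
    return vals, [1] * n

def union(a, b):
    """Homomorphic union: keep the smaller hash value; sum the counts on ties."""
    vals, counts = [], []
    for (va, ca), (vb, cb) in zip(zip(*a), zip(*b)):
        if va < vb:
            vals.append(va); counts.append(ca)
        elif vb < va:
            vals.append(vb); counts.append(cb)
        else:
            vals.append(va); counts.append(ca + cb)
    return vals, counts

def difference(parent, child):
    """Remove a child's contribution; slots whose count reaches zero become unusable."""
    vals, counts = [], []
    for (vp, cp), (vc, cc) in zip(zip(*parent), zip(*child)):
        counts.append(cp - cc if vp == vc else cp)
        vals.append(vp)
    return vals, counts

def jaccard(a, b):
    """Estimate Jaccard from the slots where both sketches still have positive counts."""
    usable = [va == vb for (va, ca), (vb, cb) in zip(zip(*a), zip(*b)) if ca > 0 and cb > 0]
    return sum(usable) / len(usable) if usable else 0.0

s1, s2 = leaf_sketch({"a", "b", "c"}), leaf_sketch({"b", "c", "d"})
parent = union(s1, s2)
print(jaccard(s1, s2))                      # roughly 2/4 = 0.5
print(jaccard(difference(parent, s2), s1))  # parent minus one child agrees with s1
```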
First we study simhash and we hypothesize that the this representation will substantially improve the speed of inference for the reasons outlined earlier, but it is less clear how the simhash representation will affect the quality of the model. On the one hand, our theoretical analysis shows that the variance and bias reduce rapidly with the number of bits, but on the other hand, the sheer number of vector comparisons that take place during inference is substantial, making it likely for errors to occur anyway. With this in mind, we study how simhash affects the model quality in our first set of experiments. In our second set of experiments, we study the extent to which it improves inference speed. In a third set of experiments, we compare simhash with the random projection method. Finally, we present initial for homomorphic minhash to empirically assess how it compares to exact Jaccard, which is important given the increased bias and variance of our method. Data We employ the REXA author coreference dataset, 2 which comprises 1404 author mentions of 289 authors with ambiguous first-initial last-name combinations: D. Allen, A. Blum, S. Jones, H Robinson, S. Young, L. Lee, J. McGuire, A. Moore. We split the data such that training set contains mentions of the first five ambiguous names (950 mentions) while the testing set comprises the remaining three ambiguous names (454 mentions). The dataset is highly ambiguous since there are multiple entities with each name. In addition, we also employ the DBLP dataset which contains over one million paper citations from which we extract about five million unlabeled author mentions BID23.Model We investigate homomorphic simhash in the context of the hierarchical coreference model presented in Section 2. We employ two types of feature variables, a "name bag" that represents the features of the author's name and a "context bag" that represents the remaining features in the citation from which the author mention is extracted (co-authors, title, venue, topics, keywords). For more details about the features please see Appendix B.We employ the implementation of hierarchical coreference available in the FACTORIE toolkit, using FACTORIE's implementation of the variables, the model and the inference algorithm BID27. We additionally implement the simhash variables and potential functions inside this framework. We employ FACTORIE's default inference algorithm for hierarchical coreference which is essentially a greedy variant of multi-try Metropolis-Hastings in which the proposals make modifications to the sub-trees (e.g., move a subtree from one entity to another, or merge two trees under a common root node). More details are in previous work, and the implementation is publicly available in FACTORIE BID28 BID42. We estimate the parameters with hyper-parameter search on the training-set (Appendix B). representations. The closer points are to the identity reference line, the better. The dotted horizontal and vertical lines represent the decision threshold for inference. Points for which the two models agree are in blue, the agreements rates are: 88.6, 83.6, 97.0, 97.8 percent (respectively 32, 64, 128, 256 bits).Experiment 1: Simhash Estimation Quality Here we study the quality of models with simhash representations by comparing them to models with exact representations. First, we compare how these models evaluate and score the intermediate of MCMC inference. 
We run MCMC for 100,000 steps to optimize simhash-based models on the REXA test set (with either 256, 128, 64 and 32 bit hashes). The chains begin with the singleton configuration (all mentions in their own entity tree of size one), and at each step proposes changes to the current state which the model decides to either accept or reject. This process gradually produces larger and larger trees. For each proposed state change (sample), we record the log model ratio of both the simhash model and the exact model. We present every 10th sample in a scatter plot in FIG2. The closer the points are to the identity reference line y = x, the more accurate the simhash model is for those points. Varying the number of bits has a pronounced effect on the model's ability to judge MCMC states. For each step, we also record if the simhash model and exact model agree upon whether to accept the stochastic proposal (blue points) or not (red points). 3 The agreement rates are 97.8, 97.0, 93.6, 88.6 percent (respectively 256, 128, 64, 32 bits). We also plot the decision boundaries for this agreement as dotted lines. The upper-left and lower-right quadrants contain all the points for which the two models disagree, while the other two quadrants contain points for which they agree. In particular, the upper-right quadrants contain the points that both the simhash model and exact model believes should be accepted (true positives), while the lower-right quadrants contain the points that both models think should be rejected (true negatives). Most points lie in this quadrant since the chance of proposing a fruitful move is relatively low. Visually, there appears to be a qualitative gap between 64 bits and 128 bits on this data, leading to a recommendation of using at least 128 bits. We also compare the models in terms of their final coreference performance (according to B-cubed F1 BID1). The exact model achieves an F1 of 78.6, while the simhash variants achieve F1 scores of 77.6, 75.6, 62.8, 55.6 for 256, 128, 64, 32 bits respectively. For more detailed , with precision, recall and other types of F1, see Table 1 in the appendix. Overall, the accuracy of the 128 and 256-bit models are reasonable with 256 being competitive with the performance of the exact model. When using fewer bits, again, the performance decreases precipitously. Experiment 2: Simhash Speed In this experiment we study the extent to which the compressed simhash representation improves inference speed. As described before, we tune the models on the REXA training set. Then, to test the models on a larger dataset, we supplement the 454 labeled REXA test mentions with five million unlabeled mentions from DBLP. We run each model on this combined dataset, initializing to the singleton configuration and then running one billion samples. We record coreference quality on the labeled subset every 10,000 samples and plot how it changes over time in FIG3.Although Experiment 1 shows that the simhash models are slightly worse in terms of their final F1 accuracy on the REXA test set, we see a striking computational advantage for simhash. For each F1 value, we compute how much faster the simhash model achieves that value than the exact model and plot this speed-improvement as a function of F1-level in FIG3,2(d). As we can see, the speed improvement is more than a base-ten order of magnitude for most accuracy levels, and the speed difference increases with accuracy. 
This is because the exact representation slows down over time as the number of features in the representation grows during inference. Indeed, if we look at the sampling rate over time for each model, we find that the simhash models run at about 20,000-25,000 samples per second the entire time, while the model with the exact representation starts at a rate of 3000-4000 samples per second, but then drops to under 1000 samples per second as the sizes of the entities get larger. This raises the question: can we improve the speed of the exact model by reducing the number of features? We address this question in Appendix C.2, but summarize here briefly: it is possible to improve the speed and reduce the gap, but simhash is still faster and there is a trade-off with coreference quality. In addition, whether feature ablation meaningfully improves performance depends on particular aspects of the data set, whereas the simhash representation can be applied generically. Experiment 3: JL Random Projections In this experiment, we compare simhash with an approach that directly computes cosine on the statistics that underlie the bit representation. As argued previously in Section 3.2, this approach is justified because the statistics are random projections that are homomorphic and cosine-preserving. Intuitively, we expect this approach to work even better because simhash further compresses these statistics into single bits, whereas in the random projection approach, we compute cosine directly on the real-valued statistics. However, our current theoretical analysis does not allow us to compare one to the other; therefore, we turn to the empirical studies in this section. We perform an experiment analogous to Experiment 1, except we employ the exact model to make decisions about which MCMC proposals to accept. This allows us to compare both approximation schemes on the same set of samples. For each accepted sample, we again ask how the approximate methods, simhash and JL, would have judged it. We find that JL does indeed perform better than simhash for the same number of bits. In particular, the Spearman's rho for JL increases from 93.2 to 97.2 over simhash when employing 256 bits (dimensions). For all but one case (128 bits), we find a similar improvement. Moreover, the coreference accuracy also increases in each case; for example, from 77.6 B3 F1 to 78.3. More detailed results are in Table 3 in Appendix D, along with the associated scatter plots (Figure 6). We also investigate our homomorphic MinHash representation for Jaccard similarity. For lack of space, and because Jaccard is not employed by the hierarchical coreference model, we relegate most of the evaluation to Appendix E.2. To briefly summarize, we find that 128 and 256 hash functions are sufficient for reasonable estimates and that the representation rarely needs to be refreshed to ensure that all the counts are above zero, enabling the computational efficiency we desire. However, there is room for improvement because only 16% of the hash functions on average have a non-zero entry after 100,000 samples. Thus, after 100,000 samples, a 256-hash version will begin to behave like a 40-hash version. Homomorphic representations have many possible uses in machine learning; for example, homomorphic encryption allows models to run directly on encrypted data for cases in which privacy is a concern [Dowlin et al., 2016].
Instead, our method is similar to homomorphic compression BID29 which allows our model to run directly on compressed data for cases in which computational efficiency is a concern. Our approach is based on simhash, a locality-sensitive hash function. Locality sensitive hashes such as simhash and minhash are useful in large scale streaming settings; for example, to detect duplicate webpages for web-crawlers or in nearest neighbor search BID5 BID25 BID22. They are sometimes employed in search and machinelearning applications including coreference for "blocking" the data in order to reduce the search space BID12. Note that this application of locality sensitive hash functions for coreference is complementary to ours, and does not require it to be homomorphic. Other hashing and sketching algorithms are also employed as a strategy to compress the sufficient statistics in Bayesian models so that they can scale better as the number of parameters increase with the dataset. For example, feature hashing, count-min sketches and approximate counters have been employed in scaling latent Dirichlet categorical models such as LDA by compressing, for example, the counts that assign latent variables to mixture components BID44 BID38.Our paper focuses on three homomorphic compression schemes including one for minhash. Our solution of furnishing the minhash values with counts resembles a strategy that similarly employs counts in a k minimum values (KMV) sketch for estimating the number of distinct values in a set BID2 BID3. A key difference is that that work develops an unbiased estimator for the number of distinct values that explicitly involves the counts as part of the estimate itself, whereas we develop a biased estimator that directly estimates Jaccard similarity and that employs the counts to bound our knowledge about possible minimum hash values when they vanish during difference operations. Our schemes are related to random projections, which are commonly used to improve computational efficiency in machine learning. Examples include feature-hashing BID41, fast matrix decomposition BID15 and fast kernel computations BID35. However, while some of these random projections happen to be homomorphic, this property is often not exploited. The reason is that they are typically used to compress static objects that remain fixed, such as the Gram matrix for kernel methods. In contrast, our setting requires compressing dynamic sets that change during inference. Word embeddings BID30 also provide low-dimensional dense representations that are useful for capturing context, but in practice, these embeddings are too smooth to be useful as the sole representation for disambiguating the names of peoples, places or organizations, as is necessary in coreference (e.g., the names "Brady" and "Belichick" would be highly similar according to a word-to-vec style embedding, even though they are unlikely to be coreferent). Deep learning might allow for more suitable representations to be learnt and its application to coreference is promising. However, the current research focus is on within-document noun-phrase coreference and entity linking, while emphasizing improvements in accuracy rather than inference speed and scalability BID13 BID43 BID21 BID36. Combining deep learning and conditional random fields for hierarchical coreference remains an open problem. 
Finally, in addition to the work mentioned throughout the paper, there is an abundance of related work on coreference resolution including noun-phrase coreference, cross-document coreference, record linking, entity linking and author coreference. While not possible to cover the breadth of the space here, we refer the reader to a few useful surveys and tutorials on the subject [Getoor and BID12]. More recently, as mentioned in the foregoing paragraph, deep learning approaches are beginning to show promise. There has also been promising recent work on scalable hierarchical clustering known as PERCH BID20. Since PERCH employs Euclidean distance rather than cosine similarity, it currently falls outside the purview of our study. However, the Johnson-Lindenstrauss lemma applies to Euclidean distance, making random projections a possibility for PERCH. In this paper we presented several homomorphic compression schemes for representing the sparse, high-dimensional features in a graphical model for coreference resolution, even as these features change during inference. Our primary concern was cosine similarity, for which we investigated simhash and angle-preserving random projections. We also proposed a homomorphic version of minhash for Jaccard similarity. In the particular case of simhash, we presented a new variant of the original estimator and analyzed its statistical properties, including variance, bias and consistency, to help justify its use in a probabilistic model. We found that both simhash and angle-preserving random projections were sufficiently accurate, given enough dimensions, to represent features for coreference, and that the random projections produce slightly better estimates. Moreover, these representations were an order of magnitude faster than conventional sparse data structures, laying the foundation for greater scalability.
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
H1gwRx5T6Q
We employ linear homomorphic compression schemes to represent the sufficient statistics of a conditional random field model of coreference and this allows us to scale inference and improve speed by an order of magnitude.
Motivated by applications to unsupervised learning, we consider the problem of measuring mutual information. Recent analysis has shown that naive kNN estimators of mutual information have serious statistical limitations motivating more refined methods. In this paper we prove that serious statistical limitations are inherent to any measurement method. More specifically, we show that any distribution-free high-confidence lower bound on mutual information cannot be larger than $O(\ln N)$ where $N$ is the size of the data sample. We also analyze the Donsker-Varadhan lower bound on KL divergence in particular and show that, when simple statistical considerations are taken into account, this bound can never produce a high-confidence value larger than $\ln N$. While large high-confidence lower bounds are impossible, in practice one can use estimators without formal guarantees. We suggest expressing mutual information as a difference of entropies and using cross entropy as an entropy estimator. We observe that, although cross entropy is only an upper bound on entropy, cross-entropy estimates converge to the true cross entropy at the rate of $1/\sqrt{N}$. Motivated by maximal mutual information (MMI) predictive coding BID11 BID16 BID13, we consider the problem of measuring mutual information. A classical approach to this problem is based on estimating entropies by computing the average log of the distance to the kth nearest neighbor in a sample BID7. It has recently been shown that the classical kNN methods have serious statistical limitations and more refined kNN methods have been proposed BID5. Here we establish serious statistical limitations on any method of estimating mutual information. More specifically, we show that any distribution-free high-confidence lower bound on mutual information cannot be larger than O(ln N) where N is the size of the data sample. Prior to proving the general case, we consider the particular case of the Donsker-Varadhan lower bound on KL divergence BID3 BID1. We observe that when simple statistical considerations are taken into account, this bound can never produce a highconfidence value larger than ln N. Similar comments apply to lower bounds based on contrastive estimation. The contrastive estimation lower bound given in BID13 does not establish mutual information of more than ln k where k is number of negative samples used in the contrastive choice. The difficulties arise in cases where the mutual information I(x, y) is large. Since I(x, y) = H(y) − H(y|x) we are interested in cases where H(y) is large and H(y|x) is small. For example consider the mutual information between an English sentence and its French translation. Sampling English and French independently will (almost) never yield two sentences where one is a plausible translation of the other. In this case the DV bound is meaningless and contrastive estimation is trivial. In this example we need a language model for estimating H(y) and a translation model for estimating H(y|x). Language models and translation models are both typically trained with crossentropy loss. Cross-entropy loss can be used as an (upper bound) estimate of entropy and we get an estimate of mutual information as a difference of cross-entropy estimates. Note that the upper-bound guarantee for the cross-entropy estimator yields neither an upper bound nor a lower bound guarantee for a difference of entropies. 
Similar observations apply to measuring the mutual information for pairs of nearby frames of video or pairs of sound waves for utterances of the same sentence. We are motivated by the problem of maximum mutual information predictive coding BID11 BID16 BID13. One can formally define a version of MMI predictive coding by considering a population distribution on pairs (x, y) where we think of x as past raw sensory signals (images or sound waves) and y as a future sensory signal. We consider the problem of learning stochastic coding functions C x and C y so as to maximize the mutual information I(C x (x), C y (y)) while limiting the entropies H(C x (x)) and H(C y (y)). The intuition is that we want to learn representations C x (x) and C y (y) that preserve "signal" while removing "noise". Here signal is simply defined to be a low entropy representation that preserves mutual information with the future. Forms of MMI predictive coding have been independently introduced in BID11 under the name "information-theoretic cotraining" and in BID13 under the name "contrastive predictive coding". It is also possible to interpret the local version of DIM (DIM(L)) as a variant of MMI predictive coding. A closely related framework is the information bottleneck BID17. Here one again assumes a population distribution on pairs (x, y). The objective is to learn a stochastic coding function C x so as to maximize I(C x (x), y) while minimizing I(C x (x), x). Here one does not ask for a coding function on y and one does not limit H(C x (x)).Another related framework is INFOMAX BID8 BID2. Here we consider a population distribution on a single random variable x. The objective is to learn a stochastic coding function C x so as to maximize the mutual information I(x, C x (x)) subject to some constraint or additional objective. As mentioned above, in cases where I(C x (x), C y (y)) is large it seems best to train a model of the marginal distribution of P (C y) and a model of the conditional distribution P (C y |C x) where both models are trained with cross-entropy loss. Section 5 gives various high confidence upper bounds on cross-entropy loss for learned models. The main point is that, unlike lower bounds on entropy, high-confidence upper bounds on cross-entropy loss can be guaranteed to be close to the true cross entropy. Out theoretical analyses will assume discrete distributions. However, there is no loss of generality in this assumption. Rigorous treatments of probability (measure theory) treat integrals (either Riemann or Lebesgue) as limits of increasingly fine binnings. A continuous density can always be viewed as a limit of discrete distributions. Although our proofs are given for discrete case, all our formal limitations on the measurement of mutual information apply to continuous case as well. See BID9 for a discussion of continuous information theory. Additional comments on this point are given in section 4. Mutual information can be written as a KL divergence. DISPLAYFORM0 Here P X,Y is a joint distribution on the random variables X and Y and P X and P Y are the marginal distributions on X and Y respectively. The DV lower bound applies to KL-divergence generally. To derive the DV bound we start with the following observation for any distributions P, Q, and G on the same support. Our theoretical analyses will assume discrete distributions. 
DISPLAYFORM1 Note that achieves equality for G(z) = P (z) and hence we have DISPLAYFORM2 Here we can let G be a parameterized model such that G(z) can be computed directly. However, we are interested in KL(P X,Y, P X P Y) where our only access to the distribution P is through sampling. If we draw a pair (x, y) and ignore y we get a sample from P X. We can similarly sample from P Y. So we are interested in a KL-divergence KL(P, Q) where our only access to the distributions P and Q is through sampling. Note that we cannot evaluate by sampling from P because we have no way of computing Q(z). But through a change of variables we can convert this to an expression restricted to sampling from Q. More specifically we define G(z) in terms of an unconstrained function F (z) as DISPLAYFORM3 Substituting FORMULA3 into FORMULA2 gives DISPLAYFORM4 Equation FORMULA4 is the Donsker-Varadhan lower bound. Applying this to mutual information we get DISPLAYFORM5 This is the equation underlying the MINE approach to maximizing mutual information BID1. It would seem that we can estimate both terms in through sampling and be able to maximize I(X, Y) by stochastic gradient ascent on this lower bound. In this section we show that the DV bound cannot be used to measure KL-divergences of more than tens of bits. In fact we will show that no high-confidence distribution-free lower bound on KL divergence can be used for this purpose. As a first observation note that involves E z∼Q e F (z). This expression has the same form as the moment generating function used in analyzing large deviation probabilities. The utility of expectations of exponentials in large deviation theory is that such expressions can be dominated by extremely rare events (large deviations). The rare events dominating the expectation will never be observed by sampling from Q. It should be noted that the optimal value for F (z) in is ln(P (z)/Q(z)) in which case the right hand side of simplifies to KL(P, Q). But for large KL divergence we will have that F (z) = ln(P (z)/Q(z)) is typically hundreds of bits and this is exactly the case where E z∼Q e F (z) cannot be measured by sampling from Q. If E z∼Q e F (z) is dominated by events that will never occur in sampling from Q then the optimization of F through the use of and sampling from Q cannot possibly lead to a function F (z) that accurately models the desired function ln(P (z)/Q(z)).To quantitatively analyze the risk of unseen outlier events we will make use of the following simple lemma where we write P z∼Q (Φ[z]) for the probability over drawing z from Q that the statement DISPLAYFORM0 Outlier Risk Lemma: For a sample S ∼ Q N with N ≥ 2, and a property Φ[z] such that P z∼Q (Φ[z]) ≤ 1/N, the probability over the draw of S that no z ∈ S satisfies Φ[z] is at least 1/4. Proof: The probability that Φ[z] is unseen in the sample is at least (1 − 1/N) N which is at least 1/4 for N ≥ 2 and where we have DISPLAYFORM1 We can use the outlier risk lemma to perform a quantitative risk analysis of the DV bound. We can rewrite as DISPLAYFORM2 We can try to estimate B(P, Q, G) from samples S P and S Q, each of size N, from the population distributions P and Q respectively. DISPLAYFORM3 While B(P, Q, F) is a lower bound on KL(P, Q), the sample estimateB(S P, S Q, F) is not. To get a high confidence lower bound on KL(P, Q) we have to handle unseen outlier risk. For a fair comparison with our analysis of cross-entropy estimators in section 5, we will limit the outlier risk by bounding F (z) to the interval [0, F max]. 
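The following toy simulation (ours, not an experiment from the paper) illustrates the danger: with a critic F clipped to [0, F max], the naive sample estimate B̂(S P, S Q, F) routinely reaches F max, far above ln N, on a mixture distribution that places probability 1/N on P-like outliers and whose true divergence from P is therefore at most ln N; the distributions and the critic are arbitrary choices made for illustration.

```python
import numpy as np

def dv_estimate(samples_p, samples_q, f):
    """Naive sample version of the DV bound: mean_P[F] - ln mean_Q[exp(F)]."""
    return f(samples_p).mean() - np.log(np.exp(f(samples_q)).mean())

rng = np.random.default_rng(0)
N, F_MAX = 10_000, 30.0

def f(z):
    """Clipped critic in [0, F_MAX] that fires on P-like points."""
    return np.where(z > 10.0, F_MAX, 0.0)

samples_p = rng.normal(20.0, 1.0, N)                 # sample from P
estimates = []
for _ in range(20):
    # The mixture puts mass 1/N on P-like outliers, so its KL from P is at most ln N ~ 9.2.
    outlier = rng.random(N) < 1.0 / N
    q_tilde = np.where(outlier, rng.normal(20.0, 1.0, N), rng.normal(0.0, 1.0, N))
    estimates.append(dv_estimate(samples_p, q_tilde, f))

# Whenever the (at least 1/4 probability) event "no outlier drawn" occurs,
# the naive estimate equals F_MAX, far above the true divergence.
print(np.log(N), [round(e, 1) for e in estimates])
```

The argument below makes this failure mode precise.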
The largest possible value ofB(S P, S q, F) occurs when F (z) = F max for all z ∈ S P and F (z) = 0 for all z ∈ S Q. In this case we getB(S P, S Q, F) = F max. But by the outlier risk lemma there is still at least a 1/4 probability that DISPLAYFORM4 Any high confidence lower boundB(S P, S Q, F) must account for the unseen outlier risk. In particular we must haveB DISPLAYFORM5 Our negative can be strengthened by considering the preliminary bound where G(z) is viewed as a model of P (z). We can consider the extreme case of perfect modeling of the population P with a model G(z) where G(z) is computable. In this case we have essentially complete access to the distribution P. But even in this setting we have the following negative . Theorem 1 Let B be any distribution-free high-confidence lower bound on KL(P,Q) computed with complete knowledge of P but only a sample from Q.More specifically, let B(P, S, δ) be any real-valued function of a distribution P, a multiset S, and a confidence parameter δ such that, for any P, Q and δ, with probability at least (1 − δ) over a draw of S from Q N we have KL(P, Q) ≥ B(P, S, δ). For any such bound, and for N ≥ 2, with probability at least 1 − 4δ over the draw of S from Q N we have B(P, S, δ) ≤ ln N.Proof. Consider distributions P and Q and N ≥ 2. DefineQ bỹ DISPLAYFORM6 We now have KL(P,Q) ≤ ln N. We will prove that from a sample S ∼ Q N we cannot reliably distinguish between Q andQ.We first note that by applying the high-confidence guarantee of the bound toQ have DISPLAYFORM7 The distributionQ equals the marginal on z of a distribution on pairs (s, z) where s is the value of Bernoulli variable with bias 1/N such that if s = 1 then z is drawn from P and otherwise z is drawn from Q. By the outlier risk lemma the probability that all coins are zero is at least 1/4. Conditioned on all coins being zero the distributionsQ N and Q N are the same. Let Pure(S) represent the event that all coins are 0 and let Small(S) represent the event that B(P, S, δ) ≤ ln N. We now have DISPLAYFORM8 Mutual information is a special case of KL-divergence. It is possible that tighter lower bounds can be given in this special case. In this section we show similar limitations on lower bounding mutual information. We first note that a lower bound on mutual information implies a lower bound on entropy. The mutual information between X and Y cannot be larger than information content of X alone. So a lower bound on I(X, Y) gives a lower bound on H(X). We show that any distribution-free high-confidence lower bound on entropy requires a sample size exponential in the size of the bound. The above argument seems problematic for the case of continuous densities as differential entropy can be negative. However, for the continuous case we have DISPLAYFORM0 where C x and C y range over all maps from the underlying continuous space to discrete sets (all binnings of the continuous space). Hence an O(ln N) upper bound on the measurement of mutual information for the discrete case applies to the continuous case as well. The type of a sample S, denoted T (S), is defined to be a function on positive integers (counts) where T (S)(i) is the number of elements of S that occur i times in S. For a sample of N draws we have N = i iT (S)(i). The type T (S) contains all information relevant to estimating the actual probability of the items of a given count and of estimating the entropy of the underlying distribution. 
The problem of estimating distributions and entropies from sample types has been investigated by various authors BID12 BID15 BID14 BID0. Here we give the following negative on lower bounding the entropy of a distribution by sampling. Theorem 2 Let B be any distribution-free high-confidence lower bound on H(P) computed from a sample type T (S) with S ∼ P N.More specifically, let B(T, δ) be any real-valued function of a type T and a confidence parameter δ such that for any P, with probability at least (1 − δ) over a draw of S from P N, we have For any such bound, and for N ≥ 50 and k ≥ 2, with probability at least 1 − δ − 1.01/k over the draw of S from P N we have B(T (S), δ) ≤ ln 2kN 2.Proof: Consider a distribution P and N ≥ 100. If the support of P has fewer than 2kN 2 elements then H(P) < ln 2kN 2 and by the premise of the theorem we have that, with probability at least 1 − δ over the draw of S, B(T (S), δ) ≤ H(P) and the theorem follows. If the support of P has at least 2kN 2 elements then we sort the support of P into a (possibly infinite) sequence x 1, x 2, x 3,... so that P (x i) ≥ P (x i+1). We then define a distributionP on the elements x 1,..., x 2kN 2 bỹ DISPLAYFORM0 We will let Small(S) denote the event that B(T (S), δ) ≤ ln 2kN 2 and let Pure(S) abbreviate the event that no element x i for i > kN 2 occurs twice in the sample. SinceP has a support of size 2kN 2 we have H(P) ≤ ln 2kN 2. Applying the premise of the lemma toP gives DISPLAYFORM1 For a type T let P S∼P N (T) denote the probability over drawing S ∼ P N that T (S) = T. We now have DISPLAYFORM2 This gives the following. DISPLAYFORM3 For i > kN 2 we haveP (x i) ≤ 1/(kN 2) which gives DISPLAYFORM4 Using (1 − P) ≥ e −1.01 P for P ≤ 1/100 we have the following birthday paradox calculation. DISPLAYFORM5 Applying the union bound to FORMULA17 and FORMULA21 gives. DISPLAYFORM6 By a derivation similar to that of we get DISPLAYFORM7 Combining FORMULA19, FORMULA1 and FORMULA1 gives DISPLAYFORM8 Since mutual information can be expressed as a difference of entropies, the problem of measuring mutual information can be reduced to the problem of measuring entropies. In this section we show that, unlike high-confidence distribution-free lower bounds, high-confidence distribution-free upper bounds on entropy can approach the true cross entropy at modest sample sizes even when the true cross entropy is large. More specifically we consider the cross-entropy upper bound. DISPLAYFORM0 For G = P we get H(P, G) = H(P) and hence we have DISPLAYFORM1 In practice P is a population distribution and G is model of P. For example P might be a population distribution on paragraphs and G might be an autoregressive RNN language model. In practice G will be given by a network with parameters Φ. In this setting we have the following upper bound entropy estimator. Ĥ DISPLAYFORM2 The gap betweenĤ(P) and H(P) depends on the expressive power of the model class. The statistical limitations on distribution-free high-confidence lower bounds on entropy do not arise for cross-entropy upper bounds. For upper bounds we can show that naive sample estimates of the cross-entropy loss produce meaningful (large entropy) . We first define the cross-entropy estimator from a sample S.Ĥ DISPLAYFORM3 We can bound the loss of a model G by ensuring a minimum probability e −Fmax where F max is then the maximum possible log loss in the cross-entropy objective. In language modeling a loss bound exists for any model that ultimately backs off to a uniform distribution on characters. 
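To make the estimator Ĥ(S, G) and its convergence concrete, here is a toy numerical sketch with a synthetic population and an arbitrary smoothed model; the confidence half-width shown is a standard Hoeffding-style interval shrinking at the 1/√N rate, and the exact constants of Theorem 3 below are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, MIX = 1000, 0.1

# A high-entropy population P and a smoothed model G (mixing with uniform bounds -ln G).
p = rng.dirichlet(np.ones(VOCAB))
g = (1.0 - MIX) * rng.dirichlet(np.ones(VOCAB)) + MIX / VOCAB
F_MAX = np.log(VOCAB / MIX)                    # -ln G(x) <= F_MAX for every x

true_cross_entropy = -(p * np.log(g)).sum()    # H(P, G), an upper bound on H(P)

# Sample estimate and a Hoeffding-style confidence half-width at rate 1/sqrt(N).
N, delta = 10_000, 0.05
sample = rng.choice(VOCAB, size=N, p=p)
estimate = -np.log(g[sample]).mean()
half_width = F_MAX * np.sqrt(np.log(1.0 / delta) / (2.0 * N))

print(round(true_cross_entropy, 3), round(estimate, 3), round(half_width, 3))
```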
Given a loss bound of F max we have thatĤ(S, G) is just the standard sample mean estimator of an expectation of a bounded variable. In this case we have the following standard confidence interval. Theorem 3 For any population distribution P, and model distribution G with −ln G(x) bounded to the interval [0, F max], with probability at least 1 − δ over the draw of S ∼ P N we have DISPLAYFORM4 It is also possible to give PAC-Bayesian bounds on H(P, G Φ) that take into account the fact that G Φ is typically trained so as to minimize the empirical loss on the training data. The PAC-Bayesian bounds apply to"broad basin" losses and loss estimates such as the following. DISPLAYFORM0 Under mild smoothness conditions on G Φ (x) as a function of Φ we have DISPLAYFORM1 An L2 PAC-Bayesian generalization bound BID10 ) gives that for any parameterized class of models and any bounded notion of loss, and any λ > 1/2 and σ > 0, with probability at least 1 − δ over the draw of S from P N we have the following simultaneously for all parameter vectors Φ. DISPLAYFORM2 It is instructive to set λ = 5 in which case the bound becomes. DISPLAYFORM3 While this bound is linear in 1/N, and tighter in practice than square root bounds, note that there is a small residual gap when holding λ fixed at 5 while taking N → ∞. In practice the regularization parameter λ can be tuned on holdout data. One point worth noting is the form of the dependence of the regularization coefficient on F max, N and the basin parameter σ. It is also worth noting that the bound can be given in terms of "distance traveled" in parameter space from an initial (random) parameter setting Φ 0. DISPLAYFORM4 Evidence is presented in BID4 that the distance traveled bounds are tighter in practice than traditional L2 generalization bounds. Recall that in MMI predictive coding we assume a population distribution on pairs (x, y) where we think of x as past raw sensory signals (images or sound waves) and y as a future sensory signal. We then consider the problem of learning stochastic coding functions C x and C y that maximizes the mutual information I(C x (x), C y (y)) while limiting the entropies H(C x (x)) and H(C y (y)). Here we propose representing the mutual information as a difference of entropies. I(C x (x), C y (y)) = H(C y (y)) − H(C y (y)|C x (x))When the coding functions are parameterized by a function Ψ, the above quantities become a function of Ψ. We can then formulate the following nested optimization problem. The above quantities are expectations over the population distribution on pairs (x, y). In practice we have only a finite sample form the population. But the preceding section presents theoretical evidence that, unlike lower bound estimators, upper bound cross-entropy estimators can meaningfully estimate large entropies from feasible samples. DISPLAYFORM0 Maximum mutual information (MMI) predictive coding seems well motivated as a method of unsupervised pretraining of representations that maintain semantic signal while dropping uninformative noise. However, the maximization of mutual information is a difficult training objective. We have given theoretical arguments that representing mutual information as a difference of entropies, and estimating those entropies by minimizing cross-entropy loss, is a more statistically justified approach than maximizing a lower bound on mutual information. 
Unfortunately, cross-entropy upper bounds on entropy fail to provide either upper or lower bounds on mutual information: mutual information is a difference of entropies. We cannot rule out the possible existence of superintelligent models, models beyond current expressive power, that dramatically reduce cross-entropy loss. Lower bounds on entropy can be viewed as proofs of the non-existence of superintelligence. We should not be surprised that such proofs are infeasible.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
BkedwoC5t7
We give a theoretical analysis of the measurement and optimization of mutual information.
In this paper, we propose a neural network framework called neuron hierarchical network (NHN), that evolves beyond the hierarchy in layers, and concentrates on the hierarchy of neurons. We observe mass redundancy in the weights of both handcrafted and randomly searched architectures. Inspired by the development of human brains, we prune low-sensitivity neurons in the model and add new neurons to the graph, and the relation between individual neurons are emphasized and the existence of layers weakened. We propose a process to discover the best base model by random architecture search, and discover the best locations and connections of the added neurons by evolutionary search. Experiment show that the NHN achieves higher test accuracy on Cifar-10 than state-of-the-art handcrafted and randomly searched architectures, while requiring much fewer parameters and less searching time. Neural networks can be designed either by human experts or search algorithms, both of which have gained great success in image classification and language modeling BID45 BID33. Network architectures designed by both means are mostly layer-based or block-based, which means that the fundamental components are either operation layers or blocks that consist of several layers. A clear tendency can be observed that models with more parameters generally have better performances. It is a well-established fact that redundancy of parameters exists widely in handcrafted neural networks BID16 BID41 BID25. We find that such claim holds for architectures discovered by random search or evolutionary search as well. The pruning of unimportant neurons emphasizes the hierarchical relation between individual neurons. Additionally, the decrease in accuracy after parameter reduction is generally inevitable. Therefore, we propose a heuristic procedure to construct neuron-based network architectures by pruning redundant connections and neurons in layer-based models and adding new neurons to strengthen the neuron hierarchy while achieving competitive performances as layer-hierarchy models. Experiments show that NHN achieves higher test accuracy than DenseNet, SMASH BID4 and hierarchical representation with much fewer parameters. Handcrafted architectures. Successful convolutional neural networks (CNNs) designed by human experts can be sketchily categorized by the way data flow through the networks, i.e., plain networks and branching networks. A notable example of plain networks would be VGG nets BID37, where there are only one input and output path in each hidden layer. However, in a branching network, the computation flow splits somewhere in the network and merges in a latter layer BID0. The splitting and aggregation may occur multiple times in a single network. Many have discovered numerous branching network architectures whose performances surpass plain ones while requiring fewer parameters. Skip connections BID17 BID18 are increasingly popular in improving the performance of deep neural networks, and it becomes common to observe additional convolutions (or other forms of operations) stacked between large layers BID24 BID40. In fact, the "stacked-between" operations can be considered as part of a generalized residual block. Multi-branching computation graphs benefit addressing the gradient vanishing problem during the gradient descent training BID18. The distinguished techniques mentioned above (plus more listed in Table 1) share the same idea of weakening the hierarchy between layers by introducing complex paths to the data flow. 
The idea is further highlighted by architecture search algorithms. Random and evolutionary architectures. Machine learning algorithms evolve fast. Designing neural networks that perform remarkably on a given task requires ample experience. It has been found that neural networks are not only good at autonomically extracting useful features from raw data, but also capable of finding optimal network architectures to that end. Neural architecture search (NAS) has been attested to its ability to design network architectures for language modeling and image classification. However, candidate models have to be entirely randomly generated and fully trained, therefore NAS is extremely computation intensive, which dims its competitive performances to handcrafted architectures. Many efforts have been devoted to reducing the computational costs of NAS while ensuring sufficient capacity of search space BID47 BID5. Two major ideas to achieve the purpose are to design individual reusable cells rather than the entire networks or to share trained weights between models. Recently, BID33 proposed to describe each candidate model as a subgraph of a single directed acyclic graph (DAG). By sharing weights between submodels, the searching time of NAS is reduced by 1000X in term of GPU-hours. Genetic algorithms are also applied in searching the optimal architectures of CNNs BID29 BID43 BID34. BID35 proposed regularized evolution (RE) to remove the oldest models from the population, instead of removing the worst ones as in traditional tournament selection BID14. The best CNN model discovered by RE achieves the state-of-the-art performance on Cifar-10, i.e., 2.13% test accuracy on average. However, the number of parameters it requires is as large as nearly 35 million. Convolutional neural fabrics (CNF) BID36 BID42 and other forms of random search methods BID2 BID4 BID10 have been investigated as well. Layer-wise to neuron-wise hierarchy. Take the best model discovered by macro search in ENAS BID33 for example. The 12-layer CNN model contains over 21 million parameters and achieves 4.23% test accuracy on Cifar-10. If we remove 75% − 90% parameters in all 3 × 3 and 5 × 5 convolutions, the test accuracy is hardly compromised after the same duration of retraining. Even though the architectures in all the search methods are described as directed graphs, each node in these graphs represents either an operation layer (e.g. convolution or pooling layer) or an operation block (e.g. residual block). None of the nodes stands for an actual individual neuron in the network. On one hand, random search and evolutionary search tend to discover architectures that contain complex branching paths. On the other hand, the pruned versions of such architectures work nearly as well as intact ones. These facts bring about the hypothesis that the hierarchy of neurons should work as well as the hierarchy of layers. Please note that we do not simply replace layers with individual neurons, considering layers are composed of abundant neurons. We need the sufficient number of neurons to meet the feature representation requirements. A good hierarchical architecture of neurons may be discovered by either random search or evolutionary search, or combined. The search process must be carefully designed. 
In this paper, we propose a three-step procedure to discover the optimal neuron hierarchy for image classification (see FIG1): discover the optimal layer hierarchy with ENAS, prune unimportant neurons in the discovered layer-wise model, and randomly add new neurons to the pruned model to enrich the expressive capacity of the neuron-hierarchy network. It is worth pointing out that this three-step procedure also imitates the natural development of human brains. For example, the creation and searching by ENAS correspond to the mass neurogenesis before birth. The pruning of unimportant neurons corresponds to the apoptosis before puberty BID11. The addition of new neurons to the pruned model corresponds to the persisting neurogenesis during adulthood BID30. Although the existence of neurogenesis in mature brains is still being debated in the field of neuroscience BID38 BID3, the software simulation of such a process in our work indicates that it is helpful in improving the learning capability of neural networks. In this section, we propose a heuristic algorithm that derives, from traditional layer-based models, network architectures that emphasize the relation between individual neurons, by pruning the network and adding new neurons. We directly employ the macro search method proposed by BID33 to discover a 12-layer CNN architecture for image classification. Note that we need to distinguish the important neurons from the less important ones; thus we do not perform Dropout BID39 during the searching and re-training in ENAS. Even though Dropout has proved helpful for increasing the generalization power of networks, it distributes the feature-extracting capabilities evenly across all neurons, which makes it hard to tell which neurons are more important than others. We will, however, employ Dropout in the final fine-tuning stage. We use the cosine schedule BID28 with 4 warm restarts (for a total of 310 epochs) to adjust the learning rate during training, as in BID33, with a maximum value of 0.05 and a minimum of 0.001. Cutout is used to augment the training samples and improve the generalization of the model. The architecture discovered by ENAS is illustrated in FIG2 (the large blocks connected with solid lines). Network pruning BID16 BID25 has been used to get rid of unimportant parameters in CNNs. When followed by quantization and sparsification techniques, network pruning methods can considerably reduce the memory and bandwidth consumption of deep networks in distributed systems, but they require pre-learning the connectivity and a very long re-training period to restore performance. We employ the dynamic network pruning (DNP) method to remove redundant weights. Specifically, during the search for the optimal layer-hierarchy model, we replace the global average pooling BID24 with grouped convolution BID21, in which the group number equals the number of both input and output channels and the kernel sizes equal the sizes of the input feature maps, so that the grouped convolution maps each feature map to exactly one value. We also smooth all 3 × 3 and 5 × 5 convolution kernels with 3 × 3 Gaussian filters after initialization. After training the optimal model discovered by ENAS from scratch, we find that only a small portion of the trained kernels show apparent texture, while the majority of them are practically flat. We remove nearly 85% of the kernels based on standard deviation thresholds.
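As an illustration of this standard-deviation criterion, the following is a minimal PyTorch sketch of how nearly flat kernels could be masked out; the function name, the quantile-based threshold and the exact pruning ratio are our own illustrative choices, not the authors' implementation.

```python
import torch

def prune_flat_kernels(conv_weight, prune_ratio=0.85):
    """Zero out spatial kernels whose weights stay nearly flat after training.

    conv_weight: tensor of shape (out_channels, in_channels, k, k).
    Kernels with a low standard deviation carry little texture information
    and are treated as redundant, mirroring the std-threshold criterion above.
    """
    out_ch, in_ch = conv_weight.shape[:2]
    stds = conv_weight.reshape(out_ch, in_ch, -1).std(dim=-1)   # per-kernel std
    threshold = torch.quantile(stds.flatten(), prune_ratio)     # keep the ~15% most textured kernels
    mask = (stds > threshold).to(conv_weight.dtype)[..., None, None]
    return conv_weight * mask, mask
```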
An accuracy drop is observed on the pruned model, which indicates the need to improve its representation capability and leads to the idea of adding new neurons to the network. In order to distinguish the newly added neurons from the existing ones, we refer to the former as add-on neurons. Add-on neurons may be inserted between any two neurons that are originally located in different layers, since a connection within a layer would cause confusion about the computation order. The input and output neurons of an add-on neuron must share the same data dimension, so that we can define a standard rule for creating add-on neurons. Take the best architecture discovered by ENAS as an example. The 12-layer architecture is divided by two average pooling layers into three segments, each containing 4 convolution cells. The data dimensions are uniform within each segment; therefore, add-on neurons are not allowed to be inserted across separate segments. Two types of operations are allowed in an add-on neuron, i.e., 3 × 3 convolution and 5 × 5 convolution. Algorithm 1: Location-Direction Search (LDS). Input: a directed acyclic graph G, in which each node represents a layer in the layer-hierarchy network. Output: G̃ = G + A, in which A represents the discovered add-on neurons and their connections to the input network G. Require: population capacity K, cycle number T, training samples B_train, test samples B_test. The algorithm consists of a location-search phase followed by a direction-search phase. Location and direction search. In order to increase the search speed for add-on neurons, we propose a location-direction search (LDS) method that searches for the specific positions and connections independently: we first search for the most promising locations where add-on neurons may occur, and then decide on their specific input and output connections. The location search is carried out in an evolutionary fashion, as described in Algorithm 1. First, we define an initial population (GenerateLoc). Each individual in the population stands for a possible combination of add-on locations. During each evolution cycle, we train every model in the population for several iterations (M), crossover the best models (Recombine) to reproduce new models, mutate the best models (Mutate) to produce new models, and remove the worst models from the population. The crossover (Recombine) is performed as follows. Let p_l^1, p_l^2, p_l^3, p_l^4 denote the best 4 models, where lower indices are assigned to better models; the crossover pairs used to generate 5 new models are selected from these four models. After the best combination of add-on locations is discovered, an evolutionary search on add-on directions is performed. The direction of an add-on neuron (i.e., the choice of which neurons to connect with) at a given location is selected sparsely (GenerateDir), since we do not want the add-on neurons at the same location to effectively form a new layer. Because the number of different combinations of add-on directions can be extremely large, we simply re-generate all combinations at each evolution cycle to find the best one. No mutation or crossover is performed during direction search. For example, let us apply LDS to the first segment in FIG2, i.e., layers {0, 1, 2, 3}, as separately shown in FIG3. We define the location where one or several add-on neurons may occur as a new sparse convolution layer (e.g., a_0 in FIG2); a sketch of the evolutionary location-search loop is given below.
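Before enumerating the candidate locations of this example, here is a minimal sketch of the evolutionary location-search loop described above; `evaluate`, `recombine` and `mutate` are hypothetical callbacks standing in for the brief training/evaluation step and for Recombine and Mutate.

```python
def location_search(population, evaluate, recombine, mutate, cycles=300, keep_best=4):
    """Evolutionary loop over add-on locations (a sketch of Algorithm 1).

    evaluate(model): assumed to train the candidate for M iterations on a
    training batch and return its reward on a held-out batch.
    recombine / mutate: produce new candidates from the best ones.
    """
    for _ in range(cycles):
        scores = [evaluate(m) for m in population]
        ranked = [m for _, m in sorted(zip(scores, population), key=lambda t: -t[0])]
        best = ranked[:keep_best]
        children = recombine(best) + mutate(best)
        # newly produced candidates replace the worst-performing models
        population = ranked[:len(population) - len(children)] + children
    return max(population, key=evaluate)
```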
All possible locations for add-on neurons are {,,,,,}, in which tuple (a, b) means that add-on neurons are inserted between layer a and b, as illustrated as gray blocks in FIG3. If we enforce a restraint that only one input add-on location with the same kernel size is allowed for each existing layer and include the pooling layer into search space, the overall location search complexity will be 1.9 × 10 8 (which is (4!) 3 · (4!) 3, since there are three segments in the network and we are using two different kernel sizes of add-on neurons). When the optimal add-on locations are determined, we perform the direction search with the sparsity of 0.1, which means that we will leave out 90% of all possible connections and leave the remaining effective neurons as the search . Keeping neurons with high sensitivities is recommended, because we need the add-on neurons to improve the performance of the model. Therefore, the standard deviation thresholds are used to determine the optimal add-on directions. Weight sharing in add-on search. We adopt the idea of sharing weights between submodels by BID33 to increase the search speed in LDS. Specifically, we define all possible connections at all possible locations beforehand. Let W = {w l,d |l ∈ L, d ∈ D} denote all shared weights for LDS, in which w l,d denotes the weight at location l and direction d, and L and D stand for all possible locations and directions, respectively. When generating new subgraphs for location or direction search, we simply select the corresponding weights in W k,t = {w l,d |l ∈ L k,t, d ∈ D k,t}, in which (k, t) stands for the kth model in the search population at evolution cycle t. In general, all w l,d in W are trained in an asynchronous fashion. The proposed approach for neuron hierarchical networks can be described as Algorithm 2. Require: Number of layers L Initialize: Layer-hierarchy network G Layer hierarchy search DISPLAYFORM0 TrainG from scratch returnG Table 1: Cifar-10 test errors of NHN and other methods. The first block represents handcrafted architectures. The second block represents architectures discovered by random search methods. The third block represents evolutionary architectures. Note that "d/o" stands for Dropout, "s/s" stands for Shake-Shake regulation, "c/o" stands for Cutout, and "d/p" stands for Drop-path. Score = (e/2) 2 + (n/25) 2, where e denotes the test error in percentage and n denotes the number of parameters in million. Smaller score indicates better overall performance. *The performance of Evolving DNN on Cifar-10 is reported by. Parameters Error ScoreResNet-110 BID18 1.7M 6.43% 3.22 pre-act-ResNet-1001 BID19 10.2M 4.62% 2.35 WRN-40-10 + d/o BID46 56M 3.80% 2.94 WRN-28-20 + SGDR BID28 145.8M 3.74% 6.12 DenseNet 27.2M 3.74% 2.16 ResNeXt 68.1M 3.58% 3.26 DenseNet + s/s BID13 26.2M 2.86% 1.77CNF BID36 21.2M 7.43% 3.81 MetaQNN BID2 11.2M 6.92% 3.49 Budgeted CNF BID42 7.6M 5.12% 2.58 BID23 38.6M 4.60% 2.77 PPP-Net BID10 11.3M 4.36% 2.23 ENAS macro BID33 21.3M 4.23% 2.28 SMASH BID4 16.0M 4.03% 2.11 NAS + more filters 37.4M 3.65% 2.36 Block-QNN-S + more filters BID47 39.8M 3.54% 2.38 ENAS micro BID33 4.6M 3.54% 1.78 EAS by NT (DenseNet) BID5 10.7M 3.44% 1.77 Progressive NAS 3.2M 3.41% 1.71 CL on ResNeXt BID0 34.4M 3.17% 2.10 DISPLAYFORM0 Genetic CNN -7.10% -Evolving DNN * -7.30% -Large-scale evolution BID34 -5.40% -Evo. search on hier. repr. 61.3M 3.63% 3.05 AmoebaNet + c/o BID35 34 We are using the Cifar-10 dataset to train and evaluate our neuron hierarchy network. 
The Cifar-10 dataset consists of 50k 32 × 32 3-channel training images in 10 classes, and 10k test images. The standard pre-processing is applied, i.e., subtracting the mean and dividing the standard deviation on each channel. The standard data augmentation is used for the training set, i.e., padding the images to 40 × 40 and randomly cropping down to 32 × 32, then flipping horizontally by a probability of 0.5. All experiments are run on a single NVIDIA GTX 1080Ti video card, which has sufficient GPU memory for the searching and training of the NHN.Layer hierarchy search. We use the same hyperparameters as in BID33 to perform the macro search on the entire architecture, as explained in Section 2.1. It takes ENAS no more than 16 hours to find a layer-hierarchy model with test accuracy of 96.55% on Cifar-10 with Cutout. If trained without Cutout, the model's test accuracy is 95.08%. The re-training of the best model from scratch takes almost 9 hours. Network pruning. We perform the DNP method to prune low sensitivity connections in the best model discovered by ENAS, as described in Section 2.2. We also clean up the network by removing all the neurons with zero input or output connection after DNP. We prune nearly 85% of input connections of all 3 × 3 and 5 × 5 convolution layers on average. In total, more than 70% of the parameters are pruned. The overall pruning ratio decreases because of the immense applications of 1 × 1 convolutions in ENAS, which are not directly removed by DNP. Evident impact on the performance by pruning is observed. We re-train the pruned network for another 160 epochs, which takes about 5 hours. The test accuracy is slightly dropped to 96.20% (with Cutout).Neuron hierarchy search. The population capacity (K) for location search is set to 20. The individual model p k l is described as the gene code of length of 24, in which the first 12 genes indicate whether an add-on location exists with kernel size of 3 × 3 at each existing layer (since there are 3 segments and we need 4 genes for each segment), and the last 12 genes are for 5 × 5. A positive gene indicates there are add-on neurons connecting into this layer and the value denotes the input layer index. A negative gene value (-1) means that no add-on neuron is connected to this layer. Out of avoiding too many add-on neurons being added to the model, the gene value of -1 is generated by a probability of 0.8, and the rest of the values are sampled uniformly. Each model is trained for 25 iterations (M) with the batch size of 100, so that a complete training cycle of the population exactly traverses the entire training set of Cifar-10. Submodels are evaluated by single iteration (N = 1) with the batch size of 500. Considering the sparsity of add-on locations and that a slight change in the gene value could lead to dramatic changes in performance, we simply re-generate 5 new models instead of performing mutation on the best models. A cosine schedule for learning rate that gradually decreases from 0.05 to 0.001 is used. The location search takes around 10 hours (300 evolution cycles) to complete. The best 3 sets of discovered locations are illustrated in FIG2 (the small blocks). It is apparent that having more add-on neurons at low-level layers works better for image classification. The direction search is performed as described in Section 2.3. The direction search takes about 9 hours (300 cycles) to finish. 
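To make the gene encoding just described concrete, the following sketch shows how GenerateLoc could sample one individual of the K = 20 population; restricting the input layer to earlier layers of the same segment is our reading of the constraints in Section 2, not the authors' exact rule.

```python
import random

def generate_loc(num_layers=12, layers_per_segment=4, p_empty=0.8):
    """Sample one gene code of length 24 for the add-on location search (a sketch).

    The first 12 genes encode 3x3 add-on locations, the last 12 encode 5x5 ones.
    Gene value -1 means no add-on neurons feed into that layer; a value g >= 0
    means add-on neurons are inserted between layer g and this layer.
    """
    genes = []
    for _ in (3, 5):                                    # two kernel sizes
        for layer in range(num_layers):
            seg_start = (layer // layers_per_segment) * layers_per_segment
            candidates = list(range(seg_start, layer))  # earlier layers in the same segment
            if not candidates or random.random() < p_empty:
                genes.append(-1)                        # -1 is drawn with probability 0.8
            else:
                genes.append(random.choice(candidates))
    return genes

individual = generate_loc()   # one member of the search population
```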
Only weights in add-on neurons are trained and weights in existing neurons are fixed during the LDS, because we are trying to find the add-on neurons that are most compatible with existing neurons. It also saves a great number of computation resources when the majority of parameters are not trained. After the best locations and directions of add-on neurons have been discovered, we re-train the entire network from scratch. Cutout BID9 and Dropout BID39 are used for better generalization. The training schedule resembles the one used by BID33. Nesterov BID31 ) is used as the gradient descent method, in which the momentum term is set to 0.9 and l 2 regularization of 2 × 10 −4 is used. Gradient clipping BID32 with the threshold of 5.0 is used to overcome the potential gradient explosion. In fact, most of the weights (i.e. the weights from the original layer-wise network) have undergone three passes of training, i.e., during ENAS, after DNP, and after LDS. It is reasonable to believe that the overall training has seen several warm restarts, hence the reason why no warm restart is introduced in the training stages after DNP and LDS. The final training takes 6 hours (260 epochs) to complete. The final test accuracy of NHN is 96.52%. The performance of NHN in comparison with other methods is listed in Table 1. The total number of parameters in the NHN is approximately 7M. We can see that the test accuracy of NHN is better than most of the methods listed in Table 1. However, several methods achieve lower test errors than NHN, e.g. DenseNet with Shake-Shake, connectivity learning on ResNeXt and AmoebaNet with Cutout. However, the amounts of parameters of these methods are huge. In order to examine the overall performances of methods in terms of test errors and parameter numbers, we define the performance score as follows. score = (e/2) 2 + (n/25) 2, where e denotes test error (%) and n denotes number of parameters (M). A lower score value indicates better performance of the model. We can see that except for Progressive NAS which has marginally better performance score, the NHN is highly competitive against all other methods in terms of low test error and small parameter number. On the whole, the searching and training of the neuron hierarchy network take approximately 46 hours in total, in which the search time of neuron hierarchy is 10 hours. The search time of NHN, however, is indeed more than that of ENAS, since we actually perform a complete ENAS process among other operations during the search. When comparing to NAS ) (which takes 450 GPUs to search for 3-4 days), the speedup is as high as 700X-940X and parameter number is reduced by 81%. NHN achieves higher accuracy than SMASH BID4 with 56% fewer parameters. Moreover, it also has higher accuracy than hierarchical representation with 89% fewer parameters and a speedup in search time of 152X, when the latter costs 1.5 days on 200 GPUs to evaluate 7000 samples. Our approach achieves more than 5X speedup with 35% fewer parameters compared to the EAS by NT BID5, which costs 2 days on 5 GPUs to achieve similar accuracy. Compared to another method that achieves similar accuracy, Progressive NAS, NHN achieves speedup in GPU time of 50X. BID5 proposed a network architecture search method which allows layer transformations during architecture search by enforcing Actor Networks for widening or deepening. 
Although it allows more neurons and parameters being added to the network during architecture search, the models it discovers are still layer-based, thus redundancy in weights cannot be overlooked. BID0 proposed to learn the connectivity in ResNeXt. They create a uniform grid on which each node represents a residual block, and connections are only allowed between neighboring rows of nodes, whereas in our work, each node represents an actual neuron, the network is not necessarily described as a grid, and connections are not under strict restrains. The performance of the best architecture discovered by connectivity learning is impressive, but the method only applies to ResNeXt. We did not choose fixed or handcrafted architectures as the base model because we believe that experiments conducted on a randomly searched model would be more compelling. There are also pruning-related issues with fixed models, for example, performances of ResNets and DenseNet are extremely sensitive to pruning. The training from scratch on the pruned architecture is crucial, because without it, the model only has a test accuracy of 15.7%. The NHN is not built upon the by ENAS micro search even though it presents higher test accuracy while requiring fewer parameters than macro search. It is mainly due to the mass employment of depthwise separable convolution BID7 in which kernels are pairs of vectors and cannot be directly pruned. If we replace all the depthwise separable convolutions with normal convolutions, the micro model merely gains accuracy advantage of 0.3% over the macro model. However, it instead contains 67.8M parameters, which is more than 4 times of macro (16.3M). Also, it will consume more than 28GB of memory space to perform the layer-hierarchy search. LDS FIG2 show that add-on neurons at lower layers work better, which indicates that rich representation of low-level features is crucial to the performance of NHN. When comparing the final test accuracy (96.52%) to the network without any add-on neuron (96.20%), we know that add-on neurons are helpful in increasing the performance of NHN. In fact, perturbation on the add-on genes discovered by LDS almost always leads to degradation of performance, and the total ablation of added neurons in the final model causes accuracy drop of 1.08%, which proves that the search are optimal. The main goal of this paper is neither to comprehensively discuss the properties of neuron fields BID12, nor to investigate a training method on an entirely randomly generated neuron graph. We'd like to point out that it is quite possible to directly generate a large number of free neurons with somewhat arbitrary connections and train this "random neuron field" to address the same task presented in this work. However, because modern GPUs, or to be more precise, the computation softwares that run on these GPUs are mainly designed for dense 4-d tensor calculation. It is hard to efficiently train such random neuron field at present. Therefore, as sophisticated as our approach may seem, it's an efficient method to construct network architectures that highlight the significance of individual neurons and perform competitively against other state-of-the-art methods. Neural networks that are designed by human experts and search algorithms perform outstandingly in image classification. However, redundancy in parameters exists widely in layer-based architectures. 
We propose a heuristic method for constructing neuron-based architectures, namely neuron hierarchical networks (NHN), that reduces the redundancy in weights and emphasizes the relation between individual neurons. Experiments show that the NHN discovered based on ENAS and by location-direction search (LDS) outperforms the original ENAS architecture and many other handcrafted and randomly searched models on Cifar-10 classification, while requiring far fewer parameters. Moreover, the search time of NHN is low compared to several state-of-the-art network architecture search methods, while it achieves competitive performance.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rylxrsR9Fm
By breaking the layer hierarchy, we propose a 3-step approach to the construction of neuron-hierarchy networks that outperform NAS, SMASH and hierarchical representation with fewer parameters and shorter searching time.
Simulation is a useful tool in situations where training data for machine learning models is costly to annotate or even hard to acquire. In this work, we propose a reinforcement learning-based method for automatically adjusting the parameters of any (non-differentiable) simulator, thereby controlling the distribution of synthesized data in order to maximize the accuracy of a model trained on that data. In contrast to prior art that hand-crafts these simulation parameters or adjusts only parts of the available parameters, our approach fully controls the simulator with the actual underlying goal of maximizing accuracy, rather than mimicking the real data distribution or randomly generating a large volume of data. We find that our approach (i) quickly converges to the optimal simulation parameters in controlled experiments and (ii) can indeed discover good sets of parameters for an image rendering simulator in actual computer vision applications. In order to train deep neural networks, significant effort has been directed towards collecting largescale datasets for tasks such as machine translation BID16, image recognition BID7 or semantic segmentation BID14 BID6. It is, thus, natural for recent works to explore simulation as a cheaper alternative to human annotation BID11 BID19 BID18. Besides, simulation is sometimes the most viable way to acquire data for rare events such as traffic accidents. However, while simulation makes data collection and annotation easier, it is still an open question what distribution should be used to synthesize data. Consequently, prior approaches have used human knowledge to shape the generating distribution of the simulator BID20, or synthesized large-scale data with random parameters BID18. In contrast, this paper proposes to automatically determine simulation parameters such that the performance of a model trained on synthesized data is maximized. Traditional approaches seek simulation parameters that try to model a distribution that resembles real data as closely as possible, or generate enough volume to be sufficiently representative. By learning the best set of simulation parameters to train a model, we depart from the above in three crucial ways. First, the need for laborious human expertise to create a diverse training dataset is eliminated. Second, learning to simulate may allow generating a smaller training dataset that achieves similar or better performances than random or human-synthesized datasets BID18, thereby saving training resources. Third, it allows questioning whether mimicking real data is indeed the best use of simulation, since a different distribution might be optimal for maximizing a test-time metric (for example, in the case of events with a heavy-tailed distribution).More formally, a typical machine learning setup aims to learn a function h θ that is parameterized by θ and maps from domain X to range Y, given training samples (x, y) ∼ p(x, y). Data x usually arises from a real world process (for instance, someone takes a picture with a camera) and labels y are often annotated by humans (someone describing the content of that picture). The distribution p(x, y) is assumed unknown and only an empirical sample D = {(x i, y i)} N i=1 is available. The simulator attempts to model a distribution q(x, y; ψ). In prior works, the aim is to adjust the form of q and parameters ψ to mimic p as closely as possible. 
In this work, we attempt to automatically learn the parameters of the simulator ψ such that the loss L of a machine learning model h θ is minimized over some validation data set D val. This objective can be formulated as the bi-level optimization problem DISPLAYFORM0 where h θ is parameterized by model parameters θ, D q(x,y| ψ) describes a data set generated by the simulator and θ(ψ) denotes the implicit dependence of the model parameters θ on the model's training data and consequently, for synthetic data, the simulation parameters ψ. In contrast to BID10, who propose a similar setup, we focus on the actual data generation process q(x, y; ψ) and are not limited to selecting subsets of existing data. In our formulation, the upper-level problem (equation 1a) can be seen as a meta-learner that learns how to generate data (by adjusting ψ) while the lower-level problem (equation 1b) is the main task model (MTM) that learns to solve the actual task at hand. In Section 2, we describe an approximate algorithm based on policy gradients BID24 to optimize the objective 1. For our algorithm to interact with a black-box simulator, we also present an interface between our model's output ψ and the simulator input. In various experiments on both toy data and real computer vision problems, Section 4 analyzes different variants of our approach and investigates interesting questions, such as: "Can we train a model h θ with less but targeted high-quality data?", or "Are simulation parameters that approximate real data the optimal choice for training models?". The experiments indicate that our approach is able to quickly identify good scene parameters ψ that compete and in some cases even outperform the actual validation set parameters for synthetic as well as real data, on computer vision problems such as object counting or semantic segmentation. Given a simulator that samples data as (x, y) ∼ q(x, y; ψ), our goal is to adjust ψ such that the MTM h θ trained on that simulated data minimizes the risk on real data (x, y) ∼ p(x, y). Assume we are given a validation set from real data D val and we can sample synthetic datasets D q(x,y| ψ) ∼ q(x, y| ψ). Then, we can can train h θ on D q(x,y| ψ) by minimizing equation 1b. Note the explicit dependence of the trained model parameters θ * on the underlying data generating parameters ψ in equation 1b. To find ψ *, we minimize the empirical risk over the held-out validation set D val, as defined in equation 1a. Our desired overall objective function can thus be formulated as the bi-level optimization problem in equation 1.Attempting to solve it with a gradient-based approach poses multiple constraints on the lower-level problem 1b like smoothness, twice differentiability and an invertible Hessian BID2 BID5. For our case, even if we choose the model h θ to fulfill these constraints, the objective would still be non-differentiable as we (i) sample from a distribution that is parameterized by the optimization variable and (ii) the underlying data generation process (e.g., an image rendering engine) is assumed non-differentiable for the sake of generality of our approach. In order to cope with the above defined objective, we resort to policy gradients BID24 to optimize ψ. Our goal is to generate a synthetic dataset such that the main task model (MTM) h θ, when trained on this dataset until convergence, achieves maximum accuracy on the test set. The test set is evidently not available during train time. 
Thus, the task of our algorithm is to maximize MTM's performance on the validation set by generating suitable data. Similar to reinforcement learning, we define a policy π ω parameterized by ω that can sample parameters ψ ∼ π ω for the simulator. The simulator can be seen as a generative model G(x, y| ψ) which generates a set of data samples (x, y) conditioned on ψ. We provide more details on the interface between the policy and the data generating function in the following section and give a concrete example for computer vision applications in Section 4. The Reward R Figure 1: A high-level overview of our "learning to simulate" approach. A policy π ω outputs parameters ψ which are used by a simulator to generate a training dataset. The main task model (MTM) is then trained on this dataset and evaluated on a validation set. The obtained accuracy serves as reward signal R for the policy on how good the synthesized dataset was. The policy thus learns how to generate data to maximize the validation accuracy.policy receives a reward that we define based on the accuracy of the trained MTM on the validation set. Figure 1 provides a high-level overview. Specifically, we want to maximize the objective DISPLAYFORM0 with respect to ω. The reward R is computed as the negative loss L or some other accuracy metric on the validation set. Following the REINFORCE rule we obtain gradients for updating ω as DISPLAYFORM1 An unbiased, empirical estimate of the above quantity is DISPLAYFORM2 where k = R(ψ k) − b is the advantage estimate and b is a baseline that we choose to be an exponential moving average over previous rewards. In this empirical estimate, K is the number of different datasets D q(x,y| ψ k) sampled in one policy optimizing batch and R(ψ k) designates the reward obtained by the k-th MTM trained until convergence. Given the basic update rule for the policy π ω, we can design different variants of our algorithm for learning to simulate data by introducing three control knobs. First, we define the number of training epochs ξ of the MTM in each policy iteration as a variable. The intuition is that a reasonable reward signal may be obtained even if MTM is not trained until full convergence, thus reducing computation time significantly. Second, we define the size M of the data set generated in each policy iteration. Third, we either choose to retain the MTM parameters θ from the previous iteration and fine-tune on the newly created data or we estimate θ from scratch (with a random initialization). This obviously is a trade-off because by retaining parameters the model has seen more training data in total but, at the same time, may be influenced by suboptimal data in early iterations. We explore the impact of these three knobs in our experiments and appendix. Algorithm 1 summarizes our approach. DISPLAYFORM3 Train or fine-tune K main task models (MTM) for ξ epochs on data provided by M k Obtain rewards R(ψ k), i.e., the accuracy of the trained MTMs on the validation set Compute the advantage estimate k = R(ψ k) − b Update the policy parameters ω via equation 4 end Algorithm 1: Our approach for "learning to simulate" based on policy gradients. We defined a general black-box simulator as a distribution G(x, y| ψ) over data samples (x, y) parameterized by ψ. 
In practice, a simulator is typically composed of a deterministic "rendering" process R and a sampling step as DISPLAYFORM0, where the actual data description ρ (e.g., what objects are rendered in an image) is sampled from a distribution S parametrized by the provided simulation parameters ψ and specific rendering settings φ (e.g., lighting conditions) are sampled from a distribution P also parameterized by ψ. To enable efficient sampling (via ancestral sampling) BID1, the data description distribution is often modeled as a Bayesian network (directed acyclic graph) where ψ defines the parameters of the distributions in each node, but more complex models are possible too. The interface to the simulator is thus ψ which describes parameters of the internal probability distributions of the black-box simulator. Note that ψ can be modeled as an unconstrained continuous vector and still describe various probability distributions. For instance, a continuous Gaussian is modeled by its mean and variance. A K-dimensional discrete distribution is modeled with K real values. We assume the black-box normalizes the values to a proper distribution via a softmax. With this convention all input parameters to the simulator are unconstrained continuous variables. We thus model our policy as the multivariate Gaussian π ω (ρ, φ) = N (ω, σ 2) with as many dimensions as the sum of the dimensions of parameters ρ and φ. For simplicity, we only optimize for the mean and set the variance to 0.05 in all cases, although the policy gradients defined above can handle both. Note that our policy can be extended to a more complicated form, e.g., by including the variance. The proposed approach can be seen as a meta-learner that alters the data a machine learning model is trained on to achieve high accuracy on a validation set. This concept is similar to recent papers that learn policies for neural network architectures BID25 and optimizers BID0. In contrast to these works, we are focusing on the data generation parameters and actually create new, randomly sampled data in each iteration. While BID10 proposes a subset selection approach for altering the training data, we are actually creating new data. This difference is important because we are not limited to a fixed probability distribution at data acquisition time. We can thus generate or oversample unusual situations that would otherwise not be part of the training data. Similar to the above-mentioned papers, we also choose a variant of stochastic gradients (policy gradients BID24) to overcome the non-differentiable sampling and rendering and estimate the parameters of the policy π ω. While alternatives for black-box optimization exist, like evolutionary algorithms BID21 or sampling-based methods BID1, we favor policy gradients in this work for their sample efficiency and success in prior art. BID13 train a policy to generate a program that creates a copy of an input image. Similar to us, they use policy gradients to train the policy, although they use an adversarial loss to construct their reward. Again, BID15 seek to tune parameters of a simulator such that the marginal distribution of the synthetic data matches the distribution of the observed data. In contrast to both works, we learn parameters of a simulator that maximize performance of a main task model on a specific task. The learned distribution need not match the distribution of the observed data. 
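To make the policy update concrete, the following is a minimal sketch of one policy-gradient step with a Gaussian policy over unconstrained simulator parameters and an exponential-moving-average baseline; `simulate_and_train` is a hypothetical callback that renders a dataset with parameters ψ, trains the main task model on it, and returns the validation reward, and the hyperparameter values are illustrative rather than the authors' settings.

```python
import numpy as np

def reinforce_step(omega, baseline, simulate_and_train, K=5, sigma=0.05, lr=0.01, beta=0.9):
    """One 'learning to simulate' policy update (a sketch, not the authors' code)."""
    grad = np.zeros_like(omega)
    rewards = []
    for _ in range(K):
        psi = omega + sigma * np.random.randn(*omega.shape)    # psi_k ~ N(omega, sigma^2)
        reward = simulate_and_train(psi)                        # R(psi_k): validation reward of trained MTM
        advantage = reward - baseline                           # A_k = R(psi_k) - b
        # score function of an isotropic Gaussian: grad_omega log pi = (psi - omega) / sigma^2
        grad += advantage * (psi - omega) / sigma**2
        rewards.append(reward)
    omega = omega + lr * grad / K                               # gradient ascent on E[R]
    baseline = beta * baseline + (1 - beta) * np.mean(rewards)  # exponential moving average of rewards
    return omega, baseline
```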
When relying on simulated data for training machine learning models, the issue of "domain gap" between real and synthetic data arises. Many recent works BID12 BID23 focus on bridging this domain gap, particularly for computer vision tasks. Even if we are able to tune parameters perfectly, there exists a simulation-to-real domain gap which needs to be addressed. Thus, we believe the contributions of our work are orthogonal. The intent of our experimental evaluation is (i) to illustrate the concept of our approach in a controlled toy experiment (section 4.1), (ii) to analyze different properties of the proposed algorithm 1 on a high-level computer vision task (section 4.3) and (iii) to demonstrate our ideas on real data for semantic image segmentation (section 4.5). The decision boundaries (shaded areas) of a non-linear SVM trained on data generated by q(x, y| ψ i) for three different iterations i of our policy π ω. The data points overlaid are the test set. Bottom row: Decision boundary when trained on data sampled from p(x, y| ψ real) (left) and on the converged parameters ψ * (middle); Data sampled from q(x, y| ψ *) (right). To illustrate the concept of our proposed ideas we define a binary classification task on the 2-dimensional Euclidean space, where data distribution p(x, y| ψ real) of the two classes is represented by Gaussian mixture models (GMM) with 3 components, respectively. We generate validation and test sets from p(x, y| ψ real). Another GMM distribution q(x, y| ψ) reflects the simulator that generates training data for the main task model (MTM) h θ, which is a non-linear SVM with RBF-kernels in this case. To demonstrate the practical scenario where a simulator is only an approximation to the real data, we fix the number of components per GMM in ψ to be only 2 and let the policy π ω only adjust mean and variances of the GMMs. Again, the policy adjusts ψ such that the accuracy (i.e., reward R) of the SVM is maximized on the validation set. The top row of figure 2 illustrates how the policy gradually adjusts the data generating distribution q(x, y| ψ) such that reward R is increased. The learned decision boundaries in the last iteration (right) well separate the test data. The bottom row of figure 2 shows the SVM decision boundary when trained with data sampled from p(x, y| ψ real) (left) and with the converged parameters ψ * from the policy (middle). The third figure in the bottom row of figure 2 shows samples from q(x, y| ψ *). The sampled data from the simulator is clearly different than the test data, which is obvious given that the simulator's GMM has less components per class. However, it is important to note that the decision boundaries are still learned well for the task at hand. For the following experiments we use computer vision applications and thus require a generative scene model and an image rendering engine. We focus on traffic scenes as simulators/games for this scenario are publicly available (CARLA BID8 with Unreal engine as backend). However, we need to note that substantial extensions were required to actually generate different scenes according to a scene model rather than just different viewpoints of a static map. Many alternative simulators like BID17 BID18 BID22 are similar where an agent can navigate a few pre-defined maps, but the scene itself is not parameterized and cannot be changed on the fly. 
To actually synthesize novel scenes, we first need a model S(ρ | ψ) that allows us to sample instances of scenes ρ given the parameters ψ of the probability distributions of the scene model. Recall that ψ is produced by our learned policy π_ω. Our traffic scene model S(ρ | ψ) handles different types of intersections, various car models driving on the road, road layouts and buildings on the side. Additionally, our rendering model P(φ | ψ) handles weather conditions. Please see the appendix for more details. In our experiments, the model is free to adjust some of the variables, e.g., the probability of cars being on the road, weather conditions, etc. Given these two distributions, we can sample a new scene and render it as R(S(ρ | ψ), P(φ | ψ)). Figure 3 shows examples of rendered scenes. As a first high-level computer vision task we choose counting cars in rendered images, where the goal is to train a convolutional neural network h_θ to count all instances individually for five types of cars in an image. The evaluation metric (and also the loss) is the ℓ1 distance between predicted and ground-truth counts, averaged over the different car types. The reward R is the negative ℓ1 loss. For this experiment, we generate validation and test sets with a fixed and pre-defined distribution ψ_real. Figure 4: (b) shows the reward on the unseen test set; we observe that even when using the "adversarial" initialization of parameters, our approach converges to the same reward R, but at a slower rate. (c) shows the reward on the unseen test set for different random parameter initializations. Initialization: We first evaluate our proposed policy (dubbed "LTS" in the figures) for two different initializations, a "standard" random one and an initialization that is deliberately picked to be suboptimal (dubbed "adversarial"). We also compare with a model trained on a data set sampled with ψ_real, i.e., the test set parameters. Figure 4 presents our results. We can see in figure 4a that our policy π_ω ("LTS") quickly reaches a high reward R on the validation set, equal to the reward we get when training models with ψ_real. Figure 4b shows that high rewards are also obtained on the unseen test set for both initializations, although convergence is slower for the adversarial initialization. The reward after convergence is comparable to that of the model trained on ψ_real. Since our policy π_ω is inherently stochastic, we show in figure 4c the convergence for different random initializations and observe very stable behavior. Accumulating data: Next, we explore the difference between training the MTM h_θ from scratch in each policy iteration and retaining its parameters and fine-tuning (see Algorithm 1). We call the second option the "accumulated main task model" (AMTM) because it is not re-initialized and accumulates information over policy iterations. The intention of this experiment is to analyze the situation where the simulator is used for generating large quantities of data, as in BID18. First, by comparing figures 4b and 5a, we observe that the reward R gets significantly higher than when training the MTM from scratch in each policy iteration. Note that we still use the MTM reward as our training signal; we only observe the AMTM reward for evaluation purposes. For the case of accumulating the MTM parameters, we further compare with two baselines.
First, replicating a hand-crafted choice of simulation parameters, we assume no domain expertise and (a) (b) (c) Figure 5: (a) Reward R of the accumulated main task model on the car-counting task using different training schemes. We observe that training an accumulated main task network using a learning policy at each step is superior to training it with a dataset generated either using the final parameters of the policy or random parameters. (b) Learning-to-simulate converges even with an "adversarial" initialization of parameters, albeit in more epochs. (c) Reward of the accumulated main task model using different number of training epochs ξ for h θ.randomly sample simulator parameters (within a sensible range) in each iteration ("random policy params"). Second, we take the parameters given by our learned policy after convergence ("final policy params"). For reference, we train another AMTM with the ground truth validation set parameters ("validation params") as our upper-bound. All baselines are accumulated main task models, but with fixed parameters for sampling data, i.e., resembling the case of generating large datasets. We can see from figure 5a that our approach gets very close to the ground truth validation set parameters and significantly outperforms the random parameter baseline. Interestingly, "LTS" even outperforms the "final policy params" baseline, which we attribute to increased variety of the data distribution. Again, "LTS" converges to a high reward R even with an adversarial initialization, see figure 5bNumber of epochs: Similar to the previous experiment, we now analyze the impact of the number of epochs ξ used to train the main task model h θ in the inner loop of learning to simulate. Figure 5c shows the reward of the accumulated MTM for four different values of ξ. Our , for the car-counting task, is that learning to simulate is robust to lower training epochs, which means that even if the MTM has not fully converged yet the reward signal is good enough to provide guidance for our system leading to a potential wall-time speed up of the overall algorithm. All four cases converge, including the one where we train the MTM for only one epoch. Note that this is dependent on the task at hand, and a more complicated task might necessitate convergence of the main task model to provide discriminative rewards. For the next set of experiments we use semantic segmentation as our test bed, which aims at predicting a semantic category for each pixel in a given RGB image BID3. Our modified CARLA simulator BID8 provides ground truth semantic segmentation maps for generated traffic scenes, including categories like road, sidewalk or cars. For the sake of these experiments, we focus on the segmentation accuracy of cars, measured as intersection-over-union (IoU), and allow our policy π ω to adjust scene and rendering parameters to maximize reward R (i.e., car IoU). This includes the probability of generating different types of cars, length of road and weather type. The main task model h θ is a CNN that takes a rendered RGB image as input and predicts a per-pixel classification output. We first generate validation set parameters ψ val that reflect traffic scenes moderately crowded with cars, unbalanced car types, random intersections and buildings on the side. As a reference point for our proposed algorithm, we sample a few data sets with the validation set parameters ψ val, train MTMs and report the maximum reward (IoU of cars) achieved. 
We compare this with our learned policy π ω and can observe in figure 6a that it actually outperforms the validation set parameters. This is an interesting observation because it shows that the validation set parameters ψ val may not always be the optimal choice for training a segmentation model. We demonstrate the practical impact of our learning-to-simulate approach on semantic segmentation on by training a main task model (MTM) h θ with a reward signal coming from real data. Using simulated data for semantic segmentation was recently investigated from a (a) (b) Figure 6: (a) Reward curves of our approach compared to a model trained on data generated with the actual validation set parameters on the synthetic semantic segmentation task. (b) Reward curves on the real validation set of KITTI for semantic segmentation. We plot the learning-to-simulate, the maximum reward achieved using random search and the maximum and mean using random parameters. All methods use 600 iterations. Training data random params random search LTS KITTI train set Car IoU 0.480 0.407 0.579 0.778 Table 1: Segmentation Car IoU on the unseen KITTI test set for a ResNet-50 segmentation network trained using synthetic data generated by random parameters or learned parameters using random search or learning to simulate (LTS) for 600 epochs of each. We test the epoch with highest validation reward on the KITTI test set. We also report the maximum car IoU obtained by training on 982 annotated real KITTI training images.domain adaptation perspective BID23 BID18, where an abundant set of simulated data is leveraged to train models applicable on real data. Here, we investigate targeted generation of simulated data and its impact on real data. Since the semantic label space of KITTI and our CARLA simulator are not identical, we again focus on segmentation of cars by measuring IoU for that category. For our main task model h θ we use a CNN that takes a rendered RGB image as input and predicts a per-pixel classification output with a ResNet-50 backbone. As our baseline we train the main task model separately 600 times, with data generated by the simulator using different sets of random parameters for each one. We monitor the validation Car IoU metric for each of these networks and pick the one with highest validation reward. We then test it on the unseen KITTI test set and report the Car IoU in table 1. For illustration purposes we show the reward curve of our approach on the validation set as well as the maximum for random search and the maximum and mean for random parameters in Figure 6b.However, it is important to mention that parameters which are actually good for training an MTM h θ are unknown, making our automatic approach attractive in such situations. The on the unseen real KITTI test set in table 1 confirm the superior of learning-to-simulate. We train using synthetic data generated by random or learned parameters for 600 epochs of each. We pick the epoch with highest validation reward and test it on the KITTI test set. For reference, we also report the maximum car IoU obtained by our network by training on 982 annotated real KITTI training images. Additionally, we verify empirically that parameter optimization using policy gradients (learning to simulate) outperforms random search for this problem. Results are reported in table 1. 
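For reference, the car-IoU reward used in these segmentation experiments can be computed as in the following sketch; the function name and the class id for "car" are our own illustrative choices.

```python
import numpy as np

def car_iou_reward(pred_labels, true_labels, car_id=1):
    """Validation reward for segmentation: intersection-over-union of the car class.

    pred_labels, true_labels: integer label maps of identical shape.
    """
    pred_car = (pred_labels == car_id)
    true_car = (true_labels == car_id)
    intersection = np.logical_and(pred_car, true_car).sum()
    union = np.logical_or(pred_car, true_car).sum()
    return intersection / union if union > 0 else 0.0
```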
Learning to simulate can be seen as a meta-learning algorithm that adjusts parameters of a simulator to generate synthetic data such that a machine learning model trained on this data achieves high accuracies on validation and test sets, respectively. Given the need for large-scale data sets to feed deep learning models and the often high cost of annotation and acquisition, we believe our approach is a sensible avenue for practical applications to leverage synthetic data. Our experiments illustrate the concept and demonstrate the capability of learning to simulate on both synthetic and real data. For future work, we plan to expand the label space in our segmentation experiments, apply the algorithm to other tasks like object detection and to explore a dynamic memory of previously generated data for improving our learning to simulate procedure. A TRAFFIC SCENE MODEL Our model comprises the following elements:• A straight road of variable length.• Either an L, T or X intersection at the end of the road.• Cars of 5 different types which are spawned randomly on the straight road.• Houses of a unique type which are spawned randomly on the sides of the road.• Four different types of weather. All of these elements are tied to parameters: ρ k can be decomposed into parameters which regulate each of these objects. The scene is generated "block" by "block". A block consists of a unitary length of road with sidewalks. Buildings can be generated on both sides of the road and cars can be generated on the road. ρ k,car designates the probability of car presence in any road block. Cars are sampled block by block from a Bernouilli distribution X ∼ Bern (ρ k,car). To determine which type of car is spawned (from our selection of 5 cars) we sample from a Categorical distribution which is determined by 5 parameters ρ k,cari where i is an integer representing the identity of the car and i ∈. ρ k,house designates the probability of house presence in any road block. Houses are sampled block by block from a Bernouilli distribution X ∼ Bern (ρ k,house).Length to intersection is sampled from a Categorical distribution determined by 10 parameters ρ k,lengthi with i ∈ where i denotes the length from the camera to the intersection in "block" units. Weather is sampled randomly from a Categorical distribution determined by 4 parameters φ k,weatheri where i is an integer representing the identity of the weather and i ∈. L, T and X intersections are sampled randomly with equal probability. In table 2, we present classification accuracy for the toy problem in Section 4.1 with Q distributions using different number of gaussians. We can observe that by using learning to simulate we obtain better classification than using a dataset generated using the test set parameters (mean and variance of gaussians in P distribution). In this section we visualize learning of parameters in the car counting problem described in Section 4.3. In particular we show how the parameters of weather type and car type evolve in time in FIG3. We explore the parameter M of our algorithm that controls the dataset size generated in each policy iteration. For example, when M = 100, we generate at each policy step a dataset of 100 images using the parameters from the policy which are then used to train our main task model h θ. We evaluate policies with sizes 20, 50, 100 and 200. In figure 8 we show a comparative graph of final errors on the validation and test sets for different values of M. 
For a fair comparison, we generate 40,000 images with our final learned set of parameters and train h θ for 5 epochs and evaluate on the test set. We observed that for this task a dataset size of just 20 suffices for our model to converge to good scene Figure 8: Test and validation error from main task networks trained on 40,000 images during 5 epochs using final learned parameters for different sizes of datasets generated per policy iteration.parameters ψ, which is highly beneficial for the wall-time convergence speed. Having less data per policy iteration means faster training of the MTM h θ. Since our method is stochastic in nature we verify that "learning to simulate" converges in the car counting task using different random seeds. We observer in figure 9a that the reward converges to the same value with three different random seeds. Additionally, in figure 9b, we observe that the accumulated main task network test reward also converges with different random seeds.(a) (b) Figure 9: Main task network reward and accumulated MTN reward converge using different random seeds.
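As a concrete illustration of the traffic scene model of Appendix A, the following sketch draws one scene description ρ by ancestral sampling; the dictionary keys and the per-block structure are our own assumptions about how the parameters ψ might be organized, not the authors' interface.

```python
import numpy as np

def sample_scene(psi, max_blocks=10, n_car_types=5, n_weathers=4):
    """Ancestral sampling of one scene description rho from scene parameters psi (a sketch)."""
    length = np.random.choice(max_blocks, p=psi["length_probs"]) + 1   # blocks until the intersection
    weather = np.random.choice(n_weathers, p=psi["weather_probs"])     # categorical weather type
    intersection = np.random.choice(["L", "T", "X"])                   # equal probability
    blocks = []
    for _ in range(length):
        car = None
        if np.random.rand() < psi["p_car"]:                            # Bernoulli car presence per block
            car = np.random.choice(n_car_types, p=psi["car_type_probs"])
        house = np.random.rand() < psi["p_house"]                      # Bernoulli house presence per block
        blocks.append({"car": car, "house": house})
    return {"length": length, "weather": weather,
            "intersection": intersection, "blocks": blocks}
```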
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HJgkx2Aqt7
We propose an algorithm that automatically adjusts parameters of a simulation engine to generate training data for a neural network such that validation accuracy is maximized.
Modelling statistical relationships beyond the conditional mean is crucial in many settings. Conditional density estimation (CDE) aims to learn the full conditional probability density from data. Though highly expressive, neural network based CDE models can suffer from severe over-fitting when trained with the maximum likelihood objective. Due to the inherent structure of such models, classical regularization approaches in the parameter space are rendered ineffective. To address this issue, we develop a model-agnostic noise regularization method for CDE that adds random perturbations to the data during training. We demonstrate that the proposed approach corresponds to a smoothness regularization and prove its asymptotic consistency. In our experiments, noise regularization significantly and consistently outperforms other regularization methods across seven data sets and three CDE models. The effectiveness of noise regularization makes neural network based CDE the preferable method over previous non- and semi-parametric approaches, even when training data is scarce. While regression analysis aims to describe the conditional mean E[y|x] of a response y given inputs x, many problems such as risk management and planning under uncertainty require gaining insight about deviations from the mean and their associated likelihood. The stochastic dependency of y on x can be captured by modeling the conditional probability density p(y|x). Inferring such a density function from a set of empirical observations {(x n, y n)} N n=1 is typically referred to as conditional density estimation (CDE) and is the focus of this paper. In the recent machine learning literature, there has been a resurgence of interest in high-capacity density models based on neural networks (; ;). Since this line of work mainly focuses on the modelling of images based on large scale data sets, over-fitting and noisy observations are of minor concern in this context. In contrast, we are interested in CDE in settings where data may be scarce and noisy. When combined with maximum likelihood estimation, the flexibility of such high-capacity models in over-fitting and poor generalization. While regression typically assumes Gaussian conditional noise, CDE uses expressive distribution families to model deviations from the conditional mean. Hence, the overfitting problem tends to be even more severe in CDE than in regression. Classical regularization of the neural network weights such as weight decay has been shown to be effective for regression and classification. However, in the context of CDE, the output of the neural network merely controls the parameters of a density model such as a Gaussian Mixture or Normalizing Flow. This makes the standard regularization methods in the parameter space less effective and harder to analyze. Aiming to address this issue, we propose and analyze noise regularization, a method well-studied in the context of regression and classification, for the purpose of conditional density estimation. In that, the paper attempts to close a gap in previous research. By adding small random perturbations to the data during training, the conditional density estimate is smoothed and tends to generalize better. In fact, we show that adding noise during maximum likelihood estimation is equivalent to penalizing the second derivatives of the conditional log-probability. 
Visually, the respective regularization term punishes very curved or even spiky density estimators in favor of smoother variants, which proves to be a favorable inductive bias in many applications. Moreover, under some regularity conditions, we show that the proposed regularization scheme is asymptotically consistent, converging to the unbiased maximum likelihood estimator. This does not only support the soundness of the proposed method but also endows us with useful insight in how to set the regularization intensity relative to the data dimensionality and training set size. Overall, the proposed noise regularization scheme is easy to implement and agnostic to the parameterization of the CDE model. We empirically demonstrate its effectiveness on three different neural network based models. The experimental show that noise regularization outperforms other regularization methods significantly and consistently across various data sets. Finally, we demonstrate that, when properly regularized, neural network based CDE is able to improve upon state-of-the art non-parametric estimators, even when only 400 training observations are available. Density Estimation. Let X be a random variable with probability density function (PDF) p(x) defined over the domain X ⊆ R dx. Given a collection D = {x 1, ..., x n} of observations sampled from p(x), the goal is to find a good estimatef (x) of the true density function p. In parametric estimation, the PDFf is assumed to belong to a parametric family F = {f θ (·)|θ ∈ Θ} where the density function is described by a finite dimensional parameter θ ∈ Θ. The standard method for estimating θ is maximum likelihood estimation, wherein θ is chosen so that the likelihood of the data D is maximized. This is equivalent to minimizing the Kullback-Leibler divergence between the empirical data distribution p D (x) = 1 n n i=1 δ(||x − x i ||) (i.e., mixture of point masses in the observations x i) and the parametric distributionf θ: From a geometric perspective, can be viewed as an orthogonal projection of p D (x) onto F w.r.t. the reverse KL-divergence. Hence, is also commonly referred to as an M-projection . In contrast, non-parametric density estimators make implicit smoothness assumptions through a kernel function. The most popular non-parametric method, kernel density estimation (KDE), places a symmetric density function K(z), the so-called kernel, on each training data point x n . The ing density estimate reads asq(. Beyond the appropriate choice of K(·), a central challenge is the selection of the bandwidth parameter h which controls the smoothness of the estimated PDF . Conditional Density Estimation (CDE). Let (X, Y) be a pair of random variables with respective domains X ⊆ R dx and Y ⊆ R dy and realizations x and y. Let p(y|x) = p(x, y)/p(x) denote the conditional probability density of y given x. Typically, Y is referred to as a dependent variable (explained variable) and X as conditional (explanatory) variable. Given a dataset of observations D = {(x n, y n)} N n=1 drawn from the joint distribution (x n, y n) ∼ p(x, y), the aim of conditional density estimation (CDE) is to find an estimatef (y|x) of the true conditional density p(y|x). In the context of CDE, the KL-divergence objective is expressed as expectation over p(x): Corresponding to, we refer to the minimization of w.r.t. θ as conditional M-projection. Given a dataset D drawn i.i.d. 
from p(x, y), the conditional MLE following from can be stated as 3 RELATED WORK The first part of this section discusses relevant work in the field of CDE, focusing on high-capacity models that make little prior assumptions. The second part relates our approach to previous regularization and data augmentation methods. Non-parametric CDE. A vast body of literature in statistics and econometrics studies nonparametric kernel density estimators (KDE) and the associated bandwidth selection problem, which concerns choosing the appropriate amount of smoothing (; ;). To estimate conditional probabilities, previous work proposes to estimate both the joint and marginal probability separately with KDE and then computing the conditional probability as their ratio . Other approaches combine non-parametric elements with parametric elements (; ;). Despite their theoretical appeal, non-parametric density estimators suffer from poor generalization in regions where data is sparse (e.g., tail regions), causing rapid performance deterioration as the data dimensionality increases . CDE based on neural networks. Most work in machine learning focuses on flexible parametric function approximators for CDE. In our experiments, we use the work of and , who propose to use a neural network to control the parameters of a mixture density model. A recent trend in machine learning are latent density models such as cGANs and cVAEs . Although such methods have been shown successful for estimating distributions of images, the probability density function (PDF) of such models is intractable. More promising in this sense are normalizing flows (; ;), since they provide the PDF in tractable form. We employ a neural network controlling the parameters of a normalizing flow as our third CDE model to showcase the empirical efficacy of our regularization approach. Regularization. Since neural network based CDE models suffer from severe over-fitting when trained with the MLE objective, they require proper regularization. Classical regularization of the parameters such as weight decay (; ;), l 1 /l 2 -penalties and Bayesian priors have been shown to work well in the regression and classification setting. However, in the context of CDE, it is less clear what kind of inductive bias such a regularization imposes on the density estimate. In contrast, our regularization approach is agnostic w.r.t. parametrization and is shown to penalize strong variations of the log-density function. Regularization methods such as dropout are closely related to ensemble methods . Thus, they are orthogonal to our work and can be freely combined with noise regularization. Adding noise during training. Adding noise during training is a common scheme that has been proposed in various forms. This includes noise on the neural network weights or activations (; ;) and additive noise on the gradients for scalable MCMC posterior inference . While this line of work corresponds to noise in the parameter space, other research suggests to augment the training data through random and/or adversarial transformations of the data (; Burges & Schölkopf, 1996; ;). Our approach transforms the training observations by adding small random perturbations. While this form of regularization has been studied in the context of regression and classification problems (a; ; ; ;), this paper focuses on the regularization of CDE. 
In particular, we build on earlier results showing that training with noise corresponds to a penalty on strong variations of the log-density, and extend the previous consistency results for regression of Holmstrom & Koistinen (1992a) to the more general setting of CDE. To our best knowledge, this is also the first paper to evaluate the empirical efficacy of noise regularization for density estimation. When considering expressive families of conditional densities, standard maximum likelihood estimation of the model parameters θ is ill-suited. As can be observed in Figure 1, simply minimizing the negative log-likelihood of the data leads to severe over-fitting and poor generalization beyond the training data. Hence, it is necessary to impose additional inductive bias, for instance, in the form of regularization. Unlike in regression or classification, the form of inductive bias imposed by popular regularization techniques such as weight decay (Kukačka et al., 2017) is less clear in the CDE setting, where the neural network weights often only indirectly control the probability density through an unconditional density model, e.g., a Gaussian Mixture. We propose to add noise perturbations to the data points during the optimization of the log-likelihood objective. This can be understood as replacing the original data points (x_i, y_i) by the perturbed random variables (x_i + ξ_x, y_i + ξ_y), where the perturbations ξ_x and ξ_y are drawn from noise distributions K_x(ξ_x) and K_y(ξ_y) respectively. Further, we choose the noise to be zero centered as well as identically and independently distributed among the data dimensions, with standard deviation h, i.e., E_{ξ∼K}[ξ] = 0 and E_{ξ∼K}[ξ ξ^T] = h^2 I. This can be seen as data augmentation, where "synthetic" data is generated by randomly perturbing the original data. Since the supply of noise vectors is technically unlimited, an arbitrarily large augmented data set can be generated by repetitively sampling data points from D and adding a random perturbation vector to the respective data point. This procedure is formalized in Algorithm 1. For notational brevity, we set Z := X × Y, z := (x, y) and denote f̂_θ(z) := f̂_θ(y|x). The presented noise regularization approach is agnostic to whether we are concerned with unconditional or conditional MLE. Thus, the generic notation also allows us to generalize the results to both settings (derived in the remainder of the paper).

Algorithm 1: Noise-regularized maximum likelihood estimation
Require: D = {z_1, ..., z_n}, noise intensity h, number of perturbed samples r
1: for j = 1 to r do
2:   Select i ∈ {1, ..., n} with equal probability
3:   Draw perturbation ξ ∼ K
4:   z̃_j ← z_i + h · ξ
5: θ̂ ← argmax_θ Σ_{j=1}^{r} log f̂_θ(z̃_j)
6: return θ̂

When considering highly flexible parametric families such as Mixture Density Networks (MDNs), the maximum likelihood solution in line 5 of Algorithm 1 is no longer tractable. In such cases, one typically resorts to numerical optimization techniques such as mini-batch gradient descent and variations thereof. In this context, the generic procedure in Algorithm 1 can be transformed into a simple extension of mini-batch gradient descent on the MLE objective (see Algorithm 2 and the sketch below). Specifically, each mini-batch is perturbed with i.i.d. noise before computing the MLE objective function (forward pass) and the respective gradients (backward pass). Intuitively, the previously presented variable noise can be interpreted as "smearing" the data points during the maximum likelihood estimation. This alleviates the jaggedness of the density estimate arising from an un-regularized maximum likelihood objective in flexible density classes. We will now give this intuition a formal foundation by mathematically analyzing the effect of the noise perturbations.
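Before turning to that analysis, the mini-batch variant is simple enough to sketch. The PyTorch-style snippet below shows one possible implementation; the `cde_model` object with a `log_prob(x, y)` method returning per-sample conditional log-likelihoods, the optimizer handling, and all names are assumptions made for illustration, not the authors' reference code.

```python
import torch

def noise_regularized_step(cde_model, optimizer, x_batch, y_batch, h_x, h_y):
    """One mini-batch gradient step on the noise-regularized MLE objective.

    Each data point is perturbed with fresh zero-mean Gaussian noise before
    the negative log-likelihood (forward pass) and its gradients (backward
    pass) are computed, mirroring the data-augmentation view of Algorithm 1.
    """
    x_noisy = x_batch + h_x * torch.randn_like(x_batch)
    y_noisy = y_batch + h_y * torch.randn_like(y_batch)

    # Standard maximum-likelihood objective, evaluated on the perturbed batch.
    nll = -cde_model.log_prob(x_noisy, y_noisy).mean()

    optimizer.zero_grad()
    nll.backward()
    optimizer.step()
    return nll.item()
```

Because fresh noise is drawn in every step, the effective augmented data set grows with the number of iterations, matching the view of an arbitrarily large r taken in the analysis that follows.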
Before discussing the particular effects of randomly perturbing the data during conditional maximum likelihood estimation, we first analyze noise regularization in a more general case. Let l(D) be a loss function over a set of data points D = {z_1, ..., z_n} which can be partitioned into a sum of losses corresponding to each data point z_i, i.e., l(D) = Σ_{i=1}^{n} l(z_i). The expected loss E_ξ[l(z_i + ξ)], resulting from adding random perturbations, can be approximated by a second-order Taylor expansion around z_i. Using the zero-mean and isotropic-variance assumption about ξ stated above, the expected loss can be written as E_ξ[l(z_i + ξ)] ≈ l(z_i) + (h^2 / 2) tr(∇²_z l(z)|_{z_i}), where l(z_i) is the loss without noise and ∇²_z l(z)|_{z_i} is the Hessian of l w.r.t. z, evaluated at z_i. Assuming that the noise ξ is small in its magnitude, the O(ξ^3) remainder is negligible. This effect has been observed in earlier work; see Appendix A for derivations. When concerned with maximum likelihood estimation of a conditional density f̂_θ(y|x), the loss function coincides with the negative conditional log-likelihood l(y_i, x_i) = −log f̂_θ(y_i|x_i). Let the standard deviations of the additive data noise ξ_x, ξ_y be h_x and h_y respectively. Maximum likelihood estimation (MLE) with data noise is then equivalent to minimizing the loss −Σ_i log f̂_θ(y_i|x_i) − (h_y^2 / 2) Σ_i tr(∇²_y log f̂_θ(y_i|x_i)) − (h_x^2 / 2) Σ_i tr(∇²_x log f̂_θ(y_i|x_i)). In that, the first term corresponds to the standard MLE objective, while the other two terms constitute a form of smoothness regularization. The second term penalizes large negative second derivatives of the conditional log-density estimate log f̂_θ(y|x) w.r.t. y. As the MLE objective pushes the density estimate towards high densities and strong concavity in the data points y_i, the regularization term counteracts this tendency to over-fit and overall smooths the fitted distribution. The third term penalizes large negative second derivatives w.r.t. the conditional variable x, thereby regularizing the sensitivity of the density estimate to changes in the conditional variable. The intensity of the noise regularization can be controlled through the variances (h_x^2 and h_y^2) of the random perturbations. Figure 1 illustrates the effect of the introduced noise regularization scheme on MDN estimates. Plain maximum likelihood estimation (left) leads to strong over-fitting, resulting in a spiky distribution that generalizes poorly beyond the training data. In contrast, training with noise regularization (center and right) results in smoother density estimates that are closer to the true conditional density. We now establish asymptotic consistency for the proposed noise regularization. In particular, we show that, under some regularity conditions concerning integrability and the decay of the noise regularization, the solution of Algorithm 1 converges to the asymptotic MLE solution. Throughout, we assume that f̂_θ(z) is a continuous function of z and θ. Moreover, we assume that the parameter space Θ is compact. In the classical MLE setting, the idealized loss, corresponding to a (conditional) M-projection of the true data distribution onto the parametric family, reads as l(θ) = −E_{z∼p(z)}[log f̂_θ(z)]. As we typically just have a finite number of samples from p(z), the respective empirical estimate l_n(θ) = −(1/n) Σ_{i=1}^{n} log f̂_θ(z_i) is used as training objective. Note that we now define the loss as a function of θ and, for fixed θ, treat l_n(θ) as a random variable. Under some regularity conditions, one can invoke the uniform law of large numbers to show consistency of the empirical ML objective in the sense that sup_{θ∈Θ} |l_n(θ) − l(θ)| → 0 almost surely (see Appendix B for details).
In case of the presented noise regularization scheme, the maximum likelihood estimation is performed using on the augmented data {z j} rather than the original data {z i}. For our analysis, we view Algorithm 1 from a slightly different angle. In fact, the data augmentation procedure of uniformly selecting a data point from {z 1, ..., z n} and perturbing it with a noise vector drawn from K can be viewed as drawing i.i.d. samples from a kernel density estimateq. Hence, maximum likelihood estimation with variable noise can be understood as 1. forming a kernel density estimateq (h) n of the training data 2. followed by a (conditional) M-projection ofq (h) n onto the parametric family. In that, step 2 aims to find the θ * that minimizes the following objective: Since is generally intractable, r samples are drawn from the kernel density estimate, forming the following Monte Carlo approximation of which corresponds to the loss in line 5 Algorithm 1: We are concerned with the consistency of the training procedure in Algorithm 1, similar to the classical MLE consistency discussed above. Hence, we need to show that − − → 0 as n, r → ∞. We begin our argument by decomposing the problem into easier sub-problems. In particular, the triangle inequality is used to obtain the following upper bound: Note thatl n,r (θ) is based on samples from the kernel density estimate, which are obtained by adding random noise vectors ξ ∼ K(·) to our original training data. Since we can sample an unlimited amount of such random noise vectors, r can be chosen arbitrarily high. This allows us to make sup θ∈Θ |l n (θ)| arbitrary small by the uniform law of large numbers. In order to make sup θ∈Θ |l (h) n (θ) − l(θ)| small in the limit n → ∞, the sequence of bandwidth parameters h n needs to be chosen appropriately. Such can then be combined using a union bound argument. In the following we outline the steps leading us to the desired . In that, the proof methodology is similar to Holmstrom & Koistinen (1992b). While they show consistency for regression with a quadratic loss function, our proof deals with generic and inherently unbounded log-likelihood objectives and thus holds for a much more general class of learning problems. The full proofs can be found in the Appendix. Initially, we have to make asymptotic integrability assumptions that ensure that the expectations in l (h) n (θ) and l(θ) are well-behaved in the limit (see Appendix C for details). Given respective integrability, we are able to obtain the following proposition. Proposition 1 Suppose the regularity conditions and are satisfied, and that almost surely. In we find conditions on the asymptotic behavior of the smoothing sequence (h n). These conditions also give us valuable guidance on how to properly choose the noise intensity in line 4 of Algorithm 1 (see Section 4.3 for discussion). The in demonstrates that, under the discussed conditions, replacing the empirical data distribution with a kernel density estimate still in an asymptotically consistent maximum likelihood objective. However, as previously discussed, l n (θ) is intractable and, thus, replaced by its sample estimatel n,r. Since we can draw an arbitrary amount of samples fromq (h) n, we can approximate l (h) n (θ) with arbitrary precision. Given a fixed data set D of size n > n 0, this means that lim r→∞ sup θ∈Θ n (θ) = 0 almost surely, by and the uniform law of large numbers. 
Since our original goal was to also show consistency for n → ∞, this is combined with Proposition 1, obtaining the following consistency theorem. Theorem 1: Suppose the integrability and regularity conditions are satisfied, the sequence h_n fulfills the decay conditions, and Θ is compact. Then, lim_{n,r→∞} sup_{θ∈Θ} |l̂^{(h)}_{n,r}(θ) − l(θ)| = 0 almost surely, where lim is used to denote the limit superior ("lim sup") of a sequence. Training a (conditional) density model with noise regularization means minimizing l̂^{(h)}_{n,r}(θ) w.r.t. θ. As a result of this optimization, one obtains a parameter vector θ̂^{(h)}_{n,r}, which we hope is close to a minimizing parameter of the ideal objective function l(θ). In the following, we establish asymptotic consistency results, similar to Theorem 1, in the parameter space. Therefore, we first have to formalize the concept of closeness and optimality in the parameter space. Since a minimizing parameter θ* of l(θ) may not be unique, we define Θ* = {θ* | l(θ*) ≤ l(θ) ∀θ ∈ Θ} as the set of global minimizers of l(θ), and d(θ, Θ*) = min_{θ*∈Θ*} ||θ − θ*||_2 as the distance of an arbitrary parameter θ to Θ*. Based on these definitions, it can be shown that Algorithm 1 is asymptotically consistent in the sense that the minimizer θ̂^{(h)}_{n,r} converges almost surely to the set of optimal parameters Θ*. Theorem 2: Suppose the integrability and regularity conditions are satisfied, the sequence h_n fulfills the decay conditions, and Θ is compact. For r > 0 and n > n_0, let θ̂^{(h)}_{n,r} ∈ Θ be a global minimizer of the empirical objective l̂^{(h)}_{n,r}(θ). Then, d(θ̂^{(h)}_{n,r}, Θ*) → 0 as n, r → ∞, almost surely. Note that Theorem 2 considers global optimizers, but equivalently holds for compact neighborhoods of a local minimum θ* (see discussion in Appendix C). After discussing the properties of noise regularization, we are interested in how to properly choose the noise intensity h for different training data sets. Ideally, we would like to choose h so that |l^{(h)}_n(θ) − l(θ)| is minimized, which is practically not feasible since l(θ) is intractable. The inequality derived above gives us an upper bound on this quantity, suggesting to minimize the l_1 distance between the kernel density estimate q̂^{(h)}_n and the data distribution p(z). This is in turn a well-studied problem in the kernel density estimation literature. Unfortunately, general solutions of this problem require knowing p(z), which is not the case in practice. Under the assumption that p(z) and the kernel function K are Gaussian, the optimal bandwidth can be derived as h = 1.06 σ̂ n^{−1/(4+d)}. In that, σ̂ denotes the estimated standard deviation of the data, n the number of data points and d the dimensionality of Z. This formula is widely known as the rule of thumb and often used as a heuristic for choosing h. In addition, the decay conditions give us further intuition. The first condition tells us that h_n needs to decay towards zero as n becomes large. This reflects the general theme in machine learning that the more data is available, the less inductive bias / regularization should be imposed. The second condition suggests that the bandwidth decay must happen at a rate slower than n^{−1/d}. For instance, the rule of thumb fulfills these two criteria and thus constitutes a useful guideline for selecting h. However, for highly non-Gaussian data distributions, the respective h_n may decay too slowly and a faster decay rate such as n^{−1/(1+d)} may be appropriate. This section provides a detailed experimental analysis of the proposed method, aiming to empirically validate the theoretical arguments outlined previously and to investigate the practical efficacy of our regularization approach. In all experiments we use Gaussian perturbations of the data, i.e., K(ξ) = N(0, I).
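The bandwidth schedules compared in the following experiments can be computed in a few lines. The sketch below implements the Gaussian rule of thumb and the faster square-root-style decay on top of a simple averaged per-dimension standard deviation estimate; the function names and the toy data are our own choices for illustration.

```python
import numpy as np

def rule_of_thumb_bandwidth(data):
    """Gaussian rule of thumb h = 1.06 * sigma_hat * n**(-1/(4+d)).

    data: (n, d) array of training points z = (x, y) stacked; sigma_hat is
    the per-dimension standard deviation averaged over dimensions.
    """
    data = np.asarray(data)
    n, d = data.shape
    sigma_hat = data.std(axis=0).mean()
    return 1.06 * sigma_hat * n ** (-1.0 / (4 + d))

def sqrt_decay_bandwidth(data):
    """Faster decay h proportional to n**(-1/(1+d)), for strongly non-Gaussian data."""
    data = np.asarray(data)
    n, d = data.shape
    sigma_hat = data.std(axis=0).mean()
    return sigma_hat * n ** (-1.0 / (1 + d))

# Example: the noise intensity shrinks as more training data becomes available.
rng = np.random.default_rng(0)
for n in (400, 1600, 6400):
    z = rng.normal(size=(n, 3))   # d_x + d_y = 3 in this toy example
    print(n, rule_of_thumb_bandwidth(z), sqrt_decay_bandwidth(z))
```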
Since one of the key features of our noise regularization scheme is that it is agnostic to the choice of model, we evaluate its performance on three different neural network based CDE models: Mixture Density Networks (MDN) , Kernel Mixture Networks (KMN) and Normalizing Flows Networks (NFN) . In our experiments, we consider both simulated as well as real-world data sets. In particular, we simulate data from a 4-dimensional Gaussian Mixture (d x = 2, d y = 2) and a Skew-Normal distribution whose parameters are functionally dependent on x (d x = 1, d y = 1). In terms of real-world data, we use the following three data sources: Euro Stoxx: Daily stock-market returns of the Euro Stoxx 50 index conditioned on various stock return factors relevant in finance (d x = 14, d y = 1). UCI datasets: Standard data sets from the UCI machine learning repository . In particular, Boston Housing (The reported scores are test log-likelihoods, averaged over at least 5 random seeds alongside the respective standard deviation. For further details regarding the data sets and simulated data, we refer to Appendix E. The experiment data and code is available at TODO We complement the discussion in 4.3 with an empirical investigation of different schedules of h n . In particular, we compare a) the rule of thumb h n ∝ n − 1 4+d b) a square root decay schedule h n ∝ n − 1 1+d c) a constant bandwidth h n = const. ∈ (0, ∞) and d) no noise regularization, i.e. h n = 0. Figure 2 plots the respective test log-likelihoods against an increasing training set size n for the two simulated densities Gaussian Mixture and Skew Normal. First, we observe that bandwidth rates which conform with the decay conditions seem to converge in performance to the non-regularized maximum likelihood estimator (red) as n becomes large. This reflects the theoretical of Theorem 1. Second, a fixed bandwidth across n (green), violating, imposes asymptotic bias and thus saturates in performance vastly before its counterparts. Third, as hypothesized, the relatively slow decay of h n through the rule of thumb works better for data distributions that have larger similarities to a Gaussian, i.e., in our case the Skew Normal distribution. In contrast, the highly non-Gaussian data from the Gaussian Mixture requires faster decay rates like the square root decay schedule. Most importantly, noise regularization substantially improves the estimator's performance when only little training data is available. We now investigate how the proposed noise regularization scheme compares to classical regularization techniques. In particular, we consider an l 1 and l 2 -penalty on the neural network weights as regularization term, the weight decay technique of 1, as well a Bayesian neural network trained with variational inference using a Gaussian prior and posterior . First, we study the performance of the regularization techniques on our two simulation benchmarks. Figure 3 depicts the respective test log-likelihood across different training set sizes. For each regularization method, the regularization hyper-parameter has been optimized via grid search. As one would expect, the importance of regularization, i.e., performance difference to un-regularized model, decreases as the amount of training data becomes larger. The noise regularization scheme Table 1: Comparison of various regularization methods for three neural network based CDE models across 5 data sets. 
We report the test log-likelihood and its respective standard deviation (higher log-likelihood values are better). yields similar performance across the different CDE models while the other regularizers vary greatly in their performance depending on the different models. This reflects the fact that noise regularization is agnostic to the parameterization of the CDE model while regularizers in the parameter space are dependent on the internal structure of the model. Most importantly, noise regularization performs well across all models and sample sizes. In the great majority of configurations it outperforms the other methods. Especially when little training data is available, noise regularization ensures a moderate test error while the other approaches mostly fail to do so. Next, we consider real world data sets. Since now the amount of data we can use for hyper-parameter selection, training and evaluation is limited, we use 5-fold cross-validation to select the parameters for each regularization method. The test log-likelihoods, reported in Table 1, are averages over 3 different train/test splits and 5 seeds each for initializing the neural networks. The held out test set amounts to 20% of the overall data sets. Consistent with the of the simulation study, noise regularization outperforms the other methods across the great majority of data sets and CDE models. We benchmark neural network based density estimators against state-of-the art CDE approaches. While neural networks are the obvious choice when a large amount of training data is available, we pose the questions how such estimators compete against well-established non-parametric methods in small data regimes. In particular, we compare to the three following CDE methods: Conditional Kernel Density Estimation (CKDE). Non-parametric method that forms a KDE of both p(x, y) and p(x) to compute its estimate asp(y|x):=p(x, y)/p(x) (Table 2 : Comparison of conditional density estimators across 5 data sets. Reported is the test loglikelihood and its respective standard deviation (higher log-likelihood values are better). -Neighborhood kernel density estimation (NKDE). Non-parametric method that considers only a local subset of training points to form a density estimate. Semi-parametric estimator that computes the conditional density as linear combination of fixed kernels . For the kernel density estimation based methods CKDE and NKDE, we perform bandwidth selection via the rule of thumb (R.O.T) and via maximum likelihood leave-one-out cross-validation (CV-ML) . In case of LSCDE, MDN, KMN and NFN, the respective hyper-parameters are selected via 5-fold cross-validation grid search on the training set. Note that, in contrast to Section 5.2 which focuses on regularization parameters, the grid search here extends to more hyper-parameters. The respective test log-likelihood scores are listed in Table 2. For the majority of data sets, all three neural network based methods outperform all of the non-and semi-parametric methods. Perhaps surprisingly, it can be seen that, when properly regularized, neural network based CDE works well even when training data is scarce, such as in case of the Boston Housing data set. This paper addresses conditional density estimation with high-capacity models. In particular, we propose to add small random perturbations to the data during training. We demonstrate that the ing noise regularization method corresponds to a smoothness regularization and prove its asymptotic consistency. 
The experimental underline the effectiveness of the proposed method, demonstrating that it consistently outperforms other regularization methods across various conditional density models and data sets. This makes neural network based CDE the preferable method, even when only little training data is available. While we assess the estimator performance in terms of the test log-likelihood, an interesting question for future research is whether the noise regularization also improves the respective uncertainty estimates for downstream tasks such as safe control and decision making. Let l(D) be a loss function over a set of data points D = {z 1, ..., z N}, which can be partitioned into a sum of losses corresponding to each data point x n: Also, let each z i be perturbed by a random noise vector ξ ∼ K(ξ) with zero mean and i.i.d. elements, i.e. E ξ∼K(ξ) [ξ] = 0 and E ξ∼K(ξ) ξ n ξ j = h 2 I The ing loss l(z i + ξ) can be approximated by a second order Taylor expansion around z i Assuming that the noise ξ is small in its magnitude, O(ξ 3) may be neglected. The expected loss under K(ξ) follows directly from: Using the assumption about ξ in we can simplify as follows: In that, l(z i) is the loss without noise and we denote the elements of the column vector z. The objective function corresponding to a conditional M-projection. The sample equivalent: Corollary 1 Let Θ be a compact set and andf θ: Proof. The corollary follows directly from the uniform law of large numbers. Lemma 1 Suppose for some > 0 there exists a constant B and there exists an n 0 such that for all n > n 0 there exists a constant B almost surely. Then, the inequality where C is a constant holds with probability 1 for all n > n 0. Proof of Lemma 1 Using Hoelder's inequality and the nonnegativity of p andq Employing the regularity conditions and and writing with probability 1. Lemma 1 states regularity conditions ensuring that the expectations in l (h) n (θ) and l(θ) are wellbehaved in the limit. In particular, and imply uniform and absolute integrability of the loglikelihoods under the respective probability measures induced by p andq (h) n. Since we are interested in the asymptotic behavior, it is sufficient for to hold for n large enough with probability 1. Inequality shows that we can make |l (h) n (θ) − l(θ)| small by reducing the l 1 -distance between the true density p and the kernel density estimateq (h) n. There exists already a vast body of literature, discussing how to properly choose the kernel K and the bandwidth sequence (h n) so that |q We employ the in for our purposes, leading us to Proposition 1. Proof of Proposition 1. Let A denote the event that ∃n 0 ∀n > n 0 inequality holds for some constant C . From our regularity assumptions it follows that P(A c) = 0. Given that A holds, we just have to show that |q − − → 0. Then, the upper bound in tends to zero and we can conclude our proposition. For any δ > 0 let B n denote the event whereinq (h) n (z) is a kernel density estimate obtained based on n samples from p(z). Under the conditions in we can apply Theorem 1 of , obtaining an upper bound on the probability that does not hold, i.e. ∃u, m 0 such that P(B c n) ≤ e −un for all n > m 0. Since we need both A and B n for n → ∞ to hold, we consider the intersection of the events (A ∩ B n). Using a union bound argument it follows that ∃k 0 such that ∀n > k 0: Note that we can simply choose k 0 = max{n 0, m 0} for this to hold. 
Hence, e u −1 < ∞ and by the Borel-Cantelli lemma we can conclude that lim In that 1(A) denotes an indicator function which returns 1 if A is true and 0 else. Next we consider the probability that the convergence in holds for random Z (n): Note that we can dP (Z (n) ) move outside of the inner integrals, since Z (n) is independent from I and Ξ (r). Hence, we can conclude that also holds, which we denote as event A, with probability 1 for random training data. From Proposition 1 we know, that with probability 1. We denote the event that holds as B. Since P (A c) = P (B c) = 0, we can use a union bound argument to show that P (A ∩ B) = 1. From and it follows that for any n > n 0, lim with probability 1. Finally, we combine this with, obtaining that almost surely, which concludes the proof. Proof of Theorem 2. The proof follows the argument used in Theorem 1 of. In the following, we assume that holds. From Theorem 1 we know that this is the case with probability 1. Respectively, we only consider realizations of our training data Z (n) and noise samples I (r), Ξ for which the convergence in holds (see proof of Theorem 1 for details on this notation). For such realization, let (θ From the triangle inequality, it follows that for any > 0 there exists k 0 so that ∀k > k 0 given the convergence established in Theorem 1 and the continuity of l in θ. Next, the above is extended to which again holds for k large enough. This due to, the minimizer of µ i k,j k, and µ i k,j k (θ) − l(θ) < by Theorem 1. Because can be made arbitrarily small, l(θ 0) ≤ l(θ) as k → ∞. Because θ ∈ Θ is arbitrary, θ 0 must be in Θ *. In turn, since (n i) i, (r i,j) j and (i k) k, (j k) k were chosen arbitrarily, every limit point of a sequence (v i k,j k) k must be in Θ *. In the final step, we proof the theorem by contradiction. Suppose that does not hold. In this case, there must exist an > 0 and sequences However, by the previous argument the limit point of the any sequence where chosen from a set with probability mass of 1, we can conclude our proposition that lim Discussion of Theorem 2. Note that, similar to θ *,θ n,r does not have to be unique. In case there are multiple minimizers ofl (h) n,r, we can chose one of them arbitrarily and the proof of the theorem still holds. Theorem 2 considers global optimizers over a set of parameters Θ, which may not be attainable in practical settings. However, the application of the theorem to the context of local optimization is straightforward when Θ is chosen as a compact neighborhood of a local minimum θ * of l (b . In particular, the parameters of the unconditional mixture distribution p(y) are outputted by the neural network, which takes the conditional variable x as input. For our purpose, we employ a Gaussian Mixture Model (GMM) with diagonal covariance matrices as density model. The conditional density estimatep(y|x) follows as weighted sum of K Gaussianŝ wherein w k (x; θ) denote the weight, µ k (x; θ) the mean and σ 2 k (x; θ) the variance of the k-th Gaussian component. All the GMM parameters are governed by the neural network with parameters θ and input x. The mixing weights w k (x; θ) must resemble a categorical distribution, i.e. it must hold that K k=1 w k (x; θ) = 1 and w k (x; θ) ≥ 0 ∀k. To satisfy the conditions, the softmax linearity is used for the output neurons corresponding to w k (x; θ). Similarly, the standard deviations σ k (x) must be positive, which is ensured by a sofplus non-linearity. 
Since the component means µ k (x; θ) are not subject to such restrictions, we use a linear output layer without non-linearity for the respective output neurons. For the experiments in 5.2 and 5.1, we set K = 10 and use a neural network with two hidden layers of size 32. While MDNs resemble a purely parametric conditional density model, a closely related approach, the Kernel Mixture Network (KMN), combines both non-parametric and parametric elements . Similar to MDNs, a mixture density model ofp(y) is combined with a neural network which takes the conditional variable x as an input. However, the neural network only controls the weights of the mixture components while the component centers and scales are fixed w.r.t. to x. For each of the kernel centers, M different scale/bandwidth parameters σ m are chosen. As for MDNs, we employ Gaussians as mixture components, wherein the scale parameter directly coincides with the standard deviation. Let K be the number of kernel centers µ k and M the number of different kernel scales σ m. The KMN conditional density estimate reads as follows: As previously, the weights w k,m correspond to a softmax function. The M scale parameters σ m are learned jointly with the neural network parameters θ. The centers µ k are initially chosen by k-means clustering on the {y i} n i=1 in the training data set. Overall, the KMN model is more restrictive than MDN as the locations and scales of the mixture components are fixed during inference and cannot be controlled by the neural network. However, due to the reduced flexibility of KMNs, they are less prone to over-fit than MDNs. For the experiments in 5.2 and 5.1, we set K = 50 and M = 2. The respective neural network has two hidden layers of size 32. The Normalizing Flow Network (NFN) is similar to the MDN and KMN in that a neural network takes the conditional variable x as its input and outputs parameters for the distribution over y. For the NFN, the distribution is given by a Normalizing Flow . It works by transforming a simple base distribution and an accordingly distributed random variable Z 0 through a series of invertible, parametrized mappings f = f N • · · · • f 1 into a successively more complex distribution p(f (Z 0)). The PDF of samples z N ∼ p(f (Z 0)) can be evaluted using the change-ofvariable formula: log det ∂f n ∂z n−1 The Normalizing Flows from were introduced in the context of posterior estimation in variational inference. They are optimized for fast sampling while the likelihood evaluation for externally provided data is comparatively slow. To make them useful for CDE, we invert the direction of the flows, defining a mapping from the transformed distribution p(Z N) to the base distribution p(Z 0) by settingf −1 i (z i) = f i (z i). We experimented with three types of flows: planar flows, radial flows as parametrized by and affine flows f −1 (z) = exp(a)z +b. We have found that one affine flow combined with multiple radial flows performs favourably in most settings. For the experiments in 5.2 and 5.1, we used a standard Gaussian as the base distribution that is transformed through one affine flow and ten radial flows. The respective neural network has two hidden layers of size 32. The data generating process (x, y) ∼ p(x, y) resembles a bivariate joint-distribution, wherein x ∈ R follows a normal distribution and y ∈ R a conditional skew-normal distribution (Anděl et al., 1984). The parameters (ξ, ω, α) of the skew normal distribution are functionally dependent on x. 
Specifically, the functional dependencies are the following: ξ(x) = a * x + b a, b ∈ R α(x) = α low + 1 1 + e −x * (α high − α low) y ∼ SkewN ormal ξ(x), ω(x), α(x) Accordingly, the conditional probability density p(y|x) corresponds to the skew normal density function: In that, N (·) denotes the density, and Φ(·) the cumulative distribution function of the standard normal distribution. The shape parameter α(x) controls the skewness and kurtosis of the distribution. We set α low = −4 and α high = 0, giving p(y|x) a negative skewness that decreases as x increases. This distribution will allow us to evaluate the performance of the density estimators in presence of skewness, a phenomenon that we often observe in financial market variables. Figure 4a illustrates the conditional skew normal distribution. The joint distribution p(x, y) follows a Gaussian Mixture Model in R 4 with 5 Gaussian components, i.e. K = 5. We assume that x ∈ R 2 and y ∈ R 2 can be factorized, i.e. p(x, y) = When x and y can be factorized as in, the conditional density p(y|x) can be derived in closed form: wherein the mixture weights are a function of x: For details and derivations we refer the interested reader to and. The weights w k are sampled from a uniform distribution U and then normalized to sum to one. The component means are sampled from a spherical Gaussian with zero mean and standard deviation of σ = 1.5. The covariance matrices Σ y,k ) and Σ y,k ) are sampled from a Gaussian with mean 1 and standard deviation 0.5, and then projected onto the cone of positive definite matrices. Since we can hardly visualize a 4-dimensional GMM, Figure 4b depicts a 2-dimensional equivalent, generated with the procedure explained above. The goal is to predict the conditional probability density of 1-day log-returns, conditioned on 14 explanatory variables. These conditional variables comprise classical return factors from finance as well as option implied moments. For details, we refer to. Overall, the target variable is one-dimensional, i.e. y ∈ Y ⊆ R, whereas the conditional variable x constitutes a 14-dimensional vector, i.e. x ∈ X ⊆ R 14.
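For concreteness, the conditional skew-normal benchmark can be simulated roughly as follows. The sketch fixes ω(x) = 1 and uses placeholder values for a and b because the text does not specify them; only the forms of ξ(x) and α(x), with α_low = −4 and α_high = 0, are taken from the description above.

```python
import numpy as np
from scipy.stats import skewnorm

def simulate_skew_normal(n, a=0.5, b=0.0, alpha_low=-4.0, alpha_high=0.0, seed=0):
    """Draw (x, y) pairs from the conditional skew-normal benchmark.

    xi(x) = a * x + b; alpha(x) moves from alpha_low to alpha_high through a
    logistic in x; omega(x) is fixed to 1 here as a placeholder assumption.
    """
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, size=n)
    xi = a * x + b
    alpha = alpha_low + (alpha_high - alpha_low) / (1.0 + np.exp(-x))
    y = skewnorm.rvs(a=alpha, loc=xi, scale=1.0, random_state=rng)
    return x, y

def conditional_log_density(y, x, a=0.5, b=0.0, alpha_low=-4.0, alpha_high=0.0):
    """Ground-truth log p(y|x) used to score estimators on the simulated data."""
    xi = a * x + b
    alpha = alpha_low + (alpha_high - alpha_low) / (1.0 + np.exp(-x))
    return skewnorm.logpdf(y, a=alpha, loc=xi, scale=1.0)

x, y = simulate_skew_normal(1000)
print(conditional_log_density(y[:3], x[:3]))
```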
rygtPhVtDS
A model-agnostic regularization scheme for neural network-based conditional density estimation.
Unsupervised representation learning holds the promise of exploiting large amount of available unlabeled data to learn general representations. A promising technique for unsupervised learning is the framework of Variational Auto-encoders (VAEs). However, unsupervised representations learned by VAEs are significantly outperformed by those learned by supervising for recognition. Our hypothesis is that to learn useful representations for recognition the model needs to be encouraged to learn about repeating and consistent patterns in data. Drawing inspiration from the mid-level representation discovery work, we propose PatchVAE, that reasons about images at patch level. Our key contribution is a bottleneck formulation in a VAE framework that encourages mid-level style representations. Our experiments demonstrate that representations learned by our method perform much better on the recognition tasks compared to those learned by vanilla VAEs. Due to the availability of large labeled visual datasets, supervised learning has become the dominant paradigm for visual recognition. That is, to learn about any new concept, the modus operandi is to collect thousands of labeled examples for that concept and train a powerful classifier, such as a deep neural network. This is necessary because the current generation of models based on deep neural networks require large amounts of labeled data . This is in stark contrast to the insights that we have from developmental psychology on how infants develop perception and cognition without any explicit supervision . Moreover, the supervised learning paradigm is ill-suited for applications, such as health care and robotics, where annotated data is hard to obtain either due to privacy concerns or high cost of expert human annotators. In such cases, learning from very few labeled images or discovering underlying natural patterns in large amounts of unlabeled data can have a large number of potential applications. Discovering such patterns from unlabeled data is the standard setup of unsupervised learning. Over the past few years, the field of unsupervised learning in computer vision has followed two seemingly different tracks with different goals: generative modeling and self-supervised learning. The goal of generative modeling is to learn the probability distribution from which data was generated, given some training data. A learned model can draw samples from the same distribution or evaluate the likelihoods of new data. Generative models are also useful for learning compact representation of images. However, we argue that these representations are not as useful for visual recognition. This is not surprising since the task of reconstructing images does not require the bottleneck representation to sort out meaningful data useful for recognition and discard the rest; on the contrary, it encourages preserving as much information as possible for reconstruction. In comparison, the goal in selfsupervised learning is to learn representations that are useful for recognition. The standard paradigm is to establish proxy tasks that don't require human-supervision but can provide signals useful for recognition. Due to the mismatch in goals of unsupervised learning for visual recognition and the representations learned from generative modeling, self-supervised learning is a more popular way of learning representations from unlabeled data. 
However, fundamental limitation of this self-supervised paradigm is that we need to define a proxy-task that can mimic the desired recognition. It is not always possible to establish such a task, nor are these tasks generalizable across recognition tasks. In this paper, we take the first steps towards enabling the unsupervised generative modeling approach of VAEs to learn representations useful for recognition. Our key hypothesis is that for a representation to be useful, it should capture just the interesting parts of the images, as opposed to everything in the images. What constitutes an interesting image part has been defined and studied in earlier works that pre-date the end-to-end trained deep network methods; ). Taking inspiration from these works, we propose a novel representation that only encodes such few parts of an image that are repetitive across the dataset, i.e., the patches that occur often in images. By avoiding reconstruction of the entire image our method can focus on regions that are repeating and consistent across many images. In an encoder-decoder based generative model, we constrain the encoder architecture to learn such repetitive parts -both in terms of representations for appearance of these parts (or patches in an image) and where these parts occur. We formulate this using variational auto-encoder (β-VAEs) , where we impose novel structure on the latent representations. We use discrete latents to model part presence or absence and continuous latents to model their appearance. We present this approach, PatchVAE, in Section 3 and demonstrate that it learns representations that are much better for recognition as compared to those learned by the standard β-VAEs . In addition, we propose in Section 3.4 that losses that favor foreground, which is more likely to contain repetitive patterns, in representations that are much better at recognition. In Section 4, we present on CIFAR100 , MIT Indoor Scene Recognition , Places , and ImageNet datasets. Our contributions are as follows: • We propose a novel patch-based bottleneck in the VAE framework that learns representations that can encode repetitive parts across images. • We demonstrate that our method, PatchVAE, learns unsupervised representations that are better suited for recognition in comparison to traditional VAEs. • We show that losses that favor foreground are better for unsupervised learning of representations for recognition. • We perform extensive ablation analysis to understand the importance of different aspects of the proposed PatchVAE architecture. Due to its potential impact, unsupervised learning (particularly for deep networks) is one of the most researched topics in visual recognition over the past few years. Generative models such as VAEs (; ; ;), PixelRNN (van den), PixelCNN , and their variants have proven effective when it comes to learning compressed representation of images while being able to faithfully reconstruct them as well as draw samples from the data distribution. GANs (; ; ;) on the other hand, while don't model the probability density explicitly, can still produce high quality image samples from noise. There has been work combining VAEs and GANs to be able to simultaneously learn image data distribution while being able to generate high quality samples from it . Convolution sparse coding is an alternative approach for reconstruction or image in-painting problems. 
Our work complements existing generative frameworks in that we provide a structured approach for VAEs that can learn beyond low-level representations. We show the effectiveness of the representations learned by our model by using them for standard visual recognition tasks. There has been a lot of work in interpreting or disentangling representations learned using generative models such as VAEs (; ;). However, there is little evidence of effectiveness of disentangled representations in visual recognition tasks. In our work, we focus on incorporating inductive biases in these generative models (e.g., VAEs) such that they can learn representations better suited for visual recognition tasks. A related, but orthogonal, line of work is self-supervised learning where a proxy task is designed to learn representation useful for recognition. These proxy tasks vary from simple tasks like arranging patches in an image in the correct spatial order (; and arranging frames from a video in correct temporal order , to more involved tasks like in-painting and context prediction . We follow the best practices from this line of work for evaluating the learned representations. Our encoder network computes a set of feature maps f using φ(x). This is followed by 2 independent single layer networks -bottom network generates part visibility parameters Q V. We combine Q V with output of top network to generate part appearance parameters Q A. We sample zvis and zapp to constructẑ as described in Section 3.2 which is input to the decoder network. We also visualize the corresponding priors for latents zapp and zvis in the dashed gray boxes. Our work builds upon VAE framework proposed by. We briefly review relevant aspects of the VAE framework and then present our approach. Standard VAE framework assumes a generative model for data where first a latent z is sampled from a prior p(z) and then the data is generated from a conditional distribution G(x|z). A variational approximation Q(z|x) to the true intractable posterior is introduced and the model is learned by minimizing the following negative variational lower bound (ELBO). Q(z|x) is often referred to as an encoder as it can be viewed as mapping data to the the latent space, while G(x|z) is referred to as a decoder (or generator) that can be viewed as mapping latents to the data space. Both Q and G are commonly paramterized as neural networks. Fig. 1a shows the commonly used VAE architecture. If the conditional G(x|z) takes a gaussian form, negative log likelihood in the first term of RHS of Eq. 1 becomes mean squared error between generator output x = G(x|z) and input data x. In the second term, prior p(z) is assumed to be a multi-variate normal distribution with zero-mean and diagonal covariance N (0, I) and the loss simplifies to When G and Q are differentiable, entire model can be trained with SGD using reparameterization trick . propose an extension for learning disentangled representation by incorporating a weight factor β for the KL Divergence term yielding VAE framework aims to learn a generative model for the images where the latents z represent the corresponding low dimensional generating factors. The latents z can therefore be treated as image representations that capture the necessary details about images. However, we postulate that representations produced by the standard VAE framework are not ideal for recognition as they are learned to capture all details, rather than capturing'interesting' aspects of the data and dropping the rest. 
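For reference, the β-VAE training objective used as the starting point here fits in a few lines. The sketch below is a generic PyTorch formulation with a diagonal-Gaussian posterior and L2 reconstruction; it is our own minimal rendering of the standard objective, not the authors' implementation, and the encoder/decoder modules are assumed to exist elsewhere.

```python
import torch

def beta_vae_loss(x, x_recon, mu, logvar, beta=1.0):
    """Negative ELBO: L2 reconstruction plus beta-weighted KL to N(0, I).

    mu, logvar parameterize the diagonal-Gaussian posterior Q(z|x) produced
    by the encoder; x_recon is the decoder output G(x|z).
    """
    recon = torch.sum((x_recon - x) ** 2, dim=tuple(range(1, x.dim()))).mean()
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1).mean()
    return recon + beta * kl

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps with eps ~ N(0, I) (reparameterization trick)."""
    std = torch.exp(0.5 * logvar)
    return mu + std * torch.randn_like(std)
```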
That VAE representations fall short for recognition is not surprising, since their formulation does not encourage learning semantic information. For learning semantic representations, in the absence of any relevant supervision (as is available in self-supervised approaches), inductive biases have to be introduced. Therefore, taking inspiration from works on unsupervised mid-level pattern discovery, we propose a formulation that encourages the encoder to only encode such few parts of an image that are repetitive across the dataset, i.e., the patches that occur often in images. Since the VAE framework provides a principled way of learning a mapping from image to latent space, we consider it ideal for our proposed extension. We chose β-VAEs for their simplicity and widespread use. In Section 3.2, we describe our approach in detail and in Section 3.4 propose a modification of the reconstruction error computation to bias the error term towards foreground high-energy regions (similar to the biased initial sampling of patterns in earlier mid-level discovery work). Given an image x, let f = φ(x) be a deterministic mapping that produces a 3D representation f of size h × w × d_e, with a total of L = h × w locations (grid-cells). We aim to encourage the encoder network to only encode parts of an image that correspond to highly repetitive patches. For example, a random patch of noise is unlikely to occur frequently, whereas patterns like faces, wheels, windows, etc. repeat across multiple images. In order to capture this intuition, we force the representation f to be useful for predicting frequently occurring parts in an image, and use just these predicted parts to reconstruct the image. We achieve this by transforming f to ẑ, which encodes a set of parts at a small subset of the L locations on the grid cells. We refer to ẑ as "patch latent codes" for an image. Next we describe how we re-tool the β-VAE framework to learn these local latent codes. We first describe our setup for a single part and follow it up with a generalization to multiple parts (Section 3.3). Image Encoding. Given the image representation f = φ(x), we would like to learn part representations at each grid location l (where l ∈ {1, ..., L}). A part is parameterized by its appearance z_app and its visibility z^l_vis (i.e., presence or absence of the part at grid location l). We use two networks, Q_A(z_app | f) and Q_V(z^l_vis | f), to estimate the posterior distributions of the part parameters z_app and z^l_vis respectively. Since the mapping f = φ(x) is deterministic, we can re-write these distributions as functions of the image x itself; given an image x, the encoder networks therefore estimate the posteriors over z_app and z^l_vis. Note that f is a deterministic feature map, whereas z_app and z^l_vis are stochastic. Image Decoding. We utilize a generator or decoder network G that, given z_vis and z_app, reconstructs the image. First, we sample a part appearance ẑ_app (d_p dimensional, continuous) and then sample part visibilities ẑ^l_vis (L dimensional, binary), one for each location l, from the posteriors. Next, we construct a 3D representation ẑ by placing ẑ_app at every location l where the part is present (i.e., ẑ^l_vis = 1). This can be implemented by a broadcasted product of ẑ_app and ẑ^l_vis. We refer to ẑ as the patch latent code. Again note that f is deterministic and ẑ is stochastic. Finally, a deconvolutional network takes ẑ as input and generates an image x̂.
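The feature assembly step described above, placing the sampled appearance at every grid location where the part is visible, is just a broadcasted product. The sketch below shows this for a single part with made-up tensor shapes; names and shapes are illustrative assumptions only.

```python
import torch

def assemble_patch_code(z_app, z_vis):
    """Broadcasted product that places the part appearance at visible locations.

    z_app: (B, d_p)     sampled appearance of the (single) part
    z_vis: (B, 1, h, w) sampled binary visibility map for that part
    returns (B, d_p, h, w), zero wherever the part is absent.
    """
    z_app = z_app[:, :, None, None]          # (B, d_p, 1, 1)
    return z_app * z_vis                     # broadcast over the h x w grid

# Toy shapes: batch of 4, part appearance of 16 dims on an 8 x 8 grid.
z_app = torch.randn(4, 16)
z_vis = (torch.rand(4, 1, 8, 8) < 0.1).float()
z_hat = assemble_patch_code(z_app, z_vis)
print(z_hat.shape)   # torch.Size([4, 16, 8, 8])
```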
This image generation process can be written aŝ Since all latent variables (z l vis for all l and z app) are independent of each other, they can be stacked as This enables us to use a simplified the notation (refer to and): Note that despite the additional structure, our model still resembles the setup of variational autoencoders. The primary difference arises from: use of discrete latents for part visibility, patchbased bottleneck imposing additional structure on latents, and feature assembly for generator. Training. We use the training setup of β-VAE and use the maximization of variational lower bound to train the encoder and decoder jointly (described in Section 3.1). The posterior Q A, which captures the appearance of a part, is assumed to be a zero-mean Normal distribution with diagonal covariance N (0, I). The posterior Q V, which captures the presence or absence a part, is assumed to be a Bernoulli distribution Bern z prior vis with prior z prior vis. Therefore, the ELBO for our approach can written as (refer to): where, the D KL term can be expanded as: Implementation details. As discussed in Section 3.1, the first and second terms of the RHS of can be trained using L2 reconstruction loss and reparameterization trick . In addition, we also need to compute KL Divergence loss for part visibility. Learning discrete probability distribution is a challenging task since there is no gradient defined to backpropagate reconstruction loss through the stochastic layer at decoder even when using the reparameterization trick. Therefore, we use the relaxed-bernoulli approximation for training part visibility distributions z, where (h, w) are spatial dimensions and d e is the number of channels. Therefore, the number of locations vis > 1, the part occurs at multiple locations in an image. Since all these locations correspond to same part, their appearance should be the same. To incorporate this, we take the weighted average of the part appearance feature at each location, weighted by the probability that the part is present. Since we use the probability values for averaging the is deterministic. This operation is encapsulated by the Q A encoder (refer to Figure 1b). During image generation, we sampleẑ app once and replicate it at each location whereẑ l vis = 1. During training, this forces the model to: only predictẑ l vis = 1 where similar looking parts occur, and learn a common representation for the part that occurs at these locations. Note that z app can be modeled as a mixture of distributions (e.g., mixture of gaussians) to capture complicated appearances. However, in this work we assume that the convolutional neural network based encoders are powerful enough to map variable appearance of semantic concepts to similar feature representations. Therefore, we restrict ourselves to a single gaussian distribution. Next we extend the framework described above to use multiple parts. To use N parts, we use N × 2 encoder networks Q vis parameterize the i th part. Again, this can be implemented efficiently as 2 networks by concatenating the outputs together. The image generator samplesẑ vis from the outputs of these encoder networks and constructsẑ (i). We obtain the final patch latent codeẑ by concatenating allẑ For this multiple part case, can be written as: Similarly, and can be written as: The training details and assumptions of posteriors follow the previous section. 
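Two pieces of the multi-part formulation lend themselves to short sketches: sampling the discrete visibilities with the relaxed-Bernoulli trick so gradients can flow, and pooling a single appearance vector per part as a visibility-weighted average over grid locations. The code below is a rough illustration under assumed tensor shapes, not the authors' implementation.

```python
import torch
from torch.distributions import RelaxedBernoulli

def sample_visibility(logits, temperature=1.0):
    """Differentiable sample of per-location part visibility.

    logits: (B, N, h, w) visibility logits for N parts on the feature grid.
    The relaxed-Bernoulli distribution lets gradients flow through the
    otherwise discrete visibility variables during training.
    """
    dist = RelaxedBernoulli(temperature=torch.tensor(temperature), logits=logits)
    return dist.rsample()                              # values in (0, 1)

def pooled_appearance(app_map, vis_prob, eps=1e-6):
    """Visibility-weighted average of per-location appearance features.

    app_map:  (B, N, d_p, h, w) appearance features for every part and location
    vis_prob: (B, N, h, w)      visibility probabilities
    returns   (B, N, d_p)       one appearance vector per part, shared across
                                all locations where that part is present.
    """
    w = vis_prob.unsqueeze(2)                          # (B, N, 1, h, w)
    num = (app_map * w).sum(dim=(-1, -2))              # (B, N, d_p)
    den = w.sum(dim=(-1, -2)) + eps                    # (B, N, 1)
    return num / den
```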
Figure 2: Concepts captured by parts: We visualize a few representative examples for several parts to qualitatively demonstrate the visual concepts captured by parts. For each part, we crop image patches centered on the part location where it is predicted to be present. Selected patches are sorted by part visibility probability as score. We have manually selected a diverse set from the top 50 occurrences from the training images. As visible, a single part may capture diverse set of concepts that are similar in shape or texture or occur in similar context, but belong to different categories. We show which categories the patches come from. The L2 reconstruction loss used for training β-VAEs (and other reconstruction based approaches) gives equal importance to each region of an image. This might be reasonable for tasks like image compression and image de-noising. However, for the purposes of learning semantic representations, not all regions are equally important. For example, "sky" and "walls" occupy large portions of an image, whereas concepts like "windows," "wheels,", "faces" are comparatively smaller, but arguably more important. To incorporate this intuition, we use a simple and intuitive strategy to weigh the regions in an image in proportion to the gradient energy in the region. More concretely, we compute laplacian of an image to get the intensity of gradients per-pixel and average the gradient magnitudes in 8 × 8 local patches. The weight multiplier for the reconstruction loss of each 8 × 8 patch in the image is proportional to the average magnitude of the patch. All weights are normalized to sum to one. We refer to this as weighted loss (L w). Note that this is similar to the gradient-energy biased sampling of mid-level patches used in;. In Appendix 6.1, we show examples of weight masks for some of the images. In addition, we also consider an adversarial training strategy from GANs to train VAEs as proposed by , where the discriminator network from GAN implicitly learns to compare images and gives a more abstract reconstruction error for the VAE. We refer to this variant by using'GAN' suffix in experiments. In Section 4, we demonstrate that the proposed weighted loss (L w) is complementary to the discriminator loss from adversarial training, and these losses in better recognition capabilities for both β-VAE and PatchVAE. Datasets. We evaluate our proposed model on CIFAR100 , MIT Indoor Scene Recognition , and Places datasets. Details of these datasets can be found in Appendix 6.2. Learning paradigm. In order to evaluate the utility of features learned for recognition, we setup the learning paradigm as follows: we will first train the model in an unsupervised manner on all the images other than test set images. After that, we discard the generator network and use only part of the encoder network φ(x) to train a supervised model on the classification task of the respective dataset. We study different training strategies for the classification stage as discussed later. Training details. In all experiments, we use the following architectures. For CIFAR100, Indoor67, and Place205, φ(x) has a conv layer followed by two residual blocks . For ImageNet, φ(x) is a ResNet18 model (a conv layer followed by four residual blocks). For all datasets, Q A and Q V have a single conv layer each. For classification, we start from φ(x), and add a fully-connected layer with 512 hidden units and a final fully-connected layer as classifier. More details can be found in Appendix 6.2 and 6.3. 
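Returning to the weighted reconstruction loss L_w of Section 3.4, one plausible implementation is sketched below: an image Laplacian gives per-pixel gradient magnitudes, which are averaged over 8 × 8 patches and normalized to sum to one before weighting the per-patch reconstruction error. The Laplacian kernel and the function names are assumptions, not taken from the original code.

import torch
import torch.nn.functional as F

def reconstruction_weights(x, patch=8):
    # x: (B, C, H, W). Returns (B, 1, H/patch, W/patch) weights proportional to the average
    # gradient energy (Laplacian magnitude) of each patch, normalized to sum to 1 per image.
    gray = x.mean(dim=1, keepdim=True)
    lap = torch.tensor([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]],
                       device=x.device, dtype=x.dtype).view(1, 1, 3, 3)
    energy = F.conv2d(gray, lap, padding=1).abs()
    w = F.avg_pool2d(energy, kernel_size=patch)
    return w / (w.sum(dim=(2, 3), keepdim=True) + 1e-8)

def weighted_l2_loss(x_recon, x, patch=8):
    # Per-patch squared error, re-weighted by the gradient-energy mask L_w.
    w = reconstruction_weights(x, patch)
    per_pixel = (x_recon - x).pow(2).mean(dim=1, keepdim=True)   # (B, 1, H, W)
    per_patch = F.avg_pool2d(per_pixel, kernel_size=patch)       # (B, 1, H/patch, W/patch)
    return (w * per_patch).sum(dim=(2, 3)).mean()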
During the unsupervised learning part of training, all methods are trained for 90 epochs for CIFAR100 and Indoor67, 2 epochs for Places205, and 30 epochs for ImageNet dataset. All methods use ADAM optimizer for training, with initial learning rate of 1 × 10 −4 and a minibatch size of 128. For relaxed bernoulli in Q V, we start with the temperature of 1.0 with an annealing rate of 3 × 10 −5 (details in ). For training the classifier, all methods use stochastic gradient descent (SGD) with momentum with a minibatch size of 128. Initial learning rate is 1 × 10 −2 and we reduce it by a factor of 10 every 30 epochs. All experiments are trained for 90 epochs for CIFAR100 and Indoor67, 5 epochs for Places205, and 30 epochs for ImageNet datasets. Table 1: Classification on CIFAR100, Indoor67, and Places205. We initialize the classification model with the representations φ(x) learned from unsupervised learning task. The model φ(x) comprises of a conv layer followed by two residual blocks (each having 2 conv layers). First column (called 'Conv1') corresponds to Top-1 classification accuracy with pre-trained model with the first conv layer frozen, second and third columns correspond to with 3 conv layers and 5 conv layers frozen respectively. Details in Section 4.1. Baselines. We use the β-VAE model (Section 3.1) as our primary baseline. In addition, we use weighted loss and discriminator loss ing in the β-VAE-* family of baselines. We also compare against a BiGAN model from. We use similar backbone architectures for encoder/decoder (and discriminator if present) across all methods, and tried to keep the number of parameters in different approaches comparable to the best of our ability. Exact architecture details can be found in Appendix 6.3. In Table 1, we report the top-1 classification on CIFAR100, Indoor67, and Places205 datasets for all methods with different training strategies for classification. First, we keep all the pre-trained weights in φ(x) from the unsupervised task frozen and only train the two newly added conv layers in the classification network (reported under column 'Conv5'). We notice that our method (with different losses) generally outperforms the β-VAE counterpart by a healthy margin. This shows that the representations learned by PatchVAE framework are better for recognition compared to β-VAEs. Moreover, better reconstruction losses ('GAN' and L w) generally improve both β-VAE and PatchVAE, and are complementary to each other. Next, we fine-tune the last residual block along with the two conv layers ('Conv3' column). We observe that PatchVAE performs better than VAE under all settings except the for CIFAR100 with just L2 loss. However, when using better reconstruction losses, the performance of PatchVAE improves over β-VAE. Similarly, we fine-tune all but the first conv layer and report the in'Conv1' column. Again, we notice similar trends, where our method generally performs better than β-VAE on Indoor67 and Places205 dataset, but β-VAE performs better CIFAR100 by a small margin. When compared to BiGAN, PatchVAE representations are better on all datasets ('Conv5') by a huge margin. However, when fine-tuning the pre-trained weights, BiGAN performs better on two out of four datasets. We also report using pre-trained weights in φ(x) using supervised ImageNet ImageNet Results. Finally, we report on the large-scale ImageNet benchmark in Table 2. For these experiments, we use ResNet18 architecture for all methods. All weights are first learned using the unsupervised tasks. 
Then, we fine-tune the last two residual blocks and train the two newly added conv layers in the classification network (therefore, first conv layer and the following two residual blocks are frozen). We notice that PatchVAE framework outperforms β-VAE under all settings, and the proposed weighted loss helps both approaches. Finally, the last row in Table 2 reports classification of same architecture randomly initialized and trained end-to-end on ImageNet using supervised training for comparison. We study the impact of various hyper-parameters used in our experiments. For the purpose of this evaluation, we follow a similar approach as in the'Conv5' column of Table 1 and all hyperparameters from the previous section. We use CIFAR100 and Indoor67 datasets for ablation analysis. Maximum number of patches. Maximum number of parts N used in our framework. Depending on the dataset, higher value of N can provide wider pool of patches to pick from. However, it can also make the unsupervised learning task harder, since in a minibatch of images, we might not get too many repeat patches. Table 3 (left) shows the effect of N on CIFAR100 and Indoor67 datasets. We observe that while increasing number of patches improves the discriminative power in case of CIFAR100, it has little or negative effect in case of Indoor67. A possible reason for this decline in performance for Indoor67 can be smaller size of the dataset (i.e., fewer images to learn). Number of hidden units for a patch appearanceẑ app. Next, we study the impact of the number of channels in the appearance featureẑ app for each patch (d p). This parameter reflects the capacity of individual patch's latent representation. While this parameter impacts the reconstruction quality of images. We observed that it has little or no effect on the classification performance of the base features. Results are summarized in Table 3 (right) for both CIFAR100 and Indoor67 datasets. Prior probability for patch visibility z prior vis. In all our experiments, prior probability for a patch is fixed to 1/N, i.e., inverse of maximum number of patches. The intuition is to encourage each location on visibility maps to fire for at most one patch. Increasing this patch visibility prior will allow all patches to fire at the same location. While this would make the reconstruction task easier, it will become harder for individual patches to capture anything meaningful. Table 4 shows the deterioration of classification performance on increasing z prior vis. Patch visibility loss weight β vis. The weight for patch visibility KL Divergence has to be chosen carefully. If β vis is too low, more patches can fire at same location and this harms the the learning capability of patches; and if β vis is too high, decoder will not receive any patches to reconstruct from and both reconstruction and classification will suffer. Table 5 summarizes the impact of varying β vis. We presented a patch-based bottleneck in a VAE framework that encourages learning useful representations for recognition. Our method, PatchVAE, constrains the encoder architecture to only learn patches that are repetitive and consistent in images as opposed to learning everything, and therefore in representations that perform much better for recognition tasks compared to vanilla VAEs. We also demonstrate that losses that favor high-energy foreground regions of an image are better for unsupervised learning of representations for recognition. 
6 APPENDIX 6.1 VISUALIZATION OF WEIGHTED LOSS Figure 3 shows an illustration of the reconstruction loss L w proposed in Section 3.4. Notice that in first column, guitar has more weight that rest of the image. Similarly in second, fourth and sixth columns that train, painting, and people are respectively weighed more heavily by L w than rest of the image; thus favoring capturing the foreground regions. Image Laplacian Figure 3: Masks used for weighted reconstruction loss Lw. First row contains images randomly samples from MIT Indoor datatset. Second and third rows have the corresponding image laplacians and final reconstruction weight masks respectively. In the last row, we take the product of first and third row to highlight which parts of image are getting more attention while reconstruction. 6.2 DATASET AND TRAINING DETAILS CIFAR100 consists of 60000 32 × 32 color images in 100 classes, with 600 images per class. There are 50000 training images and 10000 test images. Indoor dataset contains 67 categories, and a total of 15620 images ('Indoor67'). Train and test subsets consist of 80 and 20 images per class respectively. Places dataset has 2.5 millions of images with 205 categories ('Places205'). Finally, we report on the large-scale ImageNet dataset, which has ∼1.28M training and 50k validation images spanning 1000 categories. The generator network has two deconv layers with batchnorm and a final deconv layer with tanh activation. When training with'GAN' loss, the additional discriminator has four conv layers, two of with have batchnorm. In this section, we share the exact architectures used in various experiments. As discussed in Section 4, we evaluated our proposed model on CIFAR100, Indoor67, and Places205 datasets. We resize and center-crop the images such that input image size for CIFAR100 datasets is 32 × 32 × 3 while for Indoor67 and Places205 datasets input image size is 64 × 64 × 3. PatchVAE can treat images of various input sizes in exactly same way allowing us to keep the architecture same for different datasets. In case of VAE and BiGAN however, we have to go through a fixed size bottleneck layer and hence architectures need to be a little different for different input image sizes. Wherever possible, we have tried to keep the number of parameters in different architectures comparable. Tables 6 and 7 show the architectures for encoders used in different models. In the unsupervised learning task, encoder comprises of a fixed neural network backbone φ(x), that given an image of size h × w × 3 generated feature maps of size This backbone architecture is common to different models discussed in the paper and consists of a single conv layer followed by 2 residual blocks. We refer to this φ(x) as Resnet-9 and it is described as Conv1-5 layers in Table 10. Rest of the encoder architecture varies depending on the model in consideration and is described in the tables below. Tables 8 and 9 show the architectures for decoders used in different models. We use a pyramid like network for decoder where feature map size is doubled in consecutive layers, while number of channels is halved. Final non-linearity used in each decoder is tanh.
r1x1kJHKDH
A patch-based bottleneck formulation in a VAE framework that learns unsupervised representations better suited for visual recognition.
Vanishing and exploding gradients are two of the main obstacles in training deep neural networks, especially in capturing long range dependencies in recurrent neural networks (RNNs). In this paper, we present an efficient parametrization of the transition matrix of an RNN that allows us to stabilize the gradients that arise in its training. Specifically, we parameterize the transition matrix by its singular value decomposition (SVD), which allows us to explicitly track and control its singular values. We attain efficiency by using tools that are common in numerical linear algebra, namely Householder reflectors for representing the orthogonal matrices that arise in the SVD. By explicitly controlling the singular values, our proposed svdRNN method allows us to easily solve the exploding gradient problem and we observe that it empirically solves the vanishing gradient issue to a large extent. We note that the SVD parameterization can be used for any rectangular weight matrix, hence it can be easily extended to any deep neural network, such as a multi-layer perceptron. Theoretically, we demonstrate that our parameterization does not lose any expressive power, and show how it potentially makes the optimization process easier. Our extensive experimental also demonstrate that the proposed framework converges faster, and has good generalization, especially when the depth is large. Deep neural networks have achieved great success in various fields, including computer vision, speech recognition, natural language processing, etc. Despite their tremendous capacity to fit complex functions, optimizing deep neural networks remains a contemporary challenge. Two main obstacles are vanishing and exploding gradients, that become particularly problematic in Recurrent Neural Networks (RNNs) since the transition matrix is identical at each layer, and any slight change to it is amplified through recurrent layers BID3 ).Several methods have been proposed to solve the issue, for example, Long Short Term Memory (LSTM) BID8 ) and residual networks BID7 ). Another recently proposed class of methods is designed to enforce orthogonality of the square transition matrices, such as unitary and orthogonal RNNs (oRNN) BID1; BID13 ). However, while these methods solve the exploding gradient problem, they limit the expressivity of the network. In this paper, we present an efficient parametrization of weight matrices that arise in a deep neural network, thus allowing us to stabilize the gradients that arise in its training, while retaining the desired expressive power of the network. In more detail we make the following contributions:• We propose a method to parameterize weight matrices through their singular value decomposition (SVD). Inspired by BID13 ), we attain efficiency by using tools that are common in numerical linear algebra, namely Householder reflectors for representing the orthogonal matrices that arise in the SVD. The SVD parametrization allows us to retain the desired expressive power of the network, while enabling us to explicitly track and control singular values.• We apply our SVD parameterization to recurrent neural networks to exert spectral constraints on the RNN transition matrix. Our proposed svdRNN method enjoys similar space and time complexity as the vanilla RNN. 
We empirically verify the superiority of svdRNN over RNN/oRNN, in some case even LSTMs, over an exhaustive collection of time series classification tasks and the synthetic addition and copying tasks, especially when the network depth is large.• Theoretically, we show how our proposed SVD parametrization can make the optimization process easier. Specifically, under a simple setting, we show that there are no spurious local minimum for the linear svdRNN in the population risk.• Our parameterization is general enough to eliminate the gradient vanishing/exploding problem not only in RNNs, but also in various deep networks. We illustrate this by applying SVD parametrization to problems with non-square weight matrices, specifically multi-layer perceptrons (MLPs) and residual networks. We now present the outline of our paper. In Section 2, we discuss related work, while in Section 3 we introduce our SVD parametrization and demonstrate how it spans the whole parameter space and does not limit expressivity. In Section 4 we propose the svdRNN model that is able to efficiently control and track the singular values of the transition matrices, and we extend our parameterization to non-square weight matrices and apply it to MLPs in Section 5. Section 6 provides the optimization landscape of svdRNN by showing that linear svdRNN has no spurious local minimum. Experimental on MNIST and a popular time series archive are present in Section 7. Finally, we present our and future work in Section 8. Numerous approaches have been proposed to address the vanishing and exploding gradient problem. Long short-term memory (LSTM) BID8 ) attempts to address the vanishing gradient problem by adding additional memory gates. Residual networks BID7 ) pass the original input directly to the next layer in addition to the original layer output. BID14 performs gradient clipping, while BID15 applies spectral regularization to the weight matrices. Other approaches include introducing L 1 or L 2 penalization on successive gradient norm pairs in back propagation BID15 ).Recently the idea of restricting transition matrices to be orthogonal has drawn some attention. BID12 proposed initializing recurrent transition matrices to be identity or orthogonal (IRNN). This strategy shows better performance when compared to vanilla RNN and LSTM. However, there is no guarantee that the transition matrix is close to orthogonal after a few iterations. The unitary RNN (uRNN) algorithm proposed in BID1 parameterizes the transition matrix with reflection, diagonal and Fourier transform matrices. By construction, uRNN ensures that the transition matrix is unitary at all times. Although this algorithm performs well on several small tasks, BID19 showed that uRNN only covers a subset of possible unitary matrices and thus detracts from the expressive power of RNN. An improvement over uRNN, the orthogonal RNN (oRNN), was proposed by BID13. oRNN uses products of Householder reflectors to represent an orthogonal transition matrix, which is rich enough to span the entire space of orthogonal matrices. Meanwhile, BID18 empirically demonstrate that the strong constraint of orthogonality limits the model's expressivity, thereby hindering its performance. 
Therefore, they parameterize the transition matrix by its SVD, W = U ΣV (factorized RNN) and restrict Σ to be in a range close to 1; however, the orthogonal matrices U and V are updated by geodesic gradient descent using the Cayley transform, thereby ing in time complexity cubic in the number of hidden nodes which is prohibitive for large scale problems. Motivated by the shortcomings of the above methods, our work in this paper attempts to answer the following questions: Is there an efficient way to solve the gradient vanishing/exploding problem without hurting expressive power?As brought to wide notice in BID7, deep neural networks should be able to preserve features that are already good. BID6 consolidate this point by showing that deep linear residual networks have no spurious local optima. In our work, we broaden this concept and bring it to the area of recurrent neural networks, showing that each layer is not necessarily near identity, but being close to orthogonality suffices to get a similar . Generalization is a major concern in training deep neural networks. BID2 provide a generalization bound for neural networks by a spectral Lipschitz constant, namely the product of spectral norm of each layer. Thus, our scheme of restricting the spectral norm of weight matrices reduces generalization error in the setting of BID2. As supported by the analysis in BID5, since our SVD parametrization allows us to develop an efficient way to constrain the weight matrix to be a tight frame BID17 ), we consequently are able to reduce the sensitivity of the network to adversarial examples. The SVD of the transition matrix W ∈ R n×n of an RNN is given by W = U ΣV T, where Σ is the diagonal matrix of singular values, and U, V ∈ R n×n are orthogonal matrices, i.e., BID16 ). During the training of an RNN, our proposal is to maintain the transition matrix in its SVD form. However, in order to do so efficiently, we need to maintain the orthogonal matrices U and V in compact form, so that they can be easily updated by forward and backward propagation. In order to do so, as in BID13, we use a tool that is commonly used in numerical linear algebra, namely Householder reflectors (which, for example, are used in computing the QR decomposition of a matrix). DISPLAYFORM0 Given a vector u ∈ R k, k ≤ n, the n × n Householder reflector H n k (u) is defined as: DISPLAYFORM1The Householder reflector is clearly a symmetric matrix, and it can be shown that it is orthogonal, i.e., H 2 = I . Further, when u = 0, it has n−1 eigenvalues that are 1, and one eigenvalue which is −1 (hence the name that it is a reflector). In practice, to store a Householder reflector, we only need to store u ∈ R k rather than the full matrix. Given a series of vectors {u i} n i=k where u k ∈ R k, we define the map: DISPLAYFORM2 where the right hand side is a product of Householder reflectors, yielding an orthogonal matrix (to make the notation less cumbersome, we remove the superscript from H n k for the rest of this section). Theorem 1. The image of M 1 is the set of all n × n orthogonal matrices. The proof of Theorem 1 is an easy extension of the Householder QR factorization Theorem, and is presented in Appendix A. Although we cannot express all n × n matrices with M k, any W ∈ R n×n can be expressed as the product of two orthogonal matrices U, V and a diagonal matrix Σ, i.e. by its SVD: DISPLAYFORM3, we finally define our proposed SVD parametrization: DISPLAYFORM4 Theorem 2. The image of M 1,1 is the set of n × n real matrices. i.e. 
DISPLAYFORM5 The proof of Theorem 2 is based on the singular value decomposition and Theorem 1, and is presented in Appendix A. The astute reader might note that M 1,1 seemingly maps an input space of n 2 + 2n dimensions to a space of n 2 dimensions; however, since H n k (u k) is invariant to the norm of u k, the input space also has exactly n 2 dimensions. Although Theorems 1 and 2 are simple extensions of well known linear algebra , they ensure that our parameterization has the ability to represent any matrix and so the full expressive power of the RNN is retained. Theorem 3. The image of M k1,k2 includes the set of all orthogonal n×n matrices if k 1 +k 2 ≤ n+2.Theorem 3 indicates that if the total number of reflectors is greater than n: (n − k 1 + 1) + (n − k 2 + 1) ≥ n, then the parameterization covers all orthogonal matrices. Note that when fixing σ = 1, DISPLAYFORM6 In this section, we apply our SVD parameterization to RNNs and describe the ing svdRNN algorithm in detail. Given a hidden state vector from the previous step h (t−1) ∈ R n and input x (t−1) ∈ R ni, RNN computes the next hidden state h (t) and output vector o (t) ∈ R no as: DISPLAYFORM0 DISPLAYFORM1 In svdRNN we parametrize the transition matrix W ∈ R n×n using m 1 + m 2 Householder reflectors as: DISPLAYFORM2 This parameterization gives us several advantages over the regular RNN. First, we can select the number of reflectors m 1 and m 2 to balance expressive power versus time and space complexity. By Theorem 2, the choice m 1 = m 2 = n gives us the same expressive power as vanilla RNN. Notice oRNN could be considered a special case of our parametrization, since when we set m 1 + m 2 ≥ n and σ = 1, we can represent all orthogonal matrices, as proven by Theorem 3. Most importantly, we are able to explicitly control the singular values of the transition matrix. In most cases, we want to constrain the singular values to be within a small interval near 1. The most intuitive method is to clip the singular values that are out of range. Another approach would be to initialize all singular values to 1, and add a penalty term σ − 1 2 to the objective function. Here, we have applied another parameterization of σ proposed in BID18: DISPLAYFORM3 where f is the sigmoid function andσ i is updated from u i, v i via stochastic gradient descent. The above allows us to constrain σ i to be within [σ * − r, σ * + r]. In practice, σ * is usually set to 1 and r 1. Note that we are not incurring more computation cost or memory for the parameterization. For regular RNN, the number of parameters is (n o + n i + n + 1)n, while for svdRNN it is (n o + DISPLAYFORM4 . In the extreme case where m 1 = m 2 = n, it becomes (n o + n i + n + 3)n. Later we will show that the computational cost of svdRNN is also of the same order as RNN in the worst case. In forward propagation, we need to iteratively evaluate h (t) from t = 0 to L using. The only different aspect from a regular RNN in the forward propagation is the computation of W h (t−1). Note that in svdRNN, W is expressed as product of m 1 + m 2 Householder matrices and a diagonal matrix. Thus W h (t−1) can be computed iteratively using (m 1 + m 2) inner products and vector additions. Denotingû k = 0 n−k u k, we have: DISPLAYFORM0 Thus, the total cost of computing W h (t−1) is O((m 1 + m 2)n) floating point operations (flops). Detailed analysis can be found in Section 4.2. 
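The forward computation above can be sketched in PyTorch as follows: a helper applies a product of Householder reflectors to a batch of vectors in O(mn) time, and a recurrent cell keeps W in SVD form with singular values constrained to [σ* − r, σ* + r] through a sigmoid, one common form of the constrained parameterization described above. Class and argument names are illustrative, the defaults m1 = m2 = 8 follow the experimental setting reported later, and the exact constants may differ from the paper's implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

def householder_apply(h, u):
    # Apply H_k^n(u): identity on the first n-k coordinates, I_k - 2*u*u^T/||u||^2 on the last k.
    # h: (B, n) batch of vectors, u: (k,) reflector vector.
    n, k = h.shape[-1], u.shape[0]
    head, tail = h[..., :n - k], h[..., n - k:]
    alpha = 2.0 * (tail @ u) / (u @ u).clamp_min(1e-12)              # (B,)
    return torch.cat([head, tail - alpha.unsqueeze(-1) * u], dim=-1)

def orthogonal_apply(h, us, transpose=False):
    # us = [u_n, u_{n-1}, ...] parameterizes M = H_n(u_n) H_{n-1}(u_{n-1}) ... ;
    # M h applies the rightmost reflector first, M^T h applies them in the reverse order.
    order = us if transpose else list(reversed(us))
    for u in order:
        h = householder_apply(h, u)
    return h

class SvdRNNCell(nn.Module):
    # One svdRNN step: h_t = phi(W h_{t-1} + M x_{t-1} + b), with W = U diag(sigma) V^T.
    def __init__(self, n_in, n_hid, m1=8, m2=8, sigma_star=1.0, r=0.01):
        super().__init__()
        self.sigma_star, self.r = sigma_star, r
        self.us = nn.ParameterList([nn.Parameter(torch.randn(n_hid - i)) for i in range(m1)])
        self.vs = nn.ParameterList([nn.Parameter(torch.randn(n_hid - i)) for i in range(m2)])
        self.sigma_hat = nn.Parameter(torch.zeros(n_hid))   # unconstrained singular-value parameters
        self.inp = nn.Linear(n_in, n_hid)                   # input-to-hidden map M and bias b

    def forward(self, x, h):
        # singular values constrained to [sigma_star - r, sigma_star + r]
        sigma = self.sigma_star + self.r * (2.0 * torch.sigmoid(self.sigma_hat) - 1.0)
        t = orthogonal_apply(h, list(self.vs), transpose=True)   # V^T h
        t = orthogonal_apply(sigma * t, list(self.us))           # U diag(sigma) V^T h
        return F.leaky_relu(t + self.inp(x))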
Let L({u i}, {v i}, σ, M, Y, b) be the loss or objective function, DISPLAYFORM1 DISPLAYFORM2 DISPLAYFORM3 Back propagation for svdRNN requires DISPLAYFORM4 ∂Σ (t) and DISPLAYFORM5 ∂h (t−1). These partial gradients can also be computed iteratively by computing the gradient of each Householder matrix at a time. We drop the superscript (t) now for ease of exposition. Givenĥ = H k (u k)h and g = ∂L ∂ĥ, we have DISPLAYFORM6 Details of forward and backward propagation can be found in Appendix (B). One thing worth noticing is that the oRNN method in BID13 actually omitted the last term in by assuming that u k are fixed. Although the scaling of u k in the Householder transform does not affect the transform itself, it does produce different gradient update for u k even if it is scaled to norm 1 afterwards. TAB0 gives the time complexity of various algorithms. Hprod and Hgrad are defined in Algorithm 2 3 (see Appendix (B)). Algorithm 2 needs 6k flops, while Algorithm 3 uses (3n + 10k) flops. Since u k 2 only needs to be computed once per iteration, we can further decrease the flops to 4k and (3n + 8k). Also, in back propagation we can reuse α in forward propagation to save 2k flops. flops DISPLAYFORM0 In this section, we extend the parameterization to non-square matrices and use Multi-Layer Perceptrons(MLP) as an example to illustrate its application to general deep networks. For any weight matrix W ∈ R m×n (without loss of generality m ≤ n), its reduced SVD can be written as: DISPLAYFORM0 DISPLAYFORM1. Thus we can extend the SVD parameterization for any non-square matrix: DISPLAYFORM2 whereΣ = (diag(σ)|0) if m < n and (diag(σ)|0) otherwise. Next we show that we only need 2 min(m, n) reflectors (rather than m + n) to parametrize any m × n matrix. By the definition of H n k, we have the following lemma: DISPLAYFORM3 Here V *,i indicates the ith column of matrix V. According to Lemma 1, we only need at most first m Householder vectors to express V L, which in the following Theorem: Theorem 4. If m ≤ n, the image of M m,n 1,n−m+1 is the set of all m × n matrices; else the image of M m,n n−m+1,1 is the set of all m × n matrices. Similarly if we constrain u i, v i to have unit length, the input space dimensions of M m,n 1,n−m+1 and M m,n m−n+1,1 are both mn, which matches the output dimension. Thus we extend Theorem 2 to the non-square case, which enables us to apply SVD parameterization to not only the RNN transition matrix, but also to general weight matrices in various deep learning models. For example, the Multilayer perceptron (MLP) model is a class of feedforward neural network with fully connected layers: DISPLAYFORM4 say n t < n t−1, we have: DISPLAYFORM5 nt−1 (v nt−1). We can use the same forward/backward propagation algorithm as described in Algorithm 1. Besides RNN and MLP, SVD parameterization method also applies to more advanced frameworks, such as Residual networks and LSTM, which we will not describe in detail here. Since we can control and upper bound the singular values of the transition matrix in svdRNN, we can clearly eliminate the exploding gradient problem. In this section, we now analytically illustrate the advantages of svdRNN with lower-bounded singular values from the optimization perspective. For the theoretical analysis in this section, we will limit ourselves to a linear recurrent neural network, i.e., an RNN without any activation. Linear recurrent neural network. For simplicity, we follow a setting similar to BID6. 
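Before turning to the optimization analysis, here is a hedged sketch of the non-square extension applied to a fully connected layer, following the reduced-SVD form W = U (diag(σ)|0) V^T for the n_out ≤ n_in case and reusing the Householder helpers from the previous sketch. The class name and the default number of reflectors are illustrative choices rather than the paper's exact settings.

import torch
import torch.nn as nn

class SvdLinear(nn.Module):
    # SVD-parameterized fully connected layer for n_out <= n_in:
    # y = U (sigma * (V^T x)[:n_out]) + b, with U, V products of Householder reflectors.
    def __init__(self, n_in, n_out, m_u=None, m_v=None):
        super().__init__()
        assert n_out <= n_in, "shown for the n_out <= n_in case; the other case is symmetric"
        m_u = m_u if m_u is not None else n_out   # Theorem 4: about 2*min(m, n) reflectors suffice
        m_v = m_v if m_v is not None else n_out
        self.us = nn.ParameterList([nn.Parameter(torch.randn(n_out - i)) for i in range(m_u)])
        self.vs = nn.ParameterList([nn.Parameter(torch.randn(n_in - i)) for i in range(m_v)])
        self.sigma = nn.Parameter(torch.ones(n_out))
        self.bias = nn.Parameter(torch.zeros(n_out))

    def forward(self, x):                                        # x: (B, n_in)
        t = orthogonal_apply(x, list(self.vs), transpose=True)   # V^T x        -> (B, n_in)
        t = self.sigma * t[..., :self.sigma.shape[0]]            # (diag(sigma)|0) V^T x -> (B, n_out)
        return orthogonal_apply(t, list(self.us)) + self.bias    # U (.) + b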
For compact presentation, we stack the input data as X ∈ R n×t, where X = x |x | · · · |x (t−1), and transition weights as W ∈ R n×nt where W = W |W 2 | · · · |W t. Then we can simplify the output as: DISPLAYFORM0 By absorbing M and b in each data x (t) and assuming h = 0, we further simplify the output as: DISPLAYFORM1 Suppose the input data X ∼ D, and assume its underlying relation to the output is y = Avec(X)+η, where A ∈ R n×nt and residue η ∈ R n satisfies E X ∼D [η|X] = 0. We consider the individual loss: DISPLAYFORM2 2. Claim 1. With linear recurrent neural networks, the population risk DISPLAYFORM3 DISPLAYFORM4 Therefore when ∇ W R[W] = 0 suffices R(W) = R *, meaning W reaches the global minimum. Theorem 5 potentially explains why our system is easier to optimize, since with our scheme of SVD parametrization, we have the following corollary. Corollary 1. With the update rule in, linear svdRNNs have no spurious local minimum. While the above analysis lends further credence to our observed experimental , we leave it to future work to perform a similar analysis in the presence of non-linear activation functions. In this section, we provide empirical evidence that shows the advantages of SVD parameterization in both RNNs and MLPs. For RNN models, we compare our svdRNN algorithm with (vanilla) RNN, IRNN(Le et al. FORMULA2), oRNN BID13 ) and LSTM BID8 ). The transition matrix in IRNN is initialized to be orthogonal while other matrices are initialized by sampling from a Gaussian distribution. For MLP models, we implemented vanilla MLP, Residual Network (ResNet) BID7 ) and used SVD parameterization for both of them. We used a residual block of two layers in ResNet. In most cases leaky Relu is used as activation function, except for LSTM, where leaky Relu will drastically harm the performance. To train these models, we applied Adam optimizer with stochastic gradient descent BID11 ). These models are implemented with Theano (Al-Rfou et al. FORMULA2). In this experiment, we focus on the time series classification problem, where time series are fed into RNN sequentially, which then tries to predict the right class upon receiving the sequence end BID10 ). The dataset we choose is the largest public collection of classlabeled time-series with widely varying length, namely, the UCR time-series collection from BID4 2. We present the test accuracy on 20 datasets with RNN, LSTM, oRNN and svdRNN in TAB3 (Appendix C) and Figure 1. In all experiments, we used hidden dimension n h = 32, and chose total number of reflectors for oRNN and svdRNN to be m = 16 (for svdRNN m 1 = m 2 = 8).We choose proper depth t as well as input size n i. Given sequence length L, since tn i = L, we choose n i to be the maximum divisor of L that satisfies depth ≤ √ L. To have a fair comparison (a) (b) (c) Figure 1: Performance comparisons of the RNN based models on three UCR datasets.of how the proposed principle itself influences the training procedure, we did not use dropout in any of these models. As illustrated in the optimization process in Figure 1, this ed in some overfitting (see (a) CBF), but on the other hand it shows that svdRNN is able to prevent overfitting. This supports our claim that since generalization is bounded by the spectral norm of the weights BID2, svdRNN will potentially generalize better than other schemes. This phenomenon is more drastic when the depth is large (e.g. ArrowHead(251 layers) and FaceAll(131 layers)), since regular RNN, and even LSTM, have no control over the spectral norms. 
Also note that there are substantially fewer parameters in oRNN and svdRNN as compared to LSTM. In this experiment, we compare different models on the MNIST image dataset. The dataset was split into a training set of 60000 instances and a test set of 10000 instances. The 28 × 28 MNIST pixels are flattened into a vector and then traversed by the RNN models. Table 2 shows accuracy scores across multiple We tested different models with different network depth as well as width. Figure 2(a)(b) shows the test accuracy on networks with 28 and 112 layers (20 and 128 hidden dimensions) respectively. It can be seen that the svdRNN algorithms have the best performance and the choice of r (the amount that singular values are allowed to deviate from 1) does not have much influence on the final precision. Also we explored the effect of different spectral constraints and explicitly tracked the spectral margin (max i |σ i − 1|) of the transition matrix. Intuitively, the influence of large spectral margin should increase as the network becomes deeper. Figure 2(d) shows the spectral margin of different RNN models. Although IRNN has small spectral margin at first few iterations, it quickly deviates from orthogonal and cannot match the performance of oRNN and svdRNN. Figure 2(e) shows the magnitude of first layer gradient ∂L ∂h 2. RNN suffers from vanishing gradient at first 50k iterations while oRNN and svdRNN are much more stable. Note that LSTM can perform relatively well even though it has exploding gradient in the first layer. We also tested RNN and svdRNN with different amount of non-linearity, as shown in Figure 2 (c). This is achieved by adjusting the leak parameter in leaky Relu: f (x) = max(leak · x, x). With leak = 1.0, it reduces to the identity map and when leak = 0 we are at the original Relu function. From the figures, we show that svdRNN is resistant to different amount of non-linearity, namely converge faster and achieve higher accuracy invariant to the amount of the leak factor. To explore the reason underneath, we illustrate the gradient in Figure 2 (f), and find out svdRNN could eliminate the gradient vanishing problem on all circumstances, while RNN suffers from gradient vanishing when non-linearity is higher. FORMULA2 256(m = 32) ≈ 11k 97.2 RNN BID18 128 ≈ 35k 94.1 uRNN BID1 ) 512 ≈ 16k 95.1 RC uRNN BID19 ) 512 ≈ 16k 97.5 FC uRNN BID19 ) 116 ≈ 16k 92.8 factorized RNN BID18 ) 128 ≈ 32k 94.6 LSTM BID18 ) 128 ≈ 64k 97.3 Table 2: Results for the pixel MNIST dataset across multiple algorithms. For the MLP models, each instance is flattened to a vector of length 784 and fed to the input layer. After the input layer there are 40 layers with hidden dimension 32 (Figure 3(a) ) or 30 to 100 layers with hidden dimension 128 (Figure 3(b) ). On a 40-layer network, svdMLP and svdResNet achieve similar performance as ResNet while MLP's convergence is slower. However, when the network is deeper, both MLP and ResNet start to fail. With n h = 128, MLP is not able to function with L > 35 and ResNet with L > 70. On the other hand, the SVD based methods are resilient to increasing depth and thus achieve higher precision.(a) (b) Figure 3: MLP models on MNIST with L layers n h hidden dimension In this paper, we have proposed an efficient SVD parametrization of various weight matrices in deep neural networks, which allows us to explicitly track and control their singular values. 
This parameterization does not restrict the network's expressive power, while simultaneously allowing fast forward as well as backward propagation. The method is easy to implement and has the same time and space complexity as compared to original methods like RNN and MLP. The ability to control singular values helps in avoiding the gradient vanishing and exploding problems, and as we have empirically shown, gives good performance. Although we only showed examples in the RNN and MLP framework, our method is applicable to many more deep networks, such as Convolutional Networks etc. However, further experimentation is required to fully understand the influence of using different number of reflectors in our SVD parameterization. Also, the underlying structures of the image of M k1,k2 when k 1, k 2 = 1 is a subject worth investigating. DISPLAYFORM0 Proof of Proposition 1. For n = 1, note that H 1 1 (u 1) = ±1. By setting u 1 = 0 if B 1,1 > 0 and u 1 = 0 otherwise, we have the factorization desired. Assume that the holds for n = k, then for n = k + 1 set u k+1 = B 1 − B 1 e 1. Here B 1 is the first column of B and e 1 = (1, 0, ..., 0). Thus we have DISPLAYFORM1, whereB ∈ R k×k. Note that H k+1 k+1 (u k+1) = I k+1 when u k+1 = 0 and the above still holds. By DISPLAYFORM2 is an upper triangular matrix with positive diagonal elements. Thus the holds for any n by the theory of mathematical induction. A.2 PROOF OF THEOREM 1 Proof. Observe that the image of M 1 is a subset of O(n), and we now show that the converse is also true. Given A ∈ O(n), by Proposition 1, there exists an upper triangular matrix R with positive diagonal elements, and an orthogonal matrix Q expressed as DISPLAYFORM3, such that A = QR. Since A is orthogonal, we have A A = AA = I n, thus:A A = R Q QR = R R = I n; Q AA Q = Q QRR Q Q = RR = I n Thus R is orthogonal and upper triangular matrix with positive diagonal elements. So R = I n and DISPLAYFORM4 Proof. It is easy to see that the image of M 1,1 is a subset of R n×n. For any W ∈ R n×n, we have its SVD, W = U ΣV, where Σ = diag(σ). By Theorem 1, for any orthogonal matrix U, V ∈ R n×n, there exists DISPLAYFORM0 Proof. Let A ∈ R n×n be an orthogonal matrix. By Theorem 1, there exist DISPLAYFORM1, such that A = M 1 (a 1, ..., a n). Since A is also orthogonal, for the same reason, there exist DISPLAYFORM2 v t = 0, t = k 2 + k 1 − 2,..., n, and then we have: DISPLAYFORM3 Else, assign: DISPLAYFORM4.., n, and then we have: DISPLAYFORM5 A.5 PROOF OF THEOREM 4Proof. It is easy to see that the image of M m,n *, * is a subset of R m×n. For any W ∈ R m×n, we have its SVD, W = U ΣV, where Σ is an m × n diagonal matrix. By Theorem 1, for any orthogonal DISPLAYFORM6 Similarly, for n < m, we have: DISPLAYFORM7 Remark: here when W and ∆W are not commutative, each W i ∆W should instead be written as DISPLAYFORM8 Since the change of order doesn't impact the analysis, we informally simplify the expressions here. DISPLAYFORM9 C.1 DETAILS ON THE TIME SERIES CLASSIFICATION TASK For the time series classification task, we use the training and testing sets directly from the UCR time series archive http://www.cs.ucr.edu/˜eamonn/time_series_data/, and randomly choose 20% of the training set as validation data. We provide the statistical descriptions of the datasets and experimental in TAB3 BID1. The Adding task requires the network to remember two marked numbers in a long sequence and add them. 
Each input data includes two sequences: top sequence whose values are sampled uniformly from and bottom sequence which is a binary sequence with only two 1's. The network is asked to output the sum of the two values. From the empirical in Figure 4, we can see that when the network is not deep (number of layers L=30 in (a)(d)), every model outperforms the baseline of 0.167 (always output 1 regardless of the input). Also, the first layer gradients do not vanish for all models. However, on longer sequences (L=100 in (b)(e)), IRNN failed and LSTM converges much slower than svdRNN and oRNN. If we further increase the sequence length (L=300 in (c)(f)), only svdRNN and oRNN are able to beat the baseline within reasonable number of iterations. We can also observe that the first layer gradient of oRNN/svdRNN does not vanish regardless of the depth, while IRNN/LSTM's gradient vanish as L becomes lager.(a) (b) (c) (d) (e) (f) Figure 4: RNN models on the adding task with L layers and n h hidden dimension. The top plots show the test MSE, while the bottom plots show the magnitude of the gradient at the first layer. Let A = {a i} 9 i=0 be the alphabet. The input data sequence x ∈ A T +20 where T is the time lag. x 1:10 are sampled uniformly from i{a i} 7 i=0 and x T +10 is set to a 9. Rest of x i is set to a 8. The network is asked to output x 1:10 after seeing a 9. That is to copy x 1:10 from the beginning to the end with time lag T.A baseline strategy is to predict a 8 for T +10 entrees and randomly sample from {a i} 7 i=1 for the last 10 digits. From the empirical in FIG3, svdRNN consistently outperforms all other models. IRNN and LSTM models are not able to beat the baseline with large time lag. In fact, the loss of RNN/LSTM is very close to the baseline (memoryless strategy) indicates that they do not memorize any useful information throughout the time lag.
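For reference, a small generator for the adding-task data described above is sketched below. The convention of placing one marker in each half of the sequence follows the common setup for this benchmark and is an assumption here, as are the function and argument names.

import numpy as np

def make_adding_batch(batch_size, seq_len, rng=None):
    # Each example has a top row of values in [0, 1] and a bottom row with exactly two 1s;
    # the target is the sum of the two values at the marked positions.
    rng = rng if rng is not None else np.random.default_rng()
    values = rng.uniform(0.0, 1.0, size=(batch_size, seq_len))
    markers = np.zeros((batch_size, seq_len), dtype=np.float32)
    half = seq_len // 2
    for b in range(batch_size):
        markers[b, rng.integers(0, half)] = 1.0        # one marker in the first half
        markers[b, rng.integers(half, seq_len)] = 1.0  # one marker in the second half
    x = np.stack([values, markers], axis=-1).astype(np.float32)        # (batch, seq_len, 2)
    y = (values * markers).sum(axis=1, keepdims=True).astype(np.float32)
    return x, y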
SyL9u-WA-
To solve the gradient vanishing/exploding problems, we propose an efficient parametrization of the RNN transition matrix that loses no expressive power, converges faster, and has good generalization.
Recent image style transferring methods achieved arbitrary stylization with input content and style images. To transfer the style of an arbitrary image to a content image, these methods used a feed-forward network with a lowest-scaled feature transformer or a cascade of the networks with a feature transformer of a corresponding scale. However, their approaches did not consider either multi-scaled style in their single-scale feature transformer or dependency between the transformed feature statistics across the cascade networks. This shortcoming ed in generating partially and inexactly transferred style in the generated images. To overcome this limitation of partial style transfer, we propose a total style transferring method which transfers multi-scaled feature statistics through a single feed-forward process. First, our method transforms multi-scaled feature maps of a content image into those of a target style image by considering both inter-channel correlations in each single scaled feature map and inter-scale correlations between multi-scaled feature maps. Second, each transformed feature map is inserted into the decoder layer of the corresponding scale using skip-connection. Finally, the skip-connected multi-scaled feature maps are decoded into a stylized image through our trained decoder network. Recent image style transferring methodsJohnson et al.; BID18 improved image generating speed up to sub-realtime processing by learning a feed-forward network of a single style or several fixed stylesDumoulin et al.. proposed an adaptive instance normalization layer (AdaIN) that adaptively transforms the statistics of an encoded content feature into that of a target style feature and they achieved style transferring into arbitrary input target style. However, they did not consider multi-scaled style characteristics of an imageGatys et al. but only a single scale feature in differentiating styles inside AdaIN layer. Li et al. Li et al. (2017b) proposed to use cascade networks that cumulatively transfer the multi-scaled style characteristics by using a network per scale as shown in FIG0 (a). They also transformed correlation between channels of feature map by using their whitening and coloring transformer (WCT). However, their cascade scheme requires multiple feed-forward passes to produce a stylized image and it is not guaranteed that the transferred style through a network is preserved after going through the subsequent networks because of inter-scale dependency in the multi-scaled styles of an image. Therefore, transferring multi-scaled style without interference between scales is still remained to study. In this paper, we propose an improved feed-forward network structure (FIG0) and a multi-scaled style transferring method, called total style transfer, to efficiently perform style transfer in all scales of feature maps through a single feed-forward pass. Our work has the following contributions.• Transforming both intra-scale and inter-scale statistics of multi-scaled feature map: There exist both of inter and intra-correlations in the encoded multi-scaled feature map as shown in fig.2 (b). Therefore, we match the second-order statistics, i.e., mean and covariance, of the encoded multi-scaled feature map considering the correlations not only between channels in each scale (intra-scale correlation) but also between scales (inter-scale correlation). 
Our feature transformer makes the transformed feature map closer to the target style feature map and this in an output style closer to the target style. Figure 2: Correlation between channels in the multi-scaled feature map of the input image (a) extracted from the pre-trained VGG16 BID16. The area corresponding to each scale of feature map is divided into red lines. In case of intra-scale feature transform, the diagonal rectangles on the correlation matrix are used. In case of inter-scale feature transform, entire region of the correlation matrix is considered.• Decoder learning with multi-scaled style loss: we use a multi-scaled style loss consistent to the feature transformer, i.e., mean and covariance loss between the concatenated feature map (FIG1). Using our multi-scaled style loss allows the decoder network to generate an output image of co-occurring multi-scale patterns which is better style expression than independently occurring scale patterns on the image that the existing methods generated.• Multi-scaled style transfer with a single feed-forward network: we use skip-connections for each decoder layer as shown in FIG0 (b) to consider the transformed feature map as well as the decoded feature map. By doing this, the style of scale corresponding to the layer and the transferred multi-scaled style so far are optimally merged into the next layer. Therefore, our method transfers multi-scaled style through a feed-forward pass in a single network instead of multiple feed-forward passes of cascade networks (FIG0) without considering inter-scale correlation. In the remained of this paper, we review previous works closely related to this work in Sec. 2, our multi-scaled style transforming method is described in Sec. 3, the effectiveness of our method is tested and proven by a bundle of experiments in Sec. 4, and this work is concluded in Sec. 5. Gatys et al. BID2 represented content and style features of an image using a deep feature map, i.e., the filtered responses of a learned convolutional neural network (CNN). To stylize an input image, they performed pixel-wise optimization of the image to reduce the feature losses of BID9 interpreted the process of generating a stylized image by matching Gram matrix BID2 as a problem of maximum mean discrepancy (MMD) specifically with a second-order polynomial kernel. Using a feed-forward neural network BID7; BID18 moved the time consuming online optimization process into an offline feed-forward network learning to speed up the image generating speed of the previous method BID2. The generated style quality was also improved by using instance normalization (IN) BID19. Dumoulin et al. extended the previous single style network to transfer multiple styles. They used conditional instance normalization (CIN) layers in a single network. As selecting learnable affine parameters corresponding the specific style in the CIN layers, the feed-forward network transfers the selected style. This method achieved generation of pre-trained multiple styles with a single network. To generalize a single network for arbitrary style transfer, Huang et al. BID5 proposed to use a feature transformer called adaptive instance normalization (AdaIN) layer between encoder and decoder networks. Once feature maps of content and target style images are encoded, AdaIN directly adjusts the mean and standard deviation of a content feature map into those of a target style feature map, and then the adjusted feature map is decoded into an output image of the target style. Li et al. 
BID10 further improved the arbitrary stylization by using covariance, instead of standard deviation, which considers the correlation between feature channels. To transfer multi-scaled style, Li et al. BID10 3.1 MULTI-SCALE FEATURE TRANSFORM As described in BID2, each scaled feature of CNN represents different style characteristics of an image. So, we utilize multiple feature transformers for each scale feature to transfer total style characteristics of an image. In this section, we explain two schemes of our total style transfer, i.e, intra-scale and inter-scale feature transform, with a single feed-forward network.3.1.1 INTRA-SCALE FEATURE TRANSFORM Our intra-scale feature transform is a set of independent single-scale style transforms as an extended multi-scale version of the single-scale correlation alignment of CORAL BID17 or WCT Li et al. (2017b of style image, where C i, H c,i (or H s,i), and W c,i (or W s,i) represent the number of channels, spatial height, and width of i-th scale features respectively. For a single-scale style transform with these features, CORAL or WCT performs style normalization and stylization sequentally. In the style normalization step, first zero-centered featureF c,i ∈ R Ci×(Hc,i×Wc,i) of the content feature F c,i is calculated and then the content feature F c,i is normalized intoF c,i by using its own covariance matrix cov(F c,i) ∈ R Ci×Ci as eq.1. DISPLAYFORM0 In the stylization step, the normalized content featureF c,i is stylized into F cs,i by using the square root of covariance matrix cov(F s,i) ∈ R Ci×Ci of zero-centered style featureF s,i and spatial mean µ s,i ∈ R Ci×1 of the style feature F s,i as eq.2. DISPLAYFORM1 Our intra-scale transform method applies the above single-scale transform independently to each scaled feature for i = 1..3 corresponding to {relu_1_2, relu_2_2, relu_3_3} layers. Then, those transformed features F cs,i, i = 1..3 are inserted into the subsequent decoder through skip-connenction. More detail about skip-connection will be described in Sec.3.1.3. As shown in fig.2 (b), there exists not only inter-channel correlation in a certain scale feature but also inter-scale correlation between multi-scale features. These correlations should be considered in order to transfer total style characteristics of an image. CORAL BID17 or WCT Li et al. (2017b) did not consider inter-scale correlation but only inter-channel correlation. Therefore, we propose inter-scale feature transformer which considers more style characteristics of image for style transfer. To perform feature transform considering both inter-channel and inter-scale correlations, we apply the intra-scale feature transform of Sec.3.1.1 to the concatenated feature F c ∈ R i Ci×(Hc,1×Wc,1) of content image and F s ∈ R i Ci×(Hs,1×Ws,1) of style image (eq.3) instead of independently applying to each scaled features F c,i and F s,i of Sec.3.1.1. DISPLAYFORM0 As shown in FIG1, the content features F c,i and style features F s,i for i = 1..3 are spatially upsampled into U (F c,i) and U (F s,i) of a common size (we use the largest size of F c,1 or F s,1 corresponding to {relu_1_2}) and concatenated into F c and F s respectively along the channel axis. After going through a transformer, the transformed feature F cs is split and downsampled into F cs,i ∈ R Ci× (Hc,i×Wc,i) of the original feature size as shown FIG1 (b) and eq.4, DISPLAYFORM1 where, D i (f) is a function which spatially downsamples f into H c,i × W c,i. 
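The following is a hedged sketch of the single-scale whitening-and-coloring step of eq.1 and eq.2, and of the inter-scale variant that transforms the channel-wise concatenation of upsampled feature maps before splitting and downsampling back to each scale. Matrix square roots and inverse square roots are taken through an eigendecomposition with a small ridge for numerical stability, which is an implementation assumption; function names and the use of nearest-neighbor resampling are also illustrative choices.

import torch
import torch.nn.functional as F

def _matrix_power(sym, power, eps=1e-5):
    # Symmetric matrix power via eigendecomposition (used for cov^{-1/2} and cov^{1/2}).
    evals, evecs = torch.linalg.eigh(sym + eps * torch.eye(sym.size(0), device=sym.device))
    return evecs @ torch.diag(evals.clamp_min(eps).pow(power)) @ evecs.t()

def wct(content_feat, style_feat):
    # content_feat, style_feat: (C, N) flattened single-scale feature maps of one image.
    # Whitening (eq.1) followed by coloring with the style statistics (eq.2).
    mu_c = content_feat.mean(dim=1, keepdim=True)
    mu_s = style_feat.mean(dim=1, keepdim=True)
    fc, fs = content_feat - mu_c, style_feat - mu_s
    cov_c = fc @ fc.t() / (fc.size(1) - 1)
    cov_s = fs @ fs.t() / (fs.size(1) - 1)
    whitened = _matrix_power(cov_c, -0.5) @ fc
    return _matrix_power(cov_s, 0.5) @ whitened + mu_s

def _upcat(feats, size):
    # Upsample each (C_i, H_i, W_i) map to `size` and concatenate along channels as (sum C_i, H*W).
    rows = [F.interpolate(f.unsqueeze(0), size=size, mode='nearest').squeeze(0) for f in feats]
    return torch.cat([r.reshape(r.size(0), -1) for r in rows], dim=0)

def inter_scale_transform(content_feats, style_feats):
    # Lists of (C_i, H_i, W_i) maps from relu_1_2, relu_2_2, relu_3_3 for one content / style image.
    hc, wc = content_feats[0].shape[1:]
    hs, ws = style_feats[0].shape[1:]
    fcs = wct(_upcat(content_feats, (hc, wc)), _upcat(style_feats, (hs, ws)))
    outs, start = [], 0
    for f in content_feats:   # split the transformed map and downsample back to each scale
        c = f.size(0)
        piece = fcs[start:start + c].reshape(c, hc, wc).unsqueeze(0)
        outs.append(F.interpolate(piece, size=f.shape[1:], mode='nearest').squeeze(0))
        start += c
    return outs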
These features are inserted into the subsequent decoder through skip-connenction of Sec.3.1.3. To utilize the transformed multi-scale features in generating output stylized image, decoding architecture of previous decoder network should be modified because each decoder layer has two input feature maps as FIG0, one is a decoded feature map from the previous decoder layer, the other is a (intra-scale or inter-scale) transformed feature from the transformer. We adopt skip-connection, which has been applied to several applications of computer vision field BID14; BID13; BID6 BID0 but not to image style transfer yet, to merge the two feature maps in decoding process as shown in FIG1. Skip-connected two scale features are optimally merged by learnable convolution layer and this improves the quality of the decoded image by considering multi-scale filter responses. Our method is different from the previous cascade scheme of BID10 because we use a single encoder/decoder network, parallel transformers for each scale feature, and merges multi-scaled styles optimally while the cascade scheme needs several encoder/decoder networks (one network per scale feature) and sequentially transfers scaled styles from large to small scale at the risk of degradation in previously transferred scale of style. also used a single decoder like ours but it sequentially applied feature transformers from large to small scale without considering possible degradation of the previously transferred scale. We need an appropriate objective function for decoder network to generate a stylized image from the transformed feature map. Among the existing losses such as Gram BID2, , and reconstruction error BID10, we adopt Mean-Std loss BID5 with some modification because of its consistency with AdaIN transformer. Instead of using Mean-Std loss as it is, we use Mean-Covariance loss to additionally consider interchannel and inter-scale correlations, which is consistent with our feature transformers described in Sec.3.1.In case of using intra-scale feature transform (Sec.3.1.1), our style loss (eq.5) is calculated as the summation of mean loss BID5 and covariance loss, i.e., square root of Frobenius distance between covariance matrices of feature maps of output and target style images. In case of using inter-scale feature transform (Sec.3.1.1), the summation of mean and covariance losses of the concatenated features are used as the style loss (eq.6). DISPLAYFORM0 DISPLAYFORM1 where subscript o represents of output stylized image. We used VGG16 feature extractor BID16 as the encoder and a mirror-structured network as the decoder of our style transfer network. Our decoder network has 2 times larger number of channels in the corresponding layer of skipconnections than the previous methods; BID5. {relu_1_2, relu_2_2, relu_3_3, relu_4_3} layers were used in calculating style loss and {relu_3_3} layer in calculating content loss. Here, we used the same content loss of BID2 and our multi-scaled style loss in Sec.3.2. For training data set, MS-COCO train2014 BID11 and Painter By Numbers BID8 were used as content image set and large style image set respectively. Each dataset consists of about 80,000 images. And we used an additional small style image set of 77 style images to verify the effect of our proposed method as the number of training style increases. 
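A sketch of the mean-plus-covariance style loss of eq.5 is given below: the mean term matches channel-wise means, and the covariance term penalizes the Frobenius distance between covariance matrices of the output and target-style feature maps. For the inter-scale variant (eq.6), the same two terms would be computed on the channel-wise concatenation of upsampled multi-scale features instead of per scale. Function names are placeholders and the exact normalization of the covariance may differ from the original.

import torch

def feature_stats(feat):
    # feat: (C, H, W) -> channel means (C, 1) and covariance (C, C).
    flat = feat.reshape(feat.size(0), -1)
    mu = flat.mean(dim=1, keepdim=True)
    centered = flat - mu
    return mu, centered @ centered.t() / (flat.size(1) - 1)

def mean_cov_style_loss(output_feats, style_feats):
    # Per-scale mean + covariance loss (eq.5). Inputs are lists of (C_i, H_i, W_i) feature maps.
    loss = 0.0
    for fo, fs in zip(output_feats, style_feats):
        mu_o, cov_o = feature_stats(fo)
        mu_s, cov_s = feature_stats(fs)
        loss = loss + torch.norm(mu_o - mu_s) + torch.norm(cov_o - cov_s, p='fro')
    return loss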
Each image was resized to 256 pixels on the shorter side, maintaining the original aspect ratio, in both the training and test phases, and, only in the training phase, randomly cropped to avoid boundary artifacts. We trained the networks with batches of 4 random (content, style) image pairs, for 4 epochs, with a learning rate of 10. All experiments were performed on the PyTorch 0.3.1 framework with an NVIDIA GTX 1080 Ti GPU, CUDA 9.0, and CuDNN 7.0. In order to verify the effect of our multi-scale feature transform for a varying number of training style images, we trained two networks, one with the small style image set of 77 images and the other with the large style image set of about 80,000 images, and compared their output stylized images. Fig.4 shows an example of the output stylized images using our intra-scale or inter-scale feature transform method. With the network trained on the small style image set, the images generated by our intra-scale transform (fig.4 (c)) show texture style very similar to the target style images (fig.4 (b)), and those by our inter-scale transform (fig.4 (d)) show an even better style of texture. With the network trained on the large style image set (fig.4 (e,f)), the images show the same tendency: the inter-scale transform is better at expressing the texture of the target style. Because of the existing correlations between scales, as shown in fig.2 (b), the inter-scale feature transform, which considers inter-scale correlations, yields better style quality than the intra-scale transform. To verify the effect of skip-connections in our style transfer network, we trained three different networks. The first has the conventional single-layer encoder/transformer/decoder architecture Huang & Belongie; BID10 with a single feature transformer on {relu_3_3} and no skip-connection. The second has multi-scale feature transformers on {relu_3_3, relu_2_2} and one skip-connection on {relu_2_2}. The last has multi-scale feature transformers on {relu_3_3, relu_2_2, relu_1_2} and two skip-connections on {relu_2_2, relu_1_2}. Fig.5 shows an example of the output stylized image from the three networks. As the number of skip-connections increases from fig.5 (c) to (e), the style loss decreases from 0.535 to 0.497 and, accordingly, the color tone of the stylized image becomes better matched to the target style (fig.5 (b)) and small patterns appear. To clarify the contributions of the skip-connected feature from the transformer and the decoded feature from the previous scale of the decoder, we observed the absolute values of the loss gradients with respect to the convolution weights of the skip-connected decoder layers during training. As shown in fig.6 (a), the gradient values for the skip-connected feature (channel indices 129 to 256) on the {relu_2_2} layer of the decoder network are much larger than those for the decoded feature (channel indices 1 to 128) at the beginning of training. This means that the skip-connected feature, which already carries the target style through the transformer, dominantly affected the decoder learning at the start of the training phase. This happens because the previous decoder layer {relu_3_3} has random initial weights and outputs a noisy feature at the start of training. As the iterations proceed, the gradient values for both features become similar to each other, meaning that both the skip-connected feature and the decoded feature are utilized equally to generate an image of multi-scaled style.
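The per-channel-group gradient magnitudes of fig.6 can be logged with a simple helper such as the one below. It assumes the skip-connected decoder convolution receives the decoded feature in the first half of its input channels and the transformed (skip) feature in the second half, as described above; the decoder attribute name in the usage comment is a placeholder.

```python
import torch

def grad_group_magnitudes(conv_layer):
    # conv_layer.weight: (C_out, C_in, kH, kW); the first C_in/2 input channels
    # come from the previous decoder scale, the last C_in/2 from the skip branch.
    g = conv_layer.weight.grad.detach().abs()
    half = g.shape[1] // 2
    return g[:, :half].mean().item(), g[:, half:].mean().item()

# Usage inside the training loop, e.g. every 500 iterations:
# loss.backward()
# if step % 500 == 0:
#     decoded_mag, skip_mag = grad_group_magnitudes(decoder.skip_conv_relu2_2)  # name assumed
#     history.append((step, decoded_mag, skip_mag))
# optimizer.step()
```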
Fig.6 (b) shows that the gradient values for the skip-connected feature (channel indices 65 to 128) are smaller than those for the decoded feature (channel indices 1 to 64) at the latter decoder layer. This means that the decoded feature of the latter decoder layer has already accumulated multi-scaled styles through the previous skip-connection, which resulted in a smaller impact of the skip-connected feature. However, the skip-connection with the stylized feature of the smaller scale still has a noticeable effect on color tone matching, as shown in fig.5 (d,e). We compared the image quality of our method with those of the existing methods BID2; BID5; BID10. We took the output stylized images for BID2 after 700 iterations of Adam optimization with a learning rate of 10, for BID5 with the same setting as our method except for the transformer and loss, and for BID10 with style strength α = 0.6 (as mentioned in their work) and 3 cascade networks of VGG16 structure.
Figure 6: Amplitude of the loss gradients with respect to the convolution weights in the skip-connected decoder layers during learning (axes: channels vs. iterations ×500); (a) is the 1st skip-connected layer ({relu_2_2}) and (b) the 2nd ({relu_1_2}). The gradients are drawn every 500 iterations; the former half of the channels correspond to the decoded feature from the previous scale and the latter half to the skip-connected feature from the transformer. The skip-connected (transformed) feature strongly affects the decoder in the initial iterations, but both decoded and transformed features contribute comparably as the iterations proceed, and the latter decoder layer (b) is less affected by the skip-connected feature than the former layer (a).
FIG6 shows the images generated by the existing methods and by our intra/inter-scale feature transform methods. Compared to the online optimization method BID2, the other feed-forward network based methods generate images of somewhat degraded style quality. However, thanks to our multi-scaled style transfer, which considers inter-channel and inter-scale correlation, the texture detail and color tone of the images generated by our method with the inter-scale or intra-scale feature transform are more similar to the target style than those of the single-scale style transfer BID5, which does not consider inter-channel correlation. Compared to BID10, the images generated by our method present styles closer to the target styles because our method trains the decoder network to minimize the style loss between the target style and output images, whereas BID10 trained its decoder network to minimize the reconstruction loss of content images. We also compared our method to Avatar-Net, which performs a multi-scale feature transform with a single decoder network. For a fair comparison, following the structure of Avatar-Net, we used the VGG19 network BID16 up to the {relu_4_1} layer as the encoder and its mirrored structure as the decoder, and we also used an additional image-level style loss (eq.5 or eq.6) corresponding to the image reconstruction loss of Avatar-Net. As shown in FIG8, both our intra (e) and inter (f) methods generate stylized images with both the detailed shapes of the content images (a) and the multi-scaled strokes of the target style images (b). In contrast, the images generated by Avatar-Net (c, d) show somewhat dimmed content shapes and blurred or burnt color patterns without detailed strokes.
Selecting an appropriate patch size corresponding to the scale of the style pattern was also necessary in Avatar-Net (in the second row of FIG8, {patch size=1} (c) did not show large square patterns but {patch size=3} (d) did), while our method does not require any scale-matching parameter thanks to the multi-scaled skip-connections in the decoder network. For a quantitative comparison, we measured the content and style losses of the images generated by the existing methods and ours. Table 1 shows the average (standard deviation) losses across 847 style transfer tests. The online optimization method BID2 achieved the smallest style loss with a low content loss. Among the feed-forward networks trained with the small style image set, the method with a learnable transformer achieved the lowest style loss (red colored numbers); its learnable transformer results in the most optimized transform, but it is not extendable to arbitrary style transfer. Among the arbitrary style transfer methods, our method achieved the lowest style loss (blue colored numbers) with the inter-scale feature transformer, and the second lowest style loss (green colored numbers) with the intra-scale feature transformer. Among the feed-forward networks trained with the large style image set, our method again shows the lowest style loss with the inter-scale feature transformer and the second lowest with the intra-scale feature transformer, in the same manner as the results with the small style image set. For the content loss with the large style image set, the best method in style loss (our-inter) shows the highest content loss and the second best method (our-intra) shows the lowest content loss. This interesting result can be interpreted as indicating that inter-scale correlation carries not only the style of an image but also its content. Transferring style and preserving content are a trade-off in feed-forward network methods, and the results with the small style image set show the same content/style trade-off. Therefore, one can select either the inter-scale or the intra-scale feature transformer according to user preference or the purpose of the application. Our method achieved 31% less encoder/decoder feed-forward time (4.4 ms on average over 1000 trials with images of 240 by 240 pixels) and 4% fewer parameters (3,655,296 parameters) than the existing cascade network scheme Li et al. (2017b) (6.4 ms, 3,769,856 parameters). In this paper, we proposed a total style transfer network that generates an image through a single feed-forward network by utilizing multi-scale features of the content and style images. Our intra-scale feature transformer transfers the multi-scale style characteristics of the target style image, and our inter-scale feature transformer additionally transfers the style characteristics carried by inter-scale correlation into the content image. Using our intra/inter-scale feature transform, our total style transfer network achieved the lowest style loss among the existing feed-forward network methods. In addition, we modified the feed-forward network structure with skip-connections that let our decoder network utilize all of the transformed multi-scale features. This modification allows a single feed-forward network to generate an image of multi-scaled style without the multiple feed-forward networks of the cascade scheme, and resulted in a 31% reduction in test time and a 4% reduction in memory consumption compared to the cascade network scheme.
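The timing and parameter figures above can be reproduced with a measurement loop of the following form; this is a generic benchmarking sketch (model and input names are placeholders), not the measurement script used for the reported numbers.

```python
import time
import torch

def count_parameters(model):
    return sum(p.numel() for p in model.parameters())

@torch.no_grad()
def mean_feedforward_ms(model, trials=1000, size=240, device="cuda"):
    # Average single-image feed-forward time over `trials` runs, in milliseconds.
    x = torch.randn(1, 3, size, size, device=device)
    model.to(device).eval()
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(trials):
        model(x)
    torch.cuda.synchronize()
    return (time.time() - start) / trials * 1000.0
```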
BJ4AFsRcFQ
A paper suggesting a method to transform the style of images using deep neural networks.
Beyond understanding what is being discussed, human communication requires an awareness of what someone is feeling. One challenge for dialogue agents is recognizing feelings in the conversation partner and replying accordingly, a key communicative skill that is trivial for humans. Research in this area is made difficult by the paucity of suitable publicly available datasets both for emotion and dialogues. This work proposes a new task for empathetic dialogue generation and EmpatheticDialogues, a dataset of 25k conversations grounded in emotional situations to facilitate training and evaluating dialogue systems. Our experiments indicate that dialogue models that use our dataset are perceived to be more empathetic by human evaluators, while improving on other metrics as well (e.g. perceived relevance of responses, BLEU scores), compared to models merely trained on large-scale Internet conversation data. We also present empirical comparisons of several ways to improve the performance of a given model by leveraging existing models or datasets without requiring lengthy re-training of the full model. Natural communication is frequently prompted by people sharing their feelings or circumstances. As examples, a recent study found that 80% of Twitter users seem to post mostly about themselves BID28, and ELIZA BID44, one of the earliest chatbots developed, focused on asking its conversational partners why they were feeling a certain way. Interacting in these conversations requires reacting to what people share with an understanding of others' implied feelings. For instance, while the crossed-out response in FIG0 is topically relevant, "Congrats! That's great!" is more natural because it acknowledges the underlying feelings of accomplishment. Responding to people in a way that is empathetic or that acknowledges how the other person feels is a desirable trait for a dialogue agent, but still a nontrivial communicative skill. It is also currently difficult to measure. Although recent work has used large-scale corpora to train reasonably fluent and engaging dialogue agents (e.g. BID24), existing chitchat dialogue benchmarks are not designed to capture whether those agents are responding in an empathetic way to implicit emotional cues. This may indeed be unlikely, given that the sources of Internet conversation data used for training are known to harbor aggressive and callous responses BID0.This works aims to make it easier to evaluate machines' ability to respond in an empathetic way. We introduce a new task for dialogue systems to respond to people discussing everyday situations, based on EMPATHETICDIALOGUES, a novel dataset with 25k personal dialogues. Each dialogue is grounded in a specific situation where a speaker was feeling a given emotion, with a listener responding. The dataset is larger and contains a more extensive set of emotions than many similar emotion prediction datasets from other text domains such as BID34, BID39, BID26, and BID11. Previous dialogue datasets of a similar scale that include emotion labels BID19 BID11 come from crawled Table 1: Two examples from EMPATHETICDIALOGUES training set. The first worker (the speaker) is given an emotion label and writes their own description of a situation when they've felt that way. Then, the speaker tells their story in a conversation with a second worker (the listener).Label: Afraid Situation: Speaker felt this when... 
"I've been hearing noises around the house at night" Conversation: Speaker: I've been hearing some strange noises around the house at night. Listener: oh no! That's scary! What do you think it is? Speaker: I don't know, that's what's making me anxious. Listener: I'm sorry to hear that. I wish I could help you figure it out Label: Proud Situation: Speaker felt this when... "I finally got that promotion at work! I have tried so hard for so long to get it!" Conversation: Speaker: I finally got promoted today at work! Listener: Congrats! That's great! Speaker: Thank you! I've been trying to get it for a while now! Listener: That is quite an accomplishment and you should be proud! conversations extracted from settings that are quite different from a one-on-one conversation (educational dialogues for English learners for DAILYDIALOG, public social media content for BID11) and cover either a very limited or a very imbalanced set of emotions: only ≈ 5% of the DailyDialog utterances have a label other than'none' or'happy', and BID11 only labels'happy','sad', and'angry'. The open resource we propose consists of crowdsourced one-on-one conversations, and covers a large set of emotions in a balanced way. We then examine how to train a dialogue system that is more adept at responding to emotional cues. While a rule-based system can be built around mapping predicted emotions to responses, end-toend dialogue systems relying on neural networks and trained on conversation corpora BID36 BID42 BID35 BID3 BID24 BID48 offer the promise of better generalization to new contexts. Through an extensive set of experiments, we show that fine-tuning a dialogue agent on our dataset in better performance on a novel empathetic dialogue task. The pretraining of the dialogue agent on Internet conversation data is the most time-consuming step of this pipeline, by an order of magnitude. To make it easier for practitioners to improve performance of a model on the empathetic task with minimal re-training while re-using existing resources, we compare various ways of supplementing a pretrained model with additional representations from external tasks, and show that even simplistic schemes can lead to better performance. The contributions of this work are thus threefold: 1) we release a novel empathetic dialogue dataset as a new benchmark; 2) we show that using this dataset for training can improve the performance of an end-to-end dialogue system on empathetic dialogue; and 3) we compare multiple ways to further improve performance with combined representations while not requiring onerous re-training. Responding well to emotions requires sufficient coverage of human expression. Multiple schemas have attempted to organize the spectrum of emotions, from a handful of basic emotions derived from biological responses BID5 BID32 to larger sets of subtle emotions inferred from contextual situations BID37. We incorporate emotions from multiple annotation schemas, noting that emotions merely inferred from a situation are important in dialogue scenarios. Rich information can be represented by learning multidimensional distributional embeddings from data, as has proven successful for many language applications BID9. These distributional representation approaches are at the core of the current state of the art in emotion classification BID4 BID26 ) that build on deep networks pretrained on large-scale weakly labelled data such as emojis BID6 or hashtags BID27, gathered from public social media content published on Twitter. 
The SEMEVAL2019 EmoContext challenge also uses conversation data for detection of three basic emotions over two turns of context from Twitter exchanges BID11.
Figure 2: Distribution of situation/conversation labels within EMPATHETICDIALOGUES. Percentages per class are also listed in the appendix.
While public social media content has the advantage of being spontaneous (not elicited) data, it suffers from two shortcomings when used to train a model intended for one-on-one conversation (as opposed to, say, a bot designed to post on Twitter). First, the content is extracted from a context of communication in front of large "peripheral audiences" BID8 which include potentially everyone with an Internet connection, where the need for curated self-presentation BID7 and the uncertainty as to how wide that audience may be have been shown to lead to different choices of subject matters compared to private messaging, with people sharing more intense and negative emotions through private channels BID2 BID20. Second, Tweets are generally a short-form format limited to 140 characters, which is not a constraint that applies to general conversation. In this work, we attempt to generate a more balanced coverage of emotions than would appear in public social media content, within a one-on-one framing of unconstrained utterances that is closer to our ultimate goal of training a model for conversation that can respond to any emotion. Several works have attempted to make chit-chat dialogue models more engaging by grounding them in personal contexts BID18 BID48 BID24, but focusing on personal facts ("I am from New York") rather than situations. The DAILYDIALOG dataset BID19, comprising about 13k dialogues obtained by crawling educational websites intended for learners of English, includes many dialogues anchored in everyday situations and has been annotated post-hoc with emotion labels, but only ≈ 5% of the utterances have a label other than 'none' or 'happy', and dialogues are mostly limited to domains deemed appropriate for use as a language learning tool (ordering from a restaurant, asking for directions, shopping for a specific item, introductions, etc). Our task focuses explicitly on conversations about an emotionally grounded situation, and considers a richer, evenly distributed set of emotions. We also introduce an explicit single listener in the conversation who is reacting to the situation being described in an empathetic way, to make the setting as close as possible to our desired goal of a one-on-one empathetic conversation. Several other works have focused on controlling the emotional content of a text response either through a manually specified target BID50 BID49 BID43 BID12 BID13 or through a general term to encourage higher levels of affect BID1, with evaluations focused on matching a predetermined desired emotion rather than empathetic responding. BID29 generate responses conditioned on a specified politeness setting (polite, rude or neutral), where politeness is viewed as a style of language. BID14 investigate how to respond to emotions detected from an image.
By contrast, our work examines how to produce empathetic responses that are appropriate to signals inferred purely from text, and not intended to themselves convey a pre-specified emotion. We consider an open-domain one-on-one conversational setting where two people are discussing a situation that happened to one of them and that led to the experience of a given feeling. Emotional situation grounding Each conversation is grounded in a situation, which one participant writes about in association with a given emotion label. We consider 32 emotion labels, listed in Figure 2. To select this set of labels, we drew inspiration from previous datasets (; BID39 BID37 BID19 BID27, consolidating the labels from each into a merged list. The person who wrote the situation description (Speaker) initiates a conversation to talk about it. The other conversation participant (Listener) becomes aware of the underlying situation through what the Speaker says and responds. Speaker and Listener then exchange up to 6 more turns. We include two example conversations from the training data in Table 1. The models discussed below are tested in the role of Listener responding to the Speaker. The situation description generated by the Speaker is not given to the models (just as it was not given to the Listener during dialogue collection). Our data could also be used to generate conversations for the Speaker conditioned on the situation description; we leave this for later work. The ing dataset comprises 24,850 conversations about a situation description, gathered from 810 different participants, which will be made publicly available online and through ParlAI. The crowdsourced dialogs were obtained using the ParlAI framework to interact with the Amazon Mechanical Turk (MTurk) platform. Details of the crowdsourcing procedure are given in the Supplemental Material. The distribution of emotion label prompts TAB1 ) is close to evenly distributed across categories with a few categories that are selected slightly more/less often. The average situation description is 19.8 words. Each conversation is allowed to be 4-8 utterances long (the average is 4.3 utterances per conversation). The average utterance length is 15.2 words long. We split the conversations into approximately 80% train, 10% validation, and 10% test partitions. To prevent overlap of discussed situations by a same participant between partitions in case a participant chose to discuss the same situation twice, we split the data so that all sets of conversations with the same speaker providing the initial situation description would be in the same partition. The final train/val/test split was 19533 / 2770 / 2547 conversations, respectively. In this section, we examine how our dataset can be used to make generic chitchat models more empathetic, and different ways existing models can be combined to produce more empathetic responses. We use our dialogues to train and evaluate models in the task of generating conversation responses, with the model playing the Listener role. At test time, the dialogue model has access to previous utterances in the dialogue, but not to the emotion word prompt (e.g., "proud"), nor to the situation description generated by the Speaker, as would be the case in a normal conversation. Given a dialogue context x of n previous conversation utterances concatenated and tokenized as x 1, · · ·, x m, followed by a target responseȳ, our models are trained to maximize the likelihood p(ȳ|x) of producing the target response. 
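Concretely, maximizing p(ȳ|x) amounts to the usual token-level negative log-likelihood with teacher forcing; a minimal sketch, assuming the model maps a (context, shifted target) pair of token tensors to per-position vocabulary logits:

```python
import torch.nn.functional as F

def next_utterance_nll(model, context_tokens, target_tokens, pad_idx=0):
    # Teacher forcing: predict each target token given the context and the gold
    # tokens preceding it, i.e. maximize p(y-bar | x). Model signature assumed.
    logits = model(context_tokens, target_tokens[:, :-1])        # (B, T-1, V)
    return F.cross_entropy(
        logits.reshape(-1, logits.shape[-1]),
        target_tokens[:, 1:].reshape(-1),
        ignore_index=pad_idx,
    )
```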
We investigate both generation and retrieval settings as described in FIG2. We base our models on Transformer networks BID40, which have proven successful in machine translation and dialogue generation tasks BID48 BID24.Retrieval In the retrieval set-up, the model is given a large set Y of candidate responses and picks the "best" one, y *. We use the retrieval Transformer-based architecture from: two Transformer encoders separately embedding the context, x, and candidates, y ∈ Y, as h x and h y, respectively. The model chooses the candidate sentence according to a softmax on the dot product: h x · h y. We minimize the negative log-likelihood of selecting the correct candidate. At train time, we use all of the sentences from the batch as candidates, with a large batch size of 512 to give the model more negative examples. At inference time, we experiment with multiple sets of candidate sentences for the model to choose from. First, we use all of the response utterances in the EMPATHETICDIALOGUES training set (Y ED). We also try including candidate utterances from two other large dialogue datasets: the DailyDialog BID19 training set (Y DD) and up to a million utterances from a dump of 1.7 billion Reddit conversations (Y R). Left: In the retrieval set-up, each candidate y is tokenized into y 1, y 2, · · · and encoded into vector h y by the candidate encoder. The system outputs the candidate y * that maximizes dot product h x · h y. Right: In the generative set-up, the encoded context h x is used as input to the decoder to generate start symbol < /s > and tokens y 1, y 2, · · ·. The model is trained to minimize the negative log-likelihood of target sequenceȳ conditioned on context x. Generation In the generation set-up, we use the full Transformer architecture BID40, consisting of an encoder and a decoder. The Transformer decoder uses the encoder output to predict a sequence of words y, and is trained to minimize the negative log-likelihood of the target sequenceȳ. At inference time, we use diverse beam search from BID41.Training Details We pretrain our models on predicting replies from a dump of 1.7 billion Reddit conversations. We limit the maximum number of word tokens in the context and response to be 100 each. The Transformer networks used in all experiments have the same base architecture (four layers and six transformer heads) from BID24, unless specified. For all models, we train for up to 10 epochs, keeping the version that has the lowest loss on the validation set. We use 300-d word embeddings pretrained on common-crawl data using fastText BID9. Conversations utterances Fine-tuning over the task domain data may improve the model, since our data was explicitly collected with instructions to be empathetic, in a one-on-one setting, which is different from the Reddit conversation data used for pretraining. We fine-tune pretrained models to predict the next utterance over our EMPATHETICDIALOGUES with a context window of four previous utterances, which is the average length of a conversation in our dataset. These models are hereafter referred to as "Base" models. This fine-tuning is conducted for all architectures except those referred to as "Pretrained".Emotion labels If the most appropriate response depends on some information for which supervision is available, e.g., the emotions at play, nudging the model to encode this information could in better performance. 
We experiment with this by training the base architecture in the one-tomany style of multi-task learning that has been used for NLP seq2seq settings BID23. In this set-up FIG3, MULTITASK, we alter the objective function to also optimize for predicting the emotion label of the conversation to which the utterances being encoded belong. We add to the context encoder a linear layer and softmax that predicts the emotion label from the context sentences. The objective function is altered to be the average of the negative log-likelihood of predicting the next utteranceȳ and the negative log-likelihood of the added linear layer being able to predict the correct emotion. Many existing models have been pretrained on supervised tasks that may be relevant to empathetic responding. For instance, BID6 have released a model trained on more than a billion tweets to predict emoji labels. Combining these models with the representation of our base architecture may reap benefits from previous training time and external training data without having to: Three ways to incorporate additional supervised information, here from an emotion classification task. Left: the context representation h x outputted by the context encoder is used both as input to a classifier, and to generate the next utterance as in the base setting. The encoder is trained with gradients from both output branches. Middle: an input sequence (that can be either a dialogue context or a candidate) is first run through a pretrained classifier, and the top k output labels are prepended to the sequence, which is then run through the corresponding (context or candidate) encoder to output a hidden representation h w (either h x or h y) as in the base setting. Right: an input sequence (that can be either a dialogue context or a candidate) is run through the corresponding encoder as well as a pretrained classifier with the last layer removed. The outputs h w and h c are concatenated and linearly projected into a representation h e.redo the work or requiring access to that data, which may matter to practitioners. Note that this may considerably augment the effective capacity of the ing models, as well as the total amount of training data used overall, so it isn't surprising that this should in better performance. Our goal here is to experiment with a large range of settings, in order to get an empirical sense of how robust performance improvement is to variations in architecture set-up or supervision domain. We experiment with two set-ups for adding explicit supervised information: prepending predicted label words, and ensemble learning over encoders trained on prediction objectives. Prepending Top-K Predicted Labels This set-up FIG3, PREPEND-K, is a very simple way to add supervised information to data, requires no architecture modification, and can be used with black-box classifiers. The top-K predicted labels from the supervised classifier are merely prepended to the beginning of the token sequence as encoder input:I finally got promoted! −→ proud excited joyful I finally got promoted! Similar methods have been used for controlling the style of generated text (e.g. BID29). Here, we use a fastText model BID16 as prediction architecture. Both the context and the candidates are run through the classifier and receive prepended labels. We experiment with two sources of supervision (EMOPREPEND and TOPICPREPEND, respectively). 
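As a concrete illustration of the PREPEND-K mechanics, the helper below uses the fastText Python API; the classifier file name and the value of k are placeholders, and both the context and each candidate are passed through the same function before encoding.

```python
import fasttext

classifier = fasttext.load_model("emotion_classifier.bin")  # path is a placeholder

def prepend_top_k_labels(text, k=3):
    # fastText's predict() does not accept newlines, so flatten the input first.
    labels, _ = classifier.predict(text.replace("\n", " "), k=k)
    # Labels look like "__label__proud"; strip the prefix before prepending.
    words = [lab.replace("__label__", "") for lab in labels]
    return " ".join(words) + " " + text

# "I finally got promoted!" -> "proud excited joyful I finally got promoted!"
```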
Closest to our task, we train a classifier to predict the emotion label from the description of the situation written by the Speaker before the dialogue for the training set dialogues of our EMPATHETICDIALOGUES. 1 To gauge whether supervision from a more distant task would still be helpful, we also experiment with a classifier trained on the 20-Newsgroup dataset BID15, for topic classification. In this set-up (ENSEM), we augment the encoders to incorporate latent representations from pretrained supervised architectures. We replace each of the encoders in our Transformer networks with the Ensemble encoder in FIG3, similar to a many-to-one style encoder-decoder arhcitecture BID23. This encoder takes the encoding h w from our basic Transformer encoder (either h x or h y), already trained on our data, and concatenates it with the representation h c extracted from the inner layer of a classification network. Here, we use the penultimate layer of a deep classifier. The concatenated encodings are projected linearly to the dimension required by the decoder, whose architecture doesn't change. When training the dialogue model, we freeze both the base Transformer encoder and the pretrained classifier (grayed out in FIG3, and train only the linear layers (and the decoder for generative systems). We used supervision from two sources that are related to emotion: Emojis from Twitter, through the use of the trained Deepmoji system BID6 released by the authors, either as-is (ENSEM-DM) or fine-tuned on the situation descriptions of EMPATHETICDIALOGUES(ENSEM-DM+), and a similarly large-scale dataset of public social media content labelled by their writers with emotion tags such as'annoyed', used to train a second Transformer encoder (ENSEM-TRAN). We evaluate the models on their ability to reproduce the Listener's portion of the conversation (i.e. the ability to react to someone else's story). We use both automated metrics and human evaluation to score each model's retrievals/generations. Human evaluation is useful, as automated metrics don't always correlate with human judgments of dialogue quality BID21 ), but we provide automated metrics to give a sense of how well they align with human judgment on this task. Automated Metrics (Table 2) For both retrieval and generative systems, we compute BLEU scores BID30 for the final response and compare against the gold label (the actual response), following the practice of earlier work in dialogue generation BID45 BID36. For the generative systems, we additionally report perplexity of the actual gold response. For the retrieval systems, we additionally compute p@1,100, the accuracy of the model at choosing the correct response out of a hundred randomly selected examples in the test set. When we compute p@1,100, the actual response is included in the candidates, unlike inference from the retrieval systems for all other metrics, which only uses training utterances as candidates. Human Ratings TAB2 ) We run two sets of crowdsourcing tasks on MTurk for humans to score the model responses. In the first task, participants are given a model's output for a randomly selected test set example and asked to score different aspects of the model. The rating task provides a means of comparing aspects of responses, and we are able to ask raters specifically about whether the response is acknowledging the conversation partner's feelings. 
We collected at least 100 ratings per model and asked about three aspects of performance, all rated on a likert scale (1: not at all, 3: somewhat, 5: very much):• Empathy/Sympathy: did the responses show understanding of the feelings of the person talking about their experience?• Relevance: did the responses seem appropriate to the conversation? Were they on-topic?• Fluency: could you understand the responses? Did the language seem accurate?Human A/B Rankings TAB4 In the second human evaluation task, participants were given output from two (randomly ordered) models and asked to select the better response, with an additional option of selecting "equal" or "neither". For this task, we only gave workers test set examples where the pair of models had differing responses. We collected at least 50 ratings per pair of models. TAB1 shows that fine-tuning to predict conversation responses on our data improves all automated metrics. Using only in-domain candidates leads to slightly higher BLEU scores. Training in the multitask setting degrades automated metrics compared to fine-tuning without emotion label supervision, except for average BLEU in the retrieval setting. Human evaluations in TAB2 show that fine-tuning a conversational model on the EMPATHETICDIALOGUES data and using the candidates in the dataset substantially improves performance on all metrics, in particular on the Empathy subscore of most interest to us, in both retrieval and generation set-ups. The bulk of the improvement comes from fine-tuning on the dataset, while from the multitask setting suggest slight improvements in the Empathy rating compared to the model fine-tuned with utterances only. Resources and capacity Fine-tuning the pretrained conversation model on dialogue utterances from our data does not change the size of the model, so the performance improvement is not due to increased model capacity. However, the multitask setting slightly increases the capacity of the base architecture (about 10k more parameters out of 84.3M or 85.1M for the retrieval and generative architectures, respectively), which may account for the performance improvement. To give a better sense of how resources and capacities balance out, TAB3 provides figures of resource use and number of parameters. We also include for a larger Transformer model (5 layers instead of 4). A generative model trained on our data performs as well or better than a generative pretrained Transformer model with one extra layer, which has a substantially larger capacity (a million more parameters), but not as well as such a model after it has been fine-tuned on our data. Augmenting conversation models with external pretrained classifiers TAB1 shows automated metrics for retrieval models augmented with external classifiers. Generally, the accuracy of the rankings (p@1,100) worsens, but the average BLEU scores improves. For generative models, the automated metrics are improved only in the Ensemble setting. Human evaluations TAB2 suggest that nearly all models tend to improve over the Base model on the Empathy score, but only a few setups lead to significant improvement. The best performance is achieved by the ensemble trained on emotion tags (Ensem-Tran), but models that prepend topics are overall performing better than similar models trained on emotion supervision, or than the fine-tuned DeepMoji ensemble. Relevance is significantly improved only by models augmented with topic classifiers. Fluency scores are mostly not significantly changed, and remain above 4. 
The benefit of combining pretrained models is that this requires minimal re-training of parameters and allows for leveraging models that are already trained, or that can be trained in a few hours in the case of fastText classifiers. If re-training is not a concern, pre-training a larger Transformer model and fine-tuning it on our data leads to better performance (see TAB3). We observe that augmenting that model with pre-trained unsupervised sentence representations in an ensemble setting could also be a promising direction, but we leave that for future investigations. 1.56 Table 6: Examples of model responses. In the first example, responses from the models with emotion prediction components focusing on the feelings of the speaker are more generic. In the second example, they focus on the feelings on the speaker while remaining topically specific. Tomorrow is my first university day and I'm very nervousGen-base What are you studying? Gen-multitask Good luck! Gen-emoPrepend-5 I'm sure you will be fine. Gen-ensemb-DM+ Are you nervous? I had to go out of town by myself for three weeks. It was hard not being able to talk to anyone I knew. Do you still talk to the people you went with? Ret-multitask I'm sorry to hear that. Do you think you struggle to communicate with people? Ret-emoPrepend-5 That sounds difficult. Did you try talking to people in your neighborhood? Ret-ensemb-DM+ Did you start to feel a bit lonely?A/B comparisons To try and capture the main takeaways, we show averaged over pairs of models with similar characteristics TAB4. Responses from retrieval systems are frequently chosen over generation systems (generation:retrieval ratio: 0.62). Responses from models with added emotion supervision are often ranked above the raw pretrained model (ratios of 3.67 and 5.73), and less so against the base Transformer model fine-tuned on our data: in the retrieval case, the ratio of picking a line from a model with emotion supervision vs. base model, at 0.97, indicates that raters generally picked from each equally. However, in the generation case, the raters may have favored the models with emotion supervision explicitly represented (average ratio of 1.56). We introduce a new dataset of 25k dialogues grounded in situations prompted by specific emotion labels. Our experiments show that using this dataset to fine-tune conversation models leads to responses that are evaluated by humans as more empathetic, and that simple schemes to augment a fine-tuned model with an external pretrained classifier can lead to better performance without requiring onerous retraining. Future work will investigate how to use this dataset to model the Speaker, and how to integrate empathetic responding into more general dialogue when, for example, the needs for empathy have to be balanced with staying on topic or providing information (see Table 6).Other possible directions would be to see if this data can serve as additional weakly supervised data for more complex emotion-related tasks that look at emotion evolution or causality BID10 BID33. We hope that our and dataset will stimulate more research in the important direction of making dialog systems more empathetic. In TAB5, we include the exact percentage of emotion labels for the situation descriptions in our final dataset. We collected crowdsourced dialogues using the ParlAI platform BID25 to interact with Amazon Mechanical Turk (MTurk). 
A pair of workers are asked to (i) select an emotion word each and describe a situation when they felt that way, and (ii) have a conversation about each of the situations, as outlined below. Writing a situation description prompted by an emotion label In the first stage of the task, workers are asked to describe in a few sentences a situation based on a feeling label. Each worker is given three labels from our list of 32 emotions. They are asked to select one of the options and write a short description of a personal situation where they felt that way. We ask the workers to try to keep these descriptions between 1-3 sentences. The average response is 19.8 words. Having a conversation In the second stage, two workers are paired and asked to have two short chats with each other. In each chat, one worker (speaker) starts a conversation about the situation they previously described, and the other worker (listener) responds. Neither can see what the other worker was given as emotion label or the situation description they submitted, so they must respond to each others' stories based solely on cues within the conversation. Each conversation is allowed to be 4-8 utterances long (the average is 4.31 utterances per conversation). The average utterance length was 15.2 words long. Ensuring balanced prompt coverage After the first few initial rounds of data collection, we forced workers to select prompts among three emotion labels that had been the least chosen overall so far if it was their first time working on the task. If they had already performed the task, the offered emotion labels were among those that they had not chosen before, or among the three least chosen ones if they had worked on nearly all of them at least once. This process made workers select emotions that they might not spontaneously have preferred, but we observed an initial bias for situations that were easier to describe (e.g, a situation causing surprise). Given that a conversation model trained for empathetic responding needs to be able to handle emotions even if they are less frequent, we opted for this balancing procedure to make training for these categories easier, while still allowing for some measure of choice for workers. Workers 810 US workers were recruited using MTurk. Each worker had to contribute at least one situation description, and one pair of conversations: one as Speaker about the situation they contributed, and one as Listener about the situation contributed by another worker. Workers were allowed to accept additional hits and contribute more sets of situation descriptions, conversations as Speaker, conversations as Listener. The median number of conversation per worker was 8, while We recently got 9 chicks and we've been having to work on making them a coop! I had to do so much research but I think we finally have a place that they'll enjoy living when they aren't able to free range. Listener: OHH! I Love chickens! I have always wanted some. I have a duck! lol-What kind of chickens are they? Speaker: We currently have 2 Australorps, 3 Rhode Island Reds, 3 Barred Plymouth Rocks, and 1 Welsummer, but 4 of the 9 ended up being roosters. Ugh! Listener: Oh man! They fight sometimes. I hope they aren't too bad about waking you up in the morning. Chickens can be very sweet though! Speaker: I love my little hens, especially one I've named Curly. The roosters might get replaced by hens though because the crowing is so frustrating! Label: Surprised Situation: Speaker felt this when... 
"I got a lottery ticket while I was at work today. I won $100 on the scratch off. I was shocked. I never win." Conversation: Speaker: I won $100 on a scratch off today. I was shocked. I never win. Listener: Wow! How often do you play the lottery? Speaker: I usually go on our Tuesday break to buy one with coworkers. Listener: Neat! Well that is a fantastic feat. Maybe you can win again sometime? the average was 61, and some workers were definitely contributing more hits than others. To ensure quality, we hand-checked random subsets of conversations by our most-frequent workers. They were allowed to participate in as many of these HITs as they wanted for the first 10k conversations, then we added qualifications to limit the more frequently active workers to a maximum of 100 conversations. We include ten randomly selected dialogues from our training set in TAB6. Our dataset can also be used to train or fine-tune an emotion classifier, as we do in our PREPEND-K and ENSEM-DM+ set-ups. To give a sense of where the difficulty falls compared to existing emotion and sentiment classification benchmarks, we reproduce the table from BID6 and add when fine-tuning the Deepmoji model on our dataset, or using a fastText classifier TAB7. A.4.1 CROWDSOURCING DESCRIPTION Human evaluations were collected on MTurk. For the rating task, workers were shown one randomly subsampled example from the test set for a randomly selected model (this was done 100 times per model) and asked to rate that single response. 217 US workers participated in the rating task, and had to perform a minimum of one rating. For the human comparison task, workers were shown a dialogue context, and the two responses from a pair of models presented in a randomized order (this was done 50 times per pair of models). They had to select if they preferred one, the other, both equally, or neither. 337 US workers participated in the model comparison task. In Figure 5, we provide the exact comparisons between model responses for the ranking task. Scores less than 1 indicate that the vertical model is preferred, whereas scores greater than one indicate more of a preference for the horizontal model.
HyesW2C9YQ
We improve existing dialogue systems for responding to people sharing personal stories, incorporating emotion prediction representations and also release a new benchmark and dataset of empathetic dialogues.
Granger causality is a widely-used criterion for analyzing interactions in large-scale networks. As most physical interactions are inherently nonlinear, we consider the problem of inferring the existence of pairwise Granger causality between nonlinearly interacting stochastic processes from their time series measurements. Our proposed approach relies on modeling the embedded nonlinearities in the measurements using a component-wise time series prediction model based on Statistical Recurrent Units (SRUs). We make a case that the network topology of Granger causal relations is directly inferrable from a structured sparse estimate of the internal parameters of the SRU networks trained to predict the processes’ time series measurements. We propose a variant of SRU, called economy-SRU, which, by design has considerably fewer trainable parameters, and therefore less prone to overfitting. The economy-SRU computes a low-dimensional sketch of its high-dimensional hidden state in the form of random projections to generate the feedback for its recurrent processing. Additionally, the internal weight parameters of the economy-SRU are strategically regularized in a group-wise manner to facilitate the proposed network in extracting meaningful predictive features that are highly time-localized to mimic real-world causal events. Extensive experiments are carried out to demonstrate that the proposed economy-SRU based time series prediction model outperforms the MLP, LSTM and attention-gated CNN-based time series models considered previously for inferring Granger causality. The physical mechanisms behind the functioning of any large-scale system can be understood in terms of the networked interactions between the underlying system processes. Granger causality is one widely-accepted criterion used in building network models of interactions between large ensembles of stochastic processes. While Granger causality may not necessarily imply true causality, it has proven effective in qualifying pairwise interactions between stochastic processes in a variety of system identification problems, e.g., gene regulatory network mapping , and the mapping of human brain connectome . This perspective has given rise to the canonical problem of inferring pairwise Granger causal relationships between a set of stochastic processes from their time series measurements. At present, the vast majority of Granger causal inference methods adopt a model-based inference approach whereby the measured time series data is modeled using with a suitable parameterized data generative model whose inferred parameters ultimately reveal the true topology of pairwise Granger causal relationships. Such methods typically rely on using linear regression models for inference. However, as illustrated in the classical bivariate example by , linear model-based Granger causality tests can fail catastrophically in the presence of even mild nonlinearities in the measurements, thus making a strong case for our work which tackles the nonlinearities in the measurements by exploring new generative models of the time series measurements based on recurrent neural networks. Consider a multivariate dynamical system whose evolution from an initial state is fully characterized by n distinct stochastic processes which can potentially interact nonlinearly among themselves. 
Our goal here is to unravel the unknown nonlinear system dynamics by mapping the entire network of pairwise interactions between the system-defining stochastic processes, using Granger causality as the qualifier of the individual pairwise interactions. In order to detect the pairwise Granger causal relations between the stochastic processes, we assume access to their concurrent, uniformly-sampled measurements presented as an n-variate time series x = {x t : t ∈ N} ⊂ R n. Let x t,i denote the i th component of the n-dimensional vector measurement x t, representing the measured value of process i at time t. Motivated by the framework proposed in , we assume that the measurement samples x t, t ∈ N are generated sequentially according to the following nonlinear, component-wise autoregressive model: x t,i = f i (x t−p:t−1,1, x t−p:t−1,2, . . ., x t−p:t−1,n) + e t,i, i = 1, 2,... n, where x t−p:t−1,j {x t−1,j, x t−2,j, . . ., x t−p,j} represents the most recent p measurements of the j th component of x in the immediate past relative to current time t. The scalar-valued component generative function f i captures all of the linear and nonlinear interactions between the n stochastic processes up to time t − 1 that decide the measured value of the i th stochastic process at time t. The residual e i,t encapsulates the combined effect of all instantaneous and exogenous factors influencing the measurement of process i at time t, as well as any imperfections in the presumed model. Equation 1 may be viewed as a generalization of the linear vector autoregressive (VAR) model in the sense that the components of x can be nonlinearly dependent on one another across time. The value p is loosely interpreted to be the order of the above nonlinear autoregressive model. We now proceed to interpret Granger causality in the context of the above component-wise time series model. Recalling the standard definition by , a time series v is said to Granger cause another time series u if the past of v contains new information above and beyond the past of u that can improve the predictions of current or future values of u. For x with its n components generated according to equation 1, the concept of Granger causality can be extended as suggested by as follows. We say that series j does not Granger cause series i if the componentwise generative function f i does not depend on the past measurements in series j, i.e., for all t ≥ 1 and all distinct pairs x t−p:t−1,j and x t−p:t−1,j, f i (x t−p:t−1,1, . . ., x t−p:t−1,j, . . ., x t−p:t−1,n) = f i x t−p:t−1,1,..., x t−p:t−1,j,..., x t−p:t−1,n. From equation 1, it is immediately evident that under the constraint in equation 2, the past of series j does not assert any causal influence on series i, in alignment with the core principle behind Granger causality. Based on the above implication of equation 2, the detection of Granger noncausality between the components of x translates to identifying those components of x whose past is irrelevant to the functional description of each individual f i featured in equation 1. Note that any reliable inference of pairwise Granger causality between the components of x is feasible only if there are no unobserved confounding factors in the system which could potentially influence x. In this work, we assume that the system of interest is causally sufficient , i.e., none of the n stochastic processes (whose measurements are available) have a common Granger-causing-ancestor that is unobserved. 
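For illustration, the following sketch simulates a three-variable instance of equation 1 with a known causal graph; the particular choice of functions f_i, the lag p = 1, and the noise level are illustrative and not taken from any experiment described here. Such synthetic series are the usual way to check whether an inference method recovers the ground-truth Granger graph.

```python
import numpy as np

def simulate(T=1000, noise_std=0.1, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros((T, 3))
    x[0] = rng.normal(size=3)
    for t in range(1, T):
        past = x[t - 1]
        # Ground-truth Granger structure: series 1 -> series 2, series 2 -> series 3.
        x[t, 0] = 0.6 * past[0] + noise_std * rng.normal()
        x[t, 1] = 0.5 * past[1] + np.tanh(past[0]) + noise_std * rng.normal()
        x[t, 2] = 0.4 * past[2] + np.sin(past[1]) + noise_std * rng.normal()
    return x
```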
We undertake a model-based inference approach wherein the time series measurements are used as observations to learn an autoregressive model which is anatomically similar to the componentwise generative model described in equation 1 except for the unknown functions f i replaced with their respective parameterized approximations denoted by g i. Let Θ i, 1 ≤ i ≤ n denote the complete set of parameters encoding the functional description of the approximating functions {g i} n i=1. Then, the pairwise Granger causality between series i and the components of x is deduced from Θ i which is estimated by fitting g i's output to the ordered measurements in series i. Specifically, if the estimated Θ i suggests that g i's output is independent of the past measurements in series j, then we declare that series j is Granger noncausal for series i. We aim to design the approximation function g i to be highly expressive and capable of well-approximating any intricate causal coupling between the components of x induced by the component-wise function f i, while simultaneously being easily identifiable from underdetermined measurements. By virtue of their universal approximation property (Schäfer &), recurrent neural networks or RNNs are a particularly ideal choice for g i towards inferring the pairwise Granger causal relationships in x. In this work, we investigate the use of a special type of RNN called the statistical recurrent unit (SRU) for inferring pairwise Granger causality between multiple nonlinearly interacting stochastic processes. Introduced by , an SRU is a highly expressive recurrent neural network designed specifically for modeling multivariate time series data with complex-nonlinear dependencies spanning multiple time lags. Unlike the popular gated RNNs (e.g., long short-term memory (LSTM) and gated recurrent unit (GRU)) , the SRU's design is completely devoid of the highly nonlinear sigmoid gating functions and thus less affected by the vanishing/exploding gradient issue during training. Despite its simpler ungated architecture, an SRU can model both short and long-term temporal dependencies in a multivariate time series. It does so by maintaining multi-time scale summary statistics of the time series data in the past, which are preferentially sensitive to different older portions of the time series x. By taking appropriate linear combinations of the summary statistics at different time scales, an SRU is able to construct predictive causal features which can be both highly component-specific and lag-specific at the same time. From the causal inference perspective, this dual-specificity of the SRU's predictive features is its most desirable feature, as one would argue that causal effects in reality also tend to be highly localized in both space and time. The main contributions of this paper can be summarized as follows: 1. We propose the use of statistical recurrent units (SRUs) for detecting pairwise Granger causality between the nonlinearly interacting stochastic processes. We show that the entire network of pairwise Granger causal relationships can be inferred directly from the regularized block-sparse estimate of the input-layer weight parameters of the SRUs trained to predict the time series measurements of the individual processes. 2. We propose a modified SRU architecture called economy SRU or eSRU in short. The first of the two proposed modifications is aimed at substantially reducing the number of trainable parameters in the standard SRU model without sacrificing its expressiveness. 
The second modification entails regularizing the SRU's internal weight parameters to enhance the interpretability of its learned predictive features. Compared to the standard SRU, the proposed eSRU model is considerably less likely to overfit the time series measurements. 3. We conduct extensive numerical experiments to demonstrate that eSRU is a compelling model for inferring pairwise Granger causality. The proposed model is found to outperform the multi-layer perceptron (MLP), LSTM and attention-gated convolutional neural network (AG-CNN) based models considered in earlier works. In the proposed scheme, each of the unknown generative functions f_i, 1 ≤ i ≤ n, in the presumed component-wise model of x in equation 1 is individually approximated by a distinct SRU network. The i-th SRU network sequentially processes the time series measurements x and outputs a next-step prediction sequence x̂_i, where x̂_{i,t+1} denotes the predicted value of component series i at time t + 1. The prediction x̂_{i,t+1} is computed in a recurrent fashion by combining the current input sample x_t at time t with the summary statistics of past samples of x up to and including time t − 1, as illustrated in Figure 1. The following update equations describe the sequential processing of the input time series x within the i-th SRU network in order to generate a prediction of x_{i,t+1}:
Feedback: r_{i,t} = h(W_r^{(i)} u_{i,t−1} + b_r^{(i)}), (3a)
Recurrent statistics: φ_{i,t} = h(W_in^{(i)} x_t + W_φ^{(i)} r_{i,t} + b_φ^{(i)}), (3b)
Multi-scale summary statistics: u_{i,t} = [u_{i,t}^{α_1⊤}, ..., u_{i,t}^{α_m⊤}]^⊤, α_1, ..., α_m ∈ A, (3c)
Single-scale summary statistics: u_{i,t}^{α} = α φ_{i,t} + (1 − α) u_{i,t−1}^{α}, α ∈ A, (3d)
Output features: o_{i,t} = h(W_o^{(i)} u_{i,t} + b_o^{(i)}), (3e)
Output prediction: x̂_{i,t+1} = w_y^{(i)⊤} o_{i,t} + b_y^{(i)}. (3f)
The function h in the above updates is the elementwise Rectified Linear Unit (ReLU) operator, h(·) := max(·, 0), which serves as the nonlinear activation in the three dedicated single-layer neural networks that generate the recurrent statistics φ_{i,t}, the feedback r_{i,t} and the output features o_{i,t} in the i-th SRU network. In order to generate the next-step prediction of series i at time t, the i-th SRU network first prepares the feedback r_{i,t} by nonlinearly transforming its last hidden state u_{i,t−1}: as stated in equation 3a, a single-layer ReLU network parameterized by the weight matrix W_r^{(i)} maps u_{i,t−1} to r_{i,t}. (Figure 1, referenced above, depicts how the single-scale summary statistics at the different scales α ∈ A are concatenated inside the network.) For values of the scale α ≈ 1, the single-scale summary statistic u_{i,t}^{α} in equation 3d is more sensitive to the recent past measurements in x. On the other hand, α ≈ 0 yields a summary statistic that is more representative of the older portions of the input time series. The original SRU work elaborates on how the SRU is able to generate output features o_{i,t}, 1 ≤ i ≤ n, that are preferentially sensitive to measurements from specific past segments of x by taking appropriate linear combinations of the summary statistics corresponding to different values of α in A. The input-layer weight matrix W_in^{(i)} in equation 3b regulates the influence of the individual components of the input time series x on the generation of the recurrent statistics φ_{i,t}, and ultimately on the next-step prediction of series i. In real-world dynamical systems, the networked interactions are typically sparse, which implies that very few dimensions of the input time series x actually play a role in the generation of its individual components. Bearing this property of the networked interactions in mind, we are interested in learning the parameters Θ^{(i)}_{SRU} such that the estimated input-layer weight matrix W_in^{(i)} is column-sparse. We propose to learn the parameters Θ^{(i)}_{SRU} of the i-th SRU network by minimizing the penalized mean squared prediction error loss shown below.
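The display for this penalized loss did not survive in the text above. Based on the surrounding description -- a mean squared next-step prediction error plus a λ_1-weighted group-norm penalty on the columns of the input-layer matrix W_in^{(i)} -- a plausible reconstruction of equation 4 is:

```latex
\hat{\Theta}^{(i)}_{\mathrm{SRU}}
  = \arg\min_{\Theta^{(i)}_{\mathrm{SRU}}}
    \frac{1}{T-1}\sum_{t=1}^{T-1}\left(x_{i,t+1}-\hat{x}_{i,t+1}\right)^{2}
    \;+\; \lambda_{1}\sum_{j=1}^{n}\big\lVert W^{(i)}_{\mathrm{in}}(:,j)\big\rVert_{2}.
    \qquad (4)
```

Only the structure (a squared prediction error term and a column-wise group penalty on W_in^{(i)} weighted by λ_1) is taken from the surrounding text; the exact normalization and the indexing of the time sum are assumptions.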
In the above, the network output x̂_{i,t} depends nonlinearly on W_in^{(i)} according to the composite relation described by the updates (3a)-(3f). The implication of the column W_in^{(i)}(:, j) being estimated as the all-zeros vector is that the past measurements in series j do not influence the predicted future value of series i. In this case, we declare that series j does not Granger-cause series i. Moreover, the index set supporting the non-zero columns in the estimated weight matrix Ŵ_in^{(i)} enumerates the components of x which are likely to Granger-cause series i. Likewise, the entire network of pairwise Granger causal relationships in x can be deduced from the non-zero column support of the estimated weight matrices W_in^{(i)}, 1 ≤ i ≤ n, in the n SRU networks trained to predict the components of x. The component-wise SRU optimization problem in equation 4 is nonconvex and potentially has multiple local minima. To solve for Θ̂^{(i)}_{SRU}, we use first-order gradient-based methods such as stochastic gradient descent, which have been found to be consistently successful in finding good solutions of nonconvex deep neural network optimization problems. Since our approach of detecting Granger noncausality hinges upon correctly identifying the all-zero columns of W_in^{(i)}, it is important that the first-order gradient-based parameter updates used for minimizing the penalized SRU loss drive the majority of the coefficients in W_in^{(i)} exactly to zero. To obtain such a group-sparse estimate of W_in^{(i)}, we follow the same approach as earlier work and resort to a first-order proximal gradient descent algorithm to find a regularized solution of the SRU optimization. The gradients needed for executing the gradient descent updates of the SRU network parameters are computed efficiently using the backpropagation through time (BPTT) procedure. By computing the summary statistics of past measurements at sufficiently granular time scales, an SRU can learn predictive causal features which are highly localized in time. While a higher granularity of α in A translates to a more general SRU model that fits better to the time series measurements, it also entails a substantial increase in the number of trainable parameters. Since measurement scarcity is typical in causal inference problems, the proposed component-wise SRU based time series prediction model is usually overparameterized and thus susceptible to overfitting. The typically high dimensionality of the recurrent statistic φ_t accentuates this issue. To alleviate the overfitting concerns, we propose two modifications to the standard SRU aimed primarily at reducing its likelihood of overfitting the time series measurements. The modifications are relevant regardless of the current Granger causal inference context, and henceforth we refer to the modified SRU as Economy-SRU (eSRU). We propose to reduce the number of trainable parameters in the i-th SRU network by substituting the feedback ReLU network parameterized by W_r^{(i)} with the two-stage feedback network illustrated in Figure 2 (Figure 2: Proposed two-stage feedback in economy-SRU). Owing to the structure in the associated time series measurements, the high-dimensional summary statistics u_{i,t} learned by the SRU network tend to be highly structured, and thus u_{i,t} has significantly fewer degrees of freedom relative to its ambient dimension. Thus, by projecting the m d_φ-dimensional u_{i,t} onto the d_r′ (≪ m d_φ) rows of D_r^{(i)}, we obtain its low-dimensional embedding v_{i,t}, which nonetheless retains most of the contextual information conveyed by the uncompressed u_{i,t}.
The second stage of the proposed feedback network is a single/multi-layer ReLU network which maps the sketched summary statistics v_{i,t} to the feedback vector r_{i,t}. The second-stage ReLU network is parameterized by a weight matrix W_r′^{(i)} ∈ R^{d_r × d_r′} and a bias vector b_r^{(i)}. Compared to the standard SRU's feedback, whose generation is controlled by m d_φ d_r + d_r trainable parameters, the proposed feedback network has only d_r d_r′ + d_r trainable parameters, which is substantially fewer when d_r′ ≪ m d_φ. Consequently, the modified SRU is less susceptible to overfitting. In the standard SRU, there are no restrictions on the weight matrix W_o^{(i)} that mixes the multi-scale summary statistics into the output features; our second modification constrains this mixing to be group-sparse, so that each predictive feature depends on only a few components of the recurrent statistics. In this spirit, we propose the following penalized optimization problem (equation 5) to estimate the parameters Θ^{(i)}_{eSRU} of the eSRU model equipped with the two-stage feedback proposed in Section 4.1: the prediction loss in equation 4 is augmented with a group-norm penalty on the coefficients of W_o^{(i)}. Here λ_1 and λ_2 are positive constants that trade the group-sparse penalizations off against the eSRU's fit to the measurements in the i-th component series. The penalty on W_o^{(i)} acts on groups of weight coefficients obtained by extracting the entries indexed by the sets G_{j,k}. As shown via an example in Fig. 3, the index set G_{j,k} enumerates the m weight coefficients in the j-th row of W_o^{(i)} that multiply the m single-scale summary statistics (one per scale α ∈ A) of the k-th component of the recurrent statistic φ_{i,t}. The group-sparse regularization is applied to the effect that each predictive feature in o_{i,t} depends on only a few components of the recurrent statistic φ_{i,t} via their linearly mixed multi-scale exponentially weighted averages. We opine that the learned linear mixtures of the multi-scale summary statistics are highly sensitive to certain past segments of the input time series x. Consequently, the output features in o_{i,t} are both time-localized and component-specific, a common trait of real-world causal effects. (Footnote 1: Gaussian random matrices of appropriate dimensions are approximately isometries with overwhelming probability. However, instead of using n independent instantiations of a Gaussian random matrix for initializing D_r^{(i)}, 1 ≤ i ≤ n, we recommend initializing them with the same random matrix, as the latter strategy reduces the probability that any one of them is a spurious encoder by a factor of n.) Figure 3: An illustration of the proposed group-wise mixing of the multi-timescale summary statistics u_{i,t} in the i-th SRU (with d_φ = 5) towards generating the j-th predictive feature in o_{i,t}. The weights corresponding to the same colored connections belong to the same group. The above group-sparse regularization of the weight coefficients in W_o^{(i)}, together with the column-wise group-sparse regularization of W_in^{(i)}, is pivotal to enforcing that the occurrence of any future pattern in a time series can be attributed to the past occurrences of a few highly time-localized patterns in the ancestral time series. The results of our numerical experiments further confirm that, by choosing λ_1 and λ_2 appropriately, the proposed group-wise sparsity-inducing regularization of W_o^{(i)} improves the recovery of the underlying Granger causal network. We evaluate the performance of the proposed SRU- and eSRU-based component-wise time series models in inferring pairwise Granger causal relationships in a multivariate time series. The proposed models are compared to the existing MLP- and LSTM-based models and the attention-gated CNN-based model (referred to hereafter as the Temporal Causal Discovery Framework (TCDF)) proposed in earlier works. To ensure parity between the competing models, the maximum size of all the input/hidden/output layers in the different NN/RNN time series models is fixed to 10, unless specified otherwise. The complete list of tuned hyperparameters of the considered models used for different datasets is provided in Appendix G.
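To make the two-stage feedback of Section 4.1 concrete, the sketch below implements it in PyTorch under stated assumptions: the class and argument names are mine, the fixed encoder D_r is drawn once from a Gaussian (and, per footnote 1, the same matrix would be shared across the n component networks), and the trainable second stage is a single ReLU layer.

```python
import torch
import torch.nn as nn

class EconomyFeedback(nn.Module):
    """Sketch of eSRU's two-stage feedback: a fixed random projection of the
    multi-scale summary statistics u_{t-1}, followed by a small trainable ReLU layer."""

    def __init__(self, m, d_phi, d_r_prime, d_r, seed=0):
        super().__init__()
        gen = torch.Generator().manual_seed(seed)              # shared seed across the n networks
        D_r = torch.randn(d_r_prime, m * d_phi, generator=gen) / (m * d_phi) ** 0.5
        self.register_buffer("D_r", D_r)                       # fixed encoder, not trained
        self.second_stage = nn.Linear(d_r_prime, d_r)          # d_r * d_r' + d_r parameters

    def forward(self, u_prev):                                 # u_prev: (batch, m * d_phi)
        v = u_prev @ self.D_r.t()                              # low-dimensional sketch v_t
        return torch.relu(self.second_stage(v))                # feedback r_t
```

A learned encoder is also possible; the comparison reported later (Table 5 in the appendix) finds the fixed random encoder statistically tied with it while being simpler to train.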
The performance of each method is qualified in terms of its AUROC (Area Under the Receiver Operating Characteristic curve). Here, the ROC curve illustrates the trade off between the true-positive rate (TPR) and the false-positive rate (FPR) achieved by the methods towards the detection of n 2 pairwise Granger causal relationships between the n measured processes in the experiment. The ROC curves of SRU and eSRU models are obtained by sweeping through different values of the regularization parameter λ 1 in equation 4 and equation 5, respectively. Likewise, the ROCs of component-wise MLP and LSTM models are obtained by varying λ 1's counterpart in. For TCDF, the ROC curve is obtained by varying the threshold that is applied to attention scores of the trained AG-CNN model in. In the first set of experiments, the time series measurements x intended for Granger causal inference are generated according to the Lorenz-96 model which has been extensively used in climate science for modeling and prediction purposes . In the Lorenz-96 model of an nvariable system, the individual state trajectories of the n variables are governed by the following set of odinary differential equations: where the first and the second terms on the RHS represent the advection and the diffusion in the system, respectively, and the third term F is the magnitude of the external forcing. The system dynamics becomes increasingly chaotic for higher values of F . We evaluate and compare the accuracy of the proposed methods in inferring pairwise Granger causal relationships between n = 10 variables with Lorenz-96 dynamics. We consider two settings: F = 10 and F = 40 in order to simulate two different strengths of nonlinearity in the causal interactions between the variables. Here, the ground truth is straightforward i.e., for any 1 ≤ i ≤ n, the i th component of time series x is Granger caused by its components with time indices from i − 2 to i + 1. In the case of weak nonlinear interactions (F = 10), from Table 1a, we observe that eSRU achieves the highest AUROC among all competing models. The gap in performance is more pronounced when fewer time series measurements (T = 250) are available. In case of stronger nonlinear interactions (F = 40), we observe that both SRU and eSRU are the only models that are able to perfectly recover the true Granger causal network (Table 1b). Surprisingly, the SRU and eSRU models perform poorer when F is small. This could be attributed to the proposed models not sufficiently regularized when fitted to weakly-interacting time series measurements that are less nonlinear. In the second set of simulations, we consider the time series measurements x to be generated according to a 3 rd order linear VAR model: where the matrices A (i), i = 1, 2, 3 contain the regression coefficients which model the linear interactions between its n = 10 components. The noise term w t is Gaussian distributed with zero mean and covariance 0.01I. We consider a sparse network of Granger causal interactions with only 30% of the regression coefficients in A i selected uniformly being non-zero and the regression matrices A i being collectively joint sparse (same setup as in). All non-zero regression coefficients are set equal to 0.0994 which guarantees the stability of the simulated VAR process. From Table 2, we observe that all time series models generally achieve a higher AUROC as the number of measurements available increases. 
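As an aside before returning to the VAR results of Table 2: the Lorenz-96 dynamics used in the first set of experiments can be simulated in a few lines of NumPy. This is a minimal sketch under my own assumptions (simple Euler-Maruyama integration, an arbitrary step size and noise level, and a random initial perturbation around the forcing value); the original experiments may have used a different integrator and settings.

```python
import numpy as np

def simulate_lorenz96(n=10, F=10.0, T=500, dt=0.01, noise=0.1, seed=0):
    """dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F, with cyclic indices."""
    rng = np.random.default_rng(seed)
    x = F + 0.01 * rng.standard_normal(n)          # start near the fixed point x_i = F
    series = np.empty((T, n))
    for t in range(T):
        drift = (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F
        x = x + dt * drift + noise * np.sqrt(dt) * rng.standard_normal(n)
        series[t] = x
    return series                                   # T time-ordered samples of the n series
```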
For T = 500, the component-wise MLP and the proposed eSRU are statistically tied when comparing their average AUROCs. For T = 1000, eSRU significantly outperforms the rest of the time series models and is able to recover the true Granger causal network almost perfectly. In the third set of experiments, we apply the different learning methods to estimate the connections in the human brain from simulated blood oxygenation level dependent (BOLD) imaging data. Here, the individual components of x comprise T = 200 time-ordered samples of the BOLD signals simulated for n = 15 different brain regions of interest (ROIs) in a human subject. To conduct the experiments, we use simulated BOLD time series measurements corresponding to the five different human subjects (labelled as 2 to 6) in the Sim-3.mat file shared at https://www.fmrib.ox.ac.uk/datasets/netsim/index.html. The generation of the Sim3 dataset is described in. The goal here is to detect the directed connectivity between different brain ROIs in the form of pairwise Granger causal relationships between the components of x. From Table 3, it is evident that eSRU is more robust to overfitting compared to the standard SRU and detects the true Granger causal relationships more reliably. Interestingly, a single-layer cMLP model is found to outperform more complex cLSTM and attention gated-CNN (TCDF) models; however we expect the latter models to perform better when more time series measurements are available. In the final set of experiments, we evaluate the performance of the different time series models in inferring gene regulation networks synthesized for the DREAM-3 In Silico Network Challenge . Here, the time series x represents the in silico measurements of the gene expression levels of n = 100 genes, available for estimating the gene regulatory networks of E.coli and yeast. A total of five gene regulation networks are to be inferred (two for E.coli and three for yeast) from the networks' gene expression level trajectories recorded while they recover from 46 different perturbations (each trajectory has 21 time points). All NN/RNN models are implemented with 10 neurons per layer, except for the componentwise MLP model which has 5 neurons per layer. From Table 4, we can observe that the proposed SRU and eSRU models are generally more accurate In this work, we addressed the problem of inferring pairwise Granger causal relationships between stochastic processes that interact nonlinearly. We showed that the such causality between the processes can be robustly inferred from the regularized internal parameters of the proposed eSRU-based recurrent models trained to predict the time series measurements of the individal processes. Future work includes: i Investigating the use of other loss functions besides the mean-square error loss which can capture the exogenous and instantaneous causal effects in a more realistic way. ii Incorporating unobserved confounding variables/processes in recurrent models. iii Inferring Granger causality from multi-rate time series measurements. Initial efforts in testing for nonlinear Granger causality focused mostly on the nonparameteric approach. to the multivariate setting. The biggest common drawback of these nonparameteric tests is the large sample sizes required to robustly estimate the conditional probabilities that constitute the test statistic. 
Furthermore, the prevalent strategy in these methods of testing each one of the variable-pairs individually to detect pairwise Granger causality is unappealing from a computational standpoint, especially when a very large number of variables are involved. In the model driven approach, the Granger causal relationships are inferred directly from the parameters of a data generative model fitted to the time series measurements. Compared to the nonparameteric approach, the model-based inference approach is considerably more sample efficient, however the scope of inferrable causal dependencies is dictated by the choice of data generative model. Nonlinear kernel based regression models have been found to be reasonably effective in testing of nonlinear Granger causality. Kernel methods rely on linearization of the causal interactions in a kernel-induced high dimensional feature space; the linearized interactions are subsequently modeled using a linear VAR model in the feature space. Based on this idea, proposes a kernel Granger causality index to detect pairwise nonlinear Granger causality in the multivariate case.; , the nonlinear dependencies in the time series measurements are modeled using nonlinear functions expressible as sums of vector valued functions in the induced reproducing kernel Hilbert space (RKHS) of a matrix-valued kernel. , additional smoothness and structured sparsity constraints are imposed on the kernel parameters to promote consistency of the time series fitted nonlinear model. proposes a nonlinear kernel-based structural VAR model to capture instantaneous nonlinear interactions. The existing kernel based regression models are restrictive as they consider only additive linear combinations of the RKHS functions to approximate the nonlinear dependencies in the time series. Furthermore, deciding the optimal order of kernel based regression models is difficult as it requires prior knowledge of the mimimum time delay beyond which the causal influences are negligible. By virtue of their universal approximation ability, RNNs offer a pragmatic way forward in modeling of complex nonlinear dependencies in the time series measurements for the purpose of inferring Granger causal relationships. However, they all adopt the same naïve strategy whereby each pairwise causal relationship is tested individually by estimating its causal connection strength. The strength of the causal connection from series j to series i is determined by the ratio of mean-squared prediction errors incurred by unrestricted and restricted RNN models towards predicting series i using the past measurement sequences of all n component including and excluding the j th component alone, respectively. The pairwise testing strategy however does not scale well computationally as the number of component series becomes very large. This strategy also fails to exploit the typical sparse connectivity of networked interactions between the processes which has unlocked significant performance gains in the existing linear methods . In a recent work by , the pairwise Granger causal relationships are inferred directly from the weight parameters of component-wise MLP or LSTM networks fitted to the time series measurements. By enforcing column-sparsity of the input-layer weight matrices in the fitted MLP/LSTM models, their proposed approach returns a sparsely connected estimate of the underlying Granger causal network. 
Due to its feedforward architecture, a traditional MLP network is not well-suited for modeling ordered data such as a time series. Prior work demonstrated that the MLP network can learn short-range temporal dependencies spanning a few time delays by letting the network's input stage process multi-lag time series data over sliding windows. However, modeling long-range temporal dependencies using the same approach requires a larger sliding window size, which entails an inconvenient increase in the number of trainable parameters. The simulation results reported in earlier work indicate that MLP models are generally outperformed by LSTM models in extracting the true topology of pairwise Granger causality, especially when the processes interact in a highly nonlinear and intricate manner. While purposefully designed for modeling short- and long-term temporal dependencies in a time series, the LSTM is very general, often heavily overparameterized, and thus prone to overfitting. While using overparameterized models for inference is preferable when there is abundant training data available to leverage, there are several applications where the data available for causal inference is extremely scarce. It is our opinion that using a simpler RNN model combined with meaningful regularization of the model parameters is the best way forward in inferring Granger causal relationships from underdetermined time series measurements. Building on the ideas put forth in earlier work, the columns of the input-layer weight matrix W_in^{(i)} in the i-th SRU model are updated by proximal gradient steps: a gradient-descent step with stepsize η on the unregularized SRU loss l_i, followed by the elementwise soft-thresholding operator S_{λ1η}, defined as S_{λ1η}(x) := sign(x) max(|x| − λ_1 η, 0). The columns of the weight matrix W_in^{(i)} in the i-th eSRU model are also updated in exactly the same fashion. Likewise, the j-th row of the group-norm regularized weight matrix W_o^{(i)} in the eSRU optimization in equation 5 is updated by an analogous proximal step with threshold λ_2 η. The gradient of the unregularized loss function l_i, 1 ≤ i ≤ n, associated with the SRU and eSRU models used in the above updates is evaluated via the backpropagation through time (BPTT) procedure. We also considered an eSRU variant in which the encoding matrix D_r^{(i)} is learned from the data rather than fixed; in Table 5, we compare the Granger causality detection performance of this particular eSRU variant and the proposed design wherein D_r^{(i)} is a fixed random matrix. We observe that the performance of these two models is statistically tied, which indicates that the randomly constructed D_r^{(i)} is able to distill the necessary information from the high-dimensional summary statistics u_{i,t−1} required for generating the feedback r_{i,t}. Based on these results, we recommend using the proposed eSRU design with its randomly constructed encoding map D_r^{(i)}, because of its simpler design and reduced training complexity. In order to highlight the importance of learning time-localized predictive features in detecting Granger causality, we compare the following two time series models: the proposed eSRU with the group-sparse regularization of W_o^{(i)}, and an otherwise identical eSRU with an unstructured ridge penalty on W_o^{(i)}. Once again, we use the same experimental settings as mentioned in Section 5. From Table 6, we observe that, barring the Lorenz-96 (T = 250/500, F = 40) datasets, for which nearly perfect recovery of the Granger causal network is achieved, the average AUROC improves consistently for the other datasets by switching from unstructured ridge regularization to the proposed group-sparse regularization of the output weight matrix W_o^{(i)}. • Activation function for SRU and eSRU models: While the standard SRU formulation uses ReLU neurons, we found in our numerical experiments that using the Exponential Linear Unit (ELU) activation resulted in better performance. The ELU activation function is defined as ELU(z) = z for z > 0 and ELU(z) = α(e^z − 1) for z ≤ 0. In our simulations, the constant α is set equal to one.
• Number of neural layers in SRU model: To approximate the generative functions f_i in equation 1, we consider the simplest architecture for the SRU networks, whereby the constituent ReLU networks generating the recurrent features, output features and feedback have a single-layer feedforward design with an equal number of neurons. • Number of neural layers in Economy-SRU model: The ReLU networks used for generating the recurrent and output features in the proposed eSRU model have a single-layer feedforward design. However, the second stage of eSRU's modified feedback can be either a single- or multi-layer feedforward network. Provided that d_r′ ≪ m d_φ, a multi-layer implementation of the second stage of eSRU's feedback can still have fewer trainable parameters overall compared to the SRU's single-layer feedback network. The simulation results in Section 5 are obtained using a two-layer ReLU network in the second stage of eSRU's feedback for the DREAM-3 experiments, and a three-layer design for the Lorenz-96, VAR and NetSim experiments. • Self-interactions in DREAM-3 gene networks: The in-silico gene networks synthesized for the DREAM-3 challenge have no self-connections. Noting that none of the Granger causal inference methods evaluated in our experiments intentionally suppresses self-interactions, the reported AUROC values are computed by ignoring any self-connections in the inferred Granger causal networks. cMLP & cLSTM models: PyTorch implementations of the componentwise MLP and LSTM models are taken from https://github.com/icc2115/Neural-GC. The PyTorch implementation of the attention-gated CNN based Temporal Causal Discovery Framework (TCDF) is taken from https://github.com/M-Nauta/TCDF. Proposed SRU and Economy-SRU models: PyTorch implementations of the proposed componentwise SRU and eSRU models are shared at https://github.com/sakhanna/SRU_for_GCI. The receiver operating characteristics (ROC) of different Granger causal inference methods are compared in Figures 4-7. Here, an ROC curve represents the trade-off between the true-positive rate (TPR) and the false-positive rate (FPR) achieved by a given method while inferring the underlying pairwise Granger causal relationships. Table 11: Economy-SRU model configuration
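To complement the proximal updates described in the appendix above, the following sketch shows one regularized update of the input-layer weights. The text refers to an elementwise soft-thresholding operator S_{λ1η}; since the penalty in equation 4 acts on whole columns of W_in^{(i)}, the column-wise (group) variant is shown here instead -- this choice, the function names, and the in-place update are my assumptions.

```python
import torch

def prox_columns(W, threshold):
    """Column-wise soft-thresholding: shrink every column of W toward zero and
    set it exactly to zero once its l2 norm drops below `threshold` (= lambda1 * eta)."""
    col_norms = W.norm(dim=0, keepdim=True)
    shrink = torch.clamp(1.0 - threshold / (col_norms + 1e-12), min=0.0)
    return W * shrink

# one proximal gradient step on W_in (sketch):
#   W_in.data = prox_columns(W_in.data - eta * W_in.grad, lambda1 * eta)
```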
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SyxV9ANFDH
A new recurrent neural network architecture for detecting pairwise Granger causality between nonlinearly interacting time series.
Graph convolutional networks (GCNs) are powerful deep neural networks for graph-structured data. However, GCN computes nodes' representation recursively from their neighbors, making the receptive field size grow exponentially with the number of layers. Previous attempts on reducing the receptive field size by subsampling neighbors do not have any convergence guarantee, and their receptive field size per node is still in the order of hundreds. In this paper, we develop a preprocessing strategy and two control variate based algorithms to further reduce the receptive field size. Our algorithms are guaranteed to converge to GCN's local optimum regardless of the neighbor sampling size. Empirical show that our algorithms have a similar convergence speed per epoch with the exact algorithm even using only two neighbors per node. The time consumption of our algorithm on the Reddit dataset is only one fifth of previous neighbor sampling algorithms. Graph convolution networks (GCNs) BID1 generalize convolutional neural networks (CNNs) to graph structured data. The "graph convolution" operation applies same linear transformation to all the neighbors of a node, followed by mean pooling. By stacking multiple graph convolution layers, GCNs can learn nodes' representation by utilizing information from distant neighbors. GCNs have been applied to semi-supervised node classification BID1, inductive node embedding (a), link prediction (; BID1 and knowledge graphs , outperforming multi-layer perceptron (MLP) models that do not use the graph structure and graph embedding approaches (; ;) that do not use node features. However, the graph convolution operation makes it difficult to train GCN efficiently. A node's representation at layer L is computed recursively by all its neighbors' representations at layer L − 1. Therefore, the receptive field of a single node grows exponentially with respect to the number of layers, as illustrated in FIG0. Due to the large receptive field size, BID1 proposed training GCN by a batch algorithm, which computes the representation for all the nodes altogether. However, batch algorithms cannot handle large scale datasets because of their slow convergence and the requirement to fit the entire dataset in GPU memory. Hamilton et al. (2017a) made an initial attempt on developing stochastic algorithms to train GCNs, which is referred as neighbor sampling (NS) in this paper. Instead of considering all the neighbors, they randomly subsample D (l) neighbors at the l-th layer. Therefore, they reduce the receptive field size to l D (l), as shown in FIG0 (b). They found that for two layer GCNs, keeping D = 10 and D = 25 neighbors can achieve comparable performance with the original model. However, there is no theoretical guarantee on the predictive performance of the model learnt by NS comparing with the original algorithm. Moreover, the time complexity of NS is still D D = 250 times larger than training an MLP, which is unsatisfactory. In this paper, we develop novel stochastic training algorithms for GCNs such that D (l) can be as low as two, so that the time complexity of training GCN is comparable with training MLPs. Our methods are built on two techniques. First, we propose a strategy which preprocesses the first graph convolution layer, so that we only need to consider all neighbors within L−1 hops instead of L hops. This is significant because most GCNs only have L = 2 layers BID1; a). Second, we develop two control variate (CV) based stochastic training algorithms. 
We show that our CV-based algorithms have lower variance than NS and, for GCNs without dropout, our algorithm provably converges to a local optimum of the model regardless of D^{(l)}. We empirically test our methods on six graph datasets, and show that our techniques significantly reduce the bias and variance of the gradient relative to NS with the same receptive field size. Our algorithm with D^{(l)} = 2 achieves the same predictive performance as the exact algorithm in a comparable number of epochs on all the datasets, while the training time is 5 times shorter on our largest dataset. We now briefly review graph convolutional networks (GCNs) BID1 and the neighbor sampling (NS) algorithm (Hamilton et al., 2017a). The original GCN was presented in a semi-supervised node classification task BID1. We follow this setting throughout this paper. Generalizations of GCN to other tasks can be found in BID1 and Hamilton et al. (2017b). In the node classification task, we have an undirected graph G = (V, E) with V = |V| vertices and E = |E| edges, where each vertex v consists of a feature vector x_v and a label y_v. The label is only observed for some vertices V_L and we want to predict the labels for the remaining vertices V_U := V \ V_L. The edges are represented as a symmetric V × V adjacency matrix A, where A_{v,v′} is the weight of the edge between v and v′, and the propagation matrix P is a normalized version of A: Ã = A + I, D̃_{vv} = Σ_{v′} Ã_{vv′}, and P = D̃^{-1/2} Ã D̃^{-1/2}. A graph convolution layer is defined as Z^{(l+1)} = P Dropout_p(H^{(l)}) W^{(l)}, H^{(l+1)} = σ(Z^{(l+1)}), where H^{(l)} is the activation matrix in the l-th layer, each row of which is the activation of a graph node, H^{(0)} = X is the input feature matrix, W^{(l)} is a trainable weight matrix, σ(·) is an activation function, and Dropout_p(·) is the dropout operation with keep probability p. The model is trained with the loss L = (1/|V_L|) Σ_{v∈V_L} f(y_v, z_v^{(L)}), where f(·, ·) can be the square loss, cross-entropy loss, etc., depending on the type of the label. When P = I, GCN reduces to a multi-layer perceptron (MLP) model which does not use the graph structure. Compared with MLP, GCN is able to utilize neighbor information for node classification. We define n(v, L) as the set of all the L-neighbors of node v, i.e., the nodes that are reachable from v within L hops. It is easy to see from FIG0 that in an L-layer GCN, a node uses the information from all its L-neighbors. This makes GCN more powerful than MLP, but also complicates the stochastic training, which utilizes an approximated gradient ∇L ≈ (1/|V_B|) Σ_{v∈V_B} ∇f(y_v, z_v^{(L)}), where V_B ⊂ V_L is a minibatch of training data. The large receptive field size |∪_{v∈V_B} n(v, L)| per minibatch leads to high time complexity, space complexity and amount of IO. See Table 1 for the average number of 1- and 2-neighbors of our datasets. Table 1: Number of vertices, edges, and average number of 1- and 2-neighbors per node for each dataset. Undirected edges are counted twice and self-loops are counted once. Reddit is already subsampled to have a max degree of 128 following Hamilton et al. (2017a). We introduce alternative notations to help compare different algorithms. Let u_v denote the v-th row of U^{(l)} = P H^{(l)}; we focus on studying how u_v is computed based on node v's neighbors. To keep notations simple, we omit all the subscripts and tildes, and exchange the IDs of nodes so that the neighbors of the node under consideration are numbered 1, . . ., D, where D = |n(v, 1)| is the number of neighbors. We get the propagation rule u = Σ_{v=1}^{D} p_v h_v, which is used interchangeably with the matrix form U^{(l)} = P H^{(l)}.
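A dense NumPy sketch of the quantities defined above -- the normalized propagation matrix P and a single graph-convolution layer -- is given below. The variable names and the use of dense matrices are my simplifications; an actual implementation would rely on sparse matrix products.

```python
import numpy as np

def propagation_matrix(A):
    """P = D^{-1/2} (A + I) D^{-1/2}, the normalized propagation matrix."""
    A_tilde = A + np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

def gcn_layer(P, H, W, keep_prob=1.0, rng=None):
    """H_{l+1} = sigma(P * Dropout_p(H_l) * W_l), with ReLU as sigma and inverted dropout."""
    rng = rng or np.random.default_rng(0)
    mask = (rng.random(H.shape) < keep_prob) / keep_prob
    return np.maximum(P @ (H * mask) @ W, 0.0)
```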
To reduce the receptive field size, Hamilton et al. (2017a) propose a neighbor sampling (NS) algorithm. On the l-th layer, they randomly choose D^{(l)} neighbors for each node, and develop an estimator u_NS of u based on the Monte-Carlo approximation u_NS = (D / D^{(l)}) Σ_{v ∈ n̂^{(l)}(v)} p_v h_v, where n̂^{(l)}(v) is a random subset of D^{(l)} neighbors. In this way, they reduce the receptive field size from the number of L-hop neighbors to ∏_l D^{(l)}. Neighbor sampling can also be written in matrix form as U^{(l)}_NS = P̂^{(l)} H^{(l)}, where P̂^{(l)} is a sparser unbiased estimator of P, i.e., E P̂^{(l)} = P. The resulting approximate predictions are used for testing and for computing stochastic gradients during training. The NS estimator u_NS is unbiased. However, it has a large variance, which leads to biased predictions and gradients after the non-linearity in subsequent layers. Due to the biased gradients, training with NS does not converge to a local optimum of GCN. When D^{(l)} is moderate, NS may have some regularization effect like dropout, where it drops neighbors instead of features. However, for the extreme case D^{(l)} = 2, the neighbor dropout rate is too high to reach high predictive performance, as we will see in Sec. 5.4. Intuitively, making the prediction depend solely on one neighbor is inferior to using all the neighbors. To keep comparable prediction performance with the original GCN, Hamilton et al. (2017a) use relatively large D^{(1)} = 10 and D^{(2)} = 25. Their receptive field size D^{(1)} × D^{(2)} = 250 is still much larger than that of an MLP, which is 1. We first present a technique to preprocess the first graph convolution layer by approximating A Dropout_p(X) with Dropout_p(AX). The model is then unchanged except in the first layer, whose activation is computed as σ(Dropout_p(PX) W^{(0)}). This approximation does not change the expectation because E[A Dropout_p(X)] = E[Dropout_p(AX)], and it does not affect the predictive performance, as we shall see in Sec. 5.1. The advantage of this modification is that we can preprocess U^{(0)} = P H^{(0)} = PX and take U^{(0)} as the new input. In this way, the actual number of graph convolution layers is reduced by one -- the first layer is merely a fully connected layer instead of a graph convolution one. Since most GCNs only have two graph convolution layers (BID1; Hamilton et al., 2017a), this gives a significant reduction of the receptive field size from the number of 2-neighbors to the number of 1-neighbors. The numbers are reported in Table 1. We now present two novel control variate based estimators that have smaller variance as well as stronger theoretical guarantees than NS. We assume for now that the model does not have dropout and address dropout afterwards. Our estimator keeps a history h̄_v of the activation of every node and approximates u by u_CV = D p_v Δh_v + Σ_{v=1}^{D} p_v h̄_v, where v is a random neighbor and Δh_v = h_v − h̄_v. For ease of presentation, we assume that we only use the latest activation of one neighbor, while the implementation also includes the node itself besides the random neighbor, so D^{(l)} = 2. Using historical activations is cheap because they need not be computed recursively using their neighbors' activations, as shown in FIG0. Unlike NS, we apply the Monte-Carlo approximation to Σ_v p_v Δh_v instead of Σ_v p_v h_v. Since we expect h_v and h̄_v to be close, Δh_v will be small and u_CV should have a smaller variance than u_NS. In particular, if the model weights are kept fixed, h̄_v should eventually equal h_v, so that the term Σ_v p_v h̄_v − D p_v h̄_v acts as a control variate: it has zero mean and large correlation with u_NS, and is added to u_NS to reduce its variance. We refer to this stochastic approximation algorithm as CV, and we will formally analyze the variance and prove the convergence of the training algorithm using CV for the stochastic gradient in subsequent sections.
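The sketch below contrasts the two single-node estimators discussed above for u = Σ_v p_v h_v: the NS estimator, which rescales a random subset of neighbors, and the CV estimator, which applies the Monte-Carlo approximation only to the difference between the current and the historical activations. The function names and the NumPy framing are mine; the paper works with the layer-wise matrix forms instead.

```python
import numpy as np

def u_ns(p, h, d_sample, rng):
    """Neighbor sampling: u_NS = (D / D_sample) * sum over a random subset of neighbors."""
    idx = rng.choice(len(p), size=d_sample, replace=False)
    return (len(p) / d_sample) * (p[idx, None] * h[idx]).sum(axis=0)

def u_cv(p, h, h_hist, rng):
    """Control variate: u_CV = D * p_v * (h_v - hbar_v) + sum_v p_v * hbar_v,
    with one randomly chosen neighbor v and stored historical activations hbar."""
    v = rng.integers(len(p))
    return len(p) * p[v] * (h[v] - h_hist[v]) + (p[:, None] * h_hist).sum(axis=0)
```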
DISPLAYFORM1 In matrix form, CV computes the approximate predictions as follows, where we explicitly write down the iteration number i and add the subscript CV to the approximate activations DISPLAYFORM2 DISPLAYFORM3 DISPLAYFORM4 CV,i,v stores the latest activation of node v on layer l computed before time i. Formally, let m (l) i ∈ R V ×V be a diagonal matrix, and (m DISPLAYFORM5 After finishing one iteration we update historyH with the activations computed in that iteration as Eq.. With dropout, the activations H are no longer deterministic. They become random variables whose randomness come from different dropout configurations. Therefore, ∆h v = h v −h v is not necessarily small even if h v andh v have the same distribution. We develop another stochastic approximation algorithm, control variate for dropout (CVD), that works well with dropout. Our method is based on the weight scaling procedure to approximately compute the mean DISPLAYFORM0 That is, along with the dropout model, we can run a copy of the model with no dropout to obtain the mean µ v, as illustrated in FIG0. With the mean, we can obtain a better stochastic approximation by separating the mean and variance DISPLAYFORM1 whereμ v is the historical mean activation, obtained by storing µ v instead of h v, and ∆µ = µ v −μ v. u CV D an unbiased estimator of u because the term √ Dp v (h v − µ v) has zero mean, and the Monte-Carlo approximation DISPLAYFORM2 is made by assuming h v's to be independent Gaussians, which we will soon clarify. The pseudocodes for CV and CVD are in Appendix E. We analyze their variance in a simple independent Gaussian case, where we assume that activations are Gaussian random variables Table 2: Variance of different algorithms in the independent Gaussian case.. Without loss of generality, we assume that all the activations h v are one dimensional. We also assume that all the activations h 1,..., h D and historical activationsh 1,...,h D are independent, where the historical DISPLAYFORM0 DISPLAYFORM1 DISPLAYFORM2 We introduce a few more notations. ∆µ v and ∆s Table 2, where the derivations can be found in Appendix C. We decompose the variance as two terms: variance from Monte-Carlo approximation (VMCA) and variance from dropout (VD).If the model has no dropout, the activations have zero variance, i.e., s v =s v = 0, and the only source of variance is VMCA. We want VMCA to be small. As in Table 2, the VMCA for the exact estimator is 0. For the NS estimator, VMCA is DISPLAYFORM3 2, whose magnitude depends DISPLAYFORM4 Similarly, VMCA for both CV and CVD estimators is DISPLAYFORM0, which is likely because ∆µ v should be smaller than µ v. Since CV and CVD estimators have the same VMCA, we adopt the CV estimator for models without dropout, due to its simplicity. The VD of the exact estimator is s 2, which is overestimated by both NS and CV. NS overestimates VD by D times, and CV has even larger VD. Meanwhile, the VD of the CVD estimator is the same as the exact estimator, indicating CVD to be the best estimator for models with dropout. Besides smaller variance, CV also has stronger theoretical guarantees than NS. We can show that during testing, CV's prediction becomes exact after a few testing epochs. For models without dropout, we can further show that training using the stochastic gradients obtained by CV converges to GCN's local optimum. We present these in this section and Sec. 4.5. Note that the analysis does not need the independent Gaussian assumption. 
Given a model W, we compare the exact predictions (Eq. 1) and CV's approximate predictions (Eq. 5,6) during testing, which uses the deterministic weight scaling procedure. To make predictions, we run forward propagation by epochs. In each epoch, we randomly partition the vertex set V as I minibatches V 1,..., V I and in the i-th iteration, we run a forward pass to compute the prediction for nodes in V i. Note that in each epoch we scan all the nodes instead of just testing nodes, to ensure that the activation of each node is computed at least once per epoch. The following theorem reveals the connection of the exact predictions and gradients, and their approximate versions by CV.Theorem 1. For a fixed W and any i > LI we have: (Exact Prediction) The activations computed by CV are exact, i.e., Z (l) DISPLAYFORM0 Theorem 1 shows that at testing time, we can run forward propagation with CV for L epoches and get the exact prediction. This outperforms NS, which cannot recover the exact prediction. Comparing with directly making exact predictions by a batch algorithm, CV is more scalable because it does not need to load the entire graph into memory. The proof can be found in Appendix A. The following theorem shows that for a model without dropout, training using CV's approximated gradients converges to a local optimum of GCN, regardless of the neighbor sampling size D (l). Therefore, we can choose arbitrarily small D (l) without worrying about the convergence. Theorem 2. Assume that all the activations are ρ-Lipschitz, the gradient of the cost func- DISPLAYFORM0 is the inner product of matrix A and matrix B. We randomly run SGD for R ≤ N iterations, where DISPLAYFORM1. Then, for the updates W i+1 = W i − γ i g CV (W i) and step sizes DISPLAYFORM2 }, there exists constants K 1 and K 2 which are irrelevant with N, s.t. ∀N > LI, DISPLAYFORM3 The proof can be found in Appendix B. Particularly, DISPLAYFORM4 Therefore, our algorithm converges to a local optimum. Finally we discuss the time complexity of different algorithms. We decompose the time complexity as sparse time complexity for sparse-dense matrix multiplication such as PH (l), and dense time complexity for dense-dense matrix multiplication such as U times higher than NS.Our implementation is similar as BID1. We store the node features in the main memory, without assuming that they fit in GPU memory as Hamilton et al. (2017a), which makes our implementation about 2 times slower than theirs. We keep the histories in GPU memory for efficiency since they are only LH < K dimensional. We examine the variance and convergence of our algorithms empirically on six datasets, including Citeseer, Cora, PubMed and NELL from BID1 and Reddit, PPI from Hamilton et al.(2017a), as summarized in Table 1. To measure the predictive performance, we report Micro-F1 for the multi-label PPI dataset, and accuracy for all the other multi-class datasets. We use the same model architectures with previous papers but slightly different hyperparameters (see Appendix D for the details). We repeat the convergence experiments 10 times on Citeseer, Cora, PubMed and NELL, and 5 times on Reddit and PPI. The experiments are done on a Titan X (Maxwell) GPU. We first examine the approximation in Sec. 3 that switches the order of dropout and aggregating the neighbors. Let M0 be the original model (Eq. 1) and M1 be our approximated model (Eq. 3), we compare three settings: M0, D (l) = ∞ is the exact algorithm without any neighbor sampling. 
M1+PP, D (l) = ∞ changes the model from M0 to M1. Preprocessing does not affect the training for DISPLAYFORM0 uses NS with a relatively large number of neighbors. In Table 3 we can see that all the three settings performs similarly, i.e., our approximation does not affect the predictive performance. Therefore, we use M1+PP, D (l) = 20 as the exact baseline in following convergence experiments because it is the fastest among these three settings. We now study how fast our algorithms converge with a very small neighbor sampling size D (l) = 2. We compare the following algorithms: Exact, which is M1+PP, D We first validate Theorem 2, which states that CV+PP converges to a local optimum of Exact, for models without dropout, regardless of D (l). We disable dropout and plot the training loss with respect to number of epochs as FIG1. We can see that CV+PP can always reach the same training loss with Exact, which matches the of Theorem 2. Meanwhile, NS and NS+PP have a higher training loss because their gradients are biased. Next, we compare the predictive accuracy obtained by the model trained by different algorithms, with dropout turned on. We use different algorithms for training and the same Exact algorithm for testing, and report the validation accuracy at each training epoch. The is shown in FIG5. We find that CVD+PP is the only algorithm that is able to reach comparable validation accuracy with Exact on all datasets. Furthermore, its convergence speed with respect to the number of epochs is comparable with Exact despite its D (l) is 10 times smaller. Note that CVD+PP performs much better than Exact on the PubMed dataset; we suspect it finds a better local optimum. Meanwhile, simper algorithms CV+PP and NS+PP work acceptably on most of the datasets. CV+PP reaches a comparable accuracy with Exact for all datasets except PPI. NS+PP works slightly worse but the final validation accuracy is still within 2%. These algorithms can be adopted if there is no strong need for predictive performance. We however emphasize that exact algorithms must be used for making predictions, as we will show in Sec. 5.4. Finally, the algorithm NS without preprocessing works much worse than others, indicating the significance of our preprocessing strategy. TAB4 reports the average number of epochs, time, and total number of floating point operations to reach a given 96% validation accuracy on the largest Reddit dataset. Sparse and dense computations are defined in Sec. 4.6. We found that CVD+PP is about 5 times faster than Exact due to the significantly reduced receptive field size. Meanwhile, simply setting D (l) = 2 for NS does not converge to the given accuracy. We compare the quality of the predictions made by different algorithms, using the same model trained by Exact in Fig. 4. As Thm. 1 states, CV reaches the same testing accuracy as Exact, while NS and NS+PP perform much worse. Testing using exact algorithms (CV or Exact) corresponds to the weight scaling algorithm for dropout .Finally, we compare the average bias and variance of the gradients per dimension for first layer weights relative to the weights' magnitude in Fig. 5. For models without dropout, the gradient of CV+PP is almost unbiased. For models with dropout, the bias and variance of CV+PP and CVD+PP are ususally smaller than NS and NS+PP, as we analyzed in Sec. 4.3. The large receptive field size of GCN hinders its fast stochastic training. 
In this paper, we present a preprocessing strategy and two control variate based algorithms to reduce the receptive field size. Our algorithms can achieve comparable convergence speed with the exact algorithm even the neighbor sampling size D (l) = 2, so that the per-epoch cost of training GCN is comparable with training MLPs. We also present strong theoretical guarantees, including exact prediction and convergence to GCN's local optimum, for our control variate based algorithm. DISPLAYFORM0 H (l+1) DISPLAYFORM1 After one more epoch, all the activations h (l+1)CV,i,v are computed at least once for each v, soH DISPLAYFORM2 for all i > (l + 2)I. By induction, we know that after LI steps, we havē DISPLAYFORM3 2. We omit the time subscript i and denote DISPLAYFORM4 CV,v ). By back propagation, the approximated gradients by CV can be computed as follows DISPLAYFORM5 where • is the element wise product and σ (Z DISPLAYFORM6 CV) is the element-wise derivative. Similarly, denote DISPLAYFORM7 v ), the exact gradients can be computed as follows DISPLAYFORM8 Applying EP = EP,...,P (L) to both sides of Eq. 8, and utilizing DISPLAYFORM9 we have DISPLAYFORM10 Comparing Eq. 10 and Eq. 9 we get DISPLAYFORM11 We proof Theorem 2 in 3 steps:1. Lemma 1: For a sequence of weights W,..., W (N) which are close to each other, CV's approximate activations are close to the exact activations.,..., W (N) which are close to each other, CV's gradients are close to be unbiased.3. Theorem 2: An SGD algorithm generates the weights that changes slow enough for the gradient bias goes to zero, so the algorithm converges. The following proposition is needed in our proof DISPLAYFORM0 is the number of columns of the matrix A.• DISPLAYFORM1 Proof. DISPLAYFORM2 Proposition 2. There are a series of T inputs X 1,..., X T, X CV,1,..., X CV,T and weights W 1,..., W T feed to an one-layer GCN with CV DISPLAYFORM3 and an one-layer exact GCN DISPLAYFORM4 2. X CV,i − X CV,j ∞ < and X CV,i − X i ∞ < for all i, j ≤ T and > 0.Then there exists some K > 0, s.t., H CV,i − H CV,j ∞ < K and H CV,i − H i ∞ < K for all I < i, j ≤ T, where I is the number of iterations per epoch. Proof. Because for all i > I, the elements ofX CV,i are all taken from previous epochs, i.e., X CV,1,..., X CV,i−1, we know that DISPLAYFORM5 By triangular inequality, we also know DISPLAYFORM6 Since X CV,1 ∞,..., X CV,T ∞ are bounded, X CV,i ∞ is also bounded for i > I. Then, DISPLAYFORM7 and DISPLAYFORM8 The following lemma bounds CV's approximation error of activations Lemma 1. Given a sequence of model weights W 1,..., W T. If W i − W j ∞ <, ∀i, j, and all the activations are ρ-Lipschitz, there exists K > 0, s.t., DISPLAYFORM9 Proof. We prove by induction. Because DISPLAYFORM10 Repeatedly apply Proposition B.1 for L − 1 times, we get the intended . The following lemma bounds the bias of CV's approximate gradient Lemma 2. Given a sequence of model weights W 1,..., W T, if DISPLAYFORM0 2. all the activations are ρ-Lipschitz, 3. the gradient of the cost function ∇ z f (y, z) is ρ-Lipschitz and bounded, then there exists K > 0, s.t., DISPLAYFORM1 Proof. By Lipschitz continuity of ∇ z f (y, z) and Lemma 1, there exists K > 0, s.t., DISPLAYFORM2 By Eq. 9, Eq. 10 and Lemma 1, we have DISPLAYFORM3 By induction we know that for l = 1,..., L there exists K, s.t., DISPLAYFORM4 Again by Eq. 9, Eq. 10, and Lemma 1, DISPLAYFORM5 Finally, DISPLAYFORM6 Proof. This proof is a modification of , but using biased stochastic gradients instead. 
We assume the algorithm is already warmed-up for LI steps with the initial weights W 0, so that Lemma 2 holds for step i > 0. DISPLAYFORM0 By smoothness we have DISPLAYFORM1 Consider the sequence of LI + 1 weights W i−LI,..., W i. DISPLAYFORM2 By Lemma 2, there exists K > 0, s.t. DISPLAYFORM3 where DISPLAYFORM4 Taking EP,V B to both sides of Eq. 14 we have DISPLAYFORM5 Summing up the above inequalities and re-arranging the terms, we obtain, DISPLAYFORM6 Dividing both sides by DISPLAYFORM7. DISPLAYFORM8 Particularly, when N → ∞, we have E R∼P R EP,V B ∇L(W R) 2 = 0, which implies that the gradient is asymptotically unbiased. We test 3-layer GCNs on the Reddit dataset. The settings are the same with 2-layer GCNs in Sec. 5.3. To ensure the exact algorithm can run in a reasonable amount of time, we subsample the graph so that the maximum degree is 10. The convergence is shown as FIG7, which is similar with the two-layer models. The time consumption to reach 0.94 testing accuracy is shown in TAB6. We justify the independent Gaussian assumption in Sec. 4.3 by showing that for a 2-layer GCN with the first layer pre-processed, the neighbor's activations are independent. Without loss of generality, suppose that we want to compute z Assumption 1 is not GCN-specific and is discussed in , we now prove assumption 2 by the following lemma. Lemma 3. If a and b are independent random variables, then their transformations f 1 (a) and f 2 (b) are independent. Because for any event A and B, P (f 1 (a) ∈ f 1 (A), f 2 (b) ∈ f 2 (B)) = P (a ∈ A, b ∈ B) = P (a ∈ A)P (b ∈ B) = P (f 1 (a) ∈ f 1 (A))P (f 2 (B) ∈ f 2 (B)), where f 1 (A) = {f 1 (a)|a ∈ A} and f 2 (B) = {f 2 (b)|b ∈ B}. v and hv are independent. The can be further generalized to deeper models. If the receptive fields of two nodes does not overlap, they should be independent.where the indices i, j ∈n (l) (v). Then, we compute the average correlation of all pairs of neighbors i = j. AvgCorr (l,v,d):= 1 n (l) (v) (n (l) (v) − 1) i =j Corr (l,v,d) ij, and define the average neighbor correlation on layer l as AvgCorr (l,v,d) averaged over all the nodes v and dimensions d. We report the average feature correlation and the average neighbor correlation per layer, on the Citeseer, Cora, PubMed and PPI datasets. These quantities are too expensive to compute for NELL and Reddit. On each dataset, we train a GCN with 10 graph convoluation layers until early stopping criteria is met, and compute the average feature correlation and the average neighbor correlation for layer 1 to 9. We are not interested in the correlation on layer 10 because there are no more graph convolutional layers after it. The is shown as FIG10. As analyzed in Sec. G.1, the average neighbor correlation is close to zero on the first layer, but it is not exactly zero due to the finite sample size for computing the empirical covariance. There is no strong tendency of increased correlation as the number of layers increases, after the third layer. The average neighbor correlation and the average feature correlation remain on the same order of magnitude, so bringing correlated neighbors does not make the activations much more correlated than the MLP case . Finally, both correlations are much smaller than one.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rylejExC-
A control variate based stochastic training algorithm for graph convolutional networks that the receptive field can be only two neighbors per node.
Low bit-width integer weights and activations are very important for efficient inference, especially with respect to lower power consumption. We propose to apply Monte Carlo methods and importance sampling to sparsify and quantize pre-trained neural networks without any retraining. We obtain sparse, low bit-width integer representations that approximate the full precision weights and activations. The precision, sparsity, and complexity are easily configurable by the amount of sampling performed. Our approach, called Monte Carlo Quantization (MCQ), is linear in both time and space, while the ing quantized sparse networks show minimal accuracy loss compared to the original full-precision networks. Our method either outperforms or achieves competitive with methods that do require additional training on a variety of challenging tasks. Developing novel ways of increasing the efficiency of neural networks is of great importance due to their widespread usage in today's variety of applications. Reducing the network's footprint enables local processing on personal devices without the need for cloud services. In addition, such methods allow for reducing power consumption -also in data centers. Very compact models can be fully stored and executed on-chip in specialized hardware like for example ASICs or FPGAs. This reduces latency, increases inference speed, improves privacy concerns, and limits bandwidth cost. Quantization methods usually require re-training of the quantized model to achieve competitive . This leads to an additional cost and complexity. The proposed method, Monte Carlo Quantization (MCQ), aims to avoid retraining by approximating the full-precision weight and activation distributions using importance sampling. The ing quantized networks achieve close to the full-precision accuracy without any kind of additional training. Importantly, the complexity of the ing networks is proportional to the number of samples taken. First, our algorithm normalizes the weights and activations of a given layer to treat them as probability distributions. Then, we randomly sample from the corresponding cumulative distributions and count the number of hits for every weight and activation. Finally, we quantize the weights and activations by their integer count values, which form a discrete approximation of the original continuous values. Since the quality of this approximation relies entirely on (quasi)random sampling, the accuracy of the quantized model is directly dependent on the amount of sampling performed. Thus, accuracy may be traded for higher sparsity and speed by adjusting the number of samples. On the challenging tasks of image classification, language modeling, speech recognition, and machine translation, our method outperforms or is competitive with existing quantization methods that do require additional training. The computational cost of neural networks can be reduced by pruning redundant weights or neurons, which has been shown to work well; ). Alternatively, the precision of the network weights and activations may be lowered, potentially introducing sparsity. Using low precision computations to reduce the cost and sparsity to skip computations allows for efficient hardware implementations ). This is the approach used in this paper. BinaryConnect proposed training with binary weights, while XNOR-Net and BNN extended this binarization to activations as well. TWN proposed ternary quantization instead, increasing model expressiveness. 
Similarly, TTQ used ternary weights with a positive and a negative scaling learned during training. LR-Net made use of both binary and ternary weights by using stochastic parameterization, while INQ constrained weights to powers of two and zero. FGQ categorized weights in different groups and used different scaling factors to minimize the element-wise distance between full- and low-precision weights. Other work used the hardware accelerator's feedback to perform hardware-aware quantization using reinforcement learning, jointly trained quantized networks and their respective quantizers, or used Bloomier filters to compactly encode network weights. Similarly, quantization techniques can also be applied in the backward pass. Therefore, some previous work quantized not only weights and activations but also the gradients to augment training performance. In particular, RQ proposes a differentiable quantization procedure to allow for gradient-based optimization using discrete values, and recent work proposed to discretize weights, activations, gradients, and errors both at training and inference time. These quantization techniques have great benefits and have been shown to successfully reduce the computation requirements compared to full-precision models. However, all the aforementioned methods require re-training of the quantized network to achieve close to full-precision accuracy, which can introduce significant financial and environmental cost. On the other hand, our method instantly quantizes pre-trained neural networks with minimal accuracy loss compared to their full-precision counterparts, without any kind of additional training. Neural networks make extensive use of randomization and random sampling techniques. Examples are random initialization of network weights, stochastic gradient descent, regularization techniques such as Dropout and DropConnect, data augmentation and data shuffling, recurrent neural network regularization, or the generator's noise input in generative adversarial networks. Many state-of-the-art networks use ReLU, which has interesting properties such as scale-invariance. This enables a scaling factor to be propagated through all network layers without affecting the network's original output. This principle can be used to normalize network values, such as weights and activations, as further described in Section 3.1. After normalization, these values can be treated as probabilities, which enables the simulation of discrete probability densities to approximate the corresponding full-precision, continuous distributions (Section 3.2). Assuming the exclusive use of the ReLU activation function in the hidden layers, the scale-invariance property of ReLU allows for arbitrary scaling of the weights or activations without affecting the network's output. Consider weights w_{l−1,i,j} connecting the i-th neuron in layer l−1 to the j-th neuron in layer l, where i ∈ [0, N_{l−1}−1] and j ∈ [0, N_l−1], with N_{l−1} and N_l the number of neurons of layers l−1 and l, respectively. Let a_{l,j} be the j-th activation in the l-th layer and f ∈ R^+; then a_{l,j} = max(0, Σ_{i=0}^{N_{l−1}−1} w_{l−1,i,j} a_{l−1,i} + b_{l,j}) = f · max(0, Σ_{i=0}^{N_{l−1}−1} (w_{l−1,i,j}/f) a_{l−1,i} + b_{l,j}/f). Biases and incoming weights for neuron j in layer l may then be normalized by f = ||w_{l−1,·,j}||_1 = Σ_{i=0}^{N_{l−1}−1} |w_{l−1,i,j}|, enabling weights to be seen as a probability distribution over all connections to a neuron. A similar procedure could be used to normalize all activations a_{l,j} of layer l.
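To make the normalization step concrete, here is a minimal Python sketch; the function name and the flat-array layout are illustrative assumptions, not the paper's code:

```python
import numpy as np

def normalize_layer_weights(weights: np.ndarray):
    """Normalize a layer's weights by their 1-norm so that their absolute
    values can be treated as a probability distribution (Sections 3.1 / 4.1).

    Returns the normalized weights and the scaling factor f, which must be
    propagated forward so the network's outputs keep their original range.
    """
    f = float(np.abs(weights).sum())   # layer-wise 1-norm scaling factor
    assert f > 0, "layer has no nonzero weights"
    normalized = weights / f           # sums to 1 in absolute value
    return normalized, f
```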
Propagating these scaling factors forward layer by layer results in a single scalar (per output), which converts the outputs of the normalized network to the same range as the original network. This technique allows for the usage of integer weights and activations throughout the entire network without requiring rescaling or conversion to floating point at every layer. Taking advantage of the normalized network, we can simulate discrete probability densities by constructing a probability density function (PDF) and then sampling from the corresponding cumulative density function (CDF). The number of references of a weight is then the quantized integer approximation of the continuous value. For simplicity, the following discussion shows the quantization procedure for weights; activations can be quantized in the same way at inference time. Without loss of generality, given n weights, assuming Σ_{k=0}^{n−1} |w_k| = ||w||_1 = 1 and defining a partition of the unit interval by P_m := Σ_{k=1}^{m} |w_k|, we obtain the partitions 0 = P_0 ≤ P_1 ≤ ... ≤ P_n = 1. Then, given N uniformly distributed samples x_i ∈ [0, 1), we can approximate the weight distribution by counting, for each index j, the number of samples that fall into the interval [P_{j−1}, P_j), where j_i ∈ {0, ..., n−1} is uniquely determined by P_{j_i−1} ≤ x_i < P_{j_i}. One can further improve this sampling process by using jittered equidistant sampling. Thus, given a random variable ξ ∈ [0, 1), we generate N uniformly distributed samples x_i ∈ [0, 1) such that x_i = (i + ξ)/N. The combination of equidistant samples and a random offset improves the weight approximation, as the samples are more uniformly distributed. The variance of different sampling seeds is discussed in the Appendix. Our approach builds on the aforementioned ideas of network normalization and quantization using random sampling to quantize an entire pre-trained full-precision neural network. As before, we focus on weight quantization; online activation quantization is discussed in Section 4.4. Our method, called Monte Carlo Quantization (MCQ), consists of the following steps, which are executed layer by layer: (1) create a probability density function (PDF) for all N_{l,w} weights of layer l such that their absolute values sum to one (Section 4.1); (2) perform importance sampling on the weights based on their magnitude by sampling from the corresponding cumulative density function (CDF) and counting the number of hits per weight (Section 4.2); (3) replace each weight with its quantized integer value, i.e. its hit count, to obtain a low bit-width, integer weight representation (Section 4.3). The pseudo-code for our method is shown in Algorithm 1 of the Appendix. Figure 1 illustrates both the normalization and importance sampling processes for a layer with 10 weights and 1 sample per weight, i.e. K = 1.0. [Figure 1 caption, continued: ... and uniformly sample from the corresponding CDF (c). The sampling process produces quantized integer network weights based on the number of hits per weight (d). Note that since weights 7, 8, and 9 were not hit, sparsity is introduced, which can be exploited by hardware accelerators.] Performing normalization neuron-wise, as introduced in Section 3.1, may result in an inferior approximation, especially when the number of weights to sample from is small, as for example in convolutional layers with a small number of filters or input channels. To mitigate this, we propose to normalize all neurons simultaneously in a layer-wise manner. This has the additional advantage that samples can be redistributed from low-importance neurons to high-importance neurons (according to some metric), resulting in an increased level of sparsity.
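A hedged sketch of the sampling step described above, using jittered equidistant samples over the CDF of the normalized magnitudes and signed hit counts; it assumes the weights were already normalized as in the previous sketch:

```python
import numpy as np

def mcq_sample_weights(weights: np.ndarray, K: float = 1.0, seed: int = 0) -> np.ndarray:
    """Quantize a flat array of normalized weights by importance sampling.

    weights : array with sum(|weights|) == 1 (see normalize_layer_weights)
    K       : relative number of samples, so N = K * len(weights)
    Returns signed integer hit counts approximating N times each normalized weight.
    """
    rng = np.random.default_rng(seed)
    n = weights.size
    N = max(1, int(K * n))

    # Cumulative distribution over the absolute weights (the partitions P_m).
    cdf = np.cumsum(np.abs(weights))

    # Jittered equidistant sampling: one shared random offset per layer.
    xi = rng.random()
    samples = (np.arange(N) + xi) / N

    # Each sample falls into exactly one interval [P_{j-1}, P_j); count the hits.
    hits = np.searchsorted(cdf, samples, side='right')
    hits = np.minimum(hits, n - 1)            # guard against float round-off at 1.0
    counts = np.bincount(hits, minlength=n)

    # Sign the integer counts by the sign of the original weight.
    return counts * np.sign(weights).astype(np.int64)
```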
Additionally, there is more opportunity for global optimization, so the overall weight distribution approximation improves as well. We use the 1-norm of all weights of a given layer l as the scaling factor f used to perform weight normalization. Thus, each normalized weight can be seen as a probability with respect to all connections between layer l−1 and layer l, instead of a single neuron. This layer-wise normalization technique is similar to Weight Normalization, which decouples the neuron weight vector magnitude from its direction. As introduced in Section 3.2, we generate ternary samples (hit positive weight, hit negative weight, or no hit), and count such hits during the sampling process. Note that even though the individual samples are ternary, the final quantized values may not be, because a single weight can be sampled multiple times. For jittered sampling, we use one random offset per layer, with a number of samples N = K · N_values, where K ∈ R^+ is a user-specified parameter to control the number of samples and N_values represents the number of weights of a given layer. By varying K, the computational cost of sampling can be traded off against a better approximation (more bits per weight) of the original weight distribution, leading to higher accuracy. In our experiments, K is set the same for all network layers. One simple modification to enhance the quality of the discrete approximation is to sort the continuous values prior to creating the PDF. Applying sorting mechanisms to Monte Carlo schemes has been shown to be beneficial in the past. Sorting groups smaller values together in the overall distribution. Since we are using a uniform sampling strategy, smaller weights are then sampled less often, which results in both higher sparsity and a better quantized approximation of the larger weights in practice. This effect is particularly significant on smaller layers with fewer weights. Since the quantized integer weights span a different range of values than the original weights, and biases remain unchanged, care must be taken to ensure the activations of each neuron are calculated correctly. After the integer multiply-accumulate (MAC) operation, the result must then be scaled by f/N before adding the bias. This requires the storage of one floating-point scaling value per layer. However, weights are stored as low bit-width integers and the computational cost is greatly reduced, since the MAC operations use low-precision integers only instead of floating-point numbers. The number of bits required for the weights, B_{W_l} ∈ N, for layer l and its quantized weights Q(w_{l,i}), corresponds to the number of bits needed to represent the highest hit count during sampling, including its sign: B_{W_l} = 1 + ⌊log_2(max_{0≤i≤N_w−1} |Q(w_{l,i})|)⌋ + 1. Alternatively, positive and negative weights could be separated into two sets. While weights are quantized offline, i.e. after training and before inference, activations are quantized online during inference time using the same procedure as the weight quantization previously described. Thus, in the normalization step (Section 4.1), all N_{l,a} activations of a given layer l are treated as a probability distribution over the output features, such that Σ_{j=0}^{N_{l,a}−1} |a_{l,j}| = 1. Then, in the importance sampling step (Section 4.2), activations are sub-sampled using possibly different relative sampling amounts, i.e. K, than the ones used for the weights (we use the same K for both weights and activations in all of our experiments).
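Continuing the sketch, the per-layer bit-width and the rescaling after the integer MAC can be expressed as follows; the f/N dequantization factor follows the description above and should be read as an assumption about the exact convention used:

```python
import numpy as np

def layer_bitwidth(counts: np.ndarray) -> int:
    """Bits for the signed hit counts: 1 sign bit + floor(log2(max |count|)) + 1."""
    m = int(np.abs(counts).max())
    return 2 if m == 0 else 1 + int(np.floor(np.log2(m))) + 1

def rescale_mac_result(acc: np.ndarray, f: float, N: int, bias: np.ndarray) -> np.ndarray:
    """Rescale the integer multiply-accumulate result back to the original range
    (a hit count approximates N * w / f) and add the unquantized bias."""
    return acc.astype(np.float64) * (f / N) + bias
```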
The required number of bits B_{A_l} for the quantized activations Q(a_{l,j}) can also be calculated similarly as described in Section 4.3, although no additional sign bit is required when using ReLU since all activations are non-negative. The proposed method is extensively evaluated on a variety of tasks: for image classification we use CIFAR-10, SVHN, and ImageNet, on multiple models each. We further evaluate MCQ on language modeling, speech recognition, and machine translation, to assess the performance of MCQ across different task domains. Due to the automatic quantization done by MCQ, some layers may be quantized to lower or higher levels than others. We indicate the quantization level for the whole network by the average number of bits, e.g. '8w-32a' means that on average 8 bits were used for weights and 32 bits for activations on each layer. Many works note that quantizing the first or last network layer reduces accuracy significantly. We use footnotes 1, 2, and 3 to denote the special treatment of first or last layers, respectively. For MCQ we report the results with both a quantized and a full-precision first layer. We do not quantize Batch Normalization layers, as the parameters are fixed after training and can be incorporated into the weights and biases. Tables 1 to 4 show the accuracy difference ∆ between the quantized and full-precision models. For other compared works this difference is calculated using the baseline models reported in each of the respective works. We did not perform any search over random sampling seeds for MCQ's results. The best accuracies on VGG-7, VGG-14, and ResNet-20 produced by our method using K = 1.0 on CIFAR-10 are shown in Table 1. We refer to the Appendix for model and training details. MCQ outperforms or shows competitive results, with minimal accuracy loss on all tested models, against the compared methods that require network re-training. The full-precision baselines for BNN and XNOR-Net are from BC, as these works use the same model. Similarly, BWN's results on VGG-7 are the ones reported in TWN, since they did not report the baseline in the original paper. Figure 2 shows the effects of varying the amount of sampling, i.e. using K ∈ [0.1, ..., 2.0]. The average percentage of used weights/activations per layer and the corresponding bit-widths of the final quantized model are also presented on each graph. We observe a rapid increase of the accuracy even when sparsity levels are high on all tested models. For SVHN, the tested models are identical to the compared methods. Models B, C, and D have the same architecture as Model A but with a 50%, 75%, and 87.5% reduction in the number of filters in each convolutional layer, respectively. We refer to the Appendix for further model and training details. Table 2 shows MCQ's results for several models on SVHN using K = 1.0. On bigger models, i.e. VGG-7* and Model A, we see minimal accuracy loss when compared to the full-precision baselines. For the smaller models, we observe a slight accuracy degradation as model size decreases, due to the reduction in the sample size resulting in a poorer approximation. However, we used only about 4 bits per weight/activation for such models. [Figure 2 caption: Results of quantizing both weights and activations on CIFAR-10 using different sampling amounts. The quantized models reach close to full-precision accuracy at around half the sample size while using only around half the weights and one-third of the activations of the full-precision models.] Thus, increasing the number of samples would improve
accuracy while still maintaining a low bit-width. Figure 3 illustrates the consequences of varying the number of samples. Fewer samples are required than on CIFAR-10 for bigger models to achieve close to full-precision accuracy. Potentially this is because layers have a larger number of weights and activations, so a larger sample size reduces quantization noise, since the important values are more likely to be well approximated. For ImageNet, we evaluate MCQ on AlexNet, ResNet-18, and ResNet-50 using the pre-trained models provided by Pytorch's model zoo. Table 3 shows the results on ImageNet with K = 5.0 for the different models. The results shown for DoReFa, BWN, and TWN are the ones reported in TTQ. Figure 4 shows the accuracy of the quantized model when using different sample sizes, i.e., K ∈ [0.25, ..., 5.0]. We observe that more sampling is required to achieve close to full-precision model accuracy on ImageNet. On this dataset, sorting the CDF before sampling did not result in any improvements, so the reported results are without sorting. All the quantized models achieve close to full-precision accuracy, though more samples are required than for the previous datasets, resulting in a higher required bit-width. To assess the robustness of MCQ, we further evaluate MCQ on several models in natural language and speech processing. We evaluate language modeling on Wikitext-103 using a Transformer-based model and Wikitext-2 using a 2-layer LSTM, speech recognition on VCTK using Deepspeech2, and machine translation on WMT-14 English-to-French using a Transformer. Additional details are provided in the Appendix. Table 4 shows the comparison to full-precision models for these various tasks. [Figure 3 caption: Results of quantizing both weights and activations on SVHN using different sampling amounts. The quantized VGG-7* model reaches close to full-precision accuracy using around 0.5 samples per weight/activation, requiring around 8 bits and using 22% of the weights of the original model, with 22% nonzero activations. Models A, B, C, and D are less redundant models that require more sampling to achieve close to full-precision accuracy.] The experimental results show the performance of MCQ on multiple models, datasets, and tasks, demonstrated by the minimal loss of accuracy compared to the full-precision counterparts. MCQ either outperforms or is competitive with other methods that require additional training of the quantized network. Moreover, the trade-off between accuracy, sparsity, and bit-width can be easily controlled by adjusting the number of samples. Note that the complexity of the resulting quantized network is proportional to the number of samples in both space and time. One limitation of MCQ, however, is that it often requires a higher number of bits to represent the quantized values. On the other hand, this sampling-based approach directly translates to a good approximation of the real full-precision values without any additional training. Outlier channel splitting has been proposed in orthogonal work and could be used to reduce the bit-width required for the highest hit counts. There are several paths that could be worth following for future investigations. In the importance sampling stage, using more sophisticated metrics for importance ranking, e.g. an approximation of the Hessian by Taylor expansion, could be beneficial. Automatically selecting optimal sampling levels on each layer could lead to a lower cost, since later layers seem to tolerate more sparsity and noise.
[Figure 4 caption: Results of quantizing both weights and activations on ImageNet using different sampling amounts. All quantized models reach close to full-precision accuracy at K = 3.] [Table 4 caption: Evaluation of MCQ on language modeling, speech recognition, and machine translation. All quantized models reach close to full-precision performance. Note that, as opposed to the image classification task, we did not study different sampling amounts nor the effect of quantization on specific network layers. A more in-depth analysis could then help to achieve close to full-precision accuracy at a lower bit-width on these additional models.] For efficient hardware implementation, it is important that the quantized network can be executed using integer operations only. Bias quantization and rescaling, activation rescaling to prevent overflow or underflow, and quantization of errors and gradients for efficient training leave room for future work. In this work, we showed that Monte Carlo sampling is an effective technique to quickly and efficiently convert floating-point, full-precision models to integer, low bit-width models. Computational cost and sparsity can be traded for accuracy by adjusting the number of samples accordingly. Our method is linear in both time and space in the number of weights and activations, and is shown to achieve similar results as the full-precision counterparts, for a variety of network architectures, datasets, and tasks. In addition, MCQ is very easy to use for quantizing and sparsifying any pre-trained model. It requires only a few additional lines of code, runs in a matter of seconds depending on the model size, and requires no additional training. The use of sparse, low bit-width integer weights and activations in the resulting quantized networks lends itself to efficient hardware implementations. A ALGORITHM An overview of the proposed method is given in Algorithm 1. Input: pre-trained full-precision network. Output: quantized network with integer weights. for l = 0 to L−1 do // update the layer's precision: B_{W_l} ← 1 + floor(log_2(max(abs(W_l)))) + 1; end. Algorithm 1: Monte Carlo Quantization (MCQ) on network weights. L represents the number of trainable layers; K indicates the percentage of samples to be sampled per weight. The process is performed equivalently for quantizing activations at inference time. Our algorithm is linear in both time and space in the number of weights and activations. When using integer weights, care has to be taken to avoid overflows in the activations. For that, activations can be scaled using a dynamically computed shifting factor, as in prior work. With Monte Carlo sampling, since we know the expected value of the next-layer activations, we can scale accordingly. Using the activation equation presented in Section 3.1, with N_I connections from the input layer to every neuron in the second layer, the activations of a neuron need to be scaled by its number of inputs (the receptive field F_in), multiplied with the number of samples per weight and the number of samples per activation. This is also valid for neurons in convolutional layers, where the receptive field is 3D, e.g. 3 × 3 × 128. Moreover, care must be taken to scale biases correctly, by taking both the scaling of weights and activations into account. We trained our full-precision baseline models on the CIFAR-10 dataset, consisting of 50000 training samples. We evaluated both our full-precision and quantized models similarly on the rest of the 10000 testing samples.
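Putting the pieces together, a schematic per-layer driver in the spirit of Algorithm 1 might look like the following; it reuses the hypothetical helpers from the earlier sketches and assumes the network's trainable weights are available as a list of arrays:

```python
def mcq_quantize_network(layer_weights, K: float = 1.0, seed: int = 0):
    """Quantize every trainable layer of a pre-trained network, layer by layer.

    layer_weights : list of float arrays, one per trainable layer
    Returns a list of (integer_counts, scale_f, num_samples, bitwidth) per layer.
    """
    quantized = []
    for l, W in enumerate(layer_weights):
        w_norm, f = normalize_layer_weights(W.ravel())            # Section 4.1
        counts = mcq_sample_weights(w_norm, K=K, seed=seed + l)   # Section 4.2
        N = max(1, int(K * w_norm.size))
        bits = layer_bitwidth(counts)                             # Section 4.3
        quantized.append((counts.reshape(W.shape), f, N, bits))
    return quantized
```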
The full-precision VGG-7 (2×128C3−MP2−2×256C3−MP2−2×512C3−MP2−1024FC−Softmax) and VGG-14 (2×64C3−MP2−2×128C3−MP2−3×256C3−MP2−3×512C3−MP2−3×512C3−MP2−1024FC−Softmax) models were trained using the code at https://github.com/bearpaw/pytorch-classification. Each was trained for 300 epochs with the Adam optimizer, with a learning rate starting at 0.1 and decreased by a factor of 10 at epochs 150 and 225, a batch size of 128, and a weight decay of 0.0005. The ResNet-20 model uses the standard configuration described, with 64, 128, and 256 filters in the respective residual blocks. We used more filters to increase the number of available weights in the first block to sample from. A similar effect could be obtained by sampling more on this specific model to reduce the accuracy loss. The ResNet-20 model is trained using the same hyperparameter settings as the VGG models. We trained our full-precision baseline models on the Street View House Numbers (SVHN) dataset, consisting of 73257 training samples. We evaluated both our full-precision and quantized models similarly using the 26032 testing samples provided in this dataset. The full-precision VGG-7* model (2×64C3−MP2−2×128C3−MP2−2×256C3−MP2−1024FC−Softmax) was trained for 164 epochs, using the Adam optimizer with a learning rate starting at 0.001 and divided by 10 at epochs 80 and 120, a weight decay of 0.001, and a batch size of 200. Models A (48C3−MP2−2×64C3−MP2−3×128C3−MP2−512C3−Softmax), B, C, and D were trained using the code at https://github.com/aaron-xichen/pytorch-playground and the same hyperparameter settings as VGG-7*, but trained for 200 epochs. We evaluated both our full-precision and quantized models similarly on the validation set of the ILSVRC12 classification dataset, consisting of 50K validation images. The full-precision pre-trained models are taken from Pytorch's model zoo, https://pytorch.org/docs/stable/torchvision/models.html. CSTR's VCTK Corpus (Centre for Speech Technology Voice Cloning Toolkit) includes speech data uttered by 109 native speakers of English with various accents, where each speaker reads out about 400 sentences, most of which were selected from a newspaper. The evaluated model uses 2 convolutional layers and 5 GRU layers of 768 hidden units, using code from https://github.com/SeanNaren/deepspeech.pytorch. The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified Good and Featured articles on Wikipedia. Compared to the preprocessed version of Penn Treebank (PTB), WikiText-2 is over 2 times larger and WikiText-103 is over 110 times larger. The WikiText dataset also features a far larger vocabulary and retains the original case, punctuation, and numbers, all of which are removed in PTB. As it is composed of full articles, the dataset is well suited for models that can take advantage of long-term dependencies. The WikiText-2 model was a 2-layer LSTM with 650 hidden neurons and an embedding size of 400. It was trained using the setup and code at https://github.com/salesforce/awdlstm-lm. The WikiText-103 model was a pretrained model available at https://github.com/pytorch/fairseq/tree/master/examples/language model, along with evaluation code. The dataset is WMT14 English-French, combining data from several other corpora, amongst others the Europarl corpus, the News Commentary corpus, and the Common Crawl corpus.
The model was a pretrained model available at https://github.com/pytorch/fairseq/tree/master/examples/scaling nmt, along with evaluation code. Figures 5, 6, and 7 show the effects of varying the amounts of sampling when quantizing only the weights. E QUANTIZING ACTIVATIONS ONLY Figures 8, 9, and 10 show the effects of varying the amounts of sampling when quantizing only the activations. We observe that less sampling is required to achieve full-precision accuracy when quantizing only the activations compared to quantizing only the weights. In a small experiment on CIFAR-10, we observe that using different sampling seeds can result in up to a ≈0.5% absolute variation in the accuracy of the different quantized networks (Figure 11). Grid searching over several sampling seeds may then be beneficial to achieve a better quantized model in the end, depending on the use case.
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
B1e5NySKwH
Monte Carlo methods for quantizing pre-trained models without any additional training.
We propose the Information Maximization Autoencoder (IMAE), an information theoretic approach to simultaneously learn continuous and discrete representations in an unsupervised setting. Unlike the Variational Autoencoder framework, IMAE starts from a stochastic encoder that seeks to map each input data point to a hybrid discrete and continuous representation, with the objective of maximizing the mutual information between the data and their representations. A decoder is included to approximate the posterior distribution of the data given their representations, where a high fidelity approximation can be achieved by leveraging the informative representations. We show that the proposed objective is theoretically valid and provides a principled framework for understanding the tradeoffs regarding informativeness of each representation factor, disentanglement of representations, and decoding quality. A central tenet for designing and learning a model for data is that the resulting representation should be compact yet informative. Therefore, the goal of learning can be formulated as finding informative representations about the data under proper constraints. Generative latent variable models are a popular approach to this problem, where a model parameterized by θ of the form p_θ(x) = ∫ p_θ(x|z) p(z) dz is used to represent the relationship between the data x and the low dimensional latent variable z. The model is optimized by fitting the generative data distribution p_θ(x) to the training data distribution p(x), which involves maximizing the likelihood for θ. Typically, this model is intractable even for moderately complicated functions p_θ(x|z) with continuous z. To remedy this issue, the variational autoencoder (VAE) BID13 BID19 proposes to maximize the evidence lower bound (ELBO) of the marginal likelihood objective. However, as was initially pointed out in BID10, maximizing ELBO also penalizes the mutual information between data and their representations. This in turn makes the representation learning even harder. Many recent efforts have focused on resolving this problem by revising ELBO. Generally speaking, these works fall into two lines. One of them targets "disentangled representations" by encouraging the statistical independence between representation components BID9 BID12 BID8 BID4 BID7, while the other line of work seeks to control or encourage the mutual information between data and their representations BID16 BID3 BID1 BID6. However, these approaches either result in an invalid lower bound for the VAE objective or cannot avoid sacrificing the mutual information. Instead of building upon the generative latent variable model, we start with a stochastic encoder p_θ(z|x) and aim at maximizing the mutual information between the data x and its representations z. In this setting, a reconstruction or generating phase can be obtained as the variational inference of the true posterior p_θ(x|z). By explicitly seeking informative representations, the proposed model yields better decoding quality. Moreover, we show that the information maximization objective naturally induces a balance between the informativeness of each latent factor and the statistical independence between them, which gives a more principled way to learn semantically meaningful representations without invalidating ELBO or removing individual terms from it. Another contribution of this work is proposing a framework for simultaneously learning continuous and discrete representations for categorical data.
Categorical data are ubiquitous in real-world tasks, where using a hybrid discrete and continuous representation to capture both categorical information and continuous variation in data is more consistent with the natural generation process. In this work, we focus on categorical data that are similar in nature, i.e., where different categories still share similar variations (features). We seek to learn semantically meaningful discrete representations while maintaining disentanglement of the continuous representations that capture the variations shared across categories. We show that, compared to the VAE based approaches, our proposed objective gives a more natural yet effective way to learn these hybrid representations. Recently, there has been a surge of interest in learning interpretable representations. Among them, β-VAE BID9 is a popular method for learning disentangled representations, which modifies ELBO by increasing the penalty on the KL divergence between the variational posterior and the factorized prior. However, by using a large weight for the KL divergence term, β-VAE also penalizes the mutual information between the data and the latent representations more than a standard VAE does, resulting in more severe underutilization of the latent representation space. Several follow-up works propose different approaches to address the limitations of β-VAE. BID6 BID1 BID3 BID16 propose to constrain the mutual information between the representations and the data by pushing its upper bound, i.e., the KL divergence term in ELBO, towards a progressively increased target value. However, specifying and tuning this target value can itself be very challenging, which makes this method less practical. Moreover, this extra constraint results in an invalid lower bound for the VAE objective. Alternatively, another line of work drops the mutual information term in ELBO. By pushing only the aggregated posterior towards a factorial prior, they implicitly encourage independence across the dimensions of latent representations without sacrificing the informativeness of the representations. However, simply removing the mutual information term also violates the lower bound of the VAE objective. Another relevant line of work BID8 BID12 BID4 BID7 seeks to learn disentangled representations by explicitly encouraging statistical independence between latent factors. They all propose to minimize the total correlation term of the latent representations, either augmented as an extra term to ELBO or obtained by reinterpreting or re-weighting the terms in the VAE objective, as a way to encourage statistical independence between the representation components. In contrast, we show that our information maximization objective inherently contains the total correlation term while simultaneously seeking to maximize the informativeness of each representation factor. In this paper, we introduce a different perspective to the growing body of VAE based approaches for unsupervised representation learning. Starting by seeking informative representations for the data, we follow a more intuitive route and maximize the mutual information between the data and the representations. Moreover, we augment the continuous representation with a discrete one, which allows more flexibility to model real world data that are generated from different categories. We invoke the information maximization principle BID15 BID2 with proper constraints implied by the objective itself to avoid degenerate solutions.
The proposed objective gives a theoretically elegant yet effective way to learn semantically meaningful representations. Given data x ∈ R^d, we consider learning a hybrid continuous-discrete representation, denoted respectively with variables z ∈ R^{K_1} and y ∈ {1, ..., K_2}, using a stochastic encoder parameterized by θ, i.e., p_θ(y, z|x). We seek to learn compact yet semantically meaningful representations in the sense that they should be low dimensional but informative enough about the data. A natural approach is to maximize the mutual information BID5 I_θ(x; y, z) between the data and its representations under the constraint K_1, K_2 ≪ d. Here the mutual information between two random variables, e.g., x and z, is defined as I_θ(x; z) = H_θ(z) − H_θ(z|x), where H_θ(z) is the entropy of z and H_θ(z|x) = −E_{p_θ(x,z)}[log p_θ(z|x)] is the conditional entropy of z given x. The mutual information can be interpreted as the decrease in uncertainty of one random variable given another random variable. In other words, it quantifies how much information one random variable has about the other. A probabilistic decoder q_φ(x|y, z) is adopted to approximate the true posterior p_θ(x|y, z), which can be hard to estimate or even intractable. The dissimilarity between them is optimized by minimizing the KL divergence D_KL(p_θ(x|y, z)||q_φ(x|y, z)). In summary, IMAE considers the following, DISPLAYFORM1 Given that H(x) is independent of the optimization procedure, we can show that optimizing it is equivalent to optimizing the following, DISPLAYFORM2 We set β > 0 to balance between maximizing the informativeness of latent representations and maintaining the decoding quality. The second term is often interpreted as the "reconstruction error", which can be optimized using the reparameterization tricks proposed by BID13 and BID11 for the continuous representation z and the discrete representation y, respectively. Now we introduce a proper method to optimize the first term I_θ(x; y, z) in the objective. We first show that I_θ(x; y, z) inherently involves two key terms that quantify the informativeness of each representation factor and the statistical dependence between these factors. Assuming the conditional distribution of the representation (y, z) given x is factorial, and also assuming the marginal distributions of y and z are independent, we have, DISPLAYFORM0 The first two terms of the RHS quantify how much information each latent factor, i.e., y or z_k, carries about the data. The last term is known as the total correlation of z BID21, which quantifies the statistical independence between the continuous latent factors and achieves its minimum if and only if they are independent of each other. As is implied by this decomposition, maximizing I_θ(x; y, z) can be carried out by maximizing the informativeness of each latent factor while simultaneously promoting statistical independence between the continuous factors. Various Monte Carlo based sampling strategies have been proposed to optimize the total correlation term BID4 BID7; in this work we follow this line (see Appendix B). Next we proceed by constructing tractable approximations for I_θ(x; z_k) and I_θ(x; y), respectively. Without any constraints, the mutual information I_θ(x; z_k) between a continuous latent factor and the data can be trivially maximized by severely fragmenting the latent space. To be more precise, consider the following proposition. While similar results have likely been established in the information theory literature, we include this proposition to motivate our objective design.
DISPLAYFORM0 The equality in the bound is attained if and only if z_k is Gaussian distributed, given which we have DISPLAYFORM1 Note that here both µ_k(x) and σ_k(x) are random variables. The above implies that z_k is more informative about x if it has less uncertainty given x yet captures more variance in the data, i.e., σ_k(x) is small while µ_k(x) disperses within a large space. However, this can result in discontinuity of z_k, where in the extreme case each data sample is associated with a delta distribution in the latent space. In light of this, we can make what we described above more precise. A vanishing variance of the conditional distribution p(z_k|x) leads to a plain autoencoder that maps each data sample to a deterministic latent point, which can fragment the latent space in a way that each data sample corresponds with a delta distribution in the latent space DISPLAYFORM2 On the other hand, Proposition 1 also implies that, when controlling the variance σ_k(x) to be finite, I_θ(x; z_k) will be maximized by pushing µ_k(x) towards two extremes (±∞). To remedy this issue while achieving the upper bound, a natural resolution is to squeeze z_k within the domain of a Gaussian distribution with finite mean and variance. By doing so, we can avoid the degenerate solution while achieving a more reasonable trade-off between enlarging the spread of µ_k(x) and maintaining the continuity of z. Therefore, we consider the following as the surrogate for maximizing I_θ(x; z_k), DISPLAYFORM3 Here the r(z_k) are i.i.d. scaled normal distributions with finite variance. That is, we push each p_θ(z_k) towards a Gaussian distribution r(z_k) by minimizing the KL divergence between them. Unlike the continuous representation, the mutual information I_θ(x; y) between a discrete representation and the data can be well approximated, given the fact that the cardinality of the space of y is typically low. To be more specific, given N i.i.d. samples {x_n}_{n=1}^{N} of the data, the empirical estimation of I_θ(x; y) under the conditional distribution p_θ(y|x_n) follows as DISPLAYFORM0 As shown in Proposition 2, with a suitably large batch of samples, the empirical mutual information is a good approximation to I_θ(x; y). This enables us to optimize I_θ(x; y) in a theoretically justifiable way that is amenable to stochastic gradient descent with minibatches of data. Proposition 2. Let y be a discrete random variable that belongs to some categorical class C. Assume the marginal probabilities of the true and the predicted labels are bounded below, i.e. p_θ(y) and its empirical estimate lie in [1/(CK_2), 1] for all y ∈ C, with some constant C > 1. Then for any δ ∈ (0, 1), DISPLAYFORM1 Here N denotes the number of samples used to establish the empirical estimate of I_θ(x; y) according to the equation above. Therefore, to maximize the mutual information I_θ(x; y), we consider the following: max L_θ(y) := I_θ(x; y). Maximizing the mutual information I_θ(x; y) provides a natural way to learn discrete categorical representations. To see this, notice that I_θ(x; y) contains two fundamental quantities, the category balance term H_θ(y) and the category separation term H_θ(y|x). In other words, maximizing I_θ(x; y) trades off uniformly assigning data over categories against seeking a highly confident categorical identity for each sample x.
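A minimal sketch of the minibatch estimator of I(x; y) described above, assuming the encoder produces a matrix of categorical probabilities p_θ(y|x_n) for a batch (PyTorch is used for illustration; the epsilon is a numerical-stability assumption):

```python
import torch

def empirical_mutual_information(p_y_given_x: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Minibatch estimate of I(x; y) = H(y) - H(y|x).

    p_y_given_x : (N, K2) tensor of conditional probabilities p(y | x_n)
    """
    p_y = p_y_given_x.mean(dim=0)                          # Monte Carlo marginal over the batch
    h_y = -(p_y * (p_y + eps).log()).sum()                 # category balance term H(y)
    h_y_given_x = -(p_y_given_x * (p_y_given_x + eps).log()).sum(dim=1).mean()  # H(y|x)
    return h_y - h_y_given_x
```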
The maximum is achieved if p_θ(y|x) is deterministic while the marginal distribution p_θ(y) is uniform, that is, H_θ(y|x) = 0 and H_θ(y) = log K_2. Overall Objective: As a summary of the above, our overall objective is DISPLAYFORM2 The first three terms are associated with our information maximization objective, while the last one aims at a better approximation of the posterior p_θ(x|y, z). A better balance between these two targets can be achieved by weighting them differently. On the other hand, the informativeness of each latent factor can be optimized through L_θ(z) and L_θ(y), while statistically independent latent continuous factors can be promoted by minimizing the total correlation term D_KL(p(z) || Π_{k=1}^{K_1} p(z_k)). Therefore, trade-offs can be formalized regarding the informativeness of each latent factor, the disentanglement of the representation, and the decoding quality. This motivates us to consider the following objective, for β, γ > 0, DISPLAYFORM3 We compare IMAE against various VAE based approaches that are summarized in Figure 1. We would like to demonstrate that IMAE can (i) successfully learn a hybrid of continuous and discrete representations, with y matching the intrinsic categorical information y_true well and z capturing the disentangled feature information shared across categories; (ii) outperform the VAE based models by achieving a better trade-off between representation interpretability and decoding quality. We choose the priors r(z) and r(y) to be the isotropic Gaussian distribution and the uniform distribution, respectively. Detailed experimental settings are provided in Appendix G. Figure 1: Summarization of relevant work. β-VAE modifies ELBO by increasing the penalty on the KL divergence terms. InfoVAE drops the mutual information terms from ELBO. JointVAE seeks to control the mutual information by pushing their upper bounds (the associated KL divergence terms) towards progressively increased values, C_y and C_z. We drop the subscripts θ and φ hereafter. DISPLAYFORM0 We first qualitatively demonstrate that informative representations can yield better interpretability. For the continuous representation, FIG0 validates Proposition 1 by showing that, with roughly the same amount of variance for each latent variable z_k, those achieving high mutual information with the data have the mean values µ_k(x) of the conditional probability p(z_k|x) disperse across data samples, while the variances σ_k(x) decrease to small values for all data samples. As a qualitative evaluation, we traverse latent dimensions corresponding to different levels of I(x; z_k). As seen in FIG0 (b)-(d), informative variables in the continuous representation have uncovered intuitive continuous factors of the variation in the data, while the factor z_8 has no mutual information with the data and shows no variation. We observe the same phenomenon for the discrete representation y in FIG0 (e)&(f), which were obtained with two different values of β and γ, where the more informative one discovers categories that match the natural labels better. This provides further evidence that interpretable latent factors can be attained by maximizing the mutual information between the representations and the data. We set γ = 2β for IMAE. For each β, we run each method over 10 random initializations.
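Since the displayed equations are not recoverable from the text, the following is only a schematic composition of the named loss terms, with β and γ as in the text; the relative weighting and all function names are assumptions for illustration:

```python
def imae_objective(recon_log_likelihood, info_z, info_y, total_correlation,
                   beta: float = 1.0, gamma: float = 2.0):
    """Schematic IMAE objective (to be maximized).

    recon_log_likelihood : E[log q_phi(x | y, z)], the decoding-fidelity term
    info_z               : surrogate informativeness of z (negative KL of each p(z_k) to its Gaussian prior)
    info_y               : empirical mutual information I(x; y)
    total_correlation    : KL(p(z) || prod_k p(z_k)), penalizing dependence among continuous factors
    """
    return (info_z + gamma * info_y
            - beta * total_correlation
            + recon_log_likelihood)
```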
Unsupervised learning of discrete latent factors: Before we present our main results, we first describe an assumption that we make on the discrete representations. For the discrete representation, a reasonable assumption is that the conditional distribution p(y|x) should be locally smooth, so that data samples that are close on their manifold have a high probability of being assigned to the same category BID0. This assumption is crucial for using neural networks to learn discrete representations, since it is easy for a high capacity model to learn a non-smooth function p(y|x) that can abruptly change its predictions without guaranteeing that similar data samples will be mapped to similar y. To remedy this issue, we adopt the virtual adversarial training (VAT) trick proposed by BID18 and augment L_θ(y) as follows: DISPLAYFORM0 The second term of the RHS regularizes p_θ(y|x) to be consistent within the norm ball of each data sample, so as to maintain the local smoothness of the prediction model. For a fair comparison, we augment all four methods with VAT. As demonstrated in Appendix D, using VAT is essential for all of them except β-VAE to learn interpretable discrete representations. We start by evaluating the different methods on MNIST and Fashion MNIST, for which we train over a range of β values (we set γ = 2β for IMAE). Discrete representations: For the discrete representations, by simply pushing the conditional distribution p(y|x) towards the uniform distribution r(y), β-VAE sacrifices the mutual information I(x; y) and hence struggles to learn interpretable discrete representations even with VAT. As a comparison, InfoVAE performs much better by dropping I(x; y) from ELBO. For data that are distinctive enough between categories (MNIST), with large β values InfoVAE performs well by uniformly distributing the whole data over categories through minimizing D_KL(p(y)||r(y)) while simultaneously encouraging local smoothness with VAT. However, InfoVAE struggles with less distinctive data (Fashion-MNIST), where it cannot give fairly confident category separation by only requiring local smoothness. [Figure 4: For each image, the first row is the digit type learnt by the model, where each entry is obtained by feeding the decoder with the averaged z values corresponding with the learnt y. The second row is obtained by traversing the "angle" latent factor within [−2, 2] on digit 6. IMAE is capable of uncovering the underlying discrete factor over a wide range of β values. More interpretable continuous representations can be obtained when the method is capable of learning discrete representations, since less overlap between the manifolds of each category is induced.] In contrast, IMAE achieves much better performance by explicitly encouraging confident category separation via minimizing the conditional entropy H(y|x), while using VAT to maintain local smoothness so as to prevent overfitting of the neural network. Although JointVAE performs much better than β-VAE by pushing the upper bound of I(x; y) towards a progressively increasing target value C_y, we found it can easily get stuck at a bad local optimum where I(x; y) is comparatively large while the accuracy is poor. A heuristic explanation is that, once JointVAE enters the local region of such an optimum, progressively increasing C_y only induces oscillation within that region.
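For reference, a condensed sketch of the VAT-style smoothness regularizer adopted above (one power-iteration step to find the adversarial direction, then a KL consistency penalty); the hyperparameters xi and epsilon and the classifier interface are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def vat_regularizer(model, x, xi: float = 1e-6, epsilon: float = 1.0):
    """Virtual adversarial training penalty: KL(p(y|x) || p(y|x + r_adv))."""
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)          # current predictions, treated as the target

    # One power-iteration step to approximate the most sensitive perturbation direction.
    d = torch.randn_like(x)
    d = xi * F.normalize(d.flatten(1), dim=1).view_as(x)
    d.requires_grad_(True)
    log_p_hat = F.log_softmax(model(x + d), dim=1)
    adv_dist = F.kl_div(log_p_hat, p, reduction='batchmean')
    grad = torch.autograd.grad(adv_dist, d)[0]
    r_adv = epsilon * F.normalize(grad.flatten(1), dim=1).view_as(x)

    # Consistency penalty within the epsilon-ball around each sample.
    log_p_adv = F.log_softmax(model(x + r_adv.detach()), dim=1)
    return F.kl_div(log_p_adv, p, reduction='batchmean')
```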
Informativeness, interpretability, and decoding quality: As illustrated in Figure 1, by using large β values, β-VAE sacrifices more mutual information between the data and its representations, which in turn results (see FIG1) in less informative representations followed by poor decoding quality. In contrast, the other three methods can remedy this issue to different degrees, and hence attain a better trade-off regarding informativeness of latent representations and decoding quality. Compared to JointVAE and InfoVAE, IMAE is more capable of learning discrete representations over a wide range of β, γ values, which implies that less overlap between the manifolds of different categories is induced. As a result, IMAE is expected to yield better decoding quality for each category. Although InfoVAE and JointVAE can also learn comparatively good discrete representations when using large and small β values respectively, the corresponding results in these two regions are associated with either poor decoding quality or a much lower disentanglement score (see Section 4.2.2). In contrast, IMAE consistently performs well with different hyperparameters, especially in the region of interest where the decoding quality as well as the informativeness of latent representations are good enough. In this section, we quantitatively evaluate the disentanglement capability of IMAE on dSprites, where the ground truth factors of both continuous and discrete representations are available. We use the disentanglement metric proposed by BID4, which is defined in terms of the gap between the top two empirical mutual information values between the latent representation factors and a ground truth factor. The disentanglement score is defined as the weighted average of the gaps. A high disentanglement score implies that each ground truth factor is associated with one single representation factor that is more informative than the others, i.e., the learnt representation factors are more disentangled. FIG3 shows that, with large β values, β-VAE penalizes the mutual information too much and this degrades the usefulness of representations, while all three other methods achieve a higher disentanglement score with better decoding quality. For JointVAE, higher β values push the upper bound of the mutual information to converge to the prefixed target value; it can therefore maintain more mutual information between the data and the whole latent representations and give better decoding quality. [Figure caption: (a) IMAE performs well regarding the disentanglement score vs. decoding quality trade-off, especially in the region of interest where both decoding quality and informativeness of representations are fairly good. (b) Negative correlation between total correlation and disentanglement score. It also implies that the disentanglement score tends to decrease along with the total correlation if using even larger β, due to the diminishing informativeness of representation factors. In the extreme case, both total correlation and disentanglement score can degrade to zero.]
In contrast, IMAE is capable of achieving a better trade-off between the disentanglement score and the decoding quality in the region of interest, where the decoding quality as well as the informativeness are fairly good. We attribute this to the effect of explicitly seeking statistically independent latent factors by minimizing the total correlation term in our objective. We have proposed IMAE, a novel approach for simultaneously learning the categorical information of data while uncovering latent continuous features shared across categories. Different from VAE, IMAE starts with a stochastic encoder that seeks to maximize the mutual information between data and their representations, where a decoder is used to approximate the true posterior distribution of the data given the representations. This model targets informative representations directly, which in turn naturally yields an objective that is capable of simultaneously inducing semantically meaningful representations and maintaining good decoding quality, as further demonstrated by the numerical results. Unsupervised joint learning of disentangled continuous and discrete representations is a challenging problem due to the lack of a prior for semantic awareness and other inherent difficulties that arise in learning discrete representations. This work takes a step towards achieving this goal. A limitation of our model is that it pursues disentanglement by assuming or trying to encourage independent scalar latent factors, which may not always be sufficient for representing the real data. For example, data may exhibit category-specific variation, or a subset of latent factors might be correlated. This motivates us to explore more structured disentangled representations; one possible direction is to encourage group independence. We leave this for future work. H. Xiao, K. Rasul, and R. Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms, 2017. S. Zhao, J. Song, and S. Ermon. InfoVAE: Information maximizing variational autoencoders. arXiv preprint arXiv:1706.02262, 2017. Balance between posterior inference fidelity and information maximization: Notice that we can rewrite the mutual information between the data x and its representations as the following, DISPLAYFORM0 It then follows that, DISPLAYFORM1 Since H(x) is independent of the optimization procedure, we have the following, DISPLAYFORM2 where β trades off the informativeness of the latent representation and the generation fidelity. Decomposition of I_θ(x; y, z): Let b = (z, y) denote the joint random variable consisting of the continuous random variable z and the discrete random variable y. Note that I_θ(x; y, z) = I_θ(x; b) can be written as: DISPLAYFORM3 The second term in this equation has the form: DISPLAYFORM4 where step 1 follows by the assumption that p_θ(b|x) is factorial. For the first term in the equation, we have: DISPLAYFORM5 Substituting the two formulas above into the equation yields the result: DISPLAYFORM6 Since y and z are assumed to be marginally independent, i.e., p_θ(y, z) = p_θ(y)p_θ(z), then DISPLAYFORM7 Let (1/N) Σ_{n=1}^{N} p_θ(y|x_n) denote the Monte Carlo estimator of the true probability DISPLAYFORM8 for all x ∈ X; then applying Hoeffding's inequality for bounded random variables [Theorem 2.2.6, BID20] yields, DISPLAYFORM9 Given this bound, we first establish the concentration of the entropy H(p_θ(y)) with respect to the empirical distribution. Assume that for all y ∈ C, p_θ(y) and its empirical estimate are bounded below by 1/(CK_2) for some fixed constant C > 1.
This assumption is practical since the distributions of the true data and the predicted data are approximately uniform, and therefore both probabilities are ≈ 1/K_2 for all y ∈ C. Consider the function t log t, with derivative 1 + log t, DISPLAYFORM10 (1 + log t) dt DISPLAYFORM11 Summing over C gives DISPLAYFORM12 Let δ = K_2 δ; then the two equations above together yield the following, DISPLAYFORM13 Next we are going to bound the divergence between H_θ(y|x) and its empirical counterpart, which are defined as, DISPLAYFORM14 Note that h log h ∈ [−1/e, 0] for all h ∈ [0, 1]; then again applying [Theorem 2.2.6, BID20] yields, DISPLAYFORM15 Following similar arguments as before, let δ = 2 exp(−2t^2 e^2 N); then DISPLAYFORM16 Now let δ = K_2 δ; then applying the union bound we have DISPLAYFORM17 hold with probability 1 − δ. Concluding from the two equations above, we have DISPLAYFORM18 hold with probability at least 1 − 2δ. Computing the marginal distributions of the continuous representations z and z_k requires the entire dataset, e.g., DISPLAYFORM0 To scale up our method to large datasets, we propose to estimate them based on minibatch data, e.g., DISPLAYFORM1 Now consider the entropy H(z) of z, which we approximate in the following way, DISPLAYFORM2 We estimate the integral over z by sampling z ∼ p_θ(z|x_i) and performing the Monte Carlo approximation. Although we minimize the unbiased estimator of the lower bound of the KL divergence, the term inside the logarithm is a summation of probability densities of Gaussians. In particular, we record the distribution of the variances output by our encoder and observe that the mean of the variances of the Gaussians is bounded between 0.2 and 2, which implies that the values of the probability densities do not vary over a large range. Since the logarithm is locally affine, we argue that our bound above is tight. Other quantities involved in our objective function are estimated in a similar fashion. In the VAE, one assumes a generative model specified by a stochastic decoder p_θ(x|z), taking the continuous representation as an example, and seeks an encoder q_φ(z|x) as a variational approximation of the true posterior p_θ(z|x). The model is fitted by maximizing the evidence lower bound (ELBO) of the marginal likelihood, DISPLAYFORM0 Here the KL divergence term can be further decomposed as in BID10, DISPLAYFORM1 That is, minimizing the KL divergence also penalizes the mutual information I_θ(x; z), and thus reduces the amount of information z has about x. Many recent efforts have focused on resolving this problem by revising ELBO. Although the approaches differ, they can be summarized as either dropping the mutual information term in the ELBO, or encouraging statistical independence across the dimensions of z by increasing the penalty on the total correlation term extracted from the KL divergence D_KL(q_φ(z)||r(z)) with respect to q_φ(z). However, these approaches either result in an invalid lower bound for the VAE objective, or cannot avoid minimizing the mutual information I_θ(x; z) between the representation and the data. In contrast, IMAE starts with a stochastic encoder p_θ(z|x) and aims at maximizing the mutual information between the data x and the representations z from the very beginning. By following the constraints that are naturally implied by the objective in order to avoid degenerate solutions, IMAE targets both informative and statistically independent representations.
On the other hand, in IMAE the decoder q_φ(x|z) serves as a variational approximation to the true posterior p_θ(x|z). As we will show in Section 4, being able to learn more interpretable representations allows IMAE to reconstruct and generate data with better quality.

Figure: (a) IMAE performs well with respect to the disentanglement score vs. decoding quality trade-off, especially in the region of interest where both the decoding quality and the informativeness of the representations are fairly good. (b) Negative correlation between total correlation and disentanglement score. This also implies that the disentanglement score tends to decrease along with the total correlation when an even larger β is used, due to the diminishing informativeness of the representation factors. In the extreme case, both total correlation and disentanglement score can degrade to zero.

Training procedure:
• MNIST & Fashion-MNIST: We use SGD with momentum to train all models. The initial learning rate is set to 1e-3, and we decay the learning rate by a factor of 0.98 every epoch.
• dSprites: We use Adam to train all models. The learning rate is set to 1e-3.
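To make the training configuration above concrete, here is a minimal sketch of how the optimizer and per-epoch learning-rate decay could be set up in PyTorch. The model architecture, momentum value, batch size, number of epochs, and dummy data are all our own placeholders; only the learning rates and the 0.98 decay come from the text.

```python
import torch

# Placeholder model; the IMAE encoder/decoder architecture is not reproduced here.
model = torch.nn.Sequential(
    torch.nn.Linear(784, 256), torch.nn.ReLU(), torch.nn.Linear(256, 10)
)

# MNIST / Fashion-MNIST setting: momentum SGD, initial lr 1e-3,
# decayed by a factor of 0.98 after every epoch (momentum value 0.9 is an assumption).
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.98)
# The dSprites setting would instead use: torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(5):                      # number of epochs chosen arbitrarily
    for _ in range(100):                    # stand-in for iterating over a real data loader
        x = torch.randn(64, 784)            # dummy batch in place of real images
        y = torch.randint(0, 10, (64,))
        optimizer.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(x), y)
        loss.backward()
        optimizer.step()
    scheduler.step()                        # apply the 0.98 decay once per epoch
```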
Learning rules for neural networks necessarily include some form of regularization. Most regularization techniques are conceptualized and implemented in the space of parameters. However, it is also possible to regularize in the space of functions. Here, we propose to measure networks in an $L^2$ Hilbert space, and test a learning rule that regularizes the distance a network can travel through $L^2$-space in each update. This approach is inspired by the slow movement of gradient descent through parameter space as well as by the natural gradient, which can be derived from a regularization term upon functional change. The resulting learning rule, which we call Hilbert-constrained gradient descent (HCGD), is thus closely related to the natural gradient but regularizes a different and more calculable metric over the space of functions. Experiments show that HCGD is efficient and leads to considerably better generalization.

Large neural networks can overfit to training data, but we desire networks that instead learn general aspects that apply to new data. A learning rule can encourage generalization through regularization, which may be implicit to the learning rule or explicitly added to the cost function. Many regularization techniques introduce penalties in the space of network parameters. Canonical examples include weight decay and the constraints upon parameter movement inherent to Stochastic Gradient Descent (SGD). Ultimately, however, it is the complexity of the parameterized function that we seek to regularize, not of the parameters themselves. A well-known example of this more direct approach is the natural gradient, which constrains how much the output distribution of a network can change due to an update. Here, we introduce a new learning rule that, like the natural gradient, limits how much the output function can change during learning. However, we use a different and more calculable metric over function space: the expected $L^2$ norm. Since the $L^2$-space is a Hilbert space, we term the rule Hilbert-constrained gradient descent (HCGD).

The interpretation of the natural gradient as resulting from regularization competes in the literature with many other interpretations and justifications. In order to establish a foundation for the Hilbert constraint as analogous to the natural gradient, we begin by reviewing and discussing the natural gradient. In simplest mathematical terms, one performs the natural gradient by computing an expectation of the covariance of gradients, E_{P_θ}[JJ^T], and then multiplying the current gradients by its inverse. This covariance matrix is known as the Fisher information metric F, and the natural gradient step is in the direction F^{-1}J. In addition to being seen as a regularizer of functional change, as we make precise below, variants of the natural gradient have appeared with no fewer than four justifications: data efficiency, minimizing a regret bound during learning, speeding optimization, and the benefits of whitened gradients. We review these disparate methods to show that they are equivalent, and to emphasize how widespread the natural gradient is for the optimization of neural networks. Amari originally developed the natural gradient in the light of information geometry and efficiency (BID2). If some directions in parameter space are more informative of the network's outputs than others, then we might wish to scale updates by each dimension's informativeness.
In the terminology of Amari, this is to say that we want to find a learning rule that works when the loss geometry is not Euclidean but Riemannian . We can equivalently speak of the informativeness of new examples. If not all examples carry equal information about a distribution, then the update step should be modified to make use of highly informative examples. That is, we wish to find a Fisher-efficient algorithm (see BID3). The natural gradient uses the Fisher information matrix to scale the update by parameters' informativeness. The'adaptive' family of learning rules derive from a framework closely related to data efficiency. The original Adagrad paper showed that the F −1 J update reduces the bound on the regret relative to gradient descent BID5 ). There, however, F is referred to as the Malahanobis norm and is computed over the same examples as J. (Note that since the Fisher is properly taken in expectation over the output distribution, the F computed over the training batch is referred to elsewhere as the 'empirical Fisher'.) The number of parameters in neural networks typically prohibits calculating and storing a full gradient covariance matrix, let alone inverting one. Adagrad is thus most commonly implemented using a diagonal approximation of the empirical Fisher. Related adaptive methods like Adadelta and RMSprop can be also be seen as employing related diagonal approximations of the empirical Fisher . These update rules are in wide use in neural network optimization but not commonly referred to as approximations of the natural gradient. There is a strong connection between the natural gradient and techniques that normalize and whiten gradients. The term F −1 J, after all, simply ensures that steps are made in a parameter space that is whitened by the covariance of the gradients. Whitening the gradients thus has the effect that SGD becomes more similar to the natural gradient. Activation whitening methods are also known to speed convergence. It appears that many approaches to normalize and whiten activations or gradients have been forwarded in the literature BID15; BID18; BID17; BID4 BID19 BID10; BID16. A similar effect is at play for Batch Normalization, as well BID9 ). By normalizing and whitening the gradients, or by proxy, the activations, these various methods ensure that parameter space is a better proxy for function space. A good review of the thought behind recent natural gradient approximations can be found in BID12 and BID11. K-FAC is likely the most accurate scalable approximation of the natural gradient to date, reported to converge in 14x fewer iterations than SGD with momentum (and in less real time). Additionally, Pascanu and Bengio provide a clear exposition of the natural gradient as taking constant steps in the space of output distributions BID13 ), as measured by the Kullbeck-Leibler (KL) divergence. We present a version of this argument below but with emphasis on the interpretation as regularization. Here we show that the natural gradient is the optimal update when one penalizes the incremental change in a network's output distribution as measured by the KL divergence. A simple way to characterize a function is to examine the function's outputs for a set of inputs. Given an input distribution X, we will write the distribution of the network's outputs as P θ, where θ is the set of the network's parameters. Let us plan to regularize the change in the output distribution P θ throughout optimization of the parameters θ. 
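As a concrete illustration of the diagonal-approximation point above, the sketch below scales the averaged gradient by a diagonal estimate of the empirical Fisher (the mean of squared per-example gradients). This is only a schematic single step: the actual Adagrad/RMSprop rules accumulate squared gradients across steps and differ in other details, and the function and variable names here are ours.

```python
import torch

def diagonal_fisher_scaled_step(params, per_example_grads, lr=0.01, eps=1e-8):
    """Scale the mean gradient by the inverse square root of a diagonal
    empirical-Fisher estimate. Adagrad-family methods accumulate the squared
    gradients over time rather than using a single batch, as done here.

    params:            flat parameter tensor, shape (n_params,), modified in place.
    per_example_grads: shape (n_examples, n_params), one gradient row per example.
    """
    mean_grad = per_example_grads.mean(dim=0)            # averaged gradient J
    diag_fisher = (per_example_grads ** 2).mean(dim=0)   # diagonal of the empirical Fisher
    params -= lr * mean_grad / (diag_fisher.sqrt() + eps)
    return params

# Toy usage with random per-example gradients.
theta = torch.zeros(10)
g = torch.randn(32, 10)
theta = diagonal_fisher_scaled_step(theta, g)
```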
For this, we need a measure of similarity between two distributions. One such measure is the Kullbeck-Leibler (KL) divergence. The KL divergence from P θt to P θt+1 is defined as DISPLAYFORM0 P θ is the density of P θ. To ensure the output distribution changes little throughout optimization, we define a new cost function DISPLAYFORM1 where C 0 is the original cost function and λ is a hyperparameter that controls the importance of this regularization term. Optimization would be performed with respect to the proposed update θ t+1.In information theory, the KL divergence from P θt to P θt+1 is called the relative entropy of P θt+1 with respect to P θt. We can thus equivalently speak of regularizing the change in the entropy of the output distribution throughout learning. If we assume that the initial parameterization leads to an output distribution with maximal entropy (as would be the case if, reasonably, the inputs are treated nearly equally by the initial parameterization), and the average update decreases output entropy, then regularizing the relative entropy between optimization steps will lead to a solution with higher entropy. Evaluating the KL divergence directly is problematic because it is infeasible to define the output density P θ everywhere. One can obtain a more calculable form by expanding D KL (P θt+1 P θt) around θ t to second order with respect to θ. The Hessian of the KL divergence is the Fisher information metric F. If J is the the Jacobian J = (∇ θ C 0), the Fisher metric is defined as E P θ [JJ T], i.e. the expectation (over the output distribution) of the covariance matrix of the gradients. With ∆θ ≡ (θ t+1 − θ t), we can rewrite our regularized cost function as DISPLAYFORM2 To optimize C via gradient descent we first replace C 0 with its first order approximation. DISPLAYFORM3 At each evaluation, J is evaluated before any step is made, and we seek the value of ∆θ that minimizes Equation 2. By setting the derivative with respect to ∆θ zero, we can see that this value is DISPLAYFORM4 When λ = 1 this update is exactly equal to the natural gradient. Thus, the natural gradient emerges as the optimal update when one explicitly regularizes the change in the output distribution during learning. The Fisher metric is not the only way to define (and regularize) a space of functions. We propose instead to use the L 2 space with norm: DISPLAYFORM0 Here µ is a measure and corresponds to the probability density of the input distribution X. The | · | 2 operator refers to the 2-norm to account for vector-valued functions. This norm leads to a notion of distance between two functions f and g given by DISPLAYFORM1 Since µ is a density, X dµ = 1, and we can write DISPLAYFORM2 We will regularize change of a network's output function as measured by this notion of distance. If a network would have been trained to adjust the parameters θ to minimize some cost C 0, we will instead minimize at each step t a new cost given by: DISPLAYFORM3 Like all regularization terms, this can also be viewed as a Langrangian that satisfies a constraint. Here, this constraint is upon gradient descent and ensures that change in L 2 -space does not exceed some constant value. To evaluate Equation 6, we can approximate the norm with an empirical expectation over X. DISPLAYFORM4 Here, the data x i may derive from some validation batch but must pull from the same distribution X. 
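The empirical L2 distance just described is straightforward to compute: given two parameter settings of the same network and a validation batch drawn from X, it is the mean squared Euclidean distance between the two output vectors. The sketch below is ours (function names, toy network, and perturbation size are illustrative, not the paper's).

```python
import copy
import torch

def empirical_l2_distance(model_a, model_b, x_val):
    """Monte Carlo estimate of ||f_a - f_b||^2 over the input distribution,
    approximated by a validation batch x_val of shape (batch, input_dim)."""
    with torch.no_grad():
        out_a = model_a(x_val)                     # outputs of the first function
        out_b = model_b(x_val)                     # outputs of the second function
        # squared 2-norm over output dimensions, averaged over the batch (the measure mu)
        return ((out_a - out_b) ** 2).sum(dim=1).mean()

# Example usage with a toy network and a random stand-in for the validation batch.
net_t = torch.nn.Sequential(torch.nn.Linear(20, 50), torch.nn.ReLU(), torch.nn.Linear(50, 10))
net_t1 = copy.deepcopy(net_t)
with torch.no_grad():                              # perturb the copy to mimic one update step
    for p in net_t1.parameters():
        p.add_(0.01 * torch.randn_like(p))
x_val = torch.randn(256, 20)
print(empirical_l2_distance(net_t, net_t1, x_val))
```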
This cost function imposes a penalty upon the difference between the output of the current network at time t and the proposed network at t + 1. This cost implies a learning rule, which we call Hilbert-constrained gradient descent (HCGD).We can write an update rule to minimize Equation 6 that is a modification of gradient descent. Our implementation, displayed as Algorithm 1, takes some lessons from the natural gradient. Just as the natural gradient is the optimal solution to Equation 4 at each step, here we seek the optimal solution to Equation 6. Thus we seek to converge to a ∆θ at each update step, where DISPLAYFORM5 Minimization can be performed in an inner loop by a first order method. We first propose some ∆θ 0 = − J = − ∇ θ C 0 (for learning rate) and then iteratively correct this proposal by gradient descent towards ∆θ. If only one correction is performed, we can simply add the derivative of the Hilbert-constraining term after ∆θ 0 has been proposed. However it is possible that solving equation 7 to high precision is beneficial, so we include the possibility of multiple iterations in the algorithm. We found empirically that a single correction was often sufficient. In the appendix, we tighten the analogy between HCGD and the natural gradient by discussing how one can approximate the natural gradient with an inner first-order optimization loop. We discuss there that evaluating the Fisher and the gradient on the same batch of data causes poor behavior (see also BID13 BID11). Failure to use a different batch will in a type of overfitting, as gradients become, in a sense, judges of their own trustworthiness on the test set. We thus evaluate the empirical expectation of the L 2 distance on a validation batch (or, at least, a different batch than used for the initial proposed update). It would also be possible to use unlabeled data. Using a different batch of data for the inner loop ensures that the update does not overfit to the training set at the expense of its previous behavior. DISPLAYFORM6 ∆θ 0 ← −v Obtain proposed update via SGD with momentum 10: DISPLAYFORM7 First correction towards ∆θ 13: DISPLAYFORM8 Correct momentum buffer 14:for 1 < j < n do Optional additional corrections 15: DISPLAYFORM9 16: DISPLAYFORM10 17: DISPLAYFORM11 18: DISPLAYFORM12 return θ t SGD is commonly improved with momentum. Instead of following the instantaneous gradient J, the momentum procedure for SGD follows a'velocity' term v which is adjusted at each step with the rule v ← βv + J. To implement momentum for HCGD, we also keep a velocity term but update it with the final Hilbert-constrained update ∆θ rather than J. The velocity is used to propose the initial ∆θ 0 in the next update step. We found that this procedure both quickened optimization and lowered generalization error. Hilbert-constrained gradient descent is computationally cheaper than the exact natural gradient. We are not required to approximately invert any large matrix F, nor are we required to calculate the any per-example gradients separately. When the validation batch X V is drawn anew for each corrective iteration (step 8 in Algorithm 2), HCGD requires an additional two forward passes and one backwards pass for each correction i < n, for a total of 2 + 3n passes each outer step. This can be reduced by 1 pass for i ≥ 1 if X V is drawn anew just for i = 0. We demonstrate our method on training networks at the task of MNIST digit classification, with two network architectures, and on image classification in the CIFAR10 dataset. 
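Because the Algorithm 1 listing above is partially garbled, here is our reading of the single-correction (n = 1) update as a sketch, before turning to the experiments. It follows the sentence above: propose the ordinary SGD step, then add the derivative of the Hilbert-constraining term evaluated on a separate validation batch. The default values of the learning rates and λ are taken from the experiments section; everything else (names, shapes, the assumption that the model outputs a 2-D batch of vectors) is ours, so treat this as a sketch rather than the authors' reference implementation.

```python
import torch

def hcgd_step(model, loss_fn, x_train, y_train, x_val, lr=0.1, inner_lr=0.02, lam=0.5):
    """One HCGD update with a single correction (n = 1), as we read Algorithm 1."""
    params = [p for p in model.parameters() if p.requires_grad]

    # 1) Propose the ordinary SGD step: delta_0 = -lr * grad(C_0) on the training batch.
    loss = loss_fn(model(x_train), y_train)
    grads = torch.autograd.grad(loss, params)
    delta = [-lr * g for g in grads]

    # 2) Record the current outputs on the validation batch, then apply the proposal.
    with torch.no_grad():
        out_old = model(x_val).detach()
        for p, d in zip(params, delta):
            p.add_(d)

    # 3) Single first-order correction: descend the penalty
    #    lam * ||f(theta_t + delta) - f(theta_t)||^2, estimated on the validation batch.
    change = ((model(x_val) - out_old) ** 2).sum(dim=1).mean()
    corr_grads = torch.autograd.grad(lam * change, params)
    with torch.no_grad():
        for p, g in zip(params, corr_grads):
            p.add_(-inner_lr * g)
    return loss.item()
```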
In all tests, we use a tuned learning rate for SGD, and then use the same learning rate for HCGD. We use values of λ = 0.5 and η = 0.02. (For the n = 1 version, λ can be folded into the inner learning rate η. Values were chosen so that λη = 0.01.) We chose the batch size for the "validation" batch to be 256. While the examples in each "validation" batch were different from those in the training batch, they were also drawn from the train set. All models were implemented in PyTorch (BID14).

We focus first on the clean example of training a dense multilayer perceptron without any modifications. We employ an 850−90−50−10 architecture with ReLU activations, and do not use dropout or batch normalization. The output is a softmax activation and the cost is the cross-entropy. As can be seen in Figure 1, HCGD notably improves performance on the test set. This is true both with momentum (1b) and without momentum (1c). We use a learning rate of 0.04 with momentum and 0.1 without momentum. The versions in which the gradient is corrected towards the ideal Hilbert-constrained update just once (n = 1) or many times (n = 10) behaved similarly. We use only the n = 1 version in subsequent tests.

Figure 1: Test and train accuracy on MNIST digit classification for a dense multilayer perceptron without dropout or normalization. Traces and envelopes represent the mean ± standard deviation over 10 runs. HCGD converges faster than SGD and generalizes better both for n = 1 and n = 10 corrections to the gradient. Both SGD and HCGD employ momentum (β) in (a) and (b), but use no momentum in (c).

While HCGD converges more quickly than SGD, it requires a larger number of passes through the computational graph. In FIG0, we plot the same results as in Figure 1a,b but use the total number of passes as the x-axis. It can be seen that when the number of inner-loop corrections is n = 1, HCGD converges in similar compute time as SGD (but generalizes better).

Next, we test the various optimizers at training a 3-layer convolutional neural network with Batch Normalization (BN) on MNIST digit classification. HCGD still outperforms standard SGD and Adam, but by a much slimmer margin (FIG1). Batch normalization has the effect that the activations, and thus the gradients, become normalized and whitened; for a BN network, then, parameter space is already a better proxy for function space, which is consistent with the smaller margin.

Finally, we test the performance of HCGD as applied to the CIFAR10 image classification problem. We train a Squeezenet v1.1, a convolutional neural network model optimized for parameter efficiency (BID8). As is common for training large models, we train at a large learning rate and then decrease the learning rate by a factor of 10 for fine tuning. When HCGD is trained with the same learning rate as SGD (initial learning rate 0.1), it outperforms SGD while the learning rate is high, but performs worse than SGD once the learning rate is decreased (Figure 4). We suspect that this is because HCGD effectively reduces the learning rate, removing some of the positive annealing effects of a high initial learning rate. Note that we also decrease the inner learning rate η by a factor of 10. When we increase the initial learning rate such that the test error of HCGD matches that of SGD, the final train error decreases below that of SGD. Averaging the final 40 epochs to reduce noise, SGD achieves a mean error percentage of 8.00 ± 0.01, while HCGD with a learning rate of 0.3 achieved an error of 7.80 ± 0.025 (with ± indicating the standard error of the mean).
HCGD thus generally decreases the test error at a given learning rate, but needs to be trained at a higher learning rate to achieve a given level of gradient noise.

Figure 4: Results of a Squeezenet v1.1 trained on CIFAR10. The learning rate is decreased by a factor of 10 at epoch 150. HCGD requires a higher learning rate to achieve a similar level of gradient noise, which is important to decrease overfitting.

A central theme of this work is that regularization and optimization should occur in the space of functions, not merely the space of parameters. In this section we investigate this difference. In the space of parameters, SGD is a strongly local update rule. Large jumps are generally prohibited. SGD is thus more likely to find solutions that are close to the initialization, and furthermore to trace a path of limited length. This discourages sampling a large volume of parameter space during optimization, which may lead to overfitting. This locality may partly explain the unexpected generalization abilities of SGD (e.g. BID21). Early stopping, which lowers generalization error, likewise limits exploration. If SGD is successful due to its movement through parameter space, then it moves similarly in function space only to the extent that there is a reasonably smooth mapping between these two spaces. Distances in these two spaces may still be qualitatively different.

We can partially examine the difference between parameter and function space by examining how a network moves differently through them. In Figure 5, we have plotted the cumulative squared distance traveled during optimization for an example network trained on MNIST. Figure 5a,c display the cumulative squared distance traveled (CSDT) in L^2 function space, while Figure 5b,d display the CSDT in parameter space. We first examine the behavior of SGD. While SGD moves more slowly over time through parameter space (5c), reflecting a decreasing gradient, the CSDT of SGD in L^2-space grows linearly for a large portion of optimization. Ideally, a network would cease to change before entering the overfitting regime. However, the network continues to drift through L^2-space even after the test error saturates at around epoch 15 (see Figure 1). Note that SGD with momentum differs from plain SGD in that the scale of the total distance traveled is significantly reduced (Figure 5b,d). This effect could be due to a decreased level of noise in the update. Network drift in parameter space and in L^2-space thus displays qualitatively different behavior for SGD.

Figure 5: The cumulative squared distance traveled through L^2-space, top row, and through parameter space, bottom row, for an MLP trained on MNIST. It can be seen that SGD continues to drift in L^2-space during the overfitting regime, while HCGD plateaus. This is true for networks trained with momentum, left column, and without momentum, right column. Note that momentum significantly decreases the scale of distance traveled. Individual traces represent random seeds.

We measure distance in L^2-space in the same manner as in the HCGD algorithm: by registering the Euclidean distance between the network's outputs on a single validation batch before and after an update. The HCGD algorithm is designed to reduce motion through L^2-space. Figure 5a,b show that HCGD indeed greatly reduces motion through L^2-space whether or not momentum is used. The plateau of the CSDT indicates that the function has converged to a single location; it ceases to change.
SGD, on the other hand, does not converge to a single function even long after the test error saturates. It is interesting to note that HCGD allows the parameters to continue to drift (5c,d) even though the function has largely converged.

Neural networks encode functions, and it is important to consider the behavior of optimizers through the space of possible functions. The L^2 Hilbert space defined over the distribution of input examples is a tractable and useful space for analysis. In this paper we propose to regularize the change in L^2-space between successive updates. The idea is to limit the movement of the function, just as gradient descent limits movement of the parameters. Our resulting learning rule, Hilbert-constrained gradient descent (HCGD), increases test performance on standard image classification architectures. We hope that this work inspires more thought about, and analysis of, behavior in L^2-space.

An alternative explanation of our algorithm is that it penalizes directions that are very sensitive controls of the outputs, similar to the natural gradient, while still allowing learning. In addition, since we evaluate the change in L^2-space and the gradient on different data, HCGD asks the model to learn from current examples only in ways that will not affect what has already been learned from other examples. These intuitions are equivalent to the idea of limiting changes in L^2-space.

Given these empirical results, it would be desirable to theoretically prove better generalization bounds for a method regularized in L^2-space. One promising framework is stability analysis, which has recently been applied to establish some bounds on the generalization error of SGD itself (BID7). It can be shown that generalization error is bounded by the stability of an algorithm, defined as the expected difference of the loss when two networks are trained on datasets that are identical except for one example. BID7 analyzes the stability of SGD in parameter space, then uses a Lipschitz condition to move to function space and bound the stability of the error. We expect that bounding the movement through L^2-space leads to increased error stability compared to bounding movement through parameter space (as is done by SGD), simply by removing reliance on the assumed Lipschitz condition. We leave a proof of this idea to later work.

It is interesting to ask whether there is support in neuroscience for learning rules that diminish the size of changes when that change would have a large effect on other tasks. It is unlikely that the nervous system performs precisely the natural gradient or HCGD, but there is some evidence that some analog is at play. One otherwise perplexing finding is that behavioral learning rates in motor tasks depend on the direction of an error but are independent of the magnitude of that error (BID6). This is not expected by most models of gradient descent, but would be expected if the size of the change in the output distribution (i.e., behavior) were regulated to be constant. Regularization upon behavior change (rather than synaptic change) would predict that neurons that are central to many actions, like neurons in the motor pools of the spinal cord, would learn very slowly after early development, despite the fact that their gradient with respect to the error on any one task (if indeed it is calculated) is likely to be quite large.
Given our general resistance to overfitting during learning, and the great variety of roles of neurons, it is likely that some type of regularization of behavioral and perceptual change is at play. In order to better compare the natural gradient to the Hilbert-constrained gradient, we propose a natural gradient algorithm of a similar style. Previous work on the natural gradient has aimed to approximate F −1 as best and as cheaply as possible. This is equivalent to minimizing Equation 2 (i.e. J∆θ + λ 2 ∆θ T F ∆θ) with a single iteration of a second-order optimizer. For very large neural networks, however, it is much cheaper to calculate matrix-vector products than to approximately invert a large matrix. It is possible that the natural gradient may be more accessible via an inner gradient descent, which would be performed during each update step as an inner loop. We describe this idea at high level in Algorithm 2. After an update step is proposed by a standard optimizer, the algorithm iteratively corrects this update step towards the natural gradient. To start with a good initial proposed update, it is better to use a fast diagonal approximation of the natural gradient (such as Adagrad or RMSprop) as the main optimizer. Each additional correction requires just one matrix-vector product after the gradients are calculated. Depending on the quality of the proposed update, the number of iterations required is likely to be small, and even a small number of iterations will improve the update. Algorithm 2 Natural gradient by gradient descent. This algorithm can be paired with any optimizer to increase its similarity to the natural gradient. Require: n Number of corrective steps. ∆θ i+1 = ∆θ i − η(J + λF ∆θ i)Step towards θ ← θ + ∆θ 8:return θ tSince the Fisher matrix F can be calculated from the covariance of gradients, it never needs to be fully stored. Instead, for an array of gradients G of size (# parameters, # examples), we can write DISPLAYFORM0 The choice of G is an important one. It cannot be a vector of aggregated gradients (i.e. J), as that would destroy covariance structure and would in a rank-1 Fisher matrix. Thus, we must calculate the gradients on a per-example basis. To compute G efficiently it is required that a deep learning framework implement forward-mode differentiation, which is currently not supported in popular frameworks. If we choose G to be the array of per-example gradients on the minibatch, F is known as the'empirical Fisher'. As explained in BID11 and in BID13, the proper method is to calculate G from the predictive (output) distribution of the network, P θ (y|x). This can be done as in BID12 by sampling randomly from the output distribution and re-running backpropagation on these fictitious targets, using (by necessity) the activations from the minibatch. Alternatively, as done in BID13, one may also use unlabeled or validation data to calculate G on each batch.
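The computational point made above, that the inner iteration of Algorithm 2 only ever needs the matrix-vector product F∆θ and never F itself, can be sketched as follows. The exact normalization of the covariance is elided in the text, so the division by the number of examples is our assumption, and the per-example gradient array is supplied directly rather than computed from a model.

```python
import torch

def fisher_vector_product(G, v):
    """Compute F v where F is the empirical Fisher built from per-example gradients,
    without materializing the (n_params x n_params) matrix.

    G: per-example gradients, shape (n_params, n_examples).
    v: vector, shape (n_params,).
    Assumes F = G G^T / n_examples (the normalization is our assumption).
    """
    n_examples = G.shape[1]
    return G @ (G.t() @ v) / n_examples   # two matrix-vector products, O(n_params * n_examples)

# Toy check against the explicitly formed matrix.
G = torch.randn(1000, 32)                 # 1000 parameters, 32 examples
v = torch.randn(1000)
F = G @ G.t() / 32
print(torch.allclose(F @ v, fisher_vector_product(G, v), atol=1e-4))
```

Each corrective step of Algorithm 2 would then evaluate J + λ·fisher_vector_product(G, Δθ) with G drawn as described above, either from sampled targets of the output distribution or from a validation batch.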
Stochastic gradient descent (SGD), which dates back to the 1950s, is one of the most popular and effective approaches for performing stochastic optimization. Research on SGD resurged recently in machine learning for optimizing convex loss functions and training nonconvex deep neural networks. The theory assumes that one can easily compute an unbiased gradient estimator, which is usually the case due to the sample average nature of empirical risk minimization. There exist, however, many scenarios (e.g., graphs) where an unbiased estimator may be as expensive to compute as the full gradient because training examples are interconnected. proposed using a consistent gradient estimator as an economic alternative. Encouraged by empirical success, we show, in a general setting, that consistent estimators in the same convergence behavior as do unbiased ones. Our analysis covers strongly convex, convex, and nonconvex objectives. We verify the with illustrative experiments on synthetic and real-world data. This work opens several new research directions, including the development of more efficient SGD updates with consistent estimators and the design of efficient training algorithms for large-scale graphs. Consider the standard setting of supervised learning. There exists a joint probability distribution P (x, y) of data x and associated label y and the task is to train a predictive model, parameterized by w, that minimizes the expected loss between the prediction and the ground truth y. Let us organize the random variables as ξ = (x, y) and use the notation (w; ξ) for the loss. If ξ i = (x i, y i), i = 1,..., n, are iid training examples drawn from P, then the objective function is either one of the following well-known forms: expected risk f (w) = E[(w; ξ)]; empirical risk f (w) = 1 n n i=1 (w; ξ i). Stochastic gradient descent (SGD), which dates back to the seminal work of , has become the de-facto optimization method for solving these problems in machine learning. In SGD, the model parameter is updated until convergence with the rule where γ k is a step size and g k is an unbiased estimator of the gradient ∇f (w k). Compared with the full gradient (as is used in deterministic gradient descent), an unbiased estimator involves only one or a few training examples ξ i and is usually much more efficient to compute. This scenario, however, does not cover all learning settings. A representative example that leads to costly computation of the unbiased gradient estimator ∇ (w, ξ i) is graph nodes. Informally speaking, a graph node ξ i needs to aggregate information from its neighbors. If information is aggregated across neighborhoods, ξ i must request information from its neighbors recursively, which in inquiring a large portion of the graph. In this case, the sample loss for ξ i involves not only ξ i, but also all training examples within its multihop neighborhood. The worst case scenario is that computing ∇ (w, ξ i) costs O(n) (e.g., for a complete graph or small-world graph), as opposed to O in the usual learning setting because only the single example ξ i is involved. In a recent work, proposed a consistent gradient estimator as an economic alternative to an unbiased one for training graph convolutional neural networks, offering substantial evidence of empirical success. A summary of the derivation is presented in Section 2. The subject of this paper is to provide a thorough analysis of the convergence behavior of SGD when g k in is a consistent estimator of ∇f (w k). 
We show that using this estimator in the same convergence behavior as does using unbiased ones. Definition 1. An estimator g N of h, where N denotes the sample size, is consistent if g N converges to h in probability: plim N →∞ g N = h. That is, for any > 0, lim N →∞ Pr(g N − h >) = 0. It is important to note that unbiased and consistent estimators are not subsuming concepts (one does not imply the other), even in the limit. This distinction renders the departure of our convergence , in the form of probabilistic bounds on the error, from the usual SGD that bound instead the expectation of the error. In what follows, we present examples to illustrate the distinctions between unbiasedness and consistency. To this end, we introduce asymptotic unbiasedness, which captures the idea that the bias of an estimator may vanish in the limit. Definition 2. An estimator g N of h, where N denotes the sample size, is asymptotically unbiased An estimator can be (asymptotically) unbiased but inconsistent. Consider estimating the mean h = µ of the normal distribution N (µ, σ 2) by using N independent samples X 1,..., X N. The estimator g N = X 1 (i.e., always use X 1 regardless of the sample size N) is clearly unbiased because E[X 1] = µ; but it is inconsistent because the distribution of X 1 does not concentrate around µ. Moreover, the estimator is trivially asymptotically unbiased. An estimator can be consistent but biased. Consider estimating the variance h = σ 2 of the normal distribution N (µ, σ 2) by using N independent samples X 1,..., X N. The estimator Hence, it is consistent owing to a straightforward invocation of the Chebyshev inequality, by noting that the mean approaches σ 2 and the variance approaches zero. However, the estimator admits a nonzero bias σ 2 /N for any finite N. An estimator can be consistent but biased even asymptotically. In the preceding example, the bias σ 2 /N approaches zero and hence the estimator is asymptotically unbiased. Other examples exist for the estimator to be biased even asymptotically. Consider estimating the quantity h = 0 with an estimator g N that takes the value 0 with probability (N − 1)/N and the value N with probability 1/N. Then, the probability that g N departs from zero approaches zero and hence it is consistent. However, E[g N] = 1 and thus the bias does not vanish as N increases. To the best of our knowledge, this is the first work that studies the convergence behavior of SGD with consistent gradient estimators, which from a real-world graph learning scenario that will be elaborated in the next section. With the emergence of graph deep learning models (; ; ; ; ; ; Velicković et al., 2018), the scalability bottleneck caused by the expensive computation of the sample gradient becomes a pressing challenge for training (as well as inference) with large graphs. We believe that this work underpins the theoretical foundation of the efficient training of a series of graph neural networks. The theory reassures practitioners of doubts on the convergence of their optimization solvers. Encouragingly, consistent estimators in a similar convergence behavior as do unbiased ones. The obtained here, including the proof strategy, offer convenience for further in-depth analysis under the same problem setting. This work opens the opportunity of improving the analysis, in a manner similar to the proliferation of SGD work, from the angles of relaxing assumptions, refining convergence rates, and designing acceleration techniques. 
We again emphasize that unbiasedness and consistency are two separate concepts; neither subsumes the other. One may trace that we intend to write the error bounds for consistent gradient estimators in a manner similar to the expectation bounds in standard SGD . Such a resemblance (e.g., in convergence rates) consolidates the foundation of stochastic optimization built so far. For a motivating application, consider the graph convolutional network model, GCN , that learns embedding representations of graph nodes. The l-th layer of the network is compactly written as where A is a normalization of the graph adjacency matrix, W (l) is a parameter matrix, and σ is a nonlinear activation function. The matrix H (l) contains for each row the embedding of a graph node input to the l-th layer, and similarly for the output matrix H (l+1). With L layers, the network transforms an initial feature input matrix H to the output embedding matrix H (L). For a node v, the embedding H (L) (v, :) may be fed into a classifier for prediction. Clearly, in order to compute the gradient of the loss for v, one needs the corresponding row of H (L), the rows of H (L−1) corresponding to the neighbors of v, and further recursive neighbors across each layer, all the way down to H. The computational cost of the unbiased gradient estimator is rather high. In the worst case, all rows of H are involved. To resolve the inefficiency, proposed an alternative gradient estimator that is biased but consistent. The simple and effective idea is to sample a constant number of nodes in each layer to restrict the size of the multihop neighborhood. For notational clarity, the approach may be easier to explain for a network with a single layer; theoretical for more layers straightforwardly follow that of Theorem 1 below, through induction. The approach generalizes the setting from a finite graph to an infinite graph, such that the matrix expression becomes an integral transform. In particular, the input feature vector H (u, :) for a node u is generalized to a feature function X(u), and the output embedding vector H (v, :) for a node v is generalized to an embedding function Z(v), where the random variables u and v in two sides of the layer reside in different probability spaces, with probability measures P (u) and P (v), respectively. Furthermore, the matrix A is generalized into a bivariate kernel A(v, u) and the loss is written as a function of the output Z(v). Then, and become Such a functional generalization facilitates sampling on all network layers for defining a gradient estimator. In particular, defining B(v) = A(v, u)X(u) dP (u), simple calculation reveals that the gradient with respect to the parameter matrix W is Then, one may use t iid samples of u in the input and s iid samples of v in the output to define an estimator of G: The gradient estimator G st so defined is consistent; see a proof in the supplementary material. Theorem 1. If q is continuous and f is finite, then plim s,t→∞ G st = G. We now settle the notations for SGD. We are interested in the (constrained) optimization problem where the feasible region S is convex. This setting includes the unconstrained case S = R d. We assume that the objective function f: R d → R is subdifferentiable; and use ∂f (w) to denote the subdifferential at w. When it is necessary to refer to an element of this set, we use the notation h. If f is differentiable, then clearly, ∂f (w) = {∇f (w)}. 
The standard update rule for SGD is w k+1 = Π S (w k − γ k g k), where g k is the negative search direction at step k, γ k is the step size, and Π S is the projection onto the feasible region: Π S (w):= argmin u∈S w − u. For unconstrained problems, the projection is clearly omitted: Denote by w * the global minimum. We assume that w * is an interior point of S, so that the subdifferential of f at w * contains zero. For differentiable f, this assumption simply means that ∇f (w *) = 0. Typical convergence are concerned with how fast the iterate w k approaches w *, or the function value f (w k) approaches f (w *). Sometimes, the analysis is made convenient through a convexity assumption on f, such that the average of historical function values f (w i), i = 1,..., k, is lowered bounded by f (w k), with w k being the cumulative moving average The following definitions are frequently referenced. Definition 3. We say that f is l-strongly convex (with l > 0) if for all w, u ∈ R d and h u ∈ ∂f (u), Recall that an estimator g N of h is consistent if for any > 0, In our setting, h corresponds to an element of the subdifferential at step k; i.e., h k ∈ ∂f (w k), g N corresponds to the negative search direction g k, and N corresponds to the sample size N k. That g converges to h k in probability does not imply that g N k k is unbiased. Hence, a natural question asks what convergence guarantees exist when using g N k k as the gradient estimator. This section answers that question. First, note that the sample size N k is associated with not only g We omit the superscript N k in these vectors to improve readability. Similar to the analysis of standard SGD, which is built on the premise of the unbiasedness of g k and the boundedness of the gradient, in the following subsection we elaborate the parallel assumptions in this work. They are stated only once and will not be repeated in the theorems that follow, to avoid verbosity. The convergence of the estimator does not characterize how fast it approaches the truth. One common assumption is that the probability in decreases exponentially with respect to the sample size. That is, we assume that there exists a step-dependent constant C k > 0 and a nonnegative function τ (δ) on the positive axis such that for all k > 1 and δ > 0. A similar assumption is adopted by that studied stochastic optimization through sample average approximation. In this case, the exponential tail occurs when the individual moment generating functions exist, a simple application of the Chernoff bound. For the motivating application GCN, the tail is indeed exponential as evidenced by Figure 3. Note the conditioning on the history g 1,..., g k−1 in. The reason is that h k (i.e., the gradient ∇f (w k) if f is differentiable) is by itself a random variable dependent on history. In fact, a more rigorous notation for the history should be filtration, but we omit the introduction of unnecessary additional definitions here, as using the notion g 1,..., g k−1 is sufficiently clear. Assumption 1. The gradient estimator g k is consistent and obeys. The use of a tail bound assumption, such as, is to reverse-engineer the required sample size given the desired probability that some event happens. In this particular case, consider the setting where T SGD updates are run. 
For any δ ∈, define the event Given and any ∈, one easily calculates that if the sample sizes satisfy for all k, then, Hence, all in this section are established under the event E δ that occurs with probability at least 1 −, a sufficient condition of which is. The sole purpose of the tail bound assumption is to establish the relation between the required sample sizes (as a function of δ and) and the event E δ, on which convergence in this work are based. One may replace the assumption by using other tail bounds as appropriate. It is out of the scope of this work to quantify the rate of convergence of the gradient estimator for a particular use case. For GCN, the exponential tail that agrees with is illustrated in Section 5.4. Additionally, parallel to the bounded-gradient condition for standard SGD analysis, we impose the following assumption. Assumption 2. There exists a finite G > 0 such that h ≤ G for all h ∈ ∂f (w) and w ∈ S. Let us begin with the strongly convex case. For standard SGD with unbiased gradient estimators, ample exist that indicate O(1/T) convergence 2 for the expected error, where T is the number of updates; see, e.g., (2.9)-(2.10) of and Section 3.1 of. We derive similar for consistent gradient estimators, as stated in the following Theorem 2. Different from the unbiased case, it is the error, rather than the expected error, to be bounded. The tradeoff is the introduction of the relative gradient estimator error δ, which relates to the sample sizes as in for guaranteeing satisfaction of the bound with high probability. Theorem 2. Let f be l-strongly convex with l ≤ G/ w 1 − w *. Assume that T updates are run, with diminishing step size γ k = [(l − δ)k] −1 for k = 1, 2,..., T, where δ = ρ/T and ρ < l is an arbitrary constant independent of T. Then, for any such ρ, any ∈, and sufficiently large sample sizes satisfying, with probability at least 1 −, we have and Note the assumption on l in Theorem 2. This assumption is mild since if f is l-strongly convex, it is also l -strongly convex for all l < l. The assumption is needed in the induction proof of when establishing the base case w 1 − w *. One may remove this assumption at the cost of a cumbersome right-hand side of, over which we favor a neater expression in the current form. With an additional smoothness assumption, we may eliminate the logarithmic factor in and obtain a for the iterate w T rather than the running average w T. The is a straightforward consequence of. Theorem 3. Under the conditions of Theorem 2, additionally let f be L-smooth. Then, for any ρ satisfying the conditions, any ∈, and sufficiently large sample sizes satisfying, with probability at least 1 −, we have In addition to O(1/T) convergence, it is also possible to establish linear convergence (however) to a non-vanishing right-hand side, as the following indicates. To obtain such a , we use a constant step size. show a similar for the function value with an additional smoothness assumption in a different setting; we give one for the iterate error without the smoothness assumption using consistent gradients. Theorem 4. Under the conditions of Theorem 2, except that one sets a constant step size γ k = c with 0 < c < (2l − δ) −1 for all k, for any ρ satisfying the conditions, any ∈, and sufficiently large sample sizes satisfying, with probability at least 1 −, we have Compare with in Theorem 2. 
The former indicates that in the limit, the squared iterate error is upper bounded by a positive term proportional to G 2; the remaining part of this upper bound decreases at a linear speed. The latter, on the other hand, indicates that the squared iterate error in fact will vanish, although it does so at a sublinear speed O(1/T). For convex (but not strongly convex) f, typically O(1/ √ T) convergence is asserted for unbiased gradient estimators; see., e.g., Theorem 2 of. These are often derived based on an additional assumption that the feasible region is compact. Such an assumption is not restrictive, because even if the problem is unconstrained, one can always confine the search to a bounded region (e.g., an Euclidean ball). Under this condition, we obtain a similar for consistent gradient estimators. Theorem 5. Let f be convex and the feasible region S have finite diameter D > 0; that is, sup w,u∈S w − u = D. Assume that T updates are run, with diminishing step size γ k = c/ √ k for k = 1, 2,..., T and for some c > 0. Let δ = ρ/ √ T where ρ > 0 is an arbitrary constant independent of T. Then, for any such ρ, any ∈, and sufficiently large sample sizes satisfying, with probability at least 1 −, we have One may obtain a of the same convergence rate by using a constant step size. In the case of unbiased gradient estimators, see Theorem 14.8 of. For such a , one assumes that the step size is inversely proportional to √ T. Such choice of the step size is common and is also used in the next setting. For the general (nonconvex) case, convergence is typically gauged with the gradient norm. One again obtains O(1/ √ T) convergence for unbiased gradient estimators; see, e.g., Theorem 1 of (which is a simplified consequence of the theory presented in). We derive a similar for consistent gradient estimators. Theorem 6. Let f be L-smooth and S = R d. Assume that T updates are run, with constant step size is an arbitrary constant. Then, for any such δ, any ∈, and sufficiently large sample sizes satisfying, with probability at least 1 −, we have All the in the preceding subsection assert convergence for SGD with the use of a consistent gradient estimator. As with the use of an unbiased one, the convergence for the strongly convex case is O(1/T), or linear if one tolerates a non-vanishing upper bound, and the convex and nonconvex cases O(1/ √ T). These theoretical , however, are based on assumptions of the sample size N k and the step size γ k that are practically challenging to verify. Hence, in a real-life machine learning setting, the sample size and the learning rate (the initial step size) are treated as hyperparameters to be tuned against a validation set. Nevertheless, these establish a qualitative relationship between the sample size and the optimization error. Naturally, to maintain the same failure probability, the relative gradient estimator error δ decreases inversely with the sample size N k. This intuition holds true in the tail bound condition with, when τ (δ) is a monomial or a positive combination of monomials with different degrees. With this assumption, the larger is N k, the smaller is δ (and also ρ, the auxiliary quantity defined in the theorems); hence, the smaller are the error bounds-. Theorem 4 presents a linear convergence for the strongly convex case, with a non-vanishing right-hand side. In fact, it is possible to obtain a with the same convergence rate but a vanishing right-hand side, if one is willing to additionally assume L-smoothness. 
The following theorem departs from the set of theorems in Section 4.2 on the assumption of the sufficient sample size N k and the gradient error δ. Theorem 7. Let f be l-strongly convex and L-smooth with l < L. Assume that T updates are run with constant step size γ k = 1/L for k = 1, 2,..., T. Let δ k, k ≥ 1 be a sequence where lim k→∞ δ k+1 /δ k ≤ 1. Then, for any positive η < l/L, ∈, and sample sizes with probability at least 1 −, we have where Here, δ k is the step-dependent gradient error. If it decreases to zero, then so does E T. Theorem 7 is adapted from , who studied unbiased gradients as well as noisy gradients. We separate Theorem 7 from those in Section 4.2 only for the sake of presentation clarity. The spirit, however, remains the same. Namely, consistent estimators in the same convergence behavior (i.e., rate) as do unbiased ones. All require an assumption on sufficient sample size owing to the probabilistic convergence of the gradient estimator. In this section, we report several experiments to illustrate the convergence behavior of SGD by using consistent gradient estimators. We base the experiments on the training of the GCN model motivated earlier (cf. Section 2). The code repository will be revealed upon paper acceptance. We use three data sets for illustration, one synthetic and two real-world benchmarks. The purpose of a synthetic data set is to avoid the regularity in the sampling of training/validation/test examples. The data set, called "Mixture," is a mixture of three overlapping Gaussians. The points are randomly connected, with a higher probability for those within the same component than the ones straddling across components. See the supplementary material for details of the construction. Because of the significant overlap, a classifier trained with independent data points unlikely predicts well the component label, but a graph-based method is more likely to be successful. Additionally, we use two benchmark data sets, Cora and Pubmed, often seen in the literature. These graphs are citation networks and the task is to predict the topics of the publications. We follow the split used in. See the supplementary material for a summary of all data sets. The GCN model is hyperparameterized by the number of layers. Without any intermediate layer, the model can be considered a generalized linear model and thus the cross-entropy loss function is convex. Moreover, with the use of an L 2 regularization, the loss becomes strongly convex. The predictive model reads P = softmax(AXW ), where X is the input feature matrix and P is the output probability matrix, both row-wise. One easily sees that the only difference between this model and logistic regression P = softmax(XW ) is the neighborhood aggregation AX. Standard batched training in SGD samples a batch (denoted by the index set I 1) from the training set and evaluates the gradient of the loss of softmax(A(I 1, :)XW ). In the analyzed consistentgradient training, we additionally uniformly sample the input layer with another index set I 0 and evaluate instead the gradient of the loss of softmax(Figure 1 shows the convergence curves as the iteration progresses. The plotted quantity is the overall loss on all training examples, rather than the batch loss for only the current batch. Hence, not surprisingly the curves are generally quite smooth. We compare standard SGD with the use of consistent gradient estimators, with varying sample size |I 0 |. 
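Before turning to these comparisons, here is a hedged sketch of the doubly-sampled training step described above for the 1-layer GCN. The closing part of the formula is truncated in the text, so the rescaling of the restricted product A(I_1, I_0) X(I_0, :) by n/|I_0| is our assumption (chosen so that the sampled aggregation is consistent as |I_0| grows); the function names, toy adjacency matrix, and sample sizes are illustrative only.

```python
import torch

def doubly_sampled_logits(A, X, W, idx_out, idx_in):
    """Logits of a 1-layer GCN, softmax(A X W), with the output batch (idx_out = I_1)
    and the input layer (idx_in = I_0) both subsampled. The n/|I_0| rescaling is an
    assumption; the exact expression is cut off in the text above."""
    n = A.shape[1]
    scale = n / len(idx_in)
    A_sub = A[idx_out][:, idx_in]            # rows for the batch, columns for sampled inputs
    return scale * (A_sub @ X[idx_in]) @ W   # estimate of A(I_1, :) X W

# Toy usage: sample both index sets, then take one gradient step on the sampled loss.
n, d, c = 500, 16, 5
A = torch.rand(n, n) / n                     # stand-in for the normalized adjacency matrix
X = torch.randn(n, d)
W = torch.randn(d, c, requires_grad=True)
labels = torch.randint(0, c, (n,))
idx_out = torch.randint(0, n, (64,))         # training batch I_1
idx_in = torch.randint(0, n, (128,))         # sampled input nodes I_0
loss = torch.nn.functional.cross_entropy(
    doubly_sampled_logits(A, X, W, idx_out, idx_in), labels[idx_out]
)
loss.backward()  # gradient w.r.t. W: biased for finite samples, consistent as |I_0| grows
```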
Additionally, we compare with the Adam training algorithm , which is a stochastic optimization approach predominantly used in practice for training deep neural networks. One sees that for all data sets, Adam converges faster than does standard SGD. Moreover, as the sample size increases, the loss curve with consistent gradients approaches that with an unbiased one (i.e., standard SGD). This phenomenon qualitatively agrees with the theoretical ; namely, larger sample size improves the error bound. Note that all curves in the same plot from the same parameter initialization; and all SGD variants apply the same learning rate. It is important to note that the training loss is only a surrogate measure of the model performance; and often early termination of the optimization acts as a healthy regularization against over-fitting. In our setting, a small sample size may not satisfy the assumptions of the theoretical , but it proves to be practically useful. In Table 1 (left), we report the test accuracy attained by different training algorithms at the epoch where validation accuracy peaks. One sees that Adam and standard SGD achieves similar accuracies, and that SGD with consistent gradient sometimes surpasses these accuracies. For Cora, a sample size 400 already yields an accuracy noticeably higher than do Adam and standard SGD., and a GCN with more layers is analogous. We repeat the experiments in the preceding subsection. The are reported in Figure 2 and Table 1 (right). The observation of the loss curve follows the same as that in the convex case. Namely, Adam converges faster than does unbiased SGD; and the convergence curve with a consistent gradient approaches that with an unbiased one. On the other hand, compared with 1-layer GCN, 2-layer GCN yields substantially higher test accuracy for the data set Mixture, better accuracy for Cora, and very similar accuracy for Pubmed. Within each data set, the performances of different training algorithms are on par. In particular, a small sample size (e.g., 400) suffices for achieving comparable to the state of the art (cf.). The nature of a consistent estimator necessitates a characterization of the speed of probability convergence for building further , such as the ones in this paper. The speed, however, depends on the neural network architecture and it is out of the scope of this work to quantify it for a particular use case. Nevertheless, for GCN we demonstrate empirical findings that agree with the exponential tail assumption. In Figure 3 (solid curves), we plot the tail probability as a function of the sample size N at different levels of estimator error δ, for the initial gradient step in 1-layer GCN. For each N, 10,000 random gradient estimates were simulated for estimating the probability. Because the probability is plotted in the logarithmic scale, the fact that the curves bend down indicates that the convergence may be faster than exponential. Additionally, the case of 2-layer GCN is demonstrated by the dashed curves in Figure 3. The curves tend to be straight lines in the limit, which indicates an exponential convergence. To the best of our knowledge, this is the first work that studies the convergence behavior of SGD with consistent gradient estimators, and one among few studies of first-order methods that employ biased (d';) or noisy (; ;) estimators. The motivation originates from learning with large graphs and the main message is that the convergence behavior is well-maintained with respect to the unbiased case. 
While we analyze the classic SGD update formula, this work points to several immediate extensions. One direction is the design of more efficient update formulas resembling the variance reduction techniques developed for unbiased estimators. Another direction is the development of more computation- and memory-efficient training algorithms for neural networks on large graphs. GCN is only one member of a broad family of message passing neural networks that suffer from the same limitation of neighborhood aggregation. Learning in these cases inevitably faces the costly computation of the sample gradient. Hence, a consistent estimator appears to be a promising alternative, whose construction awaits more innovative proposals. We are grateful to an anonymous reviewer who suggested an interesting use case other than GCN. Learning to rank is a machine learning application that constructs ranking models for information retrieval systems. In representative methods such as RankNet and its subsequent improvements, s_i is the ranking function for document i, and learning amounts to minimizing a pairwise loss whose summation ranges over all pairs of documents such that i is ranked higher than j. The pairwise information may be organized as a graph, and the loss function may be similarly generalized as a double integral analogous to the integral formulation used for GCN. Because of nonlinearity, Monte Carlo sampling of each integral will result in a biased but consistent estimator. Therefore, a new training algorithm is to sample i and j separately (forming a consistent gradient) and apply SGD. The theory developed in this work offers guarantees of training convergence. A.9 PROOF OF THEOREM 7. The bound follows from Theorem 2.2 of the prior work on noisy gradients, which applies whenever the gradient-error inequality holds at every step. It remains to show that this event occurs with probability at least 1 − ϵ. The assumption on the sample size N_k ensures that the per-step inequality fails with probability at most ϵ/T; substituting δ_k = δ h_k into the assumption and combining over the T steps shows that all steps satisfy the inequality with probability at least 1 − ϵ, which concludes the proof. B EXPERIMENT DETAILS. The three Gaussian components of the "Mixture" data set, the last with σ_3 = 0.25, are equally weighted but significantly overlap with each other. Random connections are made between every pair of points. For points in the same component, the probability that they are connected is p_intra = 1e-3; for points straddling across components, the probability is p_inter = 2e-4. See Figure 4(a) for an illustration of the Gaussian mixture and Figure 4(b) for the graph adjacency matrix. See Table 2 for a summary of the data sets used in this work. See Table 3 for the hyperparameters used in the experiments. For parameter initialization, we use the Glorot uniform initializer. B.4 RUN TIME. See Table 4 for the run time (per epoch). As expected, a smaller sample size is more computationally efficient. SGD with consistent gradients runs faster than standard SGD and Adam, both of which admit approximately the same computational cost.
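As an illustration of the learning-to-rank use case, the sketch below samples the two sides of the pairwise loss independently. The specific loss form (a nonlinearity wrapped around an inner Monte Carlo average) is an assumption chosen to show why the plug-in estimate is biased for finite samples yet consistent; it is not necessarily the exact RankNet variant intended above.

```python
import torch

def sampled_ranking_loss(scores, above, below, n_i, n_j):
    # Plug-in estimate of a pairwise ranking loss of the assumed form
    #   mean_i log(1 + mean_j exp(s_j - s_i)),
    # where i ranges over documents ranked above the documents j.
    # Sampling i and j independently places a Monte Carlo average inside the
    # nonlinearity, so the estimator is biased for finite n_j but consistent
    # as n_j grows -- the situation analyzed in this paper.
    i = above[torch.randint(0, len(above), (n_i,))]
    j = below[torch.randint(0, len(below), (n_j,))]
    inner = torch.exp(scores[j].unsqueeze(0) - scores[i].unsqueeze(1)).mean(dim=1)
    return torch.log1p(inner).mean()
```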
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rygMWT4twS
Convergence theory for biased (but consistent) gradient estimators in stochastic optimization and application to graph convolutional networks
We consider the problem of uncertainty estimation in the context of (non-Bayesian) deep neural classification. In this context, all known methods are based on extracting uncertainty signals from a trained network optimized to solve the classification problem at hand. We demonstrate that such techniques tend to introduce biased estimates for instances whose predictions are supposed to be highly confident. We argue that this deficiency is an artifact of the dynamics of training with SGD-like optimizers, and it has some properties similar to overfitting. Based on this observation, we develop an uncertainty estimation algorithm that selectively estimates the uncertainty of highly confident points, using earlier snapshots of the trained model, before their estimates are jittered (and way before they are ready for actual classification). We present extensive experiments indicating that the proposed algorithm provides uncertainty estimates that are consistently better than all known methods. The deployment of deep learning models in applications with demanding decision-making components such as autonomous driving or medical diagnosis hinges on our ability to monitor and control their statistical uncertainties. Conceivably, the Bayesian framework offers a principled approach to infer uncertainties from a model; however, there are computational hurdles in implementing it for deep neural networks BID9. Presently, practically feasible (say, for image classification) uncertainty estimation methods for deep learning are based on signals emerging from standard (non Bayesian) networks that were trained in a standard manner. The most common signals used for uncertainty estimation are the raw softmax response BID4, some functions of softmax values (e.g., entropy), signals emerging from embedding layers BID20, and the MC-dropout method BID9 ) that proxies a Bayesian inference using dropout sampling applied at test time. These methods can be quite effective, but no conclusive evidence on their relative performance has been reported. A recent NIPS paper provides documentation that an ensemble of softmax response values of several networks performs better than the other approaches BID17.In this paper, we present a method of confidence estimation that can consistently improve all the above methods, including the ensemble approach of BID17. Given a trained classifier and a confidence score function (e.g., generated by softmax response activations), our algorithm will learn an improved confidence score function for the same classifier. Our approach is based on the observation that confidence score functions extracted from ordinary deep classifiers tend to wrongly estimate confidence, especially for highly confident instances. Such erroneous estimates constitute a kind of artifact of the training process with an stochastic gradient descent (SGD) based optimizers. During this process, the confidence in "easy" instances (for which we expect prediction with high confidence) is quickly and reliably assessed during the early SGD epochs. Later on, when the optimization is focused on the "hard" points (whose loss is still large), the confidence estimates of the easy points become impaired. Uncertainty estimates are ultimately provided in terms of probabilities. Nevertheless, as previously suggested BID10 BID20 BID17, in a non-Bayesian setting (as we consider here) it is productive to decouple uncertainty estimation into two separate tasks: ordinal ranking according to uncertainty, and probability calibration. 
Noting that calibration (of ordinal confidence ranking) already has many effective solutions BID21 BID23 BID27 BID11, our main focus here is on the core task of ranking uncertainties. We thus adopt the setting of BID17, and others BID10 BID20, and consider uncertainty estimation for classification as the following problem. Given labeled data, the goal is to learn a pair (f, κ), where f (x) is a classifier and κ(x) is a confidence score function. Intuitively, κ should assign lower confidence values to points that are misclassified by f, relative to correct classifications (see Section 2 for details).We propose two methods that can boost known confidence scoring functions for deep neural networks (DNNs). Our first method devises a selection mechanism that assigns for each instance an appropriate early stopped model, which improves that instance's uncertainty estimation. The mechanism selects the early-stopped model for each individual instance from among snapshots of the network's weights that were saved during the training process. This method requires an auxiliary training set to train the selection mechanism, and is quite computationally intensive to train. The second method approximates the first without any additional examples. Since there is no consensus on the appropriate performance measure for scoring functions, we formulate such a measure based on concepts from selective prediction BID10 BID26. We report on extensive experiments with four baseline methods (including all those mentioned above) and four image datasets. The proposed approach consistently improves all baselines, often by a wide margin. For completeness, we also validate our using probably-calibrated uncertainty estimates of our method that are calibrated with the well-known Platt scaling technique BID23 and measured with the negative log-likelihood and Brier score. In this work we consider uncertainty estimation for a standard supervised multi-class classification problem. We note that in our context uncertainty can be viewed as negative confidence and vice versa. We use these terms interchangeably. Let X be some feature space (e.g., raw image pixels) and Y = {1, 2, 3, . . ., k}, a label set for the classes. Let P (X, Y) be an unknown source distribution over X × Y. A classifier f is a function f: X → Y whose true risk w.r.t. DISPLAYFORM0 + is a given loss function, for example, the 0/1 error. Given a labeled set DISPLAYFORM1 We consider deep neural classification models that utilize a standard softmax (last) layer for multi-class classification. Thus, for each input x ∈ X, the vector f (x) = (f (x) 1,..., f (x) k ) ∈ R k is the softmax activations of the last layer. The model's predicted classŷ =ŷ f (x) = argmax i∈Y f (x) i.Consider the training process of a deep model f through T epochs using any mini-batch SGD optimization variant. For each 1 ≤ i ≤ T, we denote by f[i] a snapshot of the partially trained model immediately after epoch i. For a multi-class model f, we would like to define a confidence score function, κ(x, i, |f), where x ∈ X, and i ∈ Y. The function κ should quantify confidence in predicting that x is from class i, based on signals extracted from f. A κ-score function should induce a partial order over points in X, and thus is not required to distinguish between points with the same score. For example, for any softmax classifier f, the vanilla confidence score function is κ(x, i|f) = ∆ f (x) i (i.e., the softmax response values themselves). 
Perhaps due to the natural probabilistic interpretation of the softmax function (all values are non-negative and sum to 1), this vanilla κ has long been used as a confidence estimator. Note, however, that we are not concerned with the standard probabilistic interpretation (which needs to be calibrated to properly quantify probabilities BID11).An optimal κ (for f) should reflect true loss monotonicity in the sense that for every two labeled instances (x 1, y 1) ∼ P (X, Y), and (DISPLAYFORM2 In the domain of (deep) uncertainty estimation there is currently no consensus on how to measure performance (of ordinal estimators). For example, BID17 used the Brier score and the negative-log-likelihood to asses their , while treating κ values as absolute scores. In BID20 the area under the ROC curve was used for measuring performance. In this section we propose a meaningful and unitless performance measure for κ functions, which borrows elements from other known approaches. In order to define a performance measure for κ functions, we require a few concepts from selective classification BID7 BID25. As noted in , any κ function can be utilized to construct a selective classifier (i.e., a classifier with a reject option). Thus, selective classification is a natural application of confidence score functions based on which it is convenient and meaningful to assess performance. The structure of this section is as follows. We first introduce the (well known) terms selective classifier, selective risk and coverage. Then we introduce the risk-coverage curve. We propose to measure the performance of a κ function as the area under the risk-coverage curve (AURC) of a selective classifier induced by κ. The proposed measure is a normalization of AURC where we subtract the AURC of the best κ in hindsight. The benefit of the proposed normalization is that it allows for meaningful comparisons accross problems. We term the this normalized metric "excess AURC" (E-ARUC) and it will be used throughout the paper for performance evaluation of κ functions. A selective classifier is a pair (f, g), where f is a classifier, and g: X → {0, 1} is a selection function, which serves as a binary qualifier for f as follows, DISPLAYFORM0 The performance of a selective classifier is quantified using coverage and risk. Coverage, defined to DISPLAYFORM1, is the probability mass of the non-rejected region in X. The selective risk of DISPLAYFORM2 These two measures can be empirically evaluated over any finite labeled set S m (not necessarily the training set) in a straightforward manner. Thus, the empirical selective risk is, DISPLAYFORM3 whereφ is the empirical coverage,φ(f, DISPLAYFORM4 The overall performance profile of a family of selective classifiers (optimized for various coverage rates) can be measured using the risk-coverage curve (RC-curve), defined to be the selective risk as a function of coverage. Given a classifier f and confidence score function κ defined for f, we define an empirical performance measure for κ using an independent set V n of n labeled points. The performance measure is defined in terms of the following selective classifier (f, g) (where f is our given classifier), and the selection functions g is defined as a threshold over κ values, DISPLAYFORM5 Let Θ be the set of all κ values of points in V n, Θ = ∆ {κ(x,ŷ f (x)|f ): (x, y) ∈ V n }; for now we assume that Θ contains n unique points, and later we note how to deal with duplicate values. 
The performance of κ is defined to be the area under the (empirical) RC-curve (AURC) of the pair DISPLAYFORM6 Intuitively, a better κ will induce a better selective classifier that will tend to reject first the points that are misclassified by f. Accordingly, the associated RC-curve will decrease faster (with decreasing coverage) and the AURC will be smaller. For example, in Figure 1 we show (in blue) the RC-curve of classifier f obtained by training a DNN trained over the CIFAR-100 dataset. The κ induced by f is the softmax response confidence score, κ(x) = max i f (x) i. The RC-curve in the figure is calculated w.r.t. to an independent labeled set V n of n = 10, 000 points from CIFAR-100. Each point on the curve is the empirical selective risk of a selective classifier (f, g θ) such that θ ∈ Θ. As can be seen, the selective risk is monotonically increasing with coverage. For instance, at full coverage = 1, the risk is approximately 0.29. This risk corresponds to a standard classifier (that always predicts and does not reject anything). The risk corresponding to coverage = 0.5 is approximately 0.06 and corresponds to a selective classifier that rejects half of the points (those whose confidence is least). Not surprisingly, its selective risk is significantly lower than the risk obtained at full coverage. Figure 1: RC-curve for the CIFAR100 dataset with softmax response confidence score. Blue: the RC curve based on softmax response; black: the optimal curve that can be achieved in hindsight. An optimal in hindsight confidence score function for f, denoted by κ *, will yield the optimal risk coverage curve. This optimal function rates all misclassified points (by f) lower than all correctly classified points. The selective risk associated with κ * is thus zero at all coverage rates below 1 −r(f |V n). The reason is that the optimal function rejects all misclassified points at such rates. For example, in Figure 1 we show the RC-curve (black) obtained by relying on κ *, which reaches zero at coverage of 1 − 0.29 = 0.71 (red dot); note that the selective risk at full coverage is 0.29.Since the AURC of all RC-curves for f induced by any confidence scoring function will be larger than the AURC of κ *, we normalize by AURC(κ *) to obtain a unitless performance measure. To compute the AURC of κ *, we compute the discrete integral ofr (w.r.t. κ *) from the coverage level of 1 −r(f |V n) (0 errors) to 1 (nr errors). Thus, DISPLAYFORM7 We approximate using the following integral: DISPLAYFORM8 For example, the gray area in Figure 1 is the AURC of κ *, which equals 0.04802 (and approximated by 0.04800 using the integral).To conclude this section, we define the Excess-AURC (E-AURC) as E-AURC(κ, f |V n) = AURC(κ, f |V n) − AURC(κ *, f |V n). E-AURC is a unitless measure in, and the optimal κ will have E-AURC = 0. E-AURC is used as our main performance measure. The area of uncertainty estimation is huge, and way beyond our scope. Here we focus only on non-Bayesian methods in the context of deep neural classification. Motivated by a Bayesian approach, BID9 proposed the Monte-Carlo dropout (MC-dropout) technique for estimating uncertainty in DNNs. MC-dropout estimates uncertainty at test time using the variance statistics extracted from several dropout-enabled forward passes. The most common, and well-known approach for obtaining confidence scores for DNNs is by measuring the classification margin. 
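A small sketch of how E-AURC can be computed from a classifier's correctness indicators and κ scores on a validation set, following the definitions above: accumulate the empirical selective risk over coverage levels, then subtract the area under the optimal-in-hindsight curve. The discrete averaging over all n coverage levels is one possible instantiation of the integral approximation.

```python
import numpy as np

def e_aurc(confidence, correct):
    # Excess AURC of a confidence score kappa on a validation set.
    # confidence: kappa values (higher = more confident); correct: 1 if the
    # classifier f predicted the point correctly, 0 otherwise.
    n = len(confidence)
    order = np.argsort(-np.asarray(confidence))          # most confident first
    errors = 1 - np.asarray(correct)[order]
    coverage_counts = np.arange(1, n + 1)
    risks = np.cumsum(errors) / coverage_counts          # selective risk per coverage level
    aurc = risks.mean()
    # Optimal-in-hindsight kappa*: all errors are rejected before any correct point.
    opt_risks = np.cumsum(np.sort(errors)) / coverage_counts
    return aurc - opt_risks.mean()                       # multiply by 1e3 to match the tables
```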
When softmax is in use at the last layer, its values correspond to the distance from the decision boundary, where large values tend to reflect high confidence levels. This concept is widely used in the context of classification with a reject option in linear models and in particular, in SVMs BID0 BID3 BID8. In the context of neural networks, BID4 BID5 were the first to propose this approach and, for DNNs, it has been recently shown to outperform the MC-dropout on ImageNet BID10.A K-nearest-neighbors (KNN) algorithm applied in the embedding space of a DNN was recently proposed by BID20. The KNN-distances are used as a proxy for classconditional probabilities. To the best of our knowledge, this is the first non-Bayesian method that estimates neural network uncertainties using activations from non-final layers. A new ensemble-based uncertainty score for DNNs was proposed by BID17. It is well known that ensemble methods can improve predictive performance BID1. Their ensemble consists of several trained DNN models, and confidence estimates were obtained by averaging softmax responses of ensemble members. While this method exhibits a significant improvement over all known methods (and is presently state-of-the-art), it requires substantially large computing resources for training. When considering works that leverage information from the network's training process, the literature is quite sparse. BID14 proposed to construct an ensemble, composed of several snapshots during training to improve predictive performance with the cost of training only one model. However, due to the use of cyclic learning rate schedules, the snapshots that are averaged are fully converged models and produce a that is both conceptually and quantitatively different from our use of snapshots before convergence. BID15 similarly proposed to average the weights across SGD iterations, but here again the averaging was done on fully converged models that have been only fine-tuned after full training processes. Thus both these ensemble methods are superficially similar to our averaging technique but are different than our method that utilizes "premature" ensemble members (in terms of their classification performance). In this section we present an example that motivates our algorithms. Consider a deep classification model f that has been trained over the set S m through T epochs. Denote by f [i] the model trained at the ith epoch; thus, f = f [T]. Take an independent validation set V n of n labeled points. We monitor the quality of the softmax response generated from f (and its intermediate variants f [i] ), through the training process, as measured on points in V n. The use of V n allows us to make meaningful statements about the quality of softmax response values (or any other confidence estimation method) for unseen test points. We construct the example by considering two groups of instances in V n defined by confidence assessment assigned using the softmax response values f gives to points in V n. The green group contains the highest (99%-100%) percentile of most confident points in V n, and the red group contains the lowest (0%-1%) percentile of least confident points. Although the softmax response is rightfully criticized in its ability to proxy confidence BID9, it is reasonable to assume it is quite accurate in ranking green vs. 
red points (i.e., a prediction by f regarding a red point is likely to be less accurate than its prediction about a green point).We observe that the prediction of the green points' labels is learned earlier during training of f, compared to a prediction of any red point. This fact is evident in FIG0 where we see the training of f over CIFAR-100. Specifically, we see that the softmax response values of green points stabilize at their maximal values around Epoch 80. We also note that the green points in this top percentile are already correctly classified very early, near Epoch 25 (not shown in the figure). In contrast, red points continue to improve their confidence scores throughout. This observation indicates that green points can be predicted very well by an intermediate model such as f 130. Can we say that f 130 can estimate the confidence of green points correctly? Recall from Section 3 that a useful method for assessing the quality of a confidence function is the E-AURC measure (applied over an independent validation set). We now measure the quality of the softmax response of all intermediate classifiers shows the E-AURC of the red points. We see that for the green points, the confidence estimation quality improves (almost) monotonically and then degrades (almost) monotonically. The best confidence estimation is obtained by intermediate classifiers such as f 130. Surprisingly, the final model f [T] is one of the worst estimators for green points! In sharp contrast, the confidence estimates for the red points monotonically improves as training continues. The best estimator for red points is the final model f [T]. This behavior can be observed in all the datasets we considered (not reported). DISPLAYFORM0 The above dynamics indicates that the learning of uncertainty estimators for easy instances conceptually resembles overfitting in the sense that the assessment of higher confidence points in the test set degrades as training continues after a certain point. To overcome this deficiency we propose an algorithm that uses the concept of early stopping in a pointwise fashion, where for each sample (or set of samples) we find the best intermediate snapshot for uncertainty estimation. In this section, first we present a supervised algorithm that learns an improved scoring function for a given pair (f, κ), where f is a trained deep neural classifier, and κ is a confidence scoring function for f's predictions. In principle, κ: X → R, where κ(x) can be defined as any mapping from the activations of f applied on x to R. All the confidence estimation methods we described above comply with this definition.1 Our algorithm requires a labeled training sample. The second algorithm we present is an approximated version of the first algorithm, which does not rely on additional training examples. Let f be a neural classifier that has been trained using any (mini-batch) SGD variant for T epochs, and let F = ∆ {f [i]: 1 ≤ i ≤ T } be the set of intermediate models obtained during training (f [i] is the model generated at epoch i). We assume that f, the snapshots set F, and a confidence score function for f, κ(·, ·|f): X →), are given.2 Let V n be an independent training set. The Pointwise Early Stopping (PES) algorithm for confidence scores (see pseudo-code in Algorithm 1) operates as follows. The pseudo-code contains both the training and inference procedures. 
At each iteration of the training main loop (lines 3-11), we extract from V (which is initialized as a clone of the set V n) a set of the q most uncertain points. We abbreviate this set by S (the "layer"). The size of the layer is determined by the hyperparameter q. We then find the best model in F using the DISPLAYFORM0 for i = 0 to n/q do 4: DISPLAYFORM1 indicates the qth order statistic of r 6: DISPLAYFORM2 10: DISPLAYFORM3 end for 12:return K,Θ 13: end function 14: function ESTIMATE CONFIDENCE(x,f,K,Θ) 15: DISPLAYFORM4 DISPLAYFORM5 end function E-AURC measure with respect to S. This model, denoted f [j], is found by solving DISPLAYFORM6 The best performing confidence score over S, and the threshold over the confidence level, θ, are saved for test time (lines 8-9) and used to associate points with their layers. We iterate and remove layer after layer until V is empty. Our algorithm produces a partition of X comprising layers from least to highest confidence. For each layer we find the best performing κ function based on models from F.To infer the confidence rate for given point x at test time, we search for the minimal i that satisfies DISPLAYFORM7, where κ i and θ i are the i'th elements of K and Θ respectively. DISPLAYFORM8, where i is added to enforce full order on the confidence score between layers, recall that κ ∈. As we saw in Section 6.1, the computational complexity of the PES algorithm is quite intensive. Moreover, the algorithm requires an additional set of labeled examples, which may not always be available. The Averaged Early Stopping (AES) is a simple approximation of the PES motivated by the observation that "easy" points are learned earlier during training as shown in FIG0. By summing the area under the learning curve (a curve that is similar to 2(a)) we leverage this property and avoid some inaccurate confidence assessments generated by the last model alone. We approximate the area under the curve by averaging k evenly spaced points on that curve. Let F be a set of k intermediate models saved during the training of f, DISPLAYFORM0 where linspace(t, T, k) is a set of k evenly spaced integers between t and T (including t and T). We define the output κ as the average of all κs associated with models in F, DISPLAYFORM1 As we show in Section 7, AES works surprisingly well. In fact, due to the computational burden of running the PES algorithm, we use AES in most of our experiments below. We now present of our AES algorithm applied over the four known confidence scores: softmax response, NN-distance BID20, MC-dropout BID9 ans Ensemble BID17 ) (see Section 4). For implementation details for these methods, see Appendix A. We evaluate the performance of these methods and our AES algorithm that uses them as its core κ. In all cases we ran the AES algorithm with k ∈ {10, 30, 50}, and t = 0.4T. We experiment with four standard image datasets: CIFAR-10, CIFAR-100, SVHN, and Imagenet (see Appendix A for details).Our are reported in Table 4. The table contains four blocks, one for each dataset. Within each block we have four rows, one for each baseline method. To explain the structure of this table, consider for example the 4th row, which shows the corresponding to the softmax response for CIFAR-10. In the 2nd column we see the E-AURC (×10 3) of the softmax response itself (4.78). In the 3rd column, the of AES applied over the softmax response with k = 10 (reaching E-AURC of 4.81). 
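A minimal sketch of the AES scoring rule described above: average any underlying κ over k evenly spaced snapshots between epoch t = 0.4T and the final epoch T. The interface (a list of snapshot models and a callable κ) is hypothetical.

```python
import numpy as np

def aes_confidence(x, snapshots, kappa, k=30, t_frac=0.4):
    # Averaged Early Stopping: average an underlying confidence score kappa
    # over k evenly spaced snapshots between epoch t = 0.4*T and the final
    # epoch T, as described above.  snapshots = [f_1, ..., f_T].
    T = len(snapshots)
    start = int(t_frac * T)
    epochs = np.linspace(start, T - 1, k).round().astype(int)
    return float(np.mean([kappa(x, snapshots[i]) for i in epochs]))
```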
In the 4th column we specify percent of the improvement of AES over the baseline, in this case -0.7% (i.e., in this case AES degraded performance). For the imagenet dataset, we only present for the softmax response and ensemble. Applying the other methods on this large dataset was computationally prohibitive. Let us now analyze these . Before considering the relative performance of the baseline methods compares to ours, it is interesting to see that the E-AURC measure nicely quantifies the difficulty level of the learning problems. Indeed, CIFAR-10 and SVHN are known as relatively easy problems and the E-AURC ranges we see in the table for most of the methods is quite small and similar. CIFAR-100 is considered harder, which is reflected by significantly larger E-AURC values recorded for the various methods. Finally, Imagenet has the largest E-AURC values and is considered to be the hardest problem. This observation supports the usefulness of E-AURC. A non-unitless measure such as AUC, the standard measure, would not be useful in such comparisons. DISPLAYFORM0 It is striking that among all 42 experiments, our method improved the baseline method in 39 cases. Moreover, when applying AES with k = 30, it always reduced the E-AURC of the baseline method. For each dataset, the ensemble estimation approach of BID17 is the best among the baselines, and is currently state-of-the-art. It follows that for all of these datasets, the application of AES improves the state-of-the-art. While the ensemble method (and its improvement by AES) achieve the best on these datasets, these methods are computationally intensive. It is, therefore, interesting to identify top performing baselines, which are based on a single classifier. In CIFAR-10, the best (single-classifier) method is softmax response, whose E-AURC is improved 6% by AES (ing in the best single-classifier performance). Interestingly, in this dataset, NN-distance incurs a markedly bad E-AURC (35.1), which is reduced (to 4.58) by AES, making it on par with the best methods for this dataset. Turning to CIFAR-100, we see that the (single-classifier) top method is NN-distance, with an E-AURC of 45.56, which is improved by 22% using AES. Table 2: NLL and Brier score of AES method applied with Platt scaling on CIFAR-10, CIFAR-100, SVHN and Imagenet compared to the baseline method (calibrated as well). Next we examine AES applied together with probability calibration. We calibrate the of the AES algorithm using the Platt scaling technique; see BID23 for details. Platt scaling is applied on the of the AES algorithm with k = 30, and compared to the independently scaled underlying measure without AES. Performance is evaluated using both negative log-likelihood (NLL) and the Brier score BID2. For further implementation details of this experiment see Appendix A. The appear in Table 2. As can easily be seen, the probability scaling are remarkably consistent with our raw uncertainty estimates (measured with the E-AURC) over all datasets and underlying uncertainty methods. We conclude that AES also improves calibrated probabilities of the underlying uncertainty measures, and the E-AURC can serve as a reliable proxy also for calibrated probabilities. We implemented the PES algorithm only over the softmax response method (SR) for several datasets. 
To generate an independent training set, which is required by PES, we randomly split the original validation set (in each dataset) into two parts, taking a random 70% of the set for training our algorithm and using the remaining 30% for validation. Table 3: E-AURC and % improvement for the Pointwise Early Stopping algorithm (PES) compared to the softmax response (SR) on CIFAR-10, CIFAR-100 and SVHN. All E-AURC values are multiplied by 10^3 for clarity. When applying PES over NN-distance, the time complexity grows with n, m, T, k, and C_f(S_m), where k is the number of neighbours and C_f(S_m) is the time complexity of running a forward pass of m samples using the classifier f. Similarly, the complexity of PES when the underlying scores come from MC-dropout is O(dT C_f(V_n)), where d is the number of dropout iterations (forward passes) of the MC-dropout algorithm. Thus, when n = 7000 and T = 250 (the parameters used for applying PES over CIFAR-100), and with d = 100 (as recommended in BID9), this amounts to 175,000,000 forward passes. We set q = n/3. We repeated the experiment over 10 random training-validation splits and report the averages and standard errors in Table 3. As seen, PES reduced the E-AURC of softmax on all datasets by a significant margin. The best improvement was achieved on CIFAR-100 (E-AURC reduced by 18%). Our difficulties in applying the PES algorithm to many of the underlying confidence methods, together with the outstanding results of AES, motivate further research that should lead to improving the algorithm and making it more efficient. We presented novel uncertainty estimation algorithms, which are motivated by an observation regarding the training process of DNNs using SGD. In this process, reliable estimates generated in early epochs are later on deformed. This phenomenon somewhat resembles the well-known overfitting effect in DNNs. The PES algorithm we presented requires an additional labeled set and expensive computational resources for training. The approximated version (AES) is simple and scalable. The resulting confidence scores our methods generate systematically improve all existing estimation techniques on all the evaluated datasets. Both PES and AES overcome confidence score deformations by utilizing snapshot models that are generated anyway during training. It would be interesting to develop a loss function that explicitly prevents confidence deformations by design while maintaining high classification performance. In addition, the uncertainty estimation of each instance currently requires several forward passes through the network. Instead, it would be interesting to consider incorporating distillation BID13 so as to reduce inference time. Another direction to mitigate the computational effort at inference time is to approximate PES using a single model per instance, based on an early stopping criterion similar to the one proposed by BID19. Appendix A (implementation details). Softmax Response: For the softmax response method (SR) we simply take the relevant softmax value of the sample, κ(x, i|f) = f(x)_i. NN-distance: We implemented the NN-distance method using k = 500 for the nearest-neighbors parameter. We did not implement the two proposed extensions (embedding regularization and adversarial training): these add-ons trade the classification performance of f for better uncertainty estimation, which is not our interest here. Moreover, running NN-distance with these add-ons would require adding them to all other methods to allow a proper comparison.
The MC-dropout implemented with p = 0.5 for the dropout rate, and 100 feed-forward iterations for each sample. Ensemble: The Ensemble method is implemented as an average of softmax values across ensemble of 5 DNNs. Platt scaling BID23: The Platt scaling is applied as follows. Given a confidence measure κ and a validation set V, the scaling is the solution of the logistic regression from κ(x,ŷ f (x)|f ) to κ * (x,ŷ f (x)|f ), where κ * (x,ŷ f (x)|f ) is defined as 0 when x =ŷ f (x) and 1 otherwise. We train the logistic regression models based on all points in V. To validate the training of this calibration we randomly split the original test set to a training and test subsets. The calibration is learned over the training subset and evaluated on the test subset. The performance of the ing scaled probabilities has been evaluated using both negative log likelihood (NLL) and the Brier score BID2, which is simply the average L 2 distance between the predicted and the true probabilities. We provide here the table of the experiments of AES for softmax response and NN-distance now with standard errors. Due to computational complexity the standard error for all other methods has not been computed. Table 4: E-AURC and % improvement for AES method on CIFAR-10, CIFAR-100, SVHN and ImageNET for various k values compared to the baseline method. All E-AURC values are multiplied by 10 3 for clarity. In Section 5 we motivated our method by dividing the domain X to "easy points" (green) and "hard points" (red). We demonstrated that the "easy points" have a phenomenon similar to overfitting, where at some point during training the E-AURC measured for "easy points" start degrading. This observation strongly motivates our strategy that extracts information from early stages of the training process that helps to recover uncertainty estimates of the easy points. Here, we extend this demonstration that previously was presented done with respect to the softmax κ function. In Figures 3 and 4 we show plots similar to FIG0 (b,c) for the MC-dropout and NN-distance, respectively. It is evident that the overfitting occurs in all cases. but to a much lesser extent in the case of MC-dropout. This is consistent with the of the AES algorithm where E-AURC improvement over the MC-dropout was smaller compared to the improvements achieved for the other two methods. In the case of NN-distance a slight overfitting also affects the easy points, but the hard instances are affected much more severely. Thus, from this perspective in all three cases the proposed correction stratgey is potentially useful.(a) (b) Figure 3: The E-AURC of MC-dropout on CIFAR-100 along training for 5000 points with highest confidence (a), and 5000 points with lowest confidence (b).(a) (b) Figure 4: The E-AURC of NN-distance on CIFAR-100 along training for 5000 points with highest confidence (a), and 5000 points with lowest confidence (b).
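The Platt scaling step described above can be implemented as a one-dimensional logistic regression from raw κ values to correctness indicators on a held-out split; the sketch below uses scikit-learn for illustration and also includes the Brier score used for evaluation. The choice of solver and regularization settings is an assumption of this sketch.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def platt_scale(kappa_val, correct_val, kappa_test):
    # Fit a 1-D logistic regression from raw confidence scores to correctness
    # indicators on a held-out split, then return calibrated probabilities
    # for the test-time scores.
    lr = LogisticRegression()
    lr.fit(np.asarray(kappa_val).reshape(-1, 1), np.asarray(correct_val))
    return lr.predict_proba(np.asarray(kappa_test).reshape(-1, 1))[:, 1]

def brier_score(probs, correct):
    # Mean squared distance between predicted and true (0/1) probabilities.
    return float(np.mean((np.asarray(probs) - np.asarray(correct)) ** 2))
```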
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SJfb5jCqKm
We use snapshots from the training process to improve any uncertainty estimation method of a DNN classifier.
Existing public face image datasets are strongly biased toward Caucasian faces, and other races (e.g., Latino) are significantly underrepresented. The models trained from such datasets suffer from inconsistent classification accuracy, which limits the applicability of face analytic systems to non-White race groups. To mitigate the race bias problem in these datasets, we constructed a novel face image dataset containing 108,501 images which is balanced on race. We define 7 race groups: White, Black, Indian, East Asian, Southeast Asian, Middle Eastern, and Latino. Images were collected from the YFCC-100M Flickr dataset and labeled with race, gender, and age groups. Evaluations were performed on existing face attribute datasets as well as novel image datasets to measure the generalization performance. We find that the model trained from our dataset is substantially more accurate on novel datasets and the accuracy is consistent across race and gender groups. We also compare several commercial computer vision APIs and report their balanced accuracy across gender, race, and age groups. To date, numerous large scale face image datasets (; ; ; ; ; ; ; ; ; ; ;) have been proposed and fostered research and development for automated face detection (b;), alignment (Xiong & De la ;), recognition , generation (; ; ;), modification (; ;), and attribute classification . These systems have been successfully translated into many areas including security, medicine, education, and social sciences. Despite the sheer amount of available data, existing public face datasets are strongly biased toward Caucasian faces, and other races (e.g., Latino) are significantly underrepresented. A recent study shows that most existing large scale face databases are biased towards "lighter skin" faces (around 80%), e.g. White, compared to "darker" faces, e.g. Black . This means the model may not apply to some subpopulations and its may not be compared across different groups without calibration. Biased data will produce biased models trained from it. This will raise ethical concerns about fairness of automated systems, which has emerged as a critical topic of study in the recent machine learning and AI literature . For example, several commercial computer vision systems (Microsoft, IBM, Face++) have been criticized due to their asymmetric accuracy across sub-demographics in recent studies . These studies found that the commercial face gender classification systems all perform better on male and on light faces. This can be caused by the biases in their training data. Various unwanted biases in image datasets can easily occur due to biased selection, capture, and negative sets . Most public large scale face datasets have been collected from popular online media -newspapers, Wikipedia, or web search-and these platforms are more frequently used by or showing White people. To mitigate the race bias in the existing face datasets, we propose a novel face dataset with an emphasis on balanced race composition. Our dataset contains 108,501 facial images collected primarily from the YFCC-100M Flickr dataset (Thomee et al.), which can be freely shared for a research purpose, and also includes examples from other sources such as Twitter and online newspaper outlets. We define 7 race groups: White, Black, Indian, East Asian, Southeast Asian, Middle Eastern, and Latino. Our dataset is well-balanced on these 7 groups (See Figures 1 and 2) Our paper makes three main contributions. 
First, we emprically show that existing face attribute datasets and models learned from them do not generalize well to unseen data in which more nonWhite faces are present. Second, we show that our new dataset performs better on novel data, not only on average, but also across racial groups, i.e. more consistently. Third, to the best of our knowledge, our dataset is the first large scale face attribute dataset in the wild which includes Latino and Middle Eastern and differentiates East Asian and Southeast Asian. Computer vision has been rapidly transferred into other fields such as economics or social sciences, where researchers want to analyze different demographics using image data. The inclusion of major racial groups, which have been missing in existing datasets, therefore significantly enlarges the applicability of computer vision methods to these fields. The goal of face attribute recognition is to classify various human attributes such as gender, race, age, emotions, expressions or other facial traits from facial appearance (; ;). Table 1 summarizes the statistics of existing large scale public and in-the-wild face attribute datasets including our new dataset. As stated earlier, most of these datasets were constructed from online sources and are typically dominated by the White race. Face attribute recognition has been applied as a sub-component to other computer vision tasks such as face verification and person re-idenfication (; a;). It is imperative to ensure that these systems perform evenly well on different gender and race groups. Failing to do so can be detrimental to the reputations of individual service providers and the public trust about the machine learning and computer vision research community. Most notable incidents regarding the racial bias include Google Photos recognizing African American faces as Gorilla and Nikon's digital cameras prompting a message asking "did someone blink?" to Asian users . These incidents, regardless of whether the models were trained improperly or how much they actually affected the users, often in the termination of the service or features (e.g. dropping sensitive output categories). For this reason, most commercial service providers have stopped providing a race classifier. Face attribute recognition is also used for demographic surveys performed in marketing or social science research, aimed at understanding human social behaviors and their relations to demographic s of individuals. Using off-the-shelf tools and commercial services, social scientists have begun to use images of people to infer their demographic attributes and analyze their behaviors. Notable examples are demographic analyses of social media users using their photographs (; ; ; ;). The cost of unfair classification is huge as it can over-or under-estimate specific sub-populations in their analysis, which may have policy implications. AI and machine learning communities have increasingly paid attention to algorithmic fairness and dataset and model biases (; ; ; . There exist many different definitions of fairness used in the literature . In this paper, we focus on balanced accuracy-whether the attribute classification accuracy is independent of race and gender. More generally, research in fairness is concerned with a model's ability to produce fair outcomes (e.g. loan approval) independent of protected or sensitive attributes such as race or gender. 
Studies in algorithmic fairness have focused on either 1) discovering (auditing) existing bias in datasets or systems (; ; ;), 2) making a better dataset , or 3) designing a better algorithm or model (; ; ; ;). Our paper falls into the first two categories. The main task of interest in our paper is (balanced) gender classification from facial images. demonstrated many commercial gender classification systems are biased and least accurate on dark-skinned females. The biased may be caused by biased datasets, such as skewed image origins (45% of images are from the U.S. in Imagenet) or biased underlying associations between scene and race in images . It is, however, "infeasible to balance across all possible co-occurrences" of attributes , except in a lab-controlled setting. Therefore, the contribution of our paper is to mitigate, not entirely solve, the current limitations and biases of existing databases by collecting more diverse face images from non-White race groups. We empirically show this significantly improves the generalization performance to novel image datasets whose racial compositions are not dominated by the White race. Furthermore, as shown in Table 1, our dataset is the first large scale in-the-wild face image dataset which includes Southeast Asian and Middle Eastern races. While their faces share similarity with East Asian and White groups, we argue that not having these major race groups in datasets is a strong form of discrimination. 3 DATASET CONSTRUCTION 3.1 RACE TAXONOMY Our dataset defines 7 race groups: White, Black, Indian, East Asian, Southeast Asian, Middle Eastern, and Latino. Race and ethnicity are different categorizations of humans. Race is defined based on physical traits and ethnicity is based on cultural similarities . For example, Asian immigrants in Latin America can be of Latino ethnicity. In practice, these two terms are often used interchangeably. We first adopted a commonly accepted race classification from the U.S. Census Bureau (White, Black, Asian, Hawaiian and Pacific Islanders, Native Americans, and Latino). Latino is often treated as an ethnicity, but we consider Latino a race, which can be judged from the facial appearance. We then further divided subgroups such as Middle Eastern, East Asian, Southeast Asian, and Indian, as they look clearly distinct. During the data collection, we found very few examples for Hawaiian and Pacific Islanders and Native Americans and discarded these categories. All the experiments conducted in this paper were therefore based on 7 race classification. An important criterion to measure dataset bias is on which basis the bias should be measured: skin color or race? A few recent studies use skin color as a proxy to racial or ethnicity grouping. While skin color can be easily computed without subjective annotations, it has limitations. First, skin color is heavily affected by illumination and light conditions. The Pilot Parliaments Benchmark (PPB) dataset only used profile photographs of government officials taken in well controlled lighting, which makes it non-in-the-wild. Second, within-group variations of skin color are huge. Even same individuals can show different skin colors over time. Third, most importantly, race is a multidimensional concept whereas skin color (i.e. brightness) is one dimensional. Figure 5 in Appendix shows the distributions of the skin color of multiple race groups, measured by Individual Typology Angle (ITA) . 
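For reference, a sketch of the ITA skin-color measure mentioned above, using the common definition ITA = arctan((L* − 50)/b*) in degrees over the CIELAB color space; whether this matches the exact preprocessing of the cited works (e.g., face-region selection or illumination correction) is an assumption.

```python
import numpy as np
from skimage import color

def individual_typology_angle(rgb_patch):
    # ITA of a skin patch (RGB values in [0, 1]), using the common definition
    # ITA = arctan((L* - 50) / b*) expressed in degrees over CIELAB.
    lab = color.rgb2lab(rgb_patch)
    L, b = lab[..., 0], lab[..., 2]
    ita = np.degrees(np.arctan2(L - 50.0, b))
    return float(np.median(ita))   # robust summary over the patch
```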
As shown here, the skin color provides no information to differentiate many groups such as East Asian and White. Therefore, we explicitly use race and annotate the physical race by human annotators' judgments. To complement the limits of race categorization, however, we also use skin color, measured by ITA, following the same procedure used by. Many existing face datasets have been sourced from photographs of public figures such as politicians or celebrities (; ; ; ;). Despite the easiness of collecting images and ground truth attributes, the selection of these populations may be biased. For example, politicians may be older and actors may be more attractive than typical faces. Their images are usually taken by professional photographers in limited situations, leading to the quality bias. Some datasets were collected via web search using keywords such as "Asian boy" . These queries may return only stereotypical faces or prioritize celebrities in those categories rather than diverse individuals among general public. Our goal is to minimize the selection bias introduced by such filtering and maximize the diversity and coverage of the dataset. We started from a huge public image dataset, Yahoo YFCC100M dataset (Thomee et al.), and detected faces from the images without any preselection. A recent work also used the same dataset to construct a huge unfiltered face dataset (Diversity in Faces, DiF) . Our dataset is smaller but more balanced on race (See Figure 1). For an efficient collection, we incrementally increased the dataset size. We first detected and annotated 7,125 faces randomly sampled from the entire YFCC100M dataset ignoring the locations of images. After obtaining annotations on this initial set, we estimated demographic compositions of each country. Based on this statistic, we adaptively adjusted the number of images for each country sampled from the dataset such that the dataset is not dominated by the White race. Consequently, we excluded the U.S. and European countries in the later stage of data collection after we sampled enough White faces from those countries. The minimum size of a detected face was set to 50 by 50 pixels. This is a relatively smaller size compared to other datasets, but we find the attributes are still recognizable and these examples can actually make the classifiers more robust against noisy data. We only used images with "Attribution" and "Share Alike" Creative Commons licenses, which allow derivative work and commercial usages. We used Amazon Mechanical Turk to annotate the race, gender and age group for each face. We assigned three workers for each image. If two or three workers agreed on their judgements, we took the values as ground-truth. If all three workers produced different responses, we republished the image to another 3 workers and subsequently discarded the image if the new annotators did not agree. These annotations at this stage were still noisy. We further refined the annotations by training a model from the initial ground truth annotations and applying back to the dataset. We then manually re-verified the annotations for images whose annotations differed from model predictions. We first measure how skewed each dataset is in terms of its race composition. For the datasets with race annotations, we use the reported statistics. For the other datasets, we annotated the race labels for 3,000 random samples drawn from each dataset. See Figure 1 for the . 
As expected, most existing face attribute datasets, especially the ones focusing on celebrities or politicians, are biased toward the White race. Unlike race, we find that most datasets are relatively more balanced on gender ranging from 40%-60% male ratio. To compare model performance of different datasets, we used an identical model architecture, ResNet-34, to be trained from each dataset. We used ADAM optimization with a learning rate of 0.0001. Given an image, we detected faces using the dlib's (dlib.net) CNN-based face detector and ran the attribute classifier on each face. The experiment was done in PyTorch. Throughout the evaluations, we compare our dataset with three other datasets: UTKFace , LFWA+, and CelebA . Both UTKFace and LFWA+ have race annotations, and thus, are suitable for comparison with our dataset. CelebA does not have race annotations, so we only use it for gender classification. See Table 1 for more detailed dataset characteristics..971 --* CelebA doesn't provide race annotations. The was obtained from the whole set (white and non-white). † FairFace defines 7 race categories but only 4 races (White, Black, Asian, and Indian) were used in this to make it comparable to UTKFace. Using models trained from these datasets, we first performed cross-dataset classifications, by alternating training sets and test sets. Note that FairFace is the only dataset with 7 races. To make it compatible with other datasets, we merged our fine racial groups when tested on other datasets. CelebA does not have race annotations but was included for gender classification. Tables 2 and 3 show the classification for race, gender, and age on the datasets across subpopulations. As expected, each model tends to perform better on the same dataset on which it was trained. However, the accuracy of our model was highest on some variables on the LFWA+ dataset and also very close to the leader in other cases. This is partly because LFWA+ is the most biased dataset and ours is the most diverse, and thus more generalizable dataset. To test the generalization performance of the models, we consider three novel datasets. Note that these datasets were collected from completely different sources than our data from Flickr and not used in training. Since we want to measure the effectiveness of the model on diverse races, we chose the test datasets that contain people in different locations as follows. Geo-tagged Tweets. First we consider images uploaded by Twitter users whose locations are identified by geo-tags (longitude and latitude), provided by . From this set, we chose four countries (France, Iraq, Philippines, and Venezuela) and randomly sampled 5,000 faces. Media Photographs. Next, we also use photographs posted by 500 online professional media outlets. Specifically, we use a public dataset of tweet IDs posted by 4,000 known media accounts, e.g. @nytimes. Note that although we use Twitter to access the photographs, these tweets are simply external links to pages in the main newspaper sites. Therefore this data is considered as media photographs and different from general tweet images mostly uploaded by ordinary users. We randomly sampled 8,000 faces from the set. Protest Dataset. Lastly, we also use a public image dataset collected for a recent protest activity study . The authors collected the majority of data from Google Image search by using keywords such as "Venezuela protest" or "football game" (for hard negatives). 
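A sketch of the training setup described above (ResNet-34, Adam with learning rate 0.0001, PyTorch). The multi-head output arrangement and the number of age bins are assumptions of this illustration, not a specification of the released code.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_attribute_classifier(n_race=7, n_gender=2, n_age=9, lr=1e-4):
    # ResNet-34 attribute classifier trained with Adam (lr = 0.0001), as in
    # the setup described above.  A single fully connected layer emits the
    # concatenated race/gender/age logits; the head layout and the number of
    # age bins are assumptions of this sketch.
    net = models.resnet34(pretrained=True)
    net.fc = nn.Linear(net.fc.in_features, n_race + n_gender + n_age)
    optimizer = torch.optim.Adam(net.parameters(), lr=lr)
    return net, optimizer
```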
The dataset exhibits a wide range of diverse race and gender groups engaging in different activities in various countries. We randomly sampled 8,000 faces from the set. These faces were annotated for gender, race, and age by Amazon Mechanical Turk workers. Table 4: Gender classification accuracy measured on external validation datasets across gender-race groups. Table 7 shows the classification accuracy of different models. Because our dataset is larger than LFWA+ and UTKFace, we report the three variants of the FairFace model by limiting the size of a training set (9k, 18k, and Full) for fair comparisons. Improved Accuracy. As clearly shown in the , the model trained by FairFace outperforms all the other models for race, gender, and age, on the novel datasets, which have never been used in training and also come from different data sources. The models trained with fewer training images (9k and 18k) still outperform other datasets including CelebA which is larger than FairFace. This suggests that the dataset size is not the only reason for the performance improvement. Balanced Accuracy. Our model also produces more consistent -for race, gender, age classification -across different race groups compared to other datasets. We measure the model consistency by standard deviations of classification accuracy measured on different sub-populations, as shown in Table 5. More formally, one can consider conditional use accuracy equality (Berk et al.) or equalized odds as the measure of fair classification. For gender classification: where Y is the predicted gender, Y is the true gender, A refers to the demographic group, and D is the set of different demographic groups being considered (race). When we consider different gender groups for A, this needs to be modified to measure accuracy equality Berk et al.: We therefore define the maximum accuracy disparity of a classifier as follows: Table 4 shows the gender classification accuracy of different models measured on the external validation datasets for each race and gender group. The FairFace model achieves the lowest maximum accuracy disparity. The LFWA+ model yields the highest disparity, strongly biased toward the male category. The CelebA model tends to exhibit a bias toward the female category as the dataset contains more female images than male. The FairFace model achieves less than 1% accuracy discrepancy between male ↔ female and White ↔ non-White for gender classification (Table 7). All the other models show a strong bias toward the male class, yielding much lower accuracy on the female group, and perform more inaccurately on the non-White group. The gender performance gap was the biggest in LFWA+ (32%), which is the smallest among the datasets used in the experiment. Recent work has also reported asymmetric gender biases in commercial computer vision services , and our further suggests the cause is likely due to the unbalanced representation in training data. Data Coverage and Diversity. We further investigate dataset characteristics to measure the data diversity in our dataset. We first visualize randomly sampled faces in 2D space using t-SNE as shown in Figure 3. We used the facial embedding based on ResNet-34 from dlib, which was trained from the FaceScrub dataset , the VGG-Face dataset and other online sources, which are likely dominated by the White faces. The faces in FairFace are well spread in the space, and the race groups are loosely separated from each other. 
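Since the formula for the maximum accuracy disparity is not reproduced above, the sketch below computes one natural instantiation, the largest ratio between per-group accuracies, and should be read as an illustration rather than the paper's precise definition. The column names are hypothetical.

```python
import pandas as pd

def max_accuracy_disparity(df):
    # One possible instantiation of a maximum accuracy disparity: the largest
    # ratio between per-group classification accuracies.  df is assumed to
    # have columns 'group', 'y_true', 'y_pred' (names hypothetical).
    acc = df.groupby('group').apply(lambda g: (g['y_true'] == g['y_pred']).mean())
    return float(acc.max() / acc.min())
```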
To explicitly measure the diversity of faces in these datasets, we examine the distributions of pairwise distances between faces (Figure 4). On the random subsets, we first obtained the same 128-dimensional facial embedding from dlib and measured pairwise distances. Figure 4 shows the CDFs for the 3 datasets. As conjectured, UTKFace had more faces that are tightly clustered together and very similar to each other, compared to our dataset. Surprisingly, the faces in LFWA+ appeared very diverse and far from each other, even though the majority of the examples contain White faces. We believe this is mostly because the face embedding was also trained on a similarly White-oriented dataset, which makes it effective at separating White faces, not because the appearance of the faces is actually diverse (see Figure 2). Figure 4: Distribution of pairwise distances of faces in 3 datasets measured by L1 distance on the face embedding. Previous studies have reported that popular commercial face analytic models show inconsistent classification accuracies across different demographic groups. We used the FairFace images to test several online APIs for gender classification: Microsoft Face API, Amazon Rekognition, IBM Watson Visual Recognition, and Face++. Compared to prior work using politicians' faces, our dataset is much more diverse in terms of race, age, expressions, head orientation, and photographic conditions, and thus serves as a much better benchmark for bias measurement. We used 7,476 random samples from FairFace such that the set contains an equal number of faces from each race, gender, and age group. We left out children under the age of 20, as these pictures were often ambiguous and the gender could not be determined for certain. The experiments were conducted on August 13th-16th, 2019. Table 6: Gender classification accuracy of the tested APIs and of the FairFace model across the fourteen race-gender groups; rows marked * exclude mis-detections. The FairFace model attains the highest mean accuracy (.980) and the smallest spread across groups (.011). Table 6 shows the gender classification accuracies of the tested APIs. These APIs first detect a face from an input image and classify its gender. Not all 7,476 faces were detected by these APIs, with the exception of Amazon Rekognition, which detected all of them. Table 8 in the Appendix reports the detection rates. We report two sets of accuracies: 1) treating mis-detections as mis-classifications and 2) excluding mis-detections. For comparison, we included a model trained with our dataset to provide an upper bound for classification accuracy. Following prior work, we also show the classification accuracy as a function of skin color in Figure 6. The results suggest several findings.
First, all tested gender classifiers still favor the male category, which is consistent with the previous report . Second, dark-skinned females tend to yield higher classification error rates, but there exist many exceptions. For example, Indians have darker skin tones (Figure 5), but some APIs (Amazon and MS) classified them more accurately than Whites. This suggests skin color alone, or any other individual phenotypic feature, is not a sufficient guideline to study model bias. Third, face detection can also introduce significant gender bias. Microsoft's model failed to detect many male faces, an opposite direction from the gender classification bias. This was not reported in previous studies which only used clean profile images of frontal faces. This paper proposes a novel face image dataset balanced on race, gender and age. Compared to existing large-scale in-the-wild datasets, our dataset achieves much better generalization classification performance for gender, race, and age on novel image datasets collected from Twitter, international online newspapers, and web search, which contain more non-White faces than typical face datasets. We show that the model trained from our dataset produces balanced accuracy across race, whereas other datasets often lead to asymmetric accuracy on different race groups. This dataset was derived from the Yahoo YFCC100m dataset (Thomee et al.) for the images with Creative Common Licenses by Attribution and Share Alike, which permit both academic and commercial usage. Our dataset can be used for training a new model and verifying balanced accuracy of existing classifiers. Algorithmic fairness is an important aspect to consider in designing and developing AI systems, especially because these systems are being translated into many areas in our society and affecting our decision making. Large scale image datasets have contributed to the recent success in computer vision by improving model accuracy; yet the public and media have doubts about its transparency. The novel dataset proposed in this paper will help us discover and mitigate race and gender bias present in computer vision systems such that such systems can be more easily accepted in society. A APPENDIX Figure 5: Individual Typology Angle (ITA), i.e. skin color, distribution of different races measured in our dataset.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
S1xSSTNKDB
A new face image dataset for balanced race, gender, and age which can be used for bias measurement and mitigation
Dramatic advances in generative models have resulted in near photographic quality for artificially rendered faces, animals, and other objects in the natural world. In spite of such advances, a higher-level understanding of vision and imagery does not arise from exhaustively modeling an object, but instead from identifying higher-level attributes that best summarize the aspects of an object. In this work we attempt to model the drawing process of fonts by building sequential generative models of vector graphics. This model has the benefit of providing a scale-invariant representation for imagery whose latent representation may be systematically manipulated and exploited to perform style propagation. We demonstrate these results on a large dataset of fonts and highlight how such a model captures the statistical dependencies and richness of this dataset. We envision that our model can find use as a tool for designers to facilitate font design. (Figure 1 shows an example SVG command sequence, e.g. moveTo, lineTo (-2, 0.3), cubicBezier (-7.4, 0.2) (-14.5, 11.7) (-12.1, 23.4), ..., of the kind that enables such manipulations; all images shown are samples from this generative model.) The last few years have witnessed dramatic advances in generative models of images that produce near photographic quality imagery of human faces, animals, and natural objects. These models provide an exhaustive characterization of natural image statistics and represent a significant advance in this domain. However, these advances in image synthesis ignore an important facet of how humans interpret raw visual information, namely that humans seem to exploit structured representations of visual concepts. Structured representations may be readily employed to aid generalization and efficient learning by identifying higher-level primitives for conveying visual information or providing building blocks for creative exploration. This may be best seen in human drawing, where techniques such as gesture drawing emphasize parsimony for capturing higher-level semantics and actions with minimal graphical content. In this work, we focus on a subset of this domain where we think we can make progress and improve the generality of the approach. Font generation represents a 30-year-old problem posited as a constrained but diverse domain for understanding high-level perception and creativity. Early research attempted to heuristically systematize the creation of fonts for expressing the identity of characters (e.g. a, 2) as well as stylistic elements constituting the "spirit" of a font. Despite providing great inspiration, the results were limited by a reliance on heuristics and a lack of a learned, structured representation. Subsequent work for learning font representations focused on models with simple parameterizations, template matching, example-based hints, or, more recently, learning manifolds for geometric annotations. We instead frame the problem of generating fonts by specifying it with Scalable Vector Graphics (SVG), a common file format for fonts, human drawings, designs, and illustrations. SVGs are a compact, scale-invariant representation that may be rendered on most web browsers. SVGs specify an illustration as a sequence of higher-level commands paired with numerical arguments (FIG0). We take inspiration from the literature on generative models of images in rasterized pixel space (van den Oord et al.). Such models provide powerful auto-regressive formulations for discrete, sequential data and may be applied to rasterized renderings of drawings.
We extend these approaches to the generation of sequences of SVG commands for the inference of individual font characters. The goal of this work is to build a tool to learn a representation for font characters and style that may be extended to other artistic domains (Clouâtre et al.), or exploited as an intelligent assistant for font creation. Our main contributions are: 1) we build a generative model for scalable vector graphics (SVG) images and apply it to a large-scale dataset of 14 M font characters; 2) we demonstrate that the generative model provides a latent representation of font styles that captures a large amount of diversity and is consistent across individual characters; 3) we exploit the latent representation from the model to infer complete SVG fontsets from a single character; 4) we identify semantically meaningful directions in the latent representation that globally manipulate font style. We compiled a font dataset composed of 14 M examples across 62 characters (i.e. 0-9, a-z, A-Z), which we term SVG-Fonts. The dataset consists of fonts in a common font format (SFD) converted to SVG, excluding examples where the unicode ID does not match the targeted 62-character set specified above. In spite of the filtering, label noise exists across the roughly 220 K fonts examined. The proposed model consists of a variational autoencoder (VAE) and an autoregressive SVG decoder implemented in Tensor2Tensor. Briefly, the VAE is a convolutional encoder and decoder paired with instance normalization conditioned on the label (e.g. a, 2, etc.). The VAE is trained as a class-conditioned autoencoder, resulting in a latent code z that is largely class-independent. The latent z is composed of µ and σ: the mean and standard deviation of a multivariate Gaussian. The SVG decoder consists of 4 stacked LSTMs trained with dropout and a Mixture Density Network (MDN) at its final layer. The LSTM receives as input the previously sampled MDN output, concatenated with the discrete class label and the latent style representation z. The SVG decoder's loss is composed of a softmax cross-entropy loss over one-hot SVG commands plus the MDN loss applied to the real-valued arguments. In principle, the model may be trained end-to-end, but we found it simpler to train the two parts of the model separately. Note that both the VAE and MDN are probabilistic models that may be sampled many times during evaluation. The results shown here are the best selected out of 10 samples. We compiled the SVG-Fonts dataset wherein individual SFD font characters were normalized and converted into SVG format for training and evaluation. We trained a VAE and SVG decoder over 3 epochs of the data and evaluated the results on a hold-out test split. Over the course of training, we find that the model does indeed improve in terms of likelihood and plateaus in performance, while not overfitting on the training set (FIG0). Yet, we note a small but systematic spread in average likelihood across classes (FIG0). What follows is an analysis of the representational ability of the model to learn and generate SVG-specified fonts. We first ask whether the proposed model may learn a latent representation of font style that captures a large amount of diversity. We demonstrate this by generating SVG characters using the SVG decoder, while conditioning on a randomly sampled z. In FIG1 (left) we see that the decodings represent a wide array of font styles.
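To make the decoder just described more concrete, the following is a minimal PyTorch sketch of one autoregressive step and its loss (softmax cross-entropy over the discrete SVG command plus the MDN negative log-likelihood of the real-valued arguments). The command vocabulary, argument dimensionality, mixture size, and hidden sizes are assumptions for illustration; the paper's implementation is in Tensor2Tensor, not PyTorch.

import torch
import torch.nn as nn
import torch.nn.functional as F

N_CMDS, ARG_DIM, K = 4, 6, 20    # e.g. moveTo/lineTo/cubicBezier/EOS; 6 coordinates; 20 mixture components

class SVGDecoderStep(nn.Module):
    """One step: previous (command, args), class label, and style z -> command logits and MDN parameters."""
    def __init__(self, z_dim=32, n_classes=62, hidden=256):
        super().__init__()
        in_dim = N_CMDS + ARG_DIM + n_classes + z_dim
        self.lstm = nn.LSTM(in_dim, hidden, num_layers=4, dropout=0.1, batch_first=True)
        self.cmd_head = nn.Linear(hidden, N_CMDS)                    # softmax over SVG commands
        self.mdn_head = nn.Linear(hidden, K * (1 + 2 * ARG_DIM))     # mixture weights, means, log-stds

    def forward(self, prev, label, z, state=None):
        x = torch.cat([prev, label, z], dim=-1).unsqueeze(1)
        h, state = self.lstm(x, state)
        h = h.squeeze(1)
        pi, mu, log_sigma = self.mdn_head(h).split([K, K * ARG_DIM, K * ARG_DIM], dim=-1)
        return (self.cmd_head(h), pi,
                mu.view(-1, K, ARG_DIM), log_sigma.view(-1, K, ARG_DIM), state)

def step_loss(cmd_logits, pi, mu, log_sigma, cmd_target, arg_target):
    ce = F.cross_entropy(cmd_logits, cmd_target)                     # one-hot SVG command term
    comp = torch.distributions.Normal(mu, log_sigma.exp())
    log_prob = comp.log_prob(arg_target.unsqueeze(1)).sum(-1)        # (batch, K)
    mdn_nll = -torch.logsumexp(F.log_softmax(pi, dim=-1) + log_prob, dim=-1).mean()
    return ce + mdn_nll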
Because the VAE is conditioned on the class label, we expect that the latent representation z would only encode the font style, with minimal class information. We wish to exploit this model structure to perform style propagation across fonts. In particular, we ask whether a single character from a font set is sufficient to infer the rest of the font set in a visually plausible manner. To perform this task, we calculate the latent representation z for a single character and condition the SVG decoder on z as well as the label for all other font characters (i.e. 0-9, a-z, A-Z). FIG1 (right) shows the results of this experiment. For each row, z is calculated from the character in the red box. The other characters in that row are generated from the SVG decoder conditioned on z. We observe a perceptually-similar style consistently within each row. Note that there was no requirement during training that the same point in latent space would correspond to a perceptually similar character across labels - that is, the consistency across class labels was learned in an unsupervised manner. Thus, a single value of z seems to correspond to a perceptually-similar set of characters that resembles a plausible fontset. Additionally, we observe a large amount of style variety across rows (i.e. different z) in FIG1 (right). The variety indicates that the latent space z is able to learn and capture a large diversity of styles observed in the training set. Finally, we also note that for a given column the decoded glyph does indeed belong to the class that was supplied to the SVG decoder. These results indicate that z encodes style information consistently across different character labels, and that the proposed model largely disentangles class label from style. Given that the latent style is perceptually smooth and aligned across class labels, we next ask if we may find semantically meaningful directions in this latent space. In particular, we ask whether these semantically meaningful directions may permit global manipulations of font style. Inspired by the work on word vectors, we ask whether one may identify analogies for organizing the space of font styles (FIG2). To address this question, we select positive and negative examples for semantic concepts of organizing fonts (e.g. bold) and identify regions in latent space corresponding to the presence or absence of this concept (blue and red points). We compute the averages z_blue and z_red, and define the concept direction c = z_blue − z_red. We test if these directions are meaningful by taking an example font style z* from the dataset (FIG2, right, yellow), and adding (or subtracting) the concept vector c scaled by some parameter α. Finally, we compute the SVG decodings for z* + αc across a range of α. FIG2 shows the resulting fonts. Note that across the three properties examined, we observe a smooth interpolation in the direction of the concept modeled (e.g., in the first row, v becomes increasingly bold from left to right). We take these results to indicate that one may interpret semantically meaningful directions in the latent space. Additionally, these results indicate that one may find directions in the latent space to globally manipulate font style. In this work we presented a generative model for vector graphics. This model has the benefit of providing a scale-invariant representation for imagery whose latent representation may be systematically manipulated and exploited to perform style propagation.
We demonstrate these results on a large dataset of fonts and highlight the limitations of a sequential, stochastic model for capturing the statistical dependencies and richness of this dataset. Even in its present form, the current model may be employed as an assistive agent for helping humans design fonts in a more time-efficient manner.
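As a brief illustration of the latent-direction manipulation discussed above (the concept vector c = z_blue − z_red and decodings of z* + αc), here is a minimal sketch; the decoder call and the labeled example sets are placeholders for whatever trained model and hand-picked fonts one has available.

import numpy as np

def concept_direction(z_positive, z_negative):
    """Difference of mean latent codes between fonts showing a concept (e.g. bold) and fonts lacking it."""
    return np.mean(z_positive, axis=0) - np.mean(z_negative, axis=0)

def sweep_concept(decode_fn, z_star, c, alphas=np.linspace(-2.0, 2.0, 9), label="v"):
    """Decode z* shifted along the concept direction; decode_fn(z, label) returns an SVG command sequence."""
    return [decode_fn(z_star + a * c, label) for a in alphas]

# Example usage with placeholder latent codes inferred from hand-picked fonts:
# c_bold = concept_direction(z_bold_examples, z_regular_examples)
# glyphs = sweep_concept(svg_decoder, z_star, c_bold)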
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rklf4IUtOE
We attempt to model the drawing process of fonts by building sequential generative models of vector graphics (SVGs), a highly structured representation of font characters.
What can we learn about the functional organization of cortical microcircuits from large-scale recordings of neural activity? To obtain an explicit and interpretable model of time-dependent functional connections between neurons and to establish the dynamics of the cortical information flow, we develop'dynamic neural relational inference' (dNRI). We study both synthetic and real-world neural spiking data and demonstrate that the developed method is able to uncover the dynamic relations between neurons more reliably than existing baselines. Extraction of latent temporal dynamics in complex networks is important to understand their functional connectivity and to predict their behavior. Recently, various machine learning methods were used to encode/decode the behavior from recorded activity of large neuronal populations. However, in these mostly'static' brain models the temporal dynamics of the firing activity as well as interactions between different neurons are often neglected. It is expected, however, that the dynamic interactions in neural networks might be the key to understanding the brain computations. Addressing this, several methods have been proposed to uncover low-dimensional latent representations of neural network activity and its dynamics, including dimensionality reduction-based techniques such as principal components analysis and tensor components analysis, pattern extraction techniques based on matrix factorization such as ConvNMF and SeqNMF, and autoencoder models such as LFADS. However, temporal correlations between individual neurons in the network are often only modeled implicitly, hindering reconstruction of functional connectivity of the neural circuits. In contrast to these implicit techniques, here, we develop an extension to Neural Relational Inference, which we call'dynamic Neural Relational Inference' (dNRI). Specifically, we develop a new model to extract rapid dynamic changes of network activity in the form of a time-dependent adjacency matrix. We aim at extracting rapid (tens of milliseconds) correlations between recorded neurons that capture their functional relations across the network. Moreover, our method enables the tracking of the temporal evolution of this functional connectivity over the span of a trial. This means it can provide an interpretable approach to uncover hidden dynamical structure of brain information flows and to reconstruct the underlying functional brain circuitry. We demonstrate the applicability of our method on both synthetic spiking data and data recorded from the cortex of live and behaving mice. We are interested in recovering the dynamic flow of information between neurons, i.e., we want to estimate whether spiking of one neuron either excites or suppresses spiking of another neuron at various points in time. To address this task, we assume spiking information for a set of neurons to be available. We represent neural spiking information via matrices x ∈ {0, 1} N ×T, where N is the number of neurons recorded for T time bins and each entry represents the absence or presence of a spike for a particular neuron i at a given time bin t. The goal is to predict binary variables z (t) ij (hereafter called 'edges') for every pair (i, j) of neurons for every timestep t which indicate whether the spiking activity of neuron i influences that of neuron j. With the assumption that neurons i and j are connected, setting z (t) ij = 1 indicates that this connection is currently'active' at time t. 
To model this problem, we follow the recently introduced NRI formulation and learn a variational auto-encoder (VAE) whose observed variables represent neural spiking patterns x and whose latent variables z represent connections between neurons. See Fig. 1 for a depiction of the full model. Unlike Kipf et al., who focus on predicting static connectivity graphs, we model dynamic connections that vary across time. Additionally, they optimize the evidence lower bound (ELBO), but we instead use the β-VAE formulation described by Higgins et al.. More formally, we optimize the following variational objective: E_{q_φ(z|x)}[log p_θ(x|z)] − β KL(q_φ(z|x) || p(z)). This objective consists of three major components, which we will describe subsequently. Encoder. The encoder q_φ takes an entire neural spike train x as input and produces q_φ(z|x), which is an approximate posterior probability distribution for each connection variable. The encoder hence estimates the probability of a neuron i being connected to neuron j at time t. We use long short-term memory (LSTM) unit-based deep nets parameterized by φ as our encoder. q_φ is then used to sample likely interaction patterns, which are used by the decoder. Because the latent variables z are discrete, the process of sampling from their distribution is non-differentiable. Consequently, we follow prior work and sample instead from the concrete distribution, which approximates discrete sampling in a differentiable manner and enables backpropagating gradients from the decoder reconstruction all the way to the encoder parameters φ. Decoder. The decoder p_θ(x|z) models the probability of reconstructing the input spike train given a sampled set of edges from the approximate posterior. For this we also use an LSTM unit-based deep net and refer to its parameters via θ. A separate recurrent neural net (RNN) is used to model each neuron. To represent the influence of the predicted edges, these RNNs take as input a masked version of the predicted spiking probability for every neuron from the current time step, where the mask for each neuron is derived from the sampled edges. Prior. The choice of the prior p(z) is used to encourage sparsity of the modeled edges. Because we want edge predictions to be independent of each other, we use an independent Bernoulli prior p(z) = ∏_{t,i,j} p(z^{(t)}_{i,j}). Setting the probability of no edge (i.e., p(z^{(t)}_{i,j} = 0)) larger than 0.5 reduces the prediction of spurious edges. On synthetic data, we found that using a value of 0.8 worked well for our experiments. For the real data, however, we found that using a strong no-edge probability prevented the model from picking up the relatively sparse connections, so we used a uniform prior for the experiments on real-world spiking data reported below. To train the parameters θ and φ of the decoder and encoder, we proceed as follows: for each spike train in the current minibatch, the encoder first predicts the approximate posterior q_φ(z|x) for each latent variable. We then sample from this distribution as discussed previously. Given these samples ẑ, we then predict spiking activity using the decoder p_θ(x|ẑ). For training, we use ground-truth spikes as the decoder input; during testing, predictions for each time step are fed as input into the next step.
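A minimal PyTorch sketch of the two pieces described above, the differentiable edge sampling from the concrete (Gumbel-softmax) distribution and the β-VAE objective with an independent Bernoulli prior, is given below; the tensor shapes, the temperature, and β are assumptions, and the encoder/decoder LSTMs are left out.

import torch
import torch.nn.functional as F

def sample_edges(edge_logits, tau=0.5, hard=True):
    """Differentiable draw of the binary edge variables z_ij^(t).
    edge_logits: (batch, T, N, N, 2) unnormalized log-probabilities over {no-edge, edge}."""
    z = F.gumbel_softmax(edge_logits, tau=tau, hard=hard, dim=-1)
    return z[..., 1]                                    # keep the 'edge present' channel

def dnri_loss(spike_logits, spikes, edge_logits, prior_no_edge=0.8, beta=1.0):
    """Bernoulli reconstruction of the spike train plus beta-weighted KL to the Bernoulli edge prior."""
    recon = F.binary_cross_entropy_with_logits(spike_logits, spikes, reduction="mean")
    q = F.softmax(edge_logits, dim=-1)                  # approximate posterior over {no-edge, edge}
    prior = torch.tensor([prior_no_edge, 1.0 - prior_no_edge], device=edge_logits.device)
    kl = (q * (q.clamp_min(1e-8).log() - prior.log())).sum(-1).mean()
    return recon + beta * kl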
We demonstrate the efficacy of dNRI using two types of data. The first is a set of three synthetic datasets consisting of 12 simulated neurons with baseline spiking rates each sampled from the interval [0.1, 0.3]. Additional spikes are generated as follows: time is divided into four phases, with each phase containing 10 randomly sampled neuron pairs (i, j) which indicate that whenever neuron i spikes at time t, neuron j will spike at time t + 1 with probability 1, 0.8, or 0.6 (hereafter referred to as the 'edge probability'). The second type of data consists of spiking activity of 24 neurons, binned at 20 ms and recorded from the primary somatosensory cortex of a mouse actively navigating in a tactile virtual reality while motorized walls were moved towards and away from the animal's snout. We compare the proposed approach on the synthetic data to four baselines using the following metrics: the ability to find the underlying edges (measured via F1) and the normalized reconstruction error of neural spiking, computed as ‖x − x̂‖_F / ‖x‖_F, where x is the original spiking data, x̂ is the predicted reconstruction, and ‖·‖_F denotes the Frobenius norm. Each dataset is separated into train, validation, and test splits, with dNRI models being trained on the train split and hyperparameters being tuned using performance on the validation split. Test set results are presented. We use the following baselines: Tensor Component Analysis (TCA) is a PCA extension that factorizes the data into time, trial, and neuron components. We first convolve the input data with a Gaussian filter. After running TCA, we take the outer product of neuron factors with themselves to find neurons that spike close to each other. Predicted edges are then obtained by multiplying this by the time and trial components to get predictions at each time step. SeqNMF is an extension of non-negative matrix factorization that produces a matrix factor representing neural activity for some fixed length of time and a vector factor representing time. To predict edges from learned factors, we take the outer products of all columns of the neuron factor, which produces edge matrices whose values are large for neurons that spike in sequence. We multiply these by their corresponding time factors to get predictions per time step, and sum the contributions from each factor to obtain final edges. Static (d)NRI employs a static-graph dNRI model where the outputs of the encoder edge LSTM are averaged across time before computing the final edge probabilities. These probabilities are then used for all time bins. GLM consists of a Bernoulli generalized linear model per neuron, using all neuron spiking history as covariates. Note that not all baselines were originally developed for this task, yet we think they are applicable. Synthetic Data Results. The computed metrics for all of the synthetic datasets for all models are reported in Tab. 1. None of the baselines are able to recover the dynamic connections reliably. In contrast, dNRI is able to recover these interactions to a high degree of accuracy. Also note the benefits of dynamically estimating adjacency as opposed to a static interaction. Moreover, this performance is maintained when the edge spiking probability becomes smaller. Many of the baselines outperform dNRI at reconstructing the original spiking activity, but this is a consequence of the difference in training objectives or inference procedures. Fig. 2 visualizes the edge predictions made by dNRI.
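The two evaluation metrics used above are simple to compute; a short numpy sketch, with assumed tensor shapes, is given here for reference.

import numpy as np

def edge_f1(pred_edges, true_edges):
    """F1 score between predicted and ground-truth binary edge tensors of shape (T, N, N)."""
    pred, true = pred_edges.astype(bool), true_edges.astype(bool)
    tp = np.logical_and(pred, true).sum()
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(true.sum(), 1)
    return 2 * precision * recall / max(precision + recall, 1e-8)

def normalized_reconstruction_error(x, x_hat):
    """||x - x_hat||_F / ||x||_F for spike matrices of shape (N, T)."""
    return np.linalg.norm(x - x_hat) / np.linalg.norm(x)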
Mouse Cortical Recording Data Results. In the analysis of the real-world data, we focus on a choice period between 400ms from the start of the trial, when the animal starts to sense the approaching wall, and 950ms, when the animal is making a decision to change the run direction to avoid the approaching wall. We present the results on this data in Fig. 3, focusing on several frames that correspond to the last stage of sensory information processing, when the animal has almost made a choice and is preparing for motor action. Neurons are ordered with respect to their cortical depth and assigned to specific cortical layers. While the overall population spiking activity is relatively dense, the significant correlations revealed by dNRI are sparse. This is expected, as we are focusing only on rapid correlations to reveal putative monosynaptic connections. Correlations are also transient, with a typical lifetime on the order of 90ms. In Fig. 3, we highlight several neuron pairs to exemplify the power of our representation: dNRI infers transient information flow from L2/3 to L5B neurons (red curve), as well as communications within deep L5A and L5B (blue and green curves), which are the strongest outputs of the somatosensory barrel columns. Similar to the analysis of the synthetic data trials, neither SeqNMF, GLM, nor TCA is able to capture the fast transient features revealed by dNRI. Fig. 4 displays the cross-correlations between the neuron pair whose predicted edges are highlighted in red in Fig. 3. The use of cross-correlations is a standard method of analysis used to discover putative monosynaptic connections between neurons. The result in Fig. 4 indicates the presence of such a connection; however, this sort of analysis does not provide any information regarding when this connection is being actively used in the network. As displayed in Fig. 3, the dNRI model was not only able to successfully detect the presence of this connection, but it also predicts when this connection is active. In other words, dNRI allows for additional analyses of neural spiking data that are not possible when using a static analysis technique. We develop a method to explicitly extract time-dependent functional relations from large-scale recordings of neural spiking data in cortical networks. Using simulated spiking data for which ground truth is available, as well as real data, we demonstrate that the proposed approach is able to recover the implanted interactions more accurately than baselines which model relations implicitly.
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
S1leV7t8IB
We develop 'dynamic neural relational inference', a variational autoencoder model that can explicitly and interpretably represent the hidden dynamic relations between neurons.
DeePa is a deep learning framework that explores parallelism in all parallelizable dimensions to accelerate the training process of convolutional neural networks. DeePa optimizes parallelism at the granularity of each individual layer in the network. We present an elimination-based algorithm that finds an optimal parallelism configuration for every layer. Our evaluation shows that DeePa achieves up to 6.5× speedup compared to state-of-the-art deep learning frameworks and reduces data transfers by up to 23×. Training convolutional neural networks (CNNs) is increasingly compute-intensive and time-consuming. It takes days or even weeks to train deep CNNs from scratch BID10 BID12 BID9 BID11. Existing deep learning frameworks such as TensorFlow, PyTorch, and Caffe2 parallelize the training process onto multiple processors (usually GPUs) using image parallelism: dividing the entire image dataset into batches with the same number of images and assigning each batch to a dedicated processor. The standard parallelization of CNN training only exploits image parallelism. However, other dimensions can also parallelize the training process. For example, in CNNs for 2D images, data is commonly organized as 4-dimensional tensors (i.e., image, height, width, channel). The image dimension includes an index for each image in the input dataset. The height and width dimensions specify a position in an image. For a particular position, the channel dimension indexes different neurons for that position. Exploring these other parallelizable dimensions can potentially reduce the compute time and data transfer cost when training CNNs (see Section 2). Moreover, different layers in a CNN may prefer different parallelism configurations for achieving optimal performance. We propose DeePa, a deep learning framework that explores parallelism in all parallelizable dimensions to accelerate the training of CNNs. To the best of our knowledge, DeePa is the first system that models and exploits the parallelism of neural networks at the granularity of each individual layer. To generate a parallelism configuration for each layer, DeePa uses an elimination-based algorithm that automatically finds the configuration with the best estimated performance. The main contributions of this paper are: • We present DeePa, a deep learning framework that explores parallelism in all parallelizable dimensions to accelerate the training of CNNs. • The parallelization strategy is selected at the granularity of each individual layer. • We present an elimination-based algorithm for finding the parallelism configuration with optimal estimated performance for each layer. • Our evaluation shows that, compared to state-of-the-art deep learning frameworks (e.g., TensorFlow and PyTorch), DeePa achieves 6.5×, 1.9×, and 1.5× speedup for AlexNet, VGG-16, and Inception-v3, respectively. The performance improvement comes from reducing overall data transfers, automatically overlapping computation with data movement, and accelerating computation throughput. This work is motivated by the following observations. Convolutional layers generally consume the bulk of the training time in CNNs, and parallelizing training in different data dimensions results in significantly different performance. Figure 1 shows the relative speed of training six different convolutional layers from AlexNet, VGG-16, and Inception-v3. The properties of the convolutional layers are shown in TAB0.
For each convolutional layer, we tried parallelizing the computation in each individual parallelizable dimension as well as in combinations of different parallelizable dimensions, and we report the performance of the standard parallelization over images along with the worst and best parallelization strategies we discovered. Figure 1 shows that different parallelism configurations result in very different performance, and image parallelism generally achieves suboptimal performance. Therefore, exploring parallelism in other dimensions can potentially accelerate the training of convolutional layers. Different parallelization strategies can also result in significantly different amounts of data movement. FIG3 shows an example of parallelizing the first fully-connected layer of VGG-16 on two GPUs in different dimensions. In image parallelism (FIG3), each GPU processes a batch of images and computes the gradient for the entire fully-connected layer. This requires each GPU to synchronize the gradients for the entire fully-connected layer (shown as the shadow rectangles) after each step. An alternative approach (FIG3) parallelizes in the channel dimension by assigning a subset of the output channels to each GPU. As a result, different GPUs compute the gradients for disjoint subsets of the fully-connected layer, which eliminates transferring the fully-connected layer but introduces additional data transfers for input tensors (shown as the shadow rectangles). For this particular case, using parallelism in the channel dimension reduces data transfer costs by 12×. When processing a batch of images, increasing the number of workers does not always improve overall execution time, due to the data transfer overhead of synchronizing gradients across different workers. FIG1 shows the per-step training time for three different layers in Inception-v3 for a batch size of 512 images on up to 16 GPUs. The training time includes forward processing, backward propagation, and gradient aggregation. The figure shows that different layers in a neural network may prefer different hardware configurations, and there is no single configuration that is optimal for all layers. For example, the third layer performs best on 16 GPUs while the last layer performs best on 4 GPUs. Thus, a parallelism configuration includes both selecting the data dimensions to be parallelized and the number of parallel workers (or, equivalently, the number of subsets into which the data is partitioned). (Figure: example parallelism configurations over the height, width, and channel dimensions: (a) (n=1, c=1, h=1, w=4), (b) (n=1, c=1, h=4, w=1), (c) (n=1, c=4, h=1, w=1), (d) (n=1, c=1, h=2, w=2).) Similar to TensorFlow and PyTorch, DeePa uses computation graphs to describe dependencies between operations. In a computation graph G = (V, E), each node n ∈ V is an operation (e.g., a convolution or matrix-multiply), and each directed edge (u, v) ∈ E is a tensor that is an output of u and an input of v. One key difference between DeePa and TensorFlow or PyTorch is that each node in the DeePa computation graph also includes a configuration that describes how the corresponding operation is parallelized across different workers. For each parallelizable dimension (i.e., image, height, width, and channel), the configuration includes an integer that describes the degree of parallelism in that dimension. For a configuration, the product of the integers over all dimensions is the number of workers needed to process the operation in that configuration.
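A configuration of this kind is easy to represent explicitly; the following is a small illustrative sketch (in Python, purely for exposition; DeePa itself is implemented on the Legion runtime in C++).

from dataclasses import dataclass

@dataclass(frozen=True)
class Config:
    """Degree of parallelism in each dimension: image (n), channel (c), height (h), width (w)."""
    n: int = 1
    c: int = 1
    h: int = 1
    w: int = 1

    @property
    def num_workers(self):
        # The product of the per-dimension degrees is the number of workers the configuration needs.
        return self.n * self.c * self.h * self.w

# e.g. Config(n=1, c=1, h=2, w=2).num_workers == 4: each worker computes one height/width quadrant.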
FIG4 demonstrates some example configurations that explore parallelism in a single dimension as well as combinations of different dimensions. DeePa assumes equal partitioning in each dimension. As a result, each worker receives the same size input, which provides well-balanced workload distribution in our experiments. For each node in the computation graph, its configuration describes how the output tensor is divided onto multiple workers. Each worker computes a disjoint subset of the output tensor, and thus each worker can process the operation in parallel without data dependencies. Given a node's configuration, DeePa calculates the input sets for each worker and automatically schedules proper data transfers between operations. DeePa also provides three additional functions: • For each node v and configuration c, v.compute(c) estimates the time to process the corresponding operation under the parallelism configuration c. This includes both the forward processing and back propagation time and is estimated by running the operation in that configuration multiple times on the device and measuring the average execution time. • For each edge e = (u, v), e.xfer(c_u, c_v) estimates the time to transfer the input tensor e to each worker, using the size of the data to be moved and the known communication bandwidth. Note that e.xfer(c_u, c_v) is zero if u and v have the same configuration (i.e., c_u = c_v), in which case no data is transferred. As with compute, we precompute the xfer function for each edge in the graph by calculating the overall data transfer size for all possible source and destination configurations. • For each node v and configuration c, v.update(c) estimates the time to update parameters for the corresponding operation. We use the data transfer time to approximate the update time, since the data transfer time is much longer than the compute time for updating parameters. Note that different configurations can have significantly different update times, as described in Section 2.2. A global configuration g includes a parallelism configuration for each node in a computation graph: g(v) describes the parallelism configuration for node v. Using the functions defined above, we can model the per-step execution time for a computation graph: Cost(g, (V, E)) = Σ_{v ∈ V} (v.compute(g(v)) + v.update(g(v))) + Σ_{e=(u,v) ∈ E} e.xfer(g(u), g(v)). (1) Cost(g, (V, E)) estimates the per-step execution time if the computation graph (V, E) is parallelized using global configuration g. This execution time includes forward processing, backward propagation, and gradient aggregation. Equation 1 expresses the problem of finding the configuration for each individual node as a global optimization problem.
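For concreteness, a small sketch of this cost model and a brute-force minimizer over global configurations is shown below (Python for exposition only; the graph and estimator objects are assumed interfaces, and the exhaustive search is exactly what the elimination algorithm described next avoids).

from itertools import product

def cost(graph, g):
    """Per-step execution time under global configuration g (Equation 1).
    graph.nodes: nodes exposing compute(c) and update(c) time estimators;
    graph.edges: (u, v, e) triples where e exposes xfer(c_u, c_v); g maps node -> configuration."""
    node_time = sum(v.compute(g[v]) + v.update(g[v]) for v in graph.nodes)
    xfer_time = sum(e.xfer(g[u], g[v]) for (u, v, e) in graph.edges)
    return node_time + xfer_time

def brute_force_best(graph, candidate_configs):
    """Exhaustive minimization of Equation 1; exponential in the number of nodes,
    hence only usable on tiny graphs."""
    nodes = list(graph.nodes)
    best_cost, best_g = float("inf"), None
    for combo in product(candidate_configs, repeat=len(nodes)):
        g = dict(zip(nodes, combo))
        c = cost(graph, g)
        if c < best_cost:
            best_cost, best_g = c, g
    return best_cost, best_g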
For each node w with a single in-edge e 1 = (u, w) and a single out-edge e 2 = (w, v), we remove node w and the two edges e 1 and e 2 from the graph and insert a new edge e = (u, v) (shown in FIG7). The xfer function for node e is e.xfer(c u, c v) = min DISPLAYFORM0 Note that because we have precomputed the xfer function for edges in the original graph, we can similarly compute the xfer function for the transitive edge added by a node elimination; i.e., we use dynamic programming to compute the optimal configuration for node w for every possible choice of configurations for nodes u and v. For CNNs with a linear computation graph (e.g., AlexNet and VGG-16), node elimination is sufficient to reduce the original graph to a graph with only 2 nodes. Edge elimination. For two edges with the same source and destination node (i.e., e 1 = (u, v) and e 2 = (u, v)), we can remove e 1 and e 2 from the graph and insert a new edge e = (u, v) (shown in FIG7). The xfer function for node e is e.xfer(c u, DISPLAYFORM1 Concat (c) After edge elimination. As with node elimination, we compute the xfer function for e using the already computed xfer functions for e 1 and e 2. FIG9 shows how DeePa iteratively eliminates nodes and edges for an Inception-v3 module. The full Inception-v3 computation graph has 120 nodes, which DeePa reduces to a 2-node graph. DeePa iteratively uses node and edge eliminations to simplify a computation graph until neither elimination can be applied. DeePa then enumerates all global configurations for the final graph and chooses the one that minimizes the Cost function in Equation 1.After deciding the configuration for each node in the final graph, DeePa then decides the configuration for the eliminated nodes by undoing the node and edge eliminations in reverse order. When undoing a node elimination for node w, DeePa selects the configuration that minimizes Equation 2 for node w. After undoing all eliminations, DeePa has a configuration for every node in the original graph. In Appendix A.1, we prove that our algorithm finds an optimal global configuration. In our experiments, DeePa finds an optimal configuration for parallelizing the largest CNN we have worked with, Inception-v3, on 16 GPUs in about 100ms. We found that it is non-trivial to parallelize the training of CNNs in the height, width, and channel dimensions in existing frameworks (e.g., TensorFlow, PyTorch, and Caffe2), and none provides an interface for controlling per-operation parallelism. We implemented DeePa in Legion BID3, a high-performance parallel runtime for distributed heterogeneous architectures, and use cuDNN BID4 and cuBLAS (cub, 2016) as the underlying libraries for processing neural network operations. The following Legion features significantly simplify our implementation for DeePa. First, Legion supports high-dimensional partitioning that allows us to parallelize any operation in any combination of the dimensions. Second, Legion allows DeePa to control parallelism at the granularity of each operation. Third, Legion allows fine-grain control over the placement of data in memory. Fourth, Legion's asynchronous tasking model makes it easy to exploit task as well as image parallelism. We also include two critical optimizations that help achieve good performance. Overlapping computation with data transfers. DeePa manages the gradients of each operation separately and transfers an operation's gradients as long as its back propagation is completed. 
We have found that this can effectively hide the data transfer overhead for gradient synchronization. As a , the synchronous training performance matches asynchronous training in DeePa, which allows users to use synchronous training with its better algorithmic efficiency. Distributing parameter servers. Existing frameworks use parameter servers to store and update variables for a CNN model. Parameter servers are located in CPU memory in TensorFlow and PyTorch. Because DeePa manages the parameters for each operation separately, DeePa can opportunistically distribute the parameter server onto the GPU memories whenever possible. This eliminates data transfers for operations whose gradients and parameter server are located on the same GPU and transforms all GPU to CPU copies into faster GPU to GPU copies. To the best of our knowledge, DeePa is the first deep learning framework that controls and optimizes the parallelism of neural networks in all dimensions at the granularity of each operation. Existing frameworks such as TensorFlow BID2 ), Caffe2 (, and PyTorch use image parallelism to distribute the training of CNNs and only explore parallelism in the image dimension. The standard image parallelism configuration keeps a replica of the entire network on each worker, which in large data transfers for synchronizing the gradients in each step. BID7 uses model parallelism that assigns each operation to a dedicated processor for training Inception-v3. It uses a reinforcement learning algorithm to optimize the placement of each operation on a GPU device. The learned device placement on 4 GPUs achieves 19% speedup compared to single GPU performance. However, parallelism in each operation is not explored. Krizhevsky FORMULA4 introduces "one weird trick" (OWT) that combines image parallelism with model parallelism to accelerate the distributed training of AlexNet, which efficiently reduces the data transfer cost compared to the standard image parallelism configuration. In Section 7.1.2, we show that DeePa further reduces the overall data transfers for AlexNet by 3× and the per-step training time by 2.3× compared to OWT. BID5 empirically shows no loss of accuracy for training ResNet-50 on the ImageNet dataset with a large minibatch size of 8192 images 3. It uses the standard image parallelism configuration to distribute the training onto 256 GPUs and includes a number of optimizations for reducing communication overhead. As communication is a bottleneck in distributed deep learning, we believe our techniques for reducing data transfers can substantially benefit training on large numbers of GPUs. We use AlexNet BID6, VGG-16 BID9, and Inceptionv3 BID11 as benchmark CNNs and use the ImageNet dataset BID8 as the input. For each CNN, we compare the performance of DeePa against TensorFlow, PyTorch, and OWT. We implement OWT in DeePa by restricting all convolutional and pooling layers to use image parallelism and all fully-connected layers to use model parallelism. We conduct a detailed case study for training the three CNNs on a 16-GPU machine, with two Intel 10-core E5-2680 Xeon processors, 256 GB main memory, and 16 NVIDIA Tesla K80 GPUs 4. We use all 16 GPUs for training each CNN model with a minibatch size of 512 images. As a , each GPU processes a batch of 32 images in the image parallelism configuration. DeePa uses the search algorithm in Section 4 to find the optimal parallelism configurations, which requires 0.7, 1.1, and 4.8 seconds for AlexNet, VGG-16, and Inception-v3, respectively. 
Figure 7 shows the synchronous training throughput for a minibatch size of 512 images on 16 GPUs. When DeePa uses image parallelism for all operations, DeePa achieves competitive performance compared to the best of TensorFlow and PyTorch. The OWT approach that uses model parallelism for fully-connected layers speeds up the training throughput by 1.4×, 1.2×, and 1.07× compared to image parallelism using DeePa. The best configurations found by DeePa achieve 6.5×, 1.9×, and 1.5× speedup compared to TensorFlow and PyTorch. Three main optimizations in DeePa achieve most of the performance benefit over the other frameworks. First, DeePa significantly reduces data transfers in each step, as shown in Figure 8. Compared to image parallelism, the OWT approach reduces data transfers by 1.05-8.4×. However, the best configuration used by DeePa further reduces data transfers by 1.2-2.7× compared to OWT. Second, the optimization for overlapping computation with data transfers (described in Section 5) effectively hides data transfer latency and achieves better GPU utilization. The grey bars in Figure 7 illustrate DeePa's performance when the overlap optimization is disabled, which shows that overlapping computation with data transfers can improve the training throughput by 10%-30%. Third, DeePa also improves performance by exploring parallelism in the height and width dimensions (see Section 7.1.3). We describe the best configurations discovered for AlexNet, VGG-16, and Inception-v3 in Sections 7.1.2 to 7.1.4. The best configurations have several similarities. First, for the beginning layers with large height/width dimensions and small channel dimensions, DeePa uses image parallelism on all available GPUs, since the data transfers for synchronizing gradients are much smaller than the data transfers for moving tensors between operations. Second, deeper layers in CNNs tend to have smaller height/width dimensions and larger channel dimensions. As a , the cost for moving tensors between different operations decreases, while the cost for synchronizing gradients increases. DeePa adaptively reduces the number of GPU workers for these layers to reduce the expensive data transfers for synchronizing gradients at the cost of introducing cheaper data transfers for moving tensors. Third, DeePa uses model parallelism on a small number of GPU workers for fully-connected layers, because synchronizing gradients and moving tensors are both much more expensive than the compute time for fully-connected layers. DeePa reduces the data transfers for synchronizing gradients and moving tensors at the cost of using fewer GPUs. Figure 9: The global configuration for parallelizing AlexNet on 16 GPU workers. Figure 9 shows the global configuration for AlexNet on 16 GPU workers. Note that DeePa selects the parallelism configuration that optimizes the performance for each layer. TAB2 lists the cost for different configurations of the first fully-connected layer. The standard image parallelism configuration eliminates the cost for transferring the input tensors but introduces additional data transfers for synchronizing gradients. The OWT approach completely eliminates gradient synchronization at the cost of replicating the input tensors on every GPU worker. The configuration chosen by DeePa only uses 2 GPU workers for training the first fully-connected layer, which prolongs the compute time but significantly reduces the cost for both transferring input tensors and synchronizing gradients. 
As a , DeePa reduces the total cost by 5× compared to other approaches. DeePa uses image parallelism for all convolutional and pooling layers, because the additional data transfer cost introduced by transforming configurations outweighs any performance benefits. Configurations:{n=16, c=1, h=1,w=1} {n=1, c=1, h=2,w=2} {n=1, c=4} {n=1, c=2} DeePa uses similar configurations for parallelizing the fully-connected layers in VGG-16 FIG11. In addition, DeePa also uses a different configuration to cooperatively accelerate the last three convolutional layers (the yellow node in FIG11). TAB4 lists the cost for different parallelism configurations for the last three convolutional layers. The configuration with optimal total cost uses only four GPU workers for the last three convolutional layers to reduce data transfers for synchronizing gradients. DeePa also exploits parallelism in the height and width dimensions to further reduce the compute time. Configurations: {n=16, c=1, h=1, w=1} {n=8, c=1, h=1, w=1} {n=1, c=4} {n=1, c=2}Figure 11: The global configuration for parallelizing Inception-v3 on 16 GPU workers. Each module is shown as a single node for simplicity. The Inception-v3 model has multiple Inception modules BID11. Each module has several branches of convolutional and pooling layers, which are then concatenated as the output tensor of the module. Figure 11 shows the global configuration for Inception-v3. DeePa uses different configurations to parallelize different branches for the InceptionE1 module, as shown in FIG1. We found that this configuration reduces data transfers by 30% in InceptionE1 and InceptionE2 and reduces overall data transfers by 20%. The minibatch size plays an important rule on the performance of CNNs. FIG3 compares DeePa, PyTorch, and TensorFlow with different minibatch sizes. All three networks were trained on 16 Tesla K80 GPUs on a single node, as described in Section 7.1. We were not able to train VGG-16 and Inception-v3 with a minibatch size of 2048 images, because the required metadata size exceeds the aggregate memory capacity of the 16 GPUs. FIG3 shows that, DeePa achieves constant speedups compared to PyTorch and TensorFlow for various minibatch sizes. In particular, DeePa achieves 4.6-6.5×, 1.6-1.9×, and 1.2-1.5× speedup for AlexNet, VGG-16, and Inception-v3, respectively. We evaluate the scalability of different frameworks by comparing their training throughput with different number of GPUs and compute nodes. The experiments were performed on a GPU cluster with 4 nodes, each of which is equipped with two Intel 10-core E5-2600 Xeon processors, 256G main memory, and four NVIDIA Tesla P100 GPUs. GPUs on the same node are connected by NVLink, and nodes are connected over 100Gb/s EDR Infiniband. FIG4 shows the performance comparison among DeePa, PyTorch, and TensorFlow for weakscaling. DeePa achieves competitive performance compared to PyTorch and TensorFlow for training on a single GPU, in which all three frameworks place all operations on a single GPU. For training on 4 GPUs on a single node, DeePa achieves 3.1×, 1.6×, and 1.3× speedup for AlexNet, VGG-16, and Inception-v3, respectively. DeePa achieves even better performance speedups for trainings on multiple nodes, where the data transfer time becomes a larger component of the per-iteration training time. For training on 4 nodes, DeePa achieves 8.1×, 3.2×, and 1.8× speedup for AlexNet, VGG-16, and Inception-v3, respectively. 
We have presented DeePa, a deep learning framework that explores parallelism in all parallelizable dimensions to accelerate the training of CNNs. DeePa optimizes the parallelism configuration chosen at the granularity of individual layers. DeePa achieves up to 6.5× for training CNNs and reduces overall data transfers by up to 23× compared to state-of-the-art deep learning frameworks. Proof. The Cost function is defined in Equation 1. Let g be any configuration. We first compute the difference between Cost(g, (V, E)) and Cost(g, (V, E)). DISPLAYFORM0 =w.compute(g(w)) + w.update(g(w)) DISPLAYFORM1 Now assume g is an optimal configuration for (V, E). Then we have w.compute(g(w)) + w.update(g(w)) + e 1.xfer(g(u), g(w)) + e 2.xfer(g(w), g(v)) = min cw {w.compute(c w) + w.update(c w) + e 1.xfer(g(u), c w ) + e 2.xfer(c w, g(v))}Therefore, g is an optimal configuration of (V, E). For the other direction, note that if g is an optimal configuration of (V, E), then it can be extended to an optimal configuration of (V, E) by adding the node w with the same minimal assignment. For a computation graph G(V, E), applying an edge elimination on e 1 = (u, v) and e 2 = (u, v) in a modified graph G = (V, E), where E = E − e 1 − e 2 + e and e = (u, v). We prove that Cost(g, (V, E)) = Cost(g, (V, E)) for any global configuration g of (V, E). DISPLAYFORM0, where (V, E) is the modified graph of (V, E) after an edge elimination. Proof. We compute the difference between Cost(g, (V, E)) and Cost(g, (V, E)). DISPLAYFORM1 The last equation uses Equation 3. The overlap optimization in Section 5 is motivated by BID5, which performs gradient aggregation in parallel with back propagation to scale synchronous training to large number of GPUs. We extend their design and implementation by also enabling the optimization for asynchronous training in DeePa. We show profiling for visualizing the performance bottlenecks in different parallelism approaches. The experiment was performed on a single node with four Tesla P100 GPUs (as described in Section 7.3). We enable overlapping computation with data transfers (described in Section 5) in this experiment. FIG7 shows the profiling for training VGG-16 on 4 GPUs with different parallelism configurations. Note that DeePa with image parallelism achieves 10% higher training throughput compared to PyTorch and TensorFlow, as shown in FIG4. FIG7 shows that all GPUs are highly utilized during forward and backward passes, as indicated by the tight packing of tasks in the timeline. However, the image parallelism approach requires moving 4GB of metadata in every iteration, which cannot be fully overlapped with back propagation, therefore the image parallelism approach has a performance gap between iterations (shown as the white space on the GPU timelines). (a) Image parallelism. (b) DeePa's parallelism configuration. FIG7 shows the profiling of the optimal parallelism configuration chosen by DeePa, which uses image parallelism on 4 GPUs for all convolutional layers and pooling layers and uses model parallelism on 2 GPUs for the fully connected layers. Therefore, the training with the optimal configuration includes data transfers for each fully connected layers, which adds small performance gaps at the end of the forward pass and the beginning of the backward pass (shown as the small white space on the GPU timelines). However, the optimal configuration reduces the per-iteration data transfers from 4GB to 490MB, which effectively hides data transfer overhead and achieves better GPU utilization. 
As a result, the optimal configuration reduces the per-iteration training time from 0.34 seconds to 0.24 seconds. We compare the performance of DeePa, PyTorch, and TensorFlow on the ImageNet-22K dataset (BID8), which contains 21,841 different categories (the ImageNet dataset used in Section 7 contains 1,000 categories). The last fully-connected layer in AlexNet, VGG-16, and Inception-v3 originally has 1,000 neurons followed by a 1,000-way softmax layer. To train the three networks on the ImageNet-22K dataset, we change the last fully-connected layer to have 21,841 neurons and use a 21,841-way softmax layer at the end. The modified networks were trained on 16 Tesla K80 GPUs on a single node with a minibatch size of 512 images. FIG9 compares the training throughput and per-iteration data transfers among DeePa, PyTorch, and TensorFlow on the ImageNet and ImageNet-22K datasets. FIG9 shows that, on the ImageNet-22K dataset, the training throughput of PyTorch and TensorFlow is reduced by 20%-45%, while DeePa's throughput falls off by only 3%, compared to training on the original ImageNet dataset. FIG9 compares the per-iteration data transfers between image parallelism and the global configurations used by DeePa. Using image parallelism increases the data transfers in each iteration by 5-10GB, while DeePa only increases the per-iteration data transfers by 40MB. As a result, for training on the ImageNet-22K dataset, DeePa reduces the per-iteration data transfers by 3.7-44.5× compared to image parallelism.
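To make the per-layer configuration search concrete, the following sketch shows how a cost model of the kind described above might be minimized over a chain of layers, folding one node at a time into its neighbor in the spirit of node elimination. This is a simplified illustration: the two candidate configurations, the cost table, and the chain-shaped graph are hypothetical placeholders, not DeePa's actual cost model or implementation.

```python
# Hypothetical per-layer parallelism cost model, illustrating the idea of
# choosing a configuration for each layer by minimizing compute + transfer cost.
CONFIGS = ["image-4gpu", "model-2gpu"]          # candidate per-layer configurations

def compute_cost(layer, config):
    # Placeholder per-layer compute + parameter-update cost (arbitrary units).
    table = {("conv", "image-4gpu"): 5.0, ("conv", "model-2gpu"): 9.0,
             ("fc",   "image-4gpu"): 4.0, ("fc",   "model-2gpu"): 1.0}
    return table[(layer, config)]

def xfer_cost(cfg_u, cfg_w):
    # Placeholder transfer cost between consecutive layers; transfers are free
    # when both layers use the same configuration.
    return 0.0 if cfg_u == cfg_w else 2.0

def best_chain_cost(layers):
    """Minimize total cost over a chain of layers by dynamic programming,
    which is what node elimination reduces to for a linear graph."""
    best = {c: compute_cost(layers[0], c) for c in CONFIGS}
    choice = [{c: None for c in CONFIGS}]
    for layer in layers[1:]:
        new_best, new_choice = {}, {}
        for c in CONFIGS:
            # Eliminate the previous node: fold its best cost into this one.
            prev = min(CONFIGS, key=lambda p: best[p] + xfer_cost(p, c))
            new_best[c] = best[prev] + xfer_cost(prev, c) + compute_cost(layer, c)
            new_choice[c] = prev
        best, choice = new_best, choice + [new_choice]
    # Backtrack the optimal configuration for every layer.
    last = min(CONFIGS, key=lambda c: best[c])
    assignment, cur = [last], last
    for step in reversed(choice[1:]):
        cur = step[cur]
        assignment.append(cur)
    return best[last], list(reversed(assignment))

cost, assignment = best_chain_cost(["conv", "conv", "fc", "fc"])
print(cost, assignment)
# -> 14.0 ['image-4gpu', 'image-4gpu', 'model-2gpu', 'model-2gpu']
```

On this toy cost table the search assigns image parallelism to the convolutional layers and model parallelism to the fully connected layers, mirroring the qualitative behavior reported for VGG-16 above.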
SJCPLLpaW
To the best of our knowledge, DeePa is the first deep learning framework that controls and optimizes the parallelism of CNNs in all parallelizable dimensions at the granularity of each layer.
One can substitute each neuron in any neural network with a kernel machine and obtain a counterpart powered by kernel machines. The new network inherits the expressive power and architecture of the original but works in a more intuitive way, since each node enjoys the simple interpretation as a hyperplane (in a reproducing kernel Hilbert space). Further, using the kernel multilayer perceptron as an example, we prove that in classification, an optimal representation that minimizes the risk of the network can be characterized for each hidden layer. This removes the need for backpropagation in learning the model and can be generalized to any feedforward kernel network. Moreover, unlike backpropagation, which turns models into black boxes, the optimal hidden representation enjoys an intuitive geometric interpretation, making the dynamics of learning in a deep kernel network simple to understand. Empirical results are provided to validate our theory. Any neural network (NN) can be turned into a kernel network (KN) by replacing each artificial neuron, i.e., a learning machine of the form f(x) = σ(w⊤x + b), with a kernel machine, i.e., a learning machine of the form f(x) = ⟨w, φ(x)⟩ + b with kernel function k(x, y) = ⟨φ(x), φ(y)⟩. This combination of connectionism and the kernel method enables the learning of hierarchical, distributed representations with kernels. In terms of training, similar to an NN, a KN can be trained with backpropagation (BP). In the context of supervised learning, the need for BP in learning a deep architecture is caused by the fact that there is no explicit target information to tune the hidden layers. Moreover, BP is usually computationally intensive and can suffer from vanishing gradients. And most importantly, BP results in hidden representations that are notoriously difficult to interpret or assess, turning deep architectures into "black boxes". The main theoretical contribution of this paper is the following: Employing the simplest feedforward, fully-connected KN as an example, we prove that in classification and under certain losses, the optimal representation for each hidden layer that minimizes the risk of the network can be explicitly characterized. This removes the need for BP and makes it possible to train the network in a feedforward, layer-wise fashion. And the same idea can be generalized to other feedforward KNs. The layer-wise learning algorithm gives the same optimality guarantee as BP in the sense that it minimizes the risk. But the former is much faster and evidently less susceptible to vanishing gradients. Moreover, the quality of learning in the hidden layers can be directly assessed during or after training, providing more information about the model to the user. For practitioners, this enables completely new model selection paradigms. For example, poor performance of the network can now be traced to a particular layer, allowing the user to debug the layers individually. Most importantly, the optimal representation for each hidden layer enjoys an intuitive geometric interpretation, making the learning dynamics in a deep KN more transparent than those in a deep NN. A simple acceleration method that utilizes the "sparse" nature of the optimal hidden representations is proposed to further reduce computational complexity. Empirical results on several computer vision benchmarks are provided to demonstrate the competence of the model and the effectiveness of the greedy learning algorithm.
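To make the substitution above concrete, the sketch below builds a toy kernel layer in which every node is a kernel machine f(x) = Σ_n α_n k(c_n, x) + b expanded over a fixed set of centers, and stacks two such layers into a miniature kMLP. The Gaussian kernel, layer widths, and random data are assumptions for illustration only, not the authors' implementation.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # k(x, y) = exp(-||x - y||^2 / sigma^2) for all pairs of rows in X and Y.
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / sigma ** 2)

class KernelLayer:
    """An array of kernel machines f_j(x) = sum_n alpha[n, j] * k(c_n, x) + b_j,
    each expanded over a fixed set of centers (e.g. the training sample)."""
    def __init__(self, centers, width, sigma=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.centers = centers                              # expansion points c_n
        self.sigma = sigma
        self.alpha = 0.01 * rng.standard_normal((len(centers), width))
        self.bias = np.zeros(width)

    def forward(self, X):
        K = gaussian_kernel(self.centers, X, self.sigma)    # (n_centers, n_points)
        return K.T @ self.alpha + self.bias                 # (n_points, width)

# A miniature two-layer "kMLP": each node of each layer is a kernel machine,
# and the second layer is built on top of the first layer's outputs.
X = np.random.default_rng(1).standard_normal((20, 5))      # placeholder data
layer1 = KernelLayer(centers=X, width=8, sigma=2.0)
hidden = layer1.forward(X)                                  # hidden representation
layer2 = KernelLayer(centers=hidden, width=1, sigma=2.0)
scores = layer2.forward(hidden)
print(hidden.shape, scores.shape)                           # (20, 8) (20, 1)
```

The architecture and functionality of the original network are untouched; only the per-node hypothesis changes from σ(w⊤x + b) to a hyperplane in an RKHS.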
Figure 1: (a) Any NN (left, presented in the usual weight-nonlinearity abstraction) can be abstracted as a "graph" (right) with each node representing a neuron and each edge the input-output relationship between neurons. If a node receives multiple inputs, we view its input as a vector in some Euclidean space, as indicated by the colored rectangles. Under this abstraction, each neuron (f (x) = σ(w x + b)) can be directly replaced by a kernel machine (f (x) = w, φ(x) + b with kernel k(x, y) = φ(x), φ(y) ) mapping from the same Euclidean space into the real line without altering the architecture and functionality of the model. (b) Illustration for layer-wise optimality drifting away from network-optimality. Consider a two-layer network and let T 1, T 2 be the target function of the first and second layer, respectively. If the first layer creates error, which is illustrated by F (x) being far away from T 1 (x), the composed solution F • F on the right is better than that on the left and hence the F on the right corresponds to the network-wise optimality of the second layer. But the F on the left is clearly a better estimate to the layer-wise optimality T 2 if the quality of estimation is measured by the supremum distance. In this section, we discuss how to build a KN using a given NN. First, the generic approach is described in FIG4. Note that KN inherits the expressive power of the original NN since a kernel machine is a universal function approximator under mild conditions and the two models share the same architecture. However, KN works in a more intuitive way since each node is a simple linear model in a reproducing kernel Hilbert space (RKHS).We now concretely define the KN equivalent of an l-layer Multilayer Perceptron (MLP), which we shall refer to as the kernel MLP (kMLP).1 Given a random sample (x n, y n) N n=1, where (x n, y n) ∈ X 1 × Y ⊂ R d0 × R, denote (x n) N n=1 as S and (y n) N n=1 as Y S for convenience. For i = 1, 2,..., l, consider kernel k (i): X i × X i → R, X i ⊂ R di−1 (for i > 1, d i−1 is determined by the width of the i − 1 th layer). k (i) (x, y) = φ (i) (x), φ (i) (y) Hi, where φ (i) is a mapping into RKHS H i.For i ≥ 1, the i th layer in a kMLP, denoted F (i), is an array of d i kernel machines: DISPLAYFORM0 2,..., f (i) di ), a d i -tuple. Let F be the identity map on R d0, each f DISPLAYFORM1 j, where the α DISPLAYFORM2 ∈ R are the learnable parameters. 2 The set of mappings DISPLAYFORM3 j ∈ R for all admissible n, j, i} defines an l-layer kMLP. In the rest of this paper, we shall restrict our discussions to this kMLP. We now specify the assumptions that we impose on all kernels considered in this paper. First, we consider real, continuous, symmetric kernels only and we call a kernel positive semidefinite (PSD) or positive definite (PD) if for any S, the kernel matrix defined as (G) mn = k(x m, x n) is PSD or PD, respectively. We shall always assume that any kernel considered is at least PSD and that DISPLAYFORM0 It is straightforward to check using Cauchy-Schwarz inequality that the first condition implies max x,y∈Xi k (i) (x, y) = c. For each fixed x ∈ X i, we assume that DISPLAYFORM1 x -Lipschitz with respect to the Euclidean metric on DISPLAYFORM2, which we assume to be finite. The following notations will be used whenever convenient: We use the shorthand DISPLAYFORM3 (S) and the same is true with S substituted by any x. 
Throughout this paper, notations such as F (i) can either be used to denote a set of functions or a specific function in some set depending on the context. Also, when there is no confusion, we shall suppress the dependency of any loss function on the example for brevity, i.e., for a loss function, instead of writing (f (x), y), we shall write (f). To simplify discussion, we shall restrict ourselves to binary classification (Y = {+1, −1}) and directly give the on classification with more than two classes in the end. A generalization to regression is left as future work. Again, we only focus on kMLP although the idea can be directly generalized to all feedforward KNs. We now discuss the layer-wise learning algorithm, beginning by addressing the difficulties with training a deep architecture layer-by-layer. There are two fundamental difficulties with learning a deep architecture layer-wise. First, the hidden layers do not have supervision (labels) to learn from. And it depends on BP to propagate supervision from the output backward . We shall prove that for kMLP, one can characterize the optimal target representation for each hidden layer, which induces a risk for that layer. The target is optimal in the sense that it minimizes the risk of the subsequent layer and eventually that of the network if all layers succeed in learning their optimal representations. This optimal representation defines what we call "layer-wise optimality".The other difficulty with layer-wise learning is that for any given hidden layer, when the upstream layers create error, layer-wise optimality may not coincide with "network-wise optimality", i.e., the solution of this layer that eventually leads to the composed solution that minimizes the risk of the network in this suboptimal case. Indeed, when a hidden layer creates error, the objective of any layer after it becomes learning a solution that is a compromise between one that is close to the layer-wise optimality and one that prevents the error from the "bad" layer before it from "getting through" easily. And the best compromise is the network-wise optimality. The two solutions may not coincide, as shown in the toy example in FIG4. Clearly, we would like to always learn the network-wise optimality at each layer, but the learner is blind to it if it is only allowed to work on one layer at a time. By decomposing the overall error of the network into error at each layer, we prove that in fact, network-wise optimality is learnable for each hidden layer even in a purely layer-wise fashion and that the proposed layer-wise algorithm learns network-wise optimality at each layer. We now address the first difficulty in layer-wise learning. The basic idea is first described in Section 4.2.1. Then we provide technical in Section 4.2.2 and Section 4.2.3 to fill in the details. DISPLAYFORM0 and a loss function l defined for this network which induces a risk that we wish to minimize: R l = E l (F). BP views this problem in the following way: R l is a function of F. The learner tries to find an F that minimizes R l using the random sample S with labels Y S according to some learning paradigm such as Empirical Risk Minimization (ERM) or Structural Risk Minimization (SRM) . S is considered as fixed in the sense that it cannot be adjusted by the learner. 
Alternatively, one can view R l as a function of F (l) and the learner tries to find an F (l) minimizing R l using random sample S l−1 with labels Y S according to some learning paradigm, where S l−1:= DISPLAYFORM1 The advantage is that the learner has the freedom to learn both the function F (l) and the random sample S l−1. And since S l−1 determines the decision of the learning paradigm, which then determines R l, R l is now essentially a function of both F (l) and S l−1: DISPLAYFORM2 The key is that independently of the actual learning of F (l), one can characterize the sufficient condition on S l−1 under which R l, as a function of S l−1, is minimized, as we shall prove. In other words, the "global minimum" of R l w.r.t. S l−1 can be explicitly identified prior to any training. This gives the optimal S l−1, which we denote as S l−1.Moreover, the characterization of S l−1 gives rise to a new loss function l−1 and thus also a new risk R l−1 that is a function of DISPLAYFORM3. Consequently, the same reasoning would allow us to deduce S l−2 before the learner learns F (l−1). And this analysis can be applied to each layer, eventually leading to a greedy learning algorithm that sequentially learns DISPLAYFORM4 *, in that order, where the asterisk on the superscript indicates that the corresponding layer has been learned and frozen. The layer-wise learning algorithm provides a framework that enjoys great flexibility. To be specific, one could stop the above analysis at any layer i, then learn layers i + 1,..., l in a greedy fashion but still learn layers 1,..., i together with BP. Thus, it is easy to see that BP can be brought under this framework as a special case. Nevertheless, in later text, we shall stay on the one end of the spectrum where each layer is learned individually for clarity. We now present the formal that give the optimal hidden representations. By the reasoning above, the analysis starts from the last hidden layer (layer l − 1) and proceeds backward. To begin with, we need to approximate the true classification error R l since it is not computable. To this end, we first review a well-known complexity measure. Definition 4.1 (Gaussian complexity ). Let P be a probability distribution on a metric space X and suppose x 1,..., x N are independent random elements distributed as P. Let F be a set of functions mapping from X into R. Definê DISPLAYFORM0 where g 1,..., g N are independent standard normal random variables. The Gaussian complexity of DISPLAYFORM1 Intuitively, Gaussian complexity quantifies how well elements in a given function class can be correlated with a noise sequence of length N, i.e., the g n . Based on this complexity measure, we have the following bound on the expected classification error. DISPLAYFORM2, with probability at least 1 − δ and for any N ∈ N, every function DISPLAYFORM3, the empirical hinge loss. Given the assumptions on k (l), for any F, we have DISPLAYFORM4 Without loss of generality, we shall set hyperparameter γ = 1. We now characterize S l−1. Note that for a given f (l), A = w f (l) H l is the smallest nonnegative real number such that f (l) ∈ F l,A and it is immediate that this gives the tightest bound in Theorem 4.2. DISPLAYFORM5, where τ is any positive constant satisfying τ < 2(c − a) min(κ, 1 − κ). Denote as S l−1 any representation satisfying DISPLAYFORM6 for all pairs of x +, x − from distinct classes in S and all pairs of x, x from the same class. Suppose the learning paradigm returns f (l) under this representation. 
Let S DISPLAYFORM7 The optimal representation S l−1, characterized by Eq. 1, enjoys a straightforward geometric interpretation: Examples from distinct classes are as distant as possible in the RKHS whereas examples from the same class are as concentrated as possible (see proof (C) of Lemma 4.3 for a rigorous justification). Intuitively, it is easy to see that such a representation is the "easiest" for the classifier. The conditions in Eq. 1 can be concisely summarized in an ideal kernel matrix G defined as DISPLAYFORM8 And to have the l − 1 th layer learn S l−1, it suffices to train it to minimize some dissimilarity measure between G and the kernel matrix computed from k (l) and F (l−1) (S), which we denote G l−1. Empirical alignment , L 1 and L 2 distances between matrices can all serve as the dissimilarity measure. To simplify discussion, we let the dissimilarity measure be the DISPLAYFORM9 This specifiesR l−1 (F (l−1) ) as the sample mean of (l−1 (F (l−1), (x m, y m), (x n, y n))) N m,n=1 and R l−1 as the expectation of l−1 over (X 1, Y)×(X 1, Y). Note that due to the boundedness assumption on k (l), l−1 ≤ 2 max(|c|, |a|). S l−2,..., S 1 Similar to Section 4.2.2, we first need to approximate R l−1. DISPLAYFORM0, where F l−1 is a given hypothesis class. There exists an absolute constant C > 0 such that for any N ∈ N, with probability at least 1 − δ, DISPLAYFORM1 We are now in a position to characterize S l−2. For the following lemma only, we further assume that k (l−1) (x, y), as a function of (x, y), depends only on and strictly decreases in x − y 2 for all x, y ∈ X l−1 with k (l−1) (x, y) > a, and that the infimum inf x,y∈X l−1 k (l−1) (x, y) = a is attained in X l−1 at all x, y with x − y 2 ≥ η. Also assume that inf x,y∈X l−1; x−y 2<η ∂k (l−1) (x, y)/∂ x − y 2 = ι (l−1) is defined and is positive. Consider an F mapping S into X l−1, let DISPLAYFORM2, it is immediate that DISPLAYFORM3 is the smallest nonnegative real number such that f Under review as a conference paper at ICLR 2019Lemma 4.5 (optimal S l−2). Given a learning paradigm minimizingR l−1 (DISPLAYFORM4, where τ is any positive constant satisfying τ < 2d l−1 (c − a)ψι (l−1). Denote as S l−2 any representation satisfying DISPLAYFORM5 for all pairs of x +, x − from distinct classes in S and all pairs of x, x from the same class. Suppose the learning paradigm returns DISPLAYFORM6 under this representation. Let S • l−2 be another representation under which the learning paradigm returns DISPLAYFORM7 achieves zero loss on at least one pair of examples from distinct classes, then for any N ∈ N, DISPLAYFORM8 Applying this analysis to the rest of the hidden layers, it is evident that the i th layer, i = 1, 2,..., l − 1, should be trained to minimize the difference between G and the kernel matrix computed with k DISPLAYFORM9 and DISPLAYFORM10 Generalizing to classification with more than two classes requires no change to the algorithm since the definition of G is agnostic to the number of classes involved in the classification task. Also note that the sufficiency of expanding the kernel machines of each layer on the training sample (see Section 2) for the learning objectives in Lemma 4.3 and Lemma 4.5 is trivially justified since the generalized representer theorem directly applies (Schölkopf et al., 2001). 
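To illustrate the layer-wise objective described above, the following sketch computes an ideal kernel matrix with (G)_mn = 1 when y_m = y_n and 0 otherwise (the values used in the experiments) and measures an L2-style dissimilarity between G and the kernel matrix computed on a layer's output. The Gaussian kernel, the toy representations, and the plain squared difference are simplifying assumptions; as noted above, empirical alignment or L1 distance could equally serve as the dissimilarity measure.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # k(x, y) = exp(-||x - y||^2 / sigma^2) for all pairs of rows in X and Y.
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / sigma ** 2)

def ideal_kernel_matrix(y, same=1.0, diff=0.0):
    # (G)_mn = `same` if y_m == y_n else `diff`; 1 and 0 as in the experiments.
    y = np.asarray(y).reshape(-1, 1)
    return np.where(y == y.T, same, diff)

def layer_wise_loss(hidden, y, sigma=1.0):
    # L2-style dissimilarity between the ideal kernel matrix and the kernel
    # matrix computed on the representation produced by the current layer.
    G_star = ideal_kernel_matrix(y)
    G_hat = gaussian_kernel(hidden, hidden, sigma)
    return np.mean((G_hat - G_star) ** 2)

# A representation that clusters each class tightly and pushes the classes
# apart scores a much lower layer-wise loss than a mixed representation.
rng = np.random.default_rng(0)
y = np.array([0] * 10 + [1] * 10)
mixed = rng.standard_normal((20, 2))
separated = np.vstack([np.zeros((10, 2)), 5 * np.ones((10, 2))])
separated = separated + 0.01 * rng.standard_normal((20, 2))
print(layer_wise_loss(mixed, y), ">", layer_wise_loss(separated, y))
```

Training a hidden layer then amounts to minimizing this dissimilarity with respect to that layer's parameters, independently of the layers above it.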
Now since the optimal representation is consistent across layers, the dynamics of layer-wise learning in a kMLP is clear: The network maps the random sample sequentially through layers, with each layer trying to map examples from distinct classes as far as possible in the RKHS while keeping examples from the same class in a cluster as concentrated as possible. In other words, each layer learns a more separable representation of the sample. Eventually, the output layer works as a classifier on the final representation and since the representation would be "simple" after the mappings of the lower layers, the learned decision boundary would generalize better to unseen data, as suggested by the bounds above. We now discuss how to design a layer-wise learning algorithm that learns network-wise optimality at each layer. A rigorous description of the problem of layer-wise optimality drifting away from network-wise optimality and the search for a solution begins with the following bound on the total error of any two consecutive layers in a kMLP.Lemma 4.6. For any i = 2,..., l, let the target function and the approximation function be T i, F (i) *: DISPLAYFORM0 DISPLAYFORM1 By applying the above bound sequentially from the input layer to the output, we can decompose the error of an arbitrary kMLP into the error of the layers. This gives a formal description of the problem: The hypothesis with the minimal norm minimizes the propagated error from upstream, but evidently, this hypothesis is not necessarily close to the layer-wise optimality T i.Moreover, this bound provides the insight needed for learning network-wise optimality individually at each layer: For the i th layer, i ≥ 2, searching for network-wise optimality amounts to minimizing the r.h.s. of Eq. 2. Lemma 4.3 and Lemma 4.5 characterized T i for i < l and learning objectives that bound i were provided earlier in the text accordingly. Based on those , the solution that minimizes the new learning objectiveR i (DISPLAYFORM2, where τ > 0 is a hyperparameter, provides a good approximate to the minimizer of the r.h.s. of Eq. 2 if, of course, τ is chosen well. Thus, taking this as the learning objective of the i th layer produces a layer-wise algorithm that learns network-wise optimality at this layer. Note that for BP, one usually also needs to heuristically tune the regularization coefficient for weights as a hyperparameter. There is a natural method to accelerate the upper layers (all but the input layer): The optimal representation F (S) is sparse in the sense that φ(F (x m)) = φ(F (x n)) if y m = y n and φ(F (x m)) = φ(F (x n)) if y m = y n (see the proof (C) of Lemma 4.3). Since a kernel machine built on this representation of the given sample is a function in the RKHS that is contained in the span of the image of the sample, retaining only one example from each class would in exactly the same hypothesis class because trivially, we have {DISPLAYFORM0 Thus, after training a given layer, depending on how close the actual kernel matrix is to the ideal one, one can (even randomly) discard a large portion of centers for kernel machines of the next layer to speed up the training of it without sacrificing performance. As we will later show in the experiments, randomly keeping a fraction of the training sample as centers for upper layers produces performance comparable to or better than that obtained with using the entire training set. The idea of combining connectionism with kernel method was initiated by. 
In their work, an "arc cosine" kernel was defined so as to imitate the computations performed by a one-layer MLP. Subsequent work extended the idea to arbitrary kernels with a focus on MKL, using an architecture similar to a two-layer kMLP. As a further generalization, Zhang et al. independently proposed kMLP and the KN equivalent of CNN. However, they did not extend the idea to any arbitrary NN. Other work proposed to reparameterize each nonlinearity in an NN with a kernel expansion, resulting in a network similar to KN but trained with BP. There are other works aiming at building "deep" kernels using approaches that are different in spirit from those above. One line of work proposed to learn the covariance matrix of a Gaussian process using an NN in order to make the kernel "adaptive". This idea also underlies the now standard approach of combining a deep NN with an SVM for classification. Such an interpretation can be given to KNs as well, as we point out in Appendix B.5. Another line of work proposed to learn hierarchical representations by learning mappings of kernels that are invariant to irrelevant variations in images. Much work has been done to improve or substitute BP in learning a deep architecture. Most of it aims at improving the classical method, working as add-ons for BP. The most notable examples are perhaps the unsupervised greedy pre-training techniques. Among works that try to completely substitute BP, none provided an optimality guarantee in theory comparable to that given by BP. Early work pioneered the idea of greedily learning the architecture of an NN, in which each new node is added to maximize the correlation between its output and the residual error signal. Several authors explored the idea of approximating error signals propagated by BP locally at each layer or each node. Prior work also proposed to train an NN layer-wise using an ideal kernel matrix that is a special case of that in our work, but no theoretical results were provided to justify its optimality for NN. A BP-free deep architecture based on decision trees has also been proposed, but the idea is very different from ours. Other work attempted to quantify the quality of hidden representations toward learning more interpretable deep architectures, sharing a motivation similar to ours. We compared kMLP learned using the proposed greedy algorithm with other popular deep architectures, including MLP, Deep Belief Network (DBN) and Stacked Autoencoder (SAE), with the last two trained using a combination of unsupervised greedy pre-training and standard BP. Note that we only focused on comparing with the standard, generic architectures because kMLP, as the KN equivalent of MLP, does not have a specialized architecture or features designed for specific application domains. Several optimization and training techniques were applied to the MLPs to boost performance. These include Adam, RMSProp, dropout and batch normalization (BN). kMLP accelerated using the proposed method (kMLP FAST) was also compared. For these models, we randomly retained a subset of the centers of each upper layer before its training. As for the benchmarks used, rectangles, rectangles-image and convex are binary classification datasets, mnist (10k) and mnist (10k) rotated are variants of MNIST, and fashion-mnist is the Fashion-MNIST dataset. To further test the proposed layer-wise learning algorithm and the acceleration method, we compared greedily-trained kMLP with MLP and with kMLP trained using BP on the standard MNIST.
Two popular acceleration methods for kernel machines were also compared on the same benchmark, including using a parametric representation (i.e., for each node in a kMLP, f (x) = k(w, x), w learnable) (kMLP PARAM) and using random Fourier features (kMLP RFF) . More details for the experiments can be found in Appendix A 4.From TAB0, we see that the performance of kMLP is on par with some of the most popular and most mature deep architectures. In particular, the greedily-trained kMLPs compared favorably with their direct NN equivalents, i.e., the MLPs, even though neither batch normalization nor dropout was used for the former. These also validate our earlier theoretical on the layer-wise learning algorithm, showing that it indeed has the potential to be a substitute for BP with an equivalent optimality guarantee. Results in TAB0 further demonstrate the effectiveness of the greedy learning scheme. For both the single-hidden-layer and the two-hidden-layer kMLPs, the layer-wise algorithm consistently outperformed BP. It is worth noting that the proposed acceleration trick, despite being extremely simple, is clearly very effective and even produced models outperforming the original ones. This shows that kMLP together with the greedy learning scheme can be of practical interest even when dealing with the massive data sets in today's machine learning. Last but not least, we argue that it is the practical aspects that makes the greedy learning framework promising. Namely, this framework of learning makes deep architectures more transparent and intuitive, which can serve as a tentative step toward more interpretable, easy-to-understand models with strong expressive power. Also, new design paradigms are now possible under the layer-wise framework. For example, each layer can now be "debugged" individually. Moreover, since learning becomes increasingly simple for the upper layers as the representations become more and more well-behaved, these layers are usually very easy to set up and also converge very fast during training. The first data set, known as rectangles, has 1000 training images, 200 validation images and 50000 test images. The learning machine is required to tell if a rectangle contained in an image has a larger width or length. The location of the rectangle is random. The border of the rectangle has pixel value 255 and pixels in the rest of an image all have value 0. The second data set, rectangles-image, is the same with rectangles except that the inside and outside of the rectangle are replaced by an image patch, respectively. rectangles-image has 10000 training images, 2000 validation images and 50000 test images. The third data set, convex, consists of images in which there are white regions (pixel value 255) on black (pixel value 0) . The learning machine needs to distinguish if the region is convex. This data set has 6000 training images, 2000 validation images and 50000 test images. The fourth data set contains 10000 training images, 2000 validation images and 50000 test images taken from MNIST. The fifth is the same as the fourth except that the digits have been randomly rotated. 
Sample images from the data sets are given in FIG3 The experimental setup for the greedily-trained kMLPs is as follows, kMLP-1 corresponds to a one-hidden-layer kMLP with the first layer consisting of 15 to 150 kernel machines using the same Gaussian kernel (k(x, y) = e − x−y 2 /σ 2 ) and the second layer being a single or ten (depending on the number of classes) kernel machines using another Gaussian kernel. Note that the Gaussian kernel does not satisfy the condition that the infimum a is attained (see the extra assumptions before Lemma 4.5), but for practical purposes, it suffices to set the corresponding entries of the ideal kernel matrix to some small value. For all of our experiments, we set (G) mn = 1 if y m = y n and 0 otherwise. Hyperparameters were selected using the validation set. The validation set was then used in final training only for early-stopping based on validation error. For the standard MNIST and Fashion-MNIST, the last 5000 training examples were held out as validation set. For other datasets, see . kMLP-1 FAST is the same kMLP for which we accelerated by randomly choosing a fraction of the training set as centers for the second layer after the first had been trained. The kMLP-2 and kMLP-2 FAST are the two-hidden-layer kMLPs, the second hidden layers of which contained 15 to 150 kernel machines. We used Adam as the optimization algorithm of the layer-wise scheme. Although some of the theoretical presented earlier in the paper were proved under certain losses, we did not notice a significant performance difference between using L 1, L 2 and empirical alignment as loss function for the hidden layers. And neither was such difference observed between using hinge loss and cross-entropy for the output layer. This suggests that these may be proved in more general settings. To make a fair comparison with the NN models, the overall loss functions of all models were chosen to be the cross-entropy loss. Settings of all the kMLPs trained with BP can be found in . Note that because it is extremely time/memory-consuming to train kMLP with BP without any acceleration method, to make training possible, we could only randomly use 10000 examples from the entire training set of 55000 examples as centers for the kMLP-2 (BP) from TAB0.We compared kMLP with a one/two-hidden-layer MLP (MLP-1/MLP-2), a one/three-hidden-layer DBN (DBN-1/DBN-3) and a three-hidden-layer SAE (SAE-3). For these models, hyperparameters were also selected using the validation set. For the MLPs, the sizes of the hidden layers were chosen from the interval. All hyperparameters involved in Adam, RMSProp and BN were set to the suggested default values in the corresponding papers. If used, dropout and BN was added to each hidden layer, respectively. For DBN-3 and SAE-3, the sizes of the three hidden layers varied in intervals, and, respectively. DBN-1 used a much larger hidden layer than DBN-3 to obtain comparable performance. A simple calculation shows that the total numbers of parameters in the kMLPs were fewer than those in the corresponding DBNs and SAEs by orders of magnitude in all experiments. Like in the training for the kMLPs, the validation set were also reserved for early-stopping in final training. The DBNs and SAEs had been pre-trained unsupervisedly before the supervised training phase, following the algorithms described in . More detailed settings for these models were reported in . In this section, we provide some further analysis on kMLP and the layer-wise learning algorithm. 
Namely, in Appendix B.1, we give a bound on the Gaussian complexity of an l-layer kMLP, which describes the intrinsic model complexity of kMLP. In particular, the bound describes the relationship between the depth/width of the model and the complexity of its hypothesis class, providing useful information for model selection. In Appendix B.2, we give a constructive stating that the dissimilarity measure being optimized at each hidden layer will not increase as training proceeds from the input layer to the output. This also implies that a deeper kMLP performs at least as well as its shallower counterparts in minimizing any loss function they are trained on. In Appendix B.3, a similar to Lemma 4.3 is provided, stating that the characterization for the optimal representation can be made much simpler if one uses a more restricted learning paradigm. In fact, in contrast to Lemma 4.3, both necessary and sufficient conditions can be determined under the more restricted setting. In Appendix B.4, we provide a standard, generic method to estimate the Lipschitz constant of a continuously differentiable kernel, as this quantity has been repeatedly used in many of our in this paper. In Appendix B.5, we state some advantages of kMLP over classical kernel machines. In particular, empirical are provided in Appendix B.5.1, in which a two-layer kMLP consistently outperforms the classical Support Vector Machine (SVM) as well as several SVMs enhanced by Multiple Kernel Learning (MKL) algorithms (; Gönen & Alpaydın, 2011). We first give a on the Gaussian complexity of a two-layer kMLP. Lemma B.1. Given kernel k: DISPLAYFORM0 where the x ν are arbitrary examples from X 2. DISPLAYFORM1 where Ω is a given hypothesis class that is closed under negation, i.e., if DISPLAYFORM2 If the range of some element in Ω contains 0, we have DISPLAYFORM3 The above can be easily generalized to kMLP with an arbitrary number of layers. Lemma B.2. Given an l-layer kMLP, for each f DISPLAYFORM4 1 ≤ A i and let d l = 1. Denote the class of functions implemented by this kMLP as F l, we have DISPLAYFORM5 Proof. It is trivial to check that the hypothesis class of each layer is closed under negation and that there exists a function in each of these hypothesis classes whose range contains 0. Then the follows from repeatedly applying Lemma B.1. Lemma B.3. For i ≥ 2, assume k (i) is PD and fix layers 1, 2,..., i − 1 at arbitrary states F, F,..., F (i−1). Let the loss functions i, i−1 be the same up to their domains, and denote both i and i−1 as. Suppose layer i is trained with a gradient-based algorithm to minimize the loss (F (i) ). Denote the state of layer i after training by DISPLAYFORM0 Calculation for this initialization is specified in the proof. For i = 1, under the further assumption that DISPLAYFORM1 is the identity map on X 1.Remark B.3.1. For the greedily-trained kMLP, Lemma B.3 applies to the hidden layers and implicitly requires that k (i+1) = k (i) since the loss function for layer i, when viewed as a function of DISPLAYFORM2 ) and can be rewritten as DISPLAYFORM3 ). 
Since Lemma B.3 assumes to be the same across layers (otherwise it does not make sense to compare between layers), this forces DISPLAYFORM4 Further, if k (i+1) and k (i) have the property that k(x, y) = k(x,ȳ), wherex,ȳ denote the images of x, y under an embedding of R p into R q (p ≤ q) defined by the identity map onto a p-dimensional subspace of R q, then the condition DISPLAYFORM5 This lemma states that for a given kMLP, when it has been trained upto and including the i th hidden layer, the i + 1 th hidden layer can be initialized in such a way that the value of its loss function will be lower than or equal to that of the i th hidden layer after training. In particular, the actual hidden representation "converges" to the optimal represetation as training proceeds across layers. On the other hand, when comparing two kMLPs, this implies that the deeper kMLP will not perform worse in minimizing the loss function than its shallower counterpart. In deep learning literature, analogous to Lemma B.3 generally state that in the hypothesis class of a NN with more layers, there exists a hypothesis that approximates the target function nontrivially better than any hypothesis in that of another shallower network . Such an existence for kMLP can be easily deduced from the earlier bound on its Gaussian complexity (see Lemma B.2). However, these proofs of existence do not guarantee that such a hypothesis can always be found through learning in practice, whereas Lemma B.3 is constructive in this regard. Nevertheless, one should note that Lemma B.3 does not address the risk R = E. Instead, it serves as a handy that guarantees fast convergence of upper layers during training in practice. The following lemma states that if we are willing to settle with a more restricted learning paradigm, the necessary and sufficient condition that guarantees the optimality of a representation can be characterized and is simpler than that described in Lemma 4.3. The setup for this lemma is the same as that of Lemma 4.3 except that the assumption that the numbers of examples from the two classes are equal is not needed. Lemma B.4. Consider a learning paradigm that minimizesR l (f (l) ) + τ G N (F l,A) using represen- DISPLAYFORM0 is minimized over all linearly separable representations if and only if the representation F (S) satisfies DISPLAYFORM1 for all pairs of x +, x − from distinct classes in S. In general, for a continuously differentiable function f: R → R with derivative f and any a, b ∈ R, a < b, we have DISPLAYFORM0 This simple can be used to bound the Lipschitz constant of a continuously differentiable kernel. For example, for Gaussian kernel k: DISPLAYFORM1, we have ∂k(x, y)/∂y = 2(x − y)k(x, y)/σ 2. Hence for each fixed x ∈ X, k(x, y) is Lipschitz in y with Lipschitz constant bounded by sup y∈X 2(x − y)k(x, y)/σ 2. In practice, X is always compact and can be a rather small subspace of some Euclidean space after normalization of data, hence this would provide a reasonable approximation to the Lipschitz constant of Gaussian kernel. There are mainly two issues with classical kernel machines besides their usually high computational complexity. 
First, despite the fact that under mild conditions, they are capable of universal function approximation and that they enjoy a very solid mathematical foundation BID0, kernel machines are unable to learn multiple levels of distributed representations , yet learning representations of this nature is considered to be crucial for complicated artificial intelligence (AI) tasks such as computer vision, natural language processing, etc. . Second, in practice, performance of a kernel machine is usually highly dependent on the choice of kernel since it governs the quality of the accessible hypothesis class. But few rules or good heuristics exist for this topic due to its extremely task-dependent nature. Existing solutions such as MKL (; Gönen & Alpaydın, 2011) view the task of learning an ideal kernel for the given problem to be separate from the problem itself, necessitating either designing an ad hoc kernel or fitting an extra trainable model on a set of generic base kernels, complicating training.kMLP learns distributed, hierarchical representations because it inherits the architecture of MLP.To be specific, first, we see easily that the hidden activation of each layer, i.e., F (i) (x) ⊂ R di, is a distributed representation . Indeed, just like in an MLP, each layer of a kMLP consists of an array of identical computing units (kernel machines) that can be activated independently. Further, since each layer in a kMLP is built on top of the previous layer in exactly the same way as how the layers are composed in an MLP, the hidden representations are hierarchical .Second, kMLP naturally combines the problem of learning an ideal kernel for a given task and the problem of learning the parameters of its kernel machines to accomplish that task. To be specific, kMLP performs nonparametric kernel learning alongside learning to perform the given task. Indeed, for kMLP, to build the network one only needs generic kernels, but each layer F (i) can be viewed as a part of a kernel of the form DISPLAYFORM0 The fact that each F (i) is learnable makes this kernel "adaptive", mitigating to some extent any limitation of the fixed generic kernel k (i+1). The training of layer i makes this adaptive kernel optimal as a constituent part of layer i + 1 for the task the network was trained for. And it is always a valid kernel if the generic kernel k (i+1) is. Note that this interpretation has been given in a different context by and , we include it here only for completeness. We now compare a single-hidden-layer kMLP using simple, generic kernels with SVMs enhanced by MKL algorithms that used significantly more kernels to demonstrate the ability of kMLP to automatically learn task-specific kernels out of standard ones. The standard SVM and seven other SVMs enhanced by popular MKL methods were compared , including the classical convex MKL with kernels learned using the extended level method proposed in Eleven binary classification data sets that have been widely used in MKL literature were split evenly for training and test and were all normalized to zero mean and unit variance prior to training. 20 runs with identical settings but random weight initializations were repeated for each model. For each repetition, a new training-test split was selected randomly. For kMLP, all were achieved using a greedily-trained, one-hidden-layer model with the number of kernel machines ranging from 3 to 10 on the first layer for different data sets. The second layer was a single kernel machine. 
All kernel machines within one layer used the same Gaussian kernel, and the two kernels on the two layers differed only in kernel width σ. All hyperparameters were chosen via 5-fold cross-validation. As for the other models compared, for each data set, SVM used a Gaussian kernel. For the MKL algorithms, the base kernels contained Gaussian kernels with 10 different widths on all features and on each single feature and polynomial kernels of degree 1 to 3 on all features and on each single feature. For 2LMKL INF, one Gaussian kernel was added to the base kernels at each iteration. Each base kernel matrix was normalized to unit trace. For L p MKL, p was selected from {2, 3, 4}. For MKM, the degree parameter was chosen from {0, 1, 2}. All hyperparameters were selected via 5-fold cross-validation. From TAB4, kMLP compares favorably with other models, which validates our claim that kMLP learns its own kernels nonparametrically hence can work well even without excessive kernel parameterization. Performance difference among models can be small for some data sets, which is expected since they are all rather small in size and not too challenging. Nevertheless, it is worth noting that only 2 Gaussian kernels were used for kMLP, whereas all other models except for SVM used significantly more kernels. Proof of Lemma 4.3. Throughout this proof we shall drop the layer index l for brevity. Given that the representation satisfies Eq. 1, the idea is to first collect enough information about the returned f = (w, b) such that we can computeR(f) + τ w H and then show that for any other F (S) satisfying the condition in the lemma, suppose the learning paradigm returns f = (w, b) ∈ F A, thenR(f) + τ w H ≥R(f) + τ w H. We now start the formal proof. First, note that in the optimal representation, i.e., an F (S) such that Eq. 1 holds, it is easy to see that φ(F (x −)) − φ(F (x +)) H is maximized over all representations for all x −, x +.Moreover, note that given the representation is optimal, we have φ(F (x)) = φ(F (x)) if y = y and φ(F (x)) = φ(F (x)) if y = y: Indeed, by Cauchy-Schwarz inequality, for all x, x ∈ S, k(F (x), F (x)) = φ(F (x)), φ(F (x)) H ≤ φ(F (x)) H φ(F (x)) H and the equality holds if and only if φ(F (x)) = pφ(F (x)) for some real constant p. Using the assumption on k, namely, that φ(F (x)) H = √ c for all F (x), we further conclude that the equality holds if and only if p = 1. And the second half of the claim follows simply from c > a. Thus, all examples from the + and − class can be viewed as one vector φ(F (x +)) and φ(F (x −)), respectively. The returned hyperplane f cannot pass both F (x +) and F (x −), i.e., f (F (x +)) = 0 and f (F (x −)) = 0 cannot happen simultaneously since if so, first subtract b, rotate while keeping w H unchanged and add some suitable b to get a new f such that f (F (x −)) < 0 and f (F (x +)) > 0, then it is easy to see thatR(f) + τ w H <R(f) + τ w H. But by construction of the learning paradigm, this is not possible. Now suppose the learning paradigm returns an f such that DISPLAYFORM0 First note that for an arbitrary θ F,w, ζ is less than or equal to 2 since one can always adjust b such that y + f (F (x +)) = y − f (F (x −)) without changing ζ and hence having a larger ζ will not further reduceR(f), which is 0 when ζ = 2, but will in a larger w H according to Eq. 4. On the other hand, θ F,w must be 0 since this gives the largest ζ with the smallest w H. 
Indeed, if the returned f does not satisfy θ F,w = 0, one could always shift, rotate while keeping w H fixed and then shift the hyperplane back to produce another f with θ F,w = 0 and this f in a larger ζ if ζ < 2 or the same ζ if ζ = 2 but a smaller w H by rescaling. Hencê DISPLAYFORM1 Together with what we have shown earlier, we conclude that 2 ≥ ζ > 0. Then for some t ∈ R, we haveR DISPLAYFORM2 First note that we can choose t freely while keeping w fixed by changing b. If κ = 1/2, we havê DISPLAYFORM3 Evidently, the last two cases both inR(f DISPLAYFORM4 If κ > 1/2,R(f) decreases in t hence t must be 1 for f, which impliesR(f) = (1 − κ)(2 − ζ). Similarly, if κ < 1/2, t = ζ − 1 and henceR(f) = κ(2 − ζ). DISPLAYFORM5, which increases in t and hence t = 1 and DISPLAYFORM6, this combination of κ and t contradicts the optimality assumption of f. DISPLAYFORM7, where the second equality is becauseR(f) decreases in t. Again, κ > 1/2 leads to a contradiction. Combining all cases, we havê DISPLAYFORM8 which, by the assumption on τ, strictly decreases in ζ over. Hence the returned f must satisfy ζ = 2, which impliesR(f) = 0 and we havê DISPLAYFORM9 Now, for any other F (S), suppose the learning paradigm returns f. Let x w +, x w − be the pair of examples with the largest f (F (x +)) − f (F (x −)). We have DISPLAYFORM10 where we have used the assumption that there exists DISPLAYFORM11 This proves the desired . Lemma C.1. Suppose f 1 ∈ F 1,..., f d ∈ F d are elements from sets of real-valued functions defined on all of X 1, X 2,..., X m, where FIG4,..., f d (x 1), f 1 (x 2),..., f d (x m), y), where ω: R md × Y → R + ∪ {0} is bounded and L-Lipschitz for each y ∈ Y with respect to the Euclidean metric on R md. Let ω • F = {ω • f : f ∈ F}. Denote the Gaussian complexity of F i on X j as G j N (F i), if the F i are closed under negation, i.e., for all i, if f ∈ F i, then −f ∈ F i, we have DISPLAYFORM12 DISPLAYFORM13 In particular, for all j, if the x j n upon which the Gaussian complexities of the F i are evaluated are sets of i.i.d. random elements with the same distribution, we have G DISPLAYFORM14 This lemma is a generalization of a on the Gaussian complexity of Lipschitz functions on R k from . And the technique used in the following proof is also adapted from there. Proof. For the sake of brevity, we prove the case where m = 2. The general case uses exactly the same technique except that the notations would be more cumbersome. Let F be indexed by A. Without loss of generality, assume |A| < ∞. Define DISPLAYFORM15 ω(f α,1 (x n),..., f α,d (x n), y n )g n; DISPLAYFORM16 (f α,i (x n)g n,i + f α,i (x n)g N +n,i ), where α ∈ A, the (x n, x n) are a sample of size N from X 1 × X 2 and g 1,..., g N, g 1,1,..., g 2N,d are i.i.d. standard normal random variables. Let arbitrary α, β ∈ A be given, define X α − X β 2 2 = E(X α − X β) 2, where the expectation is taken over the g n. Define Y α − Y β 2 2 similarly and we have DISPLAYFORM17 ω(f α,1 (x n),..., f α,d (x n), y n ) − ω(f β,1 (x n),..., f β,d (x n), y n ) DISPLAYFORM18. By Slepian's lemma and since the F i are closed under negation, DISPLAYFORM19 Taking the expectation of the x n and x n on both sides, we have DISPLAYFORM20 Proof of Lemma 4.4. Normalize l−1 to by dividing 2 max(|c|, |a|). 
Then the loss function becomes l−1 F (l−1), (x m, y m), (x n, y n) = 1 2 max(|c|, |a|) k (l) F (l−1) (x m), F (l−1) (x n) − (G) mn.For each fixed (G) mn, l−1 F (l−1), (x m, y m), (x n, y n) − l−1 F (l−1), (x m, y m), (x n, y n) DISPLAYFORM21 2 max(|c|, |a|) DISPLAYFORM22 max(|c|, |a|) DISPLAYFORM23 Hence l−1 is L (l) / max(|c|, |a|)-Lipschitz in (F (l−1) (x m), F (l−1) (x n)) with respect to the Euclidean metric on R Proof of Lemma 4.6. First, it is trivial that the so-defined s metric is indeed a metric. In particular, it satisfies the triangle inequality. For i = 2,..., l, Proof of Lemma B.1. Since Ω and F 2 are both closed under negation, we havê DISPLAYFORM24 which proves that, as a function of F, A achieves its minimum if and only if F maximizes φ(F (x +)) − φ(F (x −)) H. Since arg max where we have used the assumption on k, namely, that k(x, x) = φ(x), φ(x) H = φ(x) 2 H = c, for all x. It immediately follows that any minimizer F of A must minimize k(F (x +), F (x −)) for all pairs of examples from opposite classes. This proves the desired .
H1GLm2R9Km
We combine the kernel method with connectionist models and show that the resulting deep architectures can be trained layer-wise and have more transparent learning dynamics.
Many notions of fairness may be expressed as linear constraints, and the resulting constrained objective is often optimized by transforming the problem into its Lagrangian dual with additive linear penalties. In non-convex settings, the resulting problem may be difficult to solve as the Lagrangian is not guaranteed to have a deterministic saddle-point equilibrium. In this paper, we propose to modify the linear penalties to second-order ones, and we argue that this results in a more practical training procedure in non-convex, large-data settings. For one, the use of second-order penalties allows training the penalized objective with a fixed value of the penalty coefficient, thus avoiding the instability and potential lack of convergence associated with two-player min-max games. Secondly, we derive a method for efficiently computing the gradients associated with the second-order penalties in stochastic mini-batch settings. Our resulting algorithm performs well empirically, learning an appropriately fair classifier on a number of standard benchmarks. Machine learning systems are becoming increasingly prevalent in real-world applications, consequently affecting the decisions that determine a person's life and future, such as playing a role in parole conditions BID1, loan applications BID12, and airport screening BID13. Recent work has shown that such machine learning models often have biases which can unfairly disadvantage certain groups. For example, learned word embeddings exhibit gender-specific biases in what should be gender-neutral words BID4. In another case, a machine learning model's predictions regarding convict recidivism were found to be unfairly biased against African-Americans (BID1). While it may seem at first that simply ignoring the features corresponding to these protected traits during training can alleviate this, previous work BID18 has shown that enforcing such blindness is largely ineffective due to redundant encodings in the data. In other words, while the learning algorithm used may not be biased, the data can be inherently biased in complex ways, and this leads to models which perpetuate these undesirable biases. Research into the challenging problem of machine learning fairness is therefore of great interest. To better specify this problem, previous work has elaborated on precise notions of fairness, such as demographic parity BID8, equal opportunity BID14, etc. These notions can often be expressed mathematically as a linear constraint on the output of a machine learning model, taken in expectation over the entire data distribution. Accordingly, a number of recent works have proposed to incorporate fairness during training by expressing the objective as a constrained optimization problem BID24 BID11. If the original objective is convex, the addition of linear constraints results in a problem which may be readily solved by Lagrangian methods. However, modern machine learning models are often not in a convex form. Indeed, the success of deep neural networks over the past decade makes it clear that the most well-performing models are often highly non-convex and optimized via stochastic gradient methods over large amounts of data BID19 BID20.
It is unfortunate that much of the existing work on fairness in machine learning has provided methods which are either focused on the convex, small-dataset setting BID24 BID11, or otherwise require sophisticated and complex training methods BID6. In this paper, we present a general method for imposing fairness conditions during training that is practical in non-convex, large-data settings. We take inspiration from the standard Lagrangian method of augmenting the original loss with linear penalties. In non-convex settings, this dual objective must be optimized with respect to both model parameters and penalty coefficients concurrently, and in general is not guaranteed to converge to a deterministic equilibrium. We propose to re-express the linear penalties associated with common fairness criteria as second-order penalties. Second-order penalties are especially beneficial in non-convex settings, as they may be optimized using a fixed non-negative value for the penalty coefficient λ. When λ → 0 the optimization corresponds to an unconstrained objective, while as λ → ∞, the problem approaches that of a hard equality constraint. This allows us to avoid sophisticated optimization methods for potentially non-convergent two-player games. Instead, we only need to choose a fixed value for the penalty coefficient, which may be easily determined via standard hyperparameter optimization methods, such as cross-validation. As an additional benefit, by choosing the penalty coefficient on a separate validation set, we can improve generalization performance. Second-order penalties, however, potentially introduce a new problem: by squaring an expectation over the entire data distribution, the resulting penalized loss is no longer an expectation of loss functions on individual data points sampled from the distribution, and is therefore not readily approachable by stochastic gradient methods. We solve this by presenting an equivalent form of the second-order penalty as an expectation of individual loss functions on pairs of independently sampled data points. Our resulting algorithm is thus not only more practical to optimize in non-convex settings, using a fixed value for the penalty coefficient, but is also easily optimized in large-data settings via standard stochastic gradient descent. We evaluate the performance of our algorithm in a number of different settings. In each setting, our algorithm is able to adequately optimize the desired constraints, such as encouraging feature orthonormality in deep image autoencoders and imposing predictive fairness across protected data groups. Consider a data domain X and a probability measure µ constituting a data distribution D = (X, µ). Let H be a set of real-valued functions f: X → R endowed with the inner product ⟨f1, f2⟩ = E_{x∼µ}[f1(x) f2(x)]. A machine learning model is a function d ∈ H. Given a data point x ∈ X it provides a score d(x). When the machine learning model is a predictive binary classifier, the range of d is [0, 1], and a score d(x) corresponds to a machine learning model which on input x returns 1 with probability d(x) and returns 0 with probability 1 − d(x). Many notions of fairness may be written as linear constraints on d. That is, they may be expressed as ⟨d, c⟩ ∈ [C − ε, C + ε], where c ∈ H is some other fixed function, C ∈ R, and ε ∈ R+. We elaborate on a few popular notions below.
For simplicity, we restrict the text to refer to a domain X with a single protected group G ⊂ X whose predictions d(x) we desire to be fair with respect to the distribution of predictions on the entire domain X. Nevertheless, all of our results apply to the fully general multi-group setting. We assume access to an indicator function g(x) = 1[x ∈ G], and thus the proportion of data in G is Z_G = ⟨1, g⟩. We let y: X → {0, 1} be the true label function. Thus, the proportion of examples which are positive is P_X = ⟨1, y⟩; the proportion which are positive and in G is P_G = ⟨g, y⟩. • Demographic parity BID8: A fair classifier d should make positive predictions at the same rate on each group. This constraint may be expressed as ⟨d, c⟩ = 0, where c(x) = g(x)/Z_G − 1. • Equal opportunity BID14: A fair classifier d should have equal true positive rates on each group. This constraint may be expressed as ⟨d, c⟩ = 0, where DISPLAYFORM1. • Equalized odds BID14: A fair classifier d should have equal true positive and false positive rates on each group. In addition to the linear constraint associated with equal opportunity, this notion applies an additional constraint ⟨d, b⟩ = 0, where DISPLAYFORM2. • Disparate impact BID10: A fair classifier d should have a rate of positive prediction on a group at least p% as high as the rate of positive prediction on another group. Traditionally, p = 80. Unlike the other notions of fairness, disparate impact may not be expressed as a linear constraint. Nevertheless, previous work BID24 has suggested approximating it as such; i.e., DISPLAYFORM3. Any linear constraint ⟨d, c⟩ ∈ [C − ε, C + ε] may be equivalently written as a quadratic constraint: (⟨d, c⟩ − C)² ≤ ε². We first discuss the known approaches based on the penalty-form Lagrangian with linear constraints and list their disadvantages. We then introduce the optimization based on our second-order penalties and show how it avoids many of the issues associated with linear constraints. Suppose that we wish to minimize a loss DISPLAYFORM0. The Lagrangian is formulated as DISPLAYFORM1. Then, the original constrained optimization problem (the primal problem) would be DISPLAYFORM2. If our loss function is convex, then solving the dual problem would lead to the solution for the original problem; i.e., there is no duality gap. Unfortunately, this is not so in the non-convex case. Not only can the solution to the dual problem fail to yield an optimal feasible solution, but there may not even exist a saddle point in the Lagrangian that we can converge to. Instead, in order to solve such a constrained optimization problem, one must consider it as a two-player game where one player chooses a classifier and the other player chooses a Lagrange multiplier. The final solution will then be a mixed equilibrium; in other words, it yields a randomized classifier. This line of work has received recent attention; e.g. BID3; BID0; BID6. To summarize: • The Lagrangian, when optimizing jointly over model parameters and multipliers, may not converge to any saddle point, and even if it converges, may not lead to the right solution. This requires us to resort to sophisticated procedures with many parameters and hyperparameters, which may be difficult to train. • The Lagrangian approach with non-convex objectives leads to randomized classifiers, which may not be desirable in practice.
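To make the constraint functions concrete, the following is a minimal sketch (hypothetical names, not the paper's code) of the demographic parity constraint above: with c(x) = g(x)/Z_G − 1, the inner product ⟨d, c⟩ equals the positive-prediction rate on the protected group minus the overall rate, and the linear constraint |⟨d, c⟩| ≤ ε is equivalent to the quadratic constraint ⟨d, c⟩² ≤ ε² used by the second-order penalty. The synthetic scores and the tolerance 0.05 are only for illustration.

```python
# A sketch (hypothetical names, not from the paper) of the demographic-parity
# constraint: c(x) = g(x)/Z_G - 1, so that <d, c> is the positive-prediction
# rate on the protected group minus the overall positive-prediction rate.
import numpy as np

def demographic_parity_gap(scores, group):
    """scores: d(x) in [0, 1]; group: indicator g(x) in {0, 1}; returns empirical <d, c>."""
    z_g = group.mean()                      # Z_G = <1, g>
    c = group / z_g - 1.0                   # c(x) = g(x)/Z_G - 1
    return float((scores * c).mean())       # <d, c> estimated on the sample

rng = np.random.default_rng(0)
g = rng.integers(0, 2, size=10_000)
d = np.clip(0.4 + 0.1 * g + 0.05 * rng.normal(size=10_000), 0.0, 1.0)  # biased scores
gap = demographic_parity_gap(d, g)
print(gap, gap ** 2 <= 0.05 ** 2)           # linear gap and its quadratic form
```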
Although work has been done to reduce the randomization BID6, it is in general not possible to reduce the solution to a deterministic one while it remains an equilibrium. • Another weakness of the Lagrangian approach is that the constraints must be relaxed or approximated (e.g. via a hinge relaxation BID11) in order to make them convex and differentiable. The introduced slack necessitates hyperparameter tuning to encourage the optimization on the approximated constraints to yield a classifier satisfying the original constraints. We propose to optimize the following objective: DISPLAYFORM0, where λ ≥ 0 is a hyperparameter which decides the fairness-accuracy tradeoff. Note that λ → 0 corresponds to the unconstrained objective, while λ → ∞ corresponds to an optimization with hard constraints (i.e., the penalty term is forced to zero). Any fixed value of λ > 0 will yield a solution between these two extremes. Since the penalty coefficient λ is fixed during training, it may be treated as an additional hyperparameter. Standard hyperparameter optimization methods may be used to choose λ based on validation so that the final solution gives the desired fairness-accuracy trade-off. While we are not optimizing for fairness metrics such as demographic parity or equalized odds directly, we will show empirically that our second-order penalties provide a reasonable proxy for many of the popular metrics. Additionally, while popular methods such as the aforementioned Lagrangian with linear penalties appear to directly optimize for the fairness metrics, in practice they require some sort of relaxation of these constraints to make the optimization feasible; thus, in essence, such methods also optimize a proxy to the fairness metrics rather than the actual metrics desired, and accordingly also require some amount of hyperparameter tuning. It can now also be seen that this alternative way of solving for fairness-constrained classifiers overcomes many of the drawbacks listed earlier associated with methods based on the Lagrangian: for any fixed choice of λ, there will exist a solution to the optimization. Moreover, this solution will be deterministic. Next, by tuning λ to directly satisfy the desired fairness metrics, we decouple much of the inherent difficulty of hyperparameter tuning in the Lagrangian approaches, which rely on the procedure itself to allow a certain amount of slack. Moreover, this decoupling encourages better generalization performance, as there is less chance of overfitting to the training set compared to approaches which solve for the model parameters and Lagrange multipliers simultaneously on the same dataset. At face value, it appears that the introduction of second-order penalties may complicate stochastic optimization of the objective. The quadratic penalty is a square of an expectation over the dataset. It is not possible to express such a penalty as an expectation of individual loss functions over the dataset. In this section, we show that despite this obstacle, it is in fact possible to express the second-order penalty as an expectation of individual loss functions over pairs of data points sampled from the dataset. This derivation is crucial for most modern machine learning applications, in which the data set is exceedingly large, at times presenting itself in an online form, and must be optimized using stochastic mini-batches.
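The display for the penalized objective described above is elided in the text, so the following is only a plausible reconstruction for a single constraint ⟨d, c⟩ = C; the placement of the coefficient, the loss symbol ℓ, and the extension to multiple constraints are all assumptions rather than the paper's literal equation.

```latex
% Assumed form of the fixed-coefficient second-order penalty objective
% (single constraint; the paper's exact display is not recoverable here):
\min_{d \in \mathcal{H}} \;\; \ell(d) \;+\; \lambda \big(\langle d, c \rangle - C\big)^{2},
\qquad \lambda \ge 0.
% With several constraints (c_i, C_i), a natural extension is
% \min_d \ell(d) + \lambda \sum_i (\langle d, c_i \rangle - C_i)^2 .
```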
The second-order penalty is of the form DISPLAYFORM0. Since µ is a probability measure, we may re-write the integrals as DISPLAYFORM1. We may express each squared integral as a double integral: DISPLAYFORM2. Finally, we may express the sum of these double integrals as a double integral of a sum: DISPLAYFORM3. The gradients of this double integral with respect to the parameters of d may be approximated via Monte Carlo estimation, only requiring access to two independent samples w, x from D. Algorithm 1 provides pseudocode for our fairness-aware training algorithm. The machine learning model is parameterized as d_θ, for a parameter vector θ. We also assume the training loss is an expectation of individual loss functions: DISPLAYFORM0. Algorithm 1 (Fairness training with second-order penalties). Inputs: DISPLAYFORM1, model parameterization d_θ, learning rate η, number of training steps N, batch size B, hyperparameter λ. DISPLAYFORM2. For a fixed penalty coefficient λ, our algorithm performs stochastic gradient descent by sampling batches. Each batch is used to compute an unbiased estimate of the gradient of the training loss as well as an unbiased estimate of the gradient of the second-order penalty. The optimal choice of λ is determined by standard hyperparameter tuning. Our work builds on the constrained optimization view of fairness in machine learning. This view was first introduced in BID24 and later extended in BID11. These works have focused on the convex setting, where optimality and convergence can be guaranteed. Although less is known in the non-convex case, there is work which frames the constrained optimization problem as a two-player game BID6. The resulting classifier in this case is a randomized distribution over classifiers. In contrast, our work proposes the use of second-order penalties in a general setting. This allows one to avoid the two-player game formulation in the non-convex case, and accordingly the resulting classifier is deterministic. We are not the first to propose training for fairness in this manner: BID7 studies this for kernel methods and BID15 gives results for training a linear model with such penalties. In contrast, our methods are applicable to highly non-convex models, and previous works do not address how to optimize for these penalties stochastically. Another approach for non-convex settings is the use of adversarial training BID9. In this setting, a predictive model is trained concurrently with an adversary, whose objective is to recover sensitive or protected attributes from the model outputs. The model's loss is then augmented with a penalty based on the adversary's success. This is thus another form of a two-player game, and hence also suffers from convergence issues. Our approach avoids these issues by allowing the use of a fixed penalty coefficient in training. Moreover, the form of our constraints may be seen as equivalent to an adversarial formulation in which the adversary is parameterized by a linear model. The quadratic penalties we propose are similar to previous notions of orthogonality in machine learning, which is generally useful when one desires diversity of features BID21 BID16. The specific penalties we impose may be interpreted as penalizing the Frobenius norm of the Gram matrix of model outputs, as mentioned in BID23. Many of these methods propose optimization schemes which are not amenable to stochastic mini-batching. In contrast, one of the key contributions of our work is showing that a second-order penalty may be optimized stochastically.
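Returning to the per-batch update that Algorithm 1 describes, the following is a minimal sketch (not the authors' implementation; the function names, the demographic-parity instantiation of c, the binary cross-entropy training loss, and splitting each mini-batch into two halves are all assumptions). The key point it illustrates is the double-sampling trick: the squared expectation in the penalty is estimated without bias by multiplying two estimates of ⟨d, c⟩ built from independent halves of the batch.

```python
# A minimal sketch (not the authors' code) of one step of Algorithm 1,
# assuming a single demographic-parity constraint <d, c> = 0 with
# c(x) = g(x)/Z_G - 1, and the penalized objective  loss + lam * <d, c>^2.
import torch

def fairness_penalized_step(model, optimizer, x, y, group, z_g, lam):
    """One SGD step on  training_loss(d_theta) + lam * (E[d(x) c(x)])^2."""
    logits = model(x)                          # raw model scores for the batch
    d = torch.sigmoid(logits).squeeze(-1)      # d(x) in [0, 1]
    base_loss = torch.nn.functional.binary_cross_entropy(d, y.float())

    c = group.float() / z_g - 1.0              # c(x) for demographic parity
    half = x.shape[0] // 2                     # two independent sub-batches
    inner_a = (d[:half] * c[:half]).mean()     # unbiased estimate of <d, c>
    inner_b = (d[half:] * c[half:]).mean()     # second, independent estimate
    penalty = inner_a * inner_b                # unbiased estimate of <d, c>^2

    loss = base_loss + lam * penalty
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the two halves are independent draws from D, the expectation of the product (and of its gradient) equals the square of the expectation, which is exactly the property the paper's double-integral derivation establishes.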
Our result hinges on a standard calculus identity relating a product of integrals to a double integral. Similar techniques have been used previously in the context of reinforcement learning. Specifically, double-sampling is a known technique for unbiased minimization of the Bellman error, also a square of an expectation BID2. We use the Iris dataset BID17 and train a simple model which is a network with a single intermediate 2-node layer. We show the effects of adding a penalty to encourage orthogonality on this layer. That is, we wish for the two learned features to be decorrelated over the dataset. We show the results in Figure 1 as the weight on the penalty increases. We further increase the difficulty of the task by using small stochastic batches of size 4. While this is only a toy example, it clearly shows the possibility of stochastically training with these second-order penalties. Figure 1: The learned 2d representation of the datapoints (colored by label) as the penalty weight changes. We see that indeed, the two features become decorrelated as we increase the weight. We now move to applying our technique to a highly non-convex neural network model. We use a convolutional autoencoder applied to MNIST images, where the encoder consists of 3 convolutional and max-pooling layers and the decoder consists of 4 convolutional and 3 upsampling layers. The encoded representation is 128-dimensional. We train this autoencoder with added penalties that encourage orthonormality. That is, in addition to orthogonality (as in the previous simulation), we also encourage the feature columns to be of unit norm. For training, we used the Adam optimizer and a batch size of 128. We show in FIG0 that indeed, we can stochastically optimize with the quadratic penalty and that feature correlation on the test set decreases as we increase the penalty coefficient. We see that we can dramatically decrease the correlation of features while suffering small sacrifices in reconstruction error. 6 EXPERIMENTS: FAIRNESS 6.1 DATASETS Adult (48842 examples). Each datapoint corresponds to an individual and the task is to predict whether the person's income is more than 50k per year. We use 2 protected groups based on gender, and preprocess the dataset and use a linear model, consistent with previous works, e.g. BID24; BID11. The 2 fairness constraints here are the equal opportunity constraints for the 2 protected classes with slack 0.05; that is, the positive prediction rate on the positively-labeled examples for each protected class must be at least 95% of the overall positive prediction rate over all positively-labeled examples. Bank Marketing BID17 (45211 examples). The data is based on a direct marketing campaign of a banking institution. The task is to predict whether someone will subscribe to a bank product. We use age as a protected feature and we have 5 protected groups based on age quantiles; the 5 fairness constraints are demographic parity. Communities and Crime BID17 (Lichman et al.) (1,994 examples). Each datapoint represents a community and the task is to predict whether a community has a high (above the 70-th percentile) or low crime rate. We preprocess the data and use a linear model, consistent with previous works, e.g. BID5, and form the protected group based on race in the same way as done in BID5. We use four race features as real-valued protected attributes corresponding to White, Black, Asian and Hispanic. We threshold each at the median to form 8 protected groups.
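The orthonormality penalty used in the Iris and MNIST-autoencoder experiments can be estimated with the same pairwise trick; the following is a sketch under assumed names (the exact penalty form, the choice of pushing the feature Gram matrix toward the identity, and the half-batch splitting are not taken from the paper).

```python
# A sketch (assumed names and form) of a stochastically estimated orthonormality
# penalty on a layer's features: push E[phi(x) phi(x)^T] toward the identity,
# i.e. decorrelated, unit-norm feature columns.  The squared expectations are
# estimated without bias via two independent halves of the mini-batch.
import torch

def orthonormality_penalty(features_a, features_b):
    """features_a, features_b: [B/2, K] activations from two independent sub-batches."""
    k = features_a.shape[1]
    gram_a = features_a.t() @ features_a / features_a.shape[0]   # estimate 1 of E[phi phi^T]
    gram_b = features_b.t() @ features_b / features_b.shape[0]   # estimate 2 of E[phi phi^T]
    eye = torch.eye(k, device=features_a.device)
    # each (E[phi_i phi_j] - delta_ij)^2 is estimated as a product of two independent estimates
    return ((gram_a - eye) * (gram_b - eye)).sum()
```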
There is one fairness constraint for each of the 8 protected groups, which constrains the group's false positive rate to be at most the overall false positive rate. ProPublica's COMPAS recidivism data (7,918 examples). The task is to predict recidivism based on criminal history, jail and prison time, demographics, and risk scores. We preprocess this dataset in a similar way to the Adult dataset, and the protected groups are two race-based (Black or White) and two gender-based (Male or Female). We use a 2-layer neural network with ReLU activations and 10 hidden units. The 4 fairness constraints here are the equal opportunity constraints for the 4 protected classes, each bounded by at most 0.05. That is, for any protected class, we wish to not predict recidivism more than 5% above the overall predicted recidivism rate, restricted to examples which indeed had recidivism within two years. Deterministic Lagrangian: This method BID11 jointly trains the Lagrangian in both model parameters and Lagrange multipliers, and uses a hinge approximation of the constraints to make the Lagrangian differentiable in its input. We then return the "best" iterate selected using a heuristic introduced by BID6, which finds a reasonable accuracy/fairness trade-off. Stochastic Lagrangian: This method BID6 returns a stochastic solution to the two-player game with the Lagrangian of the previous method as the pay-off function. This solution is based on approximating a Nash equilibrium of this two-player game. We optimize over hyperparameters for our method in the following way: we perform a grid search over the weight of the orthogonality penalty as well as the fixed learning rate for the model trained using the Adam optimizer. Then, to choose the best model, we find the highest-accuracy model on the validation set which satisfies the constraints on the validation set. A final evaluation is performed on a fully unseen test set. The Lagrangian baselines take in the desired slack; their hyperparameter search is over the fixed learning rate, and the model is chosen on a validation set using the heuristic of BID6, which selects the best accuracy/constraint trade-off. We see that our method attains far lower constraint violation compared to training without constraints, with almost no trade-off in accuracy. The Lagrangian baselines give solutions that satisfy the constraints but with a considerable trade-off in accuracy. We see that our method attains the best testing error compared to the baselines while satisfying the fairness constraints. We show both the violations and overall false positive rates (FPR). The violation is the maximum difference between the FPR for any given protected group and the overall FPR. For example, in the first row, we see that there exists a protected group with a FPR of 0.3482 + 0.1108 = 0.459 under the model. While our method does not attain the lowest violation, it provides a reasonable trade-off. Interestingly, our method attains the lowest overall (i.e. average) FPR across the entire dataset out of all the methods. A low violation at the cost of a high overall FPR may be undesirable, because ensuring fairness in FPR might make everyone worse off in terms of FPR. We see that our method is able to learn a classifier that is significantly closer to satisfying the fairness constraints than the baselines while trading off a reasonable amount of accuracy.
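The model-selection rule described above (keep the highest-accuracy model whose constraints are satisfied on the validation set) is simple enough to sketch directly; the helper name, the dictionary keys, and the fallback when no run is feasible are assumptions.

```python
# A sketch (hypothetical names) of the validation-based selection rule above:
# grid-search over (penalty weight, learning rate), then keep the highest-accuracy
# model whose fairness constraints are satisfied on a held-out validation set.
def select_model(results, slack=0.05):
    """results: list of dicts with 'val_accuracy' and 'val_violation' per trained model."""
    feasible = [r for r in results if r["val_violation"] <= slack]
    if not feasible:                       # assumed fallback: the least-violating model
        return min(results, key=lambda r: r["val_violation"])
    return max(feasible, key=lambda r: r["val_accuracy"])
```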
A PARETO CURVES Figure 3: For two of our datasets, we show the Pareto frontiers for the testing error vs. constraint violation trade-off over the runs of our method, obtained from the grid search over the single constant learning rate and the orthogonality penalty term.
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
Bke0rjR5F7
We propose a method to stochastically optimize second-order penalties and show how this may apply to training fairness-aware classifiers.
Training methods for deep networks are primarily variants on stochastic gradient descent. Techniques that use (approximate) second-order information are rarely used because of the computational cost and noise associated with those approaches in deep learning contexts. However, in this paper, we show how feedforward deep networks exhibit a low-rank derivative structure. This low-rank structure makes it possible to use second-order information without needing approximations and without incurring a significantly greater computational cost than gradient descent. To demonstrate this capability, we implement Cubic Regularization (CR) on a feedforward deep network with stochastic gradient descent and two of its variants. There, we use CR to calculate learning rates on a per-iteration basis while training on the MNIST and CIFAR-10 datasets. CR proved particularly successful in escaping plateau regions of the objective function. We also found that this approach requires less problem-specific information (e.g. an optimal initial learning rate) than other first-order methods in order to perform well. Gradient-based optimization methods use derivative information to determine intelligent search directions when minimizing a continuous objective function. The steepest descent method is the most basic of these optimization techniques, but it is known to converge very slowly in ill-conditioned systems. Even outside of these cases, it still only has a linear rate of convergence. Newton's method is a more sophisticated approach, one that uses second-order derivative information, which allows the optimizer to model the error surface more accurately and thus take more efficient update steps. When it converges, it does so quadratically, but Newton's method also has limitations of its own. Firstly, it does not scale well: it can be very expensive to calculate, store, and invert the objective function Hessian. Secondly, the method may fail if the Hessian is indefinite or singular. A variety of methods have been developed to combine the strengths of each approach while avoiding their weaknesses. The conjugate gradient method, for example, uses only first-order information but uses the history of past steps taken to produce a better convergence rate than steepest descent. Quasi-Newton methods, on the other hand, approximate the Hessian (or its inverse) using first-order information and may enforce positive-definiteness on the approximation. Other approaches like trust region methods use second-order information without requiring convexity. For further information about gradient-based optimization, see BID15. Deep learning (DL) provides a set of problems that can be tackled with gradient-based optimization methods, but it has a number of unique features and challenges. Firstly, DL problems can be extremely large, and storing the Hessian, or even a full matrix approximation thereto, is not feasible for such problems. Secondly, DL problems are often highly nonconvex. Thirdly, training deep networks via mini-batch sampling results in a stochastic optimization problem. Even if the necessary expectations can be calculated (in an unbiased way), the variance associated with the batch sample calculations produces noise, and this noise can make it more difficult to perform the optimization. Finally, deep networks consist of the composition of analytic functions whose forms are known. As such, we can calculate derivative information analytically via back-propagation (i.e. the chain rule).
These special characteristics of DL have motivated researchers to develop training methods specifically designed to overcome the challenges of training a deep neural network. One such approach is layer-wise pretraining BID1, where pretraining a neural network layer-by-layer encourages the weights to initialize close to an optimal minimum. Transfer learning works by a similar mechanism, relying on knowledge gained through previous tasks to encourage effective training on a novel task. Outside of pretraining, a class of optimization algorithms has been specifically designed for training deep networks. The Adam, Adagrad, and Adamax set of algorithms provides examples of using history-dependent learning rate adjustment BID6. Similarly, Nesterov momentum provides a method for leveraging history dependence in stochastic gradient descent BID17. One could possibly argue that these methods implicitly leverage second-order information via their history dependence, but the stochastic nature of mini-batching prevents this from becoming explicit. Some researchers have sought to use second-order information explicitly to improve the training process. Most of these methods have used an approximation to the Hessian. For example, the L-BFGS method can estimate the Hessian (or its inverse) in a way that is feasible with respect to memory requirements; however, the noise associated with the sampling techniques can either overwhelm the estimation or require special modifications to the L-BFGS method to prevent it from diverging BID3. There have been two primary ways to deal with this: subsampling BID3 BID13 and mini-batch reuse BID16 BID12. Subsampling involves updating the Hessian approximation every L iterations rather than every iteration, as would normally be done. Mini-batch reuse consists of using the same mini-batch on subsequent iterations when calculating the difference in gradients between those two iterations. These approximate second-order methods typically have a computational cost that is higher than, though on the same order as, gradient descent, and that cost can be further reduced by using a smaller mini-batch for the Hessian approximation calculations than for the gradient calculation BID2. There is also the question of bias: it is possible to produce unbiased low-rank Hessian approximations BID11, but if the Hessian is indefinite, then quasi-Newton methods will prefer biased estimates, namely ones that are positive definite. Other work has foregone these kinds of Hessian approximations in favor of using finite differences BID10. In this paper, we prove, by construction, that the first and second derivatives of feedforward deep learning networks exhibit a low-rank, outer product structure. This structure allows us to use and manipulate second-order derivative information, without requiring approximation, in a computationally feasible way. As an application of this low-rank structure, we implement Cubic Regularization (CR) to exploit Hessian information in calculating learning rates while training a feedforward deep network. Finally, we show that calculating learning rates in this fashion can improve existing training methods' ability to exit plateau regions during the training process. Second-order derivatives are not widely used in DL, and where they are used, they are typically estimated. These derivatives can be calculated analytically, but this is not often done because of the scalability constraints described in Section 1.1.
If we write out the first and second derivatives, though, we can see that they have a low-rank structure to them: an outer product structure, in fact. When a matrix has low rank (or less than full rank), it means that the information contained in that matrix (or the operations performed by that matrix) can be fully represented without needing to know every entry of that matrix. An outer product structure is a special case of this, where an m × n matrix A can be fully represented by two vectors: A = uv^T. We can then calculate, store, and use second-order derivatives exactly, in an efficient manner, by only dealing with the components needed to represent the full Hessians rather than dealing with those Hessians themselves. Doing this involves some extra calculations, but the storage costs are comparable to those of gradient calculations. In this section, we will illustrate the low-rank structure for a feedforward network, of arbitrary depth and layer widths, consisting of ReLUs in the hidden layers and a softmax at the output layer. A feedforward network with arbitrary activation functions has somewhat more complicated derivative formulae, but those derivatives still exhibit a low-rank structure. That structure also does not depend on the form of the objective function or whether a softmax is used, and it is present for convolutional and recurrent layers as well. The complete derivations for these cases are given in Appendix B. In our calculations, we make extensive use of index notation with the summation convention BID5. In index notation, a scalar has no indices (v), a vector has one index (v as v_i or v^i), a matrix has two (V as V_ij, V_i^j, or V^ij), and so on. The summation convention holds that repeated indices in a given expression are summed over unless otherwise indicated. For example, DISPLAYFORM0. The pair of indices being summed over will often consist of a superscript and a subscript; this is a bookkeeping technique used in differential geometry, but in this context, the subscripting or superscripting of indices will not indicate covariance or contravariance. We have also adapted index notation slightly to suit the structure of deep networks better: indices placed in brackets (e.g. the k in v (k),j) are not summed over, even if repeated, unless explicitly indicated by a summation sign. A tensor convention that we will use, however, is the Kronecker delta: δ_ij, δ_i^j, or δ^ij. The Kronecker delta is the identity matrix represented in index notation: it is 1 for i = j and 0 otherwise. The summation convention can sometimes be employed to simplify expressions containing Kronecker deltas. For example, DISPLAYFORM1. Let us consider a generic feedforward network with ReLU activation functions in n hidden layers, a softmax at the output layer, and categorical cross-entropy as the objective function (defined in more detail in Appendix B). The first derivatives, on a per-sample basis, for this deep network are DISPLAYFORM2, where f is the per-sample objective function and v (k),j is the vector output of layer k; see Appendix B for the remaining definitions. In calculating these expressions, we have deliberately left ∂f/∂p_j unevaluated. This keeps the expression relatively simple, and programs like TensorFlow BID0 can easily calculate this for us.
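The practical consequence of the outer-product structure is that a derivative block of the form uv^T never needs to be materialized: storing the two factors suffices, and matrix operations can be carried out on the factors directly. The small numerical illustration below (not from the paper; array sizes are arbitrary) makes that point explicit.

```python
# A small numerical illustration (not from the paper) of the outer-product
# structure: if a per-sample derivative block has the form dW = u v^T, we can
# store only u and v, and form products such as dW @ x from the factors.
import numpy as np

rng = np.random.default_rng(0)
u = rng.normal(size=500)        # e.g. a backpropagated error signal at a layer
v = rng.normal(size=300)        # e.g. the layer's input activations

full = np.outer(u, v)           # 500 * 300 = 150,000 entries ...
print(full.size, u.size + v.size)   # ... versus 800 numbers for the factors

x = rng.normal(size=300)
# matrix-vector product via the factors, without forming `full`:
assert np.allclose(full @ x, u * (v @ x))
```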
Leaving it in this form also preserves the generality of the expression: there is no low-rank structure contained in ∂f/∂p_j, and the low-rank structure of the network as a whole is therefore shown to be independent of the objective function and whether or not a softmax is used. In fact, as long as Equation 13 holds, any sufficiently smooth function of p_j may be used in place of a softmax without disrupting the low-rank structure. The one quantity that needs to be stored here is η (n,k),j i for k = 1, 2,..., n − 1; it will be needed in the second derivative calculations. Note, however, that this is roughly the same size as the gradient itself. We can now see the low-rank structure: DISPLAYFORM3 is the outer product (or tensor product) of the vectors ∂f/∂p_i and v (n),j, and DISPLAYFORM4 is the outer product of a quantity which ends up being a rank-1 tensor and v (k−1),j. The index notation makes the outer product structure clear. It is important to note that this low-rank structure only exists for each sample: a weighted sum of low-rank matrices is not necessarily (and generally, will not be) low rank. In other words, even if the gradient of f is low rank, the gradient of the expectation, F = E[f], will not be, because the gradient of F is the weighted sum of the gradients of f. The second-order objective function derivatives are then DISPLAYFORM5 DISPLAYFORM6. Calculating all of these second derivatives requires the repeated use of DISPLAYFORM7. Evaluating that Hessian is straightforward given knowledge of the activation functions and objective used in the network, and storing it is also likely not an issue as long as the number of categories is small relative to the number of weights. For example, consider a small network with 10 categories and 1000 weights. In such a case, ∂²f/∂p² would only contain 100 entries; the gradient would be 10 times larger. We now find that we have to store η (n,k),i j values in order to calculate the derivatives. In ∂²f/∂w², we also end up needing η (r,k),i j for r = n. In a network with n hidden layers, we would then have n(n − 1)/2 matrices to store. For n = 10, this would be 45; for n = 20, this would be 190; and so on. This aspect of the calculations does not seem to scale well, but in practice, it is relatively simple to work around. It is still necessary to store η (n,k),i j, but η (r,k),i j, r < n, only actually shows up in one place, and thus it is possible to calculate each η (r,k),i j matrix, use it, and discard it without needing to store it for future calculations. The key thing to note about these second derivatives is that they retain a low-rank structure: they are now tensor products (or the sums of tensor products) of matrices and vectors. For example, DISPLAYFORM12. With these expressions, it would be relatively straightforward to extract the diagonal of the Hessian and store or manipulate it as a vector. The rank of the weighted sum of low-rank components (as occurs with mini-batch sampling) is generally larger than the rank of the summed components, however. As such, manipulating the entire Hessian may not be as computationally feasible; this will depend on how large the mini-batch size is relative to the number of weights. The low-rank properties that we highlight here for the Hessian exist on a per-sample basis, as they did for the gradient, and therefore, the computational savings provided by this approach will be most salient when calculating scalar or vector quantities on a sample-by-sample basis and then taking a weighted sum of the results.
In principle, we could calculate third derivatives, but the formulae would likely become unwieldy, and they may require memory usage significantly greater than that involved in storing gradient information. Second derivatives should suffice for now, but of course if a use arose for third derivatives, calculating them would be a real option. Thus far, we have not included bias terms. Including bias terms as trainable weights would increase the overall size of the gradient (by adding additional variables), but it would not change the overall low-rank structure. Using the calculations provided in Appendix B, it would not be difficult to produce the appropriate derivations. Cubic Regularization (CR) is a trust region method that uses a cubic model of the objective function: DISPLAYFORM0 at the j-th iteration, where H_j is the objective function Hessian and s_j = x − x_j. The cubic term makes it possible to use information in the Hessian without requiring convexity, and the weight σ_j on that cubic term can have its own update scheme (based on how well m(s_j) approximates f) BID7. Solving for an optimal s_j value then involves finding the root of a univariate nonlinear equation BID14. CR is not commonly used in deep learning; we have seen only one example of CR applied to machine learning BID7 and no examples with deep learning. This is likely the case because of two computationally expensive operations: calculating the Hessian and solving for s_j. We can overcome the first by using the low-rank properties described above. The second is more challenging, but we can bypass it by using CR to calculate a step length (i.e. the learning rate) for a given search direction rather than calculating the search direction itself. Our approach in this paper is to use CR as a meta-method, a technique that sits on top of existing training algorithms. The algorithm calculates a search direction, and then CR calculates a learning rate for that search direction. For a general iterative optimization process, this would look like x_{j+1} = x_j + α_j g_j, where g_j is the search direction (which need not be normalized), α_j is the learning rate, and the subscript refers to the iteration. With the search direction fixed, m would then be a cubic function of α at each iteration. Solving ∂m/∂α = 0 as a quadratic equation in α then yields DISPLAYFORM0. If we assume that g^T∇f < 0 (i.e. g is a descent direction), then α is guaranteed to be real. Continuing under that assumption, of the two possible α values, we choose the one guaranteed to be positive. The sampling involved in mini-batch training means that there are a number of possible ways to get a final α_j g_j. One option would be to calculate E[α_j g_j]. This would involve calculating an α value with respect to the search direction produced by each sample point and then averaging the product αg over all of the sample points. Doing this should produce an unbiased estimate of α_j g_j, but in practice, we found that this approach resulted in a great deal of sampling noise and thus was not effective. The second approach would calculate DISPLAYFORM1. To do this, we would calculate an α value with respect to the search direction produced by each sample point, as in the first option, calculate an average α value, and multiply the overall search direction by that average. This approach, too, suffered from excessive noise.
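Since the display for the α solution above is elided, the following sketch makes the step-length computation explicit under one common cubic-model convention, m(α) = f + α g^T∇f + (α²/2) g^T H g + (σ/3) α³ ||g||³; the σ/3 placement and the helper name are assumptions, not the paper's exact formula.

```python
# A sketch of the CR learning-rate computation described above (hypothetical
# helper, not the authors' code).  Stationarity of the cubic model in alpha
# gives a quadratic:  sigma*||g||^3 * a^2 + (g^T H g) * a + g^T grad(f) = 0,
# and we take the root that is guaranteed positive for a descent direction.
import numpy as np

def cr_learning_rate(g_dot_grad, g_H_g, g_norm, sigma):
    """g_dot_grad: g^T grad(f), expected negative for a descent direction.
    g_H_g: curvature along the search direction.  g_norm: ||g||_2.  sigma: CR weight."""
    a = sigma * g_norm ** 3
    b = g_H_g
    c = g_dot_grad
    disc = b * b - 4.0 * a * c          # >= b^2 when c <= 0, so the root is real
    return (-b + np.sqrt(disc)) / (2.0 * a)
```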
In the interest of reducing noise and increasing simplicity, we chose a third option: once the step direction had been determined, we considered it fixed, took the average of g^T Hg and g^T ∇f over all of the sample points to produce m(α), and then solved for a single α_j value. This approach was the most effective of the three. To test CR computationally, we created deep feedforward networks using ReLU activations in the hidden layers, softmax in the output layer, and categorical cross-entropy as the error function; we then trained them on the MNIST and CIFAR-10 BID8 data sets. This paper shows results from networks with 12 hidden layers, each 128 nodes wide. For the purposes of this paper, we treat network training strictly as an optimization process, and thus we are not interested in network performance measures such as accuracy and validation error; the sole consideration is minimizing the error function presented to the network. As we consider that minimization progress, we will also focus on optimization iteration rather than wall clock time: the former indicates the behaviour of the algorithm itself, whereas the latter is strongly dependent upon implementation (which we do not want to address at this juncture). Overall computational cost per iteration matters, and we will discuss it, but it will not be our primary interest. Further implementation details are found in Appendix A. Figure 1 shows an example of CR (applied on top of SGD). In this case, using CR provided little to no benefit. The average learning rate with CR was around 0.05 (a moving average with a period of 100 is shown in green on the learning rate plot both here and in the rest of the paper), which was close to our initial choice of learning rate. This suggests that 0.02 was a good choice of learning rate. Another reason the results were similar, though, is that the optimization process did not run into any plateaus. We would expect CR to provide the greatest benefit when the optimization gets stuck on a plateau: having information about the objective function curvature would enable the algorithm to increase the learning rate while on the plateau and then return it to a more typical value once it leaves the plateau. To test this, we deliberately initialized our weights so that they lay on a plateau: the objective function is very flat near the origin, and we found that setting the network weights to random values uniformly sampled between -0.1 and 0.1 was sufficient. Figure 3: Cubic Regularization (CR) applied to Stochastic Gradient Descent (SGD) on the CIFAR-10 dataset; initial learning rate = 0.01, σ = 1000; (a) error (SGD in blue, SGD with CR in red), (b) learning rate (calculated rate in red, period-100 moving average in green). Figure 2 shows the results of SGD with and without CR when stuck on a plateau. There, we see a hundred-fold increase in the learning rate while the optimization is on the plateau, but this rate drops rapidly as the optimization exits the plateau, and once it returns to a more normal descent, the learning rate also returns to an average of about 0.05 as before. The CR calculation enables the training process to recognize the flat space and take significantly larger steps as a result. Applying CR to SGD when training on CIFAR-10 (Figure 3) produced results similar to those seen on MNIST. We then considered if this behaviour would hold true on other training algorithms: we employed CR with Adagrad BID4 and Adadelta on MNIST. The results were similar.
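The "third option" described at the start of this passage is straightforward to sketch as a single training step; the helper names are hypothetical, per_sample_grad and per_sample_gHg stand in for the low-rank per-sample computations derived earlier, and cr_learning_rate refers to the function sketched above.

```python
# A sketch of one CR-on-top-of-SGD iteration: fix the mini-batch search
# direction g (here the negative mini-batch gradient, as in plain SGD),
# average the per-sample quantities g^T H g and g^T grad(f) over the batch,
# then solve for a single step length alpha.  All names are assumptions.
import numpy as np

def cr_sgd_step(w, batch, per_sample_grad, per_sample_gHg, sigma=1000.0):
    grads = np.stack([per_sample_grad(w, x, y) for x, y in batch])
    g = -grads.mean(axis=0)                               # fixed search direction
    g_dot_grad = float(np.mean(grads @ g))                # average of g^T grad(f_i)
    g_H_g = float(np.mean([per_sample_gHg(w, g, x, y) for x, y in batch]))
    alpha = cr_learning_rate(g_dot_grad, g_H_g, np.linalg.norm(g), sigma)
    return w + alpha * g                                  # x_{j+1} = x_j + alpha_j g_j
```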
CR did not provide a meaningful difference when the algorithms performed well, but when those algorithms were stuck on plateaus, CR increased the learning rate and caused the algorithms to exit the plateau more quickly than they otherwise would have (Figures 4 and 5). The relative magnitudes of those increases were smaller than for SGD, but Adagrad and Adadelta already incorporate some adaptive learning rate behaviour, and good choices for the initial learning rate varied significantly from algorithm to algorithm. We also used a larger value for σ to account for the increased variability due to those algorithms' adaptive nature. The results with Adadelta showed some interesting learning rate changes: the learning rate calculated by CR dropped steadily as the algorithm exited the plateau, but it jumped again around iteration 1200 as it apparently found itself in a flat region of space. We see this CR approach as an addition to, not a replacement for, existing training methods. It could potentially replace existing methods, but it does not have to in order to be used. Because of the low-rank structure of the Hessian, we can use CR to supplement existing optimizers that do not explicitly leverage second-order information. The CR technique used here is most useful when the optimization is stuck on a plateau prior to convergence: CR makes it possible to determine whether the optimization has converged (perhaps to a local minimum) or is simply bogged down in a flat region. It may eventually be possible to calculate a search direction as well as a step length, which would likely be a significant advancement, but this would be a completely separate algorithm. We found that applying CR to Adagrad and Adadelta provided the same kinds of improvements that applying CR to SGD did. However, using CR with Adam BID6 did not provide gains as it did with the other methods. Adam generally demonstrates a greater degree of adaptivity than Adagrad or Adadelta; in our experiments, we found that Adam was better than Adagrad or Adadelta at escaping the plateau region. We suspect that trying to overlay an additional calculated learning rate on top of the variable-specific learning rate produced by Adam may create interference in both sets of learning rate calculations. Analyzing each algorithm's update scheme in conjunction with the CR calculations could provide insight into the nature and extent of this interference, and provide ways to further improve both algorithms. In future work, though, it would not be difficult to adapt the CR approach to calculate layer- or variable-specific learning rates, and doing that could address this problem. Calculating a variable-specific learning rate would essentially involve rescaling each variable's step by the corresponding diagonal entry in the Hessian; calculating a layer-specific learning rate would involve rescaling the step of each variable in that layer by some measure of the block-diagonal component of the Hessian corresponding to those variables. The calculations for variable-specific learning rates with CR are given in Appendix B. There are two aspects of the computational cost to consider in evaluating the use of CR. The first aspect is storage cost. In this regard, the second-order calculations are relatively inexpensive (comparable to storing gradient information). The second aspect is the number of operations, and the second-order calculations circumvent the storage issue by increasing the number of operations.
The number of matrix multiplications involved in calculating the components of Equation 9, for example, scales quadratically with the number of layers (see the derivations in Appendix B). Although the number of matrix multiplications will not change with an increase in width, the cost of naïve matrix multiplication scales cubically with matrix size. That being said, these calculations are parallelizable, and as such, the effect of the computation cost will be implementation-dependent. A significant distinction between CR and methods like SGD has to do with the degree of knowledge about the problem required prior to optimization. SGD requires an initial learning rate and (usually) a learning rate decay scheme; an optimal value for the former can be very problem-dependent and may be different for other algorithms when applied to the same problem. For CR, it is necessary to specify σ, but optimization performance is relatively insensitive to this (order-of-magnitude estimates seem to be sufficient), and varying σ has a stronger effect on the variability of the learning rate than it does on the magnitude (though it does affect both). If the space is very curved, the choice of σ matters little because the step size determination is dominated by the curvature, and if the space is flat, it bounds the step length. It is also possible to employ an adaptive approach for updating σ BID7, but we did not pursue that here. Essentially, using CR is roughly equivalent to using the optimal learning rate (for SGD). In this paper, we showed that feedforward networks exhibit a low-rank derivative structure. We demonstrated that this structure provides a way to represent the Hessian efficiently; we can exploit this structure to obtain higher-order derivative information at relatively low computational cost and without massive storage requirements. We then used second-order derivative information to implement CR in calculating a learning rate when supplied with a search direction. The CR method has a higher per-iteration cost than SGD, for example, but it is also highly parallelizable. When SGD converged well, CR showed comparable optimization performance (on a per-iteration basis), but the adaptive learning rate that CR provided proved to be capable of driving the optimization away from plateaus that SGD would stagnate on. The results were similar with Adagrad and Adadelta, though not with Adam. CR also required less problem-specific knowledge (such as an optimal initial learning rate) to perform well. At this point, we see it as a valuable technique that can be incorporated into existing methods, but there is room for further work on exploiting the low-rank derivative structure to enable CR to calculate search directions as well as step sizes. Starting at a point far from the origin resulted in extremely large derivative and curvature values (not to mention extremely large objective function values), and this could sometimes cause difficulties for the CR method. This was easy to solve by choosing an initialization point relatively near the origin; choosing an initialization relatively near the origin also provided a significantly better initial objective function value. We initialized the networks' weights to random values between an upper and lower bound: to induce plateau effects, we set the bounds to ±0.1; otherwise, we set them to ±0.2. All of the networks used a mini-batch size of 32 and were implemented in TensorFlow BID0.
The initial learning rate varied with network size; we chose learning rates that were large and reasonable but perhaps not optimal, and for optimization algorithms with other parameters governing the optimization, we used the default TensorFlow values for those parameters. For the learning rate decay, we used an exponential decay with a decay rate of 0.95 per 100 iterations. The σ value used is specified along with the initial learning rate for each network's results. This value was also not optimized but was instead set to a reasonable power of 10. B LOW-RANK DERIVATIONS FOR DEEP NETWORKS B.1 FEEDFORWARD NETWORK WITH RELU ACTIVATIONS TAB1 provides a nomenclature for our deep network definition. Equations 10-16 define a generic feedforward network with ReLU activation functions in the hidden layers, n hidden layers, a softmax at the output layer, and categorical cross-entropy as the objective function. DISPLAYFORM0 A(z) = max(z, 0) DISPLAYFORM1 DISPLAYFORM2 The relevant first derivatives for this deep network are DISPLAYFORM3, where there is no summation over j in the equation above. We now define several intermediate quantities to simplify the derivation process: DISPLAYFORM4 DISPLAYFORM5, where there is no summation over j in Equations 19 and 20. We can now complete our calculations of the first derivatives: DISPLAYFORM6 DISPLAYFORM7. We then start our second derivative calculations by considering some intermediate quantities: DISPLAYFORM8. Convolutional and recurrent layers preserve the low-rank derivative structure of the fully connected feedforward layers considered above, and we will show this in the following sections. Because we are only considering a single layer of each, we calculate the derivatives of the layer outputs with respect to the layer inputs; in a larger network, those derivatives will be necessary for calculating total derivatives via back-propagation. We can define a convolutional layer as DISPLAYFORM0, where x i j is the layer input, σ is the vertical stride, τ is the horizontal stride, A is the activation function, and v s t is the layer output. A convolutional structure can make the expressions somewhat complicated when expressed in index notation, but we can simplify matters by using the simplification z sl tk, with no summation over s and t in any of the expressions above. Using the simplification with z sl tk makes it significantly easier to see the low-rank structure in these derivatives, but that structure is still noticeable without it. The conditional form of the expressions is more complicated, but it is also possible to see how the derivatives relate to w DISPLAYFORM1, where t indicates the number of times that the recursion has been looped through. If we inspect this carefully, we can actually see that this is almost identical to the hidden layers of the feedforward network: they are identical if we stipulate that the weights of the feedforward network are identical at each layer (i.e. w
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
ByJ7obb0b
We show that deep learning network derivatives have a low-rank structure, and this structure allows us to use second-order derivative information to calculate learning rates adaptively and in a computationally feasible manner.
The worst-case training principle that minimizes the maximal adversarial loss, also known as adversarial training (AT), has been shown to be a state-of-the-art approach for enhancing adversarial robustness against norm-ball bounded input perturbations. Nonetheless, min-max optimization beyond the purpose of AT has not been rigorously explored in the research of adversarial attack and defense. In particular, given a set of risk sources (domains), minimizing the maximal loss induced from the domain set can be reformulated as a general min-max problem that is different from AT. Examples of this general formulation include attacking model ensembles, devising universal perturbations under multiple inputs or data transformations, and generalized AT over different types of attack models. We show that these problems can be solved under a unified and theoretically principled min-max optimization framework. We also show that the self-adjusted domain weights learned from our method provide a means to explain the difficulty level of attack and defense over multiple domains. Extensive experiments show that our approach leads to substantial performance improvement over the conventional averaging strategy. Training a machine learning model that is capable of assuring its worst-case performance against all possible adversaries given a specified threat model is a fundamental yet challenging problem, especially for deep neural networks (DNNs) (; ;). A common practice to train an adversarially robust model is based on a specific form of min-max training, known as adversarial training (AT), where the minimization step learns model weights under the adversarial loss constructed at the maximization step in an alternating training fashion. On datasets such as MNIST and CIFAR-10, AT has achieved the state-of-the-art defense performance against ℓ_p-norm-ball input perturbations (b). Motivated by the success of AT, one follow-up question that naturally arises is: beyond AT, can other types of min-max formulation and optimization techniques advance the research in adversarial robustness? In this paper, we give an affirmative answer corroborated by the substantial performance gain and the ability of self-learned risk interpretation using our proposed min-max framework on several tasks for adversarial attack and defense. We demonstrate the utility of a general formulation for min-max optimization minimizing the maximal loss induced from a set of risk sources (domains). Our considered min-max formulation is fundamentally different from AT, as our maximization step is taken over the probability simplex of the set of domains. Moreover, we show that many problem setups in adversarial attacks and defenses can in fact be reformulated under this general min-max framework, including attacking model ensembles (Tramèr et al., 2018;), devising universal perturbations to input samples or data transformations, and generalized AT over multiple types of threat models (Tramèr & ;). However, current methods for solving these tasks often rely on simple heuristics (e.g., uniform averaging over domains). Related Work Recent studies have identified that DNNs are highly vulnerable to adversarial manipulations in various applications (; ; ; ; ; ; ; ; a;), thus leading to an arms race between adversarial attacks (; b; ; a; ; b;) and defenses (; b; ; ;).
One intriguing property of adversarial examples is their transferability across multiple domains (; Tramèr et al., 2017; ;), which indicates a more challenging yet promising research direction: devising universal adversarial perturbations over model ensembles (Tramèr et al., 2018;), input samples (; ;) and data transformations (b; ;). However, current approaches suffer from a significant performance loss because they rest on the uniform averaging strategy. We will compare these works with our min-max method in Sec. 4. As a natural extension following the min-max attack, we study generalized AT under multiple perturbations (Tramèr & ; ; ;). Finally, our min-max framework is adapted from and inspired by previous literature on robust learning over multiple domains (; ; ; 2019a). We begin by introducing the principle of robust learning over multiple domains and its connection to a specialized form of min-max optimization. We then show that the resulting min-max formulation fits into various attack settings for adversarial exploration: a) ensemble adversarial attack, b) universal adversarial perturbation and c) robust perturbation over data transformations. Finally, we propose a generalized adversarial training (AT) framework under mixed types of adversarial attacks to improve model robustness. Consider K loss functions {F_i(v)}, each of which is defined on a learning domain. The problem of robust learning over K domains can be formulated as a min-max problem (; ;) in which v and w are optimization variables, V is a constraint set, and P denotes the probability simplex P = {w | 1^T w = 1, w_i ∈ [0, 1], ∀i}. Since the inner maximization problem is a linear function of w over the probability simplex, the problem is equivalent to minimizing the pointwise maximum of the losses, max_{i∈[K]} F_i(v), where [K] denotes the integer set {1, 2, . . ., K}. Benefit and computation challenge Compared to multi-task learning in a finite-sum formulation, which minimizes the K losses on average, the min-max problem provides consistently robust worst-case performance across all domains. This can be explained from its epigraph form, minimize_{v∈V, t} t subject to F_i(v) ≤ t for all i ∈ [K], where t is an epigraph variable that provides the t-level robustness at each domain. Although the min-max problem offers a great robustness interpretation, solving it becomes more challenging than solving the finite-sum problem. It is clear that the inner maximization problem always returns the one-hot value of w, namely, w = e_i, where e_i is the ith standard basis vector and i = arg max_i {F_i(v)}. The one-hot coding reduces the generalizability to other domains and induces instability of the learning procedure in practice. Such an issue is often mitigated by introducing a strongly concave regularizer in the inner maximization step. Regularized problem formulation Spurred by , we penalize the distance between the worst-case loss and the average loss over the K domains. This yields a regularized min-max problem with regularization parameter γ > 0. As γ → 0, the regularized problem is equivalent to the original min-max problem. By contrast, it becomes the finite-sum problem when γ → ∞, since then w → 1/K. In this sense, the trainable w provides an essential indicator of the importance level of each domain. The larger the weight is, the more important the domain is. We call w domain weights in this paper. We next show how the principle of robust learning over multiple domains can fit into various settings of adversarial attack and defense problems.
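The displays for the formulations above are elided in the text; the following is a plausible reconstruction assembled from the surrounding description (maximization over the probability simplex, equivalence to the pointwise maximum, and the strongly concave regularizer −γ/2 ||w − 1/K||² quoted later), not the paper's literal equations.

```latex
% A plausible reconstruction of the robust-learning-over-domains problem and
% its regularized variant (the exact displays are elided in the text above).
\min_{v \in \mathcal{V}} \; \max_{w \in \mathcal{P}} \; \sum_{i=1}^{K} w_i F_i(v)
  \;\; \Longleftrightarrow \;\;
\min_{v \in \mathcal{V}} \; \max_{i \in [K]} \; F_i(v),
\qquad\text{and}\qquad
\min_{v \in \mathcal{V}} \; \max_{w \in \mathcal{P}} \;
  \sum_{i=1}^{K} w_i F_i(v) \;-\; \frac{\gamma}{2}\Big\| w - \tfrac{1}{K}\mathbf{1} \Big\|_2^{2}.
```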
The general goal of adversarial attack is to craft an adversarial example x' = x_0 + δ ∈ R^d to mislead the prediction of machine learning (ML) or deep learning (DL) systems, where x_0 denotes the natural example with the true label t_0, and δ is known as the adversarial perturbation, commonly subject to an ℓ_p-norm constraint ||δ||_p ≤ ε for a given small number ε. Here the ℓ_p norm enforces the similarity between x' and x_0, and the input space of ML/DL systems is normalized to [0, 1]^d. Ensemble attack over multiple models Consider K ML/DL models; the goal is to find robust adversarial examples that can fool all K models simultaneously. In this case, the notion of 'domain' is specified as 'model', and the objective function F_i signifies the attack loss f(δ; x_0, y_0, M_i) given the natural input (x_0, y_0) and the model M_i. The general min-max problem then becomes the ensemble attack problem, where w encodes the difficulty level of attacking each model. Universal perturbation over multiple examples Consider K natural examples and a single model M; our goal is to find the universal perturbation δ so that all K corrupted examples can fool M. In this case, the notion of 'domain' is specified as 'example', and the problem becomes the universal perturbation problem, where, different from the ensemble case, w encodes the difficulty level of attacking each example. Adversarial attack over data transformations Consider K categories of data transformation {p_i}, e.g., rotation, lightening, and translation (a); our goal is to find an adversarial attack that is robust to data transformations. In this case, the notion of 'domain' is specified as 'data transformer', and the problem becomes one in which E_{t∼p_i}[f(t(x_0 + δ); y_0, M)] denotes the attack loss under the distribution of data transformation p_i, and w encodes the difficulty level of attacking each type of transformed example x_0. Conventional AT is restricted to a single type of norm-ball constrained adversarial attack. For example, AT under an ℓ_∞ attack minimizes the training loss under the worst-case ε-tolerant ℓ_∞ perturbation, where θ ∈ R^n denotes the model parameters, δ denotes the ε-tolerant ℓ_∞ attack, and f_tr(θ, δ; x, y) is the training loss under the perturbed examples {(x + δ, y)}. However, there possibly exist blind attacking spots across multiple types of adversarial attacks, so that AT under one attack would not be strong enough against another attack. Thus, an interesting question is how to generalize AT under multiple types of adversarial attacks. One possible way is to use the finite-sum formulation over the K attack types, where δ_i ∈ X_i is the ith type of adversarial perturbation defined on X_i, e.g., different ℓ_p attacks. Moreover, one can map 'attack type' to the 'domain' considered earlier. We then perform AT against the strongest adversarial attack across the K attack types in order to avoid blind attacking spots. That is, upon defining F_i(θ) := maximize_{δ_i∈X_i} f_tr(θ, δ_i; x, y), we solve a problem of the form minimize_θ max_{i∈[K]} F_i(θ). In fact, this problem is in a min-max-max form; however, Lemma 1 shows that it can be further simplified to the min-max form. Lemma 1. The problem is equivalent to a min-max problem over (θ, w), where w ∈ R^K represents the domain weights and P has been defined above. Proof: see Appendix A. Similar to the attack setting, a strongly concave regularizer −γ/2 ||w − 1/K||²_2 can be added into the inner maximization problem, which can boost the stability of the learning procedure and strike a balance between the max and the average attack performance. However, solving this problem and its regularized version is more complicated, since the inner maximization involves both domain weights w and adversarial perturbations {δ_i}. We finally remark that there was an independent work (Tramèr &) which also proposed a formulation for AT under multiple perturbations.
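To show how the general formulation is instantiated, the following sketch builds the per-domain losses F_i for the ensemble attack; the function names are hypothetical, and the negative cross-entropy is only one common choice of attack loss (the paper's exact f is not given in this excerpt), chosen so that minimizing the weighted sum over δ pushes every model toward misclassification.

```python
# A sketch (hypothetical names, not the authors' code) of the per-domain losses
# F_i(delta) for the ensemble attack and of the weighted inner objective
# sum_i w_i F_i(delta) that the min-max procedure descends in delta.
import torch
import torch.nn.functional as F

def domain_losses(models, x0, y0, delta):
    """Return the vector [F_1(delta), ..., F_K(delta)] for the ensemble attack."""
    x_adv = torch.clamp(x0 + delta, 0.0, 1.0)      # keep the input in [0, 1]^d
    # assumed attack loss: negative cross-entropy on the true label, so that
    # making F_i small means model M_i is pushed toward misclassifying x_adv
    return torch.stack([-F.cross_entropy(m(x_adv), y0) for m in models])

def weighted_attack_loss(models, x0, y0, delta, w):
    # the inner objective  sum_i w_i F_i(delta)  of the min-max attack
    return torch.dot(w, domain_losses(models, x0, y0, delta))
```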
However, what we propose here is the regularized formulation of. As will be evident later, the domain weights w in our formulation have strong interpretability, which learns the importance level of different attacks. Most significantly, our work has different motivation from (Tramèr &), and our idea applies to not only AT but also attack generation in Sec. 2.2. In this section, we delve into technical details on how to efficiently solve problems of robust adversarial attacks given by the generic form and problem for generalized AT under mixed types of adversarial attacks. We propose the alternating one-step projected gradient descent (APGD) method (Algorithm 1) to solve problem. For clarity, we repeat problem under the adversarial perturbation δ and its constraint set X defined in Sec. 2.2, We show that at each iteration, APGD takes only one-step PGD for outer minimization and one-step projected gradient ascent for inner maximization (namely, PGD for its negative objective function). We also show that each alternating step has a closed-form expression, and the main computational complexity stems from computing the gradient of the attack loss w.r.t. the input. Therefore, APGD is computationally efficient like PGD, which is commonly used for design of conventional single p -norm based adversarial attacks . Outer minimization Considering w = w (t−1) and, we perform one-step PGD to update δ at iteration t, where proj(·) denotes the Euclidean projection operator, i.e., proj X (a) = arg min x∈X x − a 2 2 at the point a, α > 0 is a given learning rate, and ∇ δ denotes the first order gradient w.r.t. δ. In, the projection operation becomes the key to obtain the closed-form of the updating rule. Recall from Sec. 2.2 that X = {δ| δ p ≤,č ≤ δ ≤ĉ}, where p ∈ {0, 1, 2, ∞}, andč = −x 0 andĉ = 1 − x 0 (implyingč ≤ 0 ≤ĉ). If p = ∞, then the projection function becomes the clip function. However, when p ∈ {0, 1, 2}, the closed-form of projection operation becomes non-trivial. In Proposition 1, we derive the solution of proj X (a) under different p norms. Proposition 1. Given a point a ∈ R d and a constraint set X = {δ| δ p ≤,č ≤ δ ≤ĉ}, the Euclidean projection δ * = proj X (a) has a closed-form solution when p ∈ {0, 1, 2}. Proof: See Appendix B. Inner maximization By fixing δ = δ (t) and letting ψ(w): in problem, we then perform one-step PGD (w.r.t. −ψ) to update w, where β > 0 is a given learning rate, In, the second equality holds due to the closed-form of projection operation onto the probabilistic simplex P , where (·) + denotes the elementwise nonnegative operator, i.e., (x) + = max{0, x}, and µ is the root of the equation 1, the root µ exists within the interval [min i {b i} − 1/K, max i {b i} − 1/K] and can be found via the bisection method . Convergence analysis We remark that APGD follows the gradient primal-dual optimization framework (a), and thus enjoys the same optimization guarantees. In Theorem 1, we demonstrate the convergence rate of Algorithm 1 for solving problem. Theorem 1. (inherited from primal-dual min-max optimization) Suppose that in problem F i (δ) has L-Lipschitz continuous gradients, and X is a convex compact set. 
Given learning rates α ≤ We next propose the alternating multi-step projected gradient descent (AMPGD) method to solve the regularized version of problem, which is repeated as follows Algorithm 2 AMPGD to solve problem given w (t−1) and δ (t−1), perform SGD to update θ (t) 4: given θ (t), perform R-step PGD to update w (t) and δ (t) Problem is in a more general non-convex non-concave min-max setting, where the inner maximization involves both domain weights w and adversarial perturbations {δ i}. It was shown in that the multi-step PGD is required for inner maximization in order to approximate the near-optimal solution. This is also in the similar spirit of AT , which executed multi-step PGD attack during inner maximization. We summarize AMPGD in Algorithm 2. At step 4 of Algorithm 2, each PGD step to update w and δ can be decomposed as where let w 1:= w (t−1) and δ. Here the subscript t represents the iteration index of AMPGD, and the subscript r denotes the iteration index of R-step PGD. Clearly, the above projection operations can be derived for closed-form expressions through and Lemma 1. To the best of our knowledge, it is still an open question to build theoretical convergence guarantees for solving the general non-convex non-concave min-max problem like, except the work which proposed O(1/T) convergence rate if the objective function satisfies Polyak-Łojasiewicz conditions . Improved robustness via diversified p attacks. It was recently shown in that the diversity of individual neural networks improves adversarial robustness of an ensemble model. Spurred by that, one may wonder if the promotion of diversity among p attacks is beneficial to adversarial robustness? We measure the diversity between adversarial attacks through the similarity between perturbation directions, namely, input gradients {∇ δi f tr (θ, δ i ; x, y)} i in. We find that there exists a strong correlation between input gradients for different p attacks. Thus, we propose to enhance their diversity through the orthogonality-promoting regularizer used for encouraging diversified prediction of ensemble models in , where G ∈ R d×K is a d × K matrix, each column of which corresponds to a normalized input gradient ∇ δi f tr (θ, δ i ; x, y) for i ∈ [K], and h(θ, {δ i}; x, y) reaches the maximum value 0 as input gradients become orthogonal. With the aid of, we modify problem to minimize The rationale behind is that the adversary aims to enhance the effectiveness of attacks from diversified perturbation directions (inner maximization), while the defender robustifies the model θ, which makes diversified attacks less effective (outer minimization). In this section, we first evaluate the proposed min-max optimization strategy on three attack tasks. We show that our approach leads to substantial improvement compared with state-of-the-art attack methods such as ensemble PGD and expectation over transformation (EOT) (b; ; a). We next demonstrate the effectiveness of the generalized AT for multiple types of adversarial perturbations. We show that the use of trainable domain weights in problem can automatically adjust the risk level of different attacks during the training process even if the defender lacks prior knowledge on the strength of these attacks. We also show that the promotion of diversity of p attacks help improve adversarial robustness further. We thoroughly evaluate our APGD/AMPGD algorithm on MNIST and CIFAR-10. 
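As a concrete illustration of the alternating scheme, the toy APGD sketch below runs the two one-step updates on synthetic per-domain losses. The quadratic surrogates, the $\ell_\infty$-only feasible set (the data box is omitted), and the step sizes are assumptions made purely for illustration; they are not the attack losses or hyperparameters used in our experiments.

```python
# Toy sketch of APGD: alternate one projected-gradient step on delta with one
# projected-ascent step on the domain weights w, using surrogate losses
# F_i(delta) = mean((delta - a_i)^2) in place of the real attack losses.
import numpy as np

rng = np.random.default_rng(0)
K, d = 4, 10
A = rng.normal(size=(K, d))                     # anchors defining the toy per-domain losses
eps, gamma, alpha, beta = 0.5, 1.0, 0.1, 0.05

def losses(delta):                              # F_i(delta) for i = 1..K
    return np.mean((delta - A) ** 2, axis=1)

def grad_weighted(delta, w):                    # gradient of sum_i w_i F_i(delta) w.r.t. delta
    return (2.0 / d) * np.sum(w[:, None] * (delta - A), axis=0)

def project_simplex(a):                         # Euclidean projection onto the probability simplex
    u = np.sort(a)[::-1]; css = np.cumsum(u); k = np.arange(1, a.size + 1)
    valid = u - (css - 1.0) / k > 0
    tau = (css[valid][-1] - 1.0) / k[valid][-1]
    return np.maximum(a - tau, 0.0)

delta, w = np.zeros(d), np.ones(K) / K
for _ in range(500):
    # outer minimization: one PGD step on delta; projection onto the ell_inf ball is a clip
    delta = np.clip(delta - alpha * grad_weighted(delta, w), -eps, eps)
    # inner maximization: one ascent step on w for sum_i w_i F_i - (gamma/2)||w - 1/K||^2
    w = project_simplex(w + beta * (losses(delta) - gamma * (w - 1.0 / K)))

print("domain weights:", np.round(w, 3))
print("per-domain losses:", np.round(losses(delta), 3))
# The largest weights end up on the domains whose losses remain hardest to reduce.
```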
A set of diverse image classifiers (denoted from Model A to Model H) are trained, including multi-layer perceptrons (Most current works play a min-max game from a defender's perspective, i.e., adversarial training. However, we show the great strength of min-max optimization also lies at the side of attack generation. Note that problem formulations- are applicable to both untargeted and targeted attack. Here we focus on the former setting and use C&W loss function . The details of crafting adversarial examples are available in Appendix C.2. Ensemble attack over multiple models We craft adversarial examples against an ensemble of known classifiers. The work (, 5th place at CAAD-18) proposed an ensemble PGD attack, which assumed equal importance among different models, namely, w i = 1/K in problem. Throughout this task, we measure the attack performance via ASR all -the attack success rate (ASR) of fooling model ensembles simultaneously. Compared to the ensemble PGD attack , our approach in 40.79% and 17.48% ASR all improvement averaged over different p -norm constraints on MNIST and CIFAR-10, respectively. In what follows, we provide more detailed experiment and analysis. In Table 1, we show that our min-max APGD significantly outperforms ensemble PGD in ASR all. Taking ∞ -attack on MNIST as an example, our min-max attack leads to a 90.16% ASR all, which largely outperforms 48.17% (ensemble PGD). The reason is that Model C, D are more difficult to attack, which can be observed from their higher test accuracy on adversarial examples. As a , although the adversarial examples crafted by assigning equal weights over multiple models are able to attack {A, B} well, they achieve a much lower ASR (i.e., 1 -Acc) in {C, D}. By contrast, APGD automatically handles the worst case {C, D} by slightly sacrificing the performance on {A, B}: 31.47% averaged ASR improvement on {C, D} versus 0.86% degradation on {A, B}. More on CIFAR-10 and more complicated DNNs (e.g., GoogLeNet) are provided in Appendix D. Lastly, we highlight that tracking domain weights w provides us novel insights for model robustness and understanding attack procedure. From our theory, a model with higher robustness always corresponds to a larger w because its loss is hard to attack and becomes the "worst" term. This hypothesis can be verified empirically. According to Figure 1c, we have w c > w d > w a > w bindicating a decrease in model robustness for C, D, A and B, which is exactly verified by Acc C > Acc D > Acc A > Acc B in Table 1 (∞ -norm). Universal perturbation over multiple examples We evaluate APGD in universal perturbation on MNIST and CIFAR-10, where 10,000 test images are randomly divided into equal-size groups (containing K images per group) for universal perturbation. We measure two types of ASR (%), ASR avg and ASR gp. Here the former represents the ASR averaged over all images in all groups, and the latter signifies the ASR averaged over all groups but a successful attack is counted under a more restricted condition: images within each group must be successfully attacked simultaneously by universal perturbation. When K = 5, our approach achieves 42.63% and 35.21% improvement over the averaging strategy under MNIST and CIFAR-10, respectively. In Table 2, we compare the proposed min-max strategy (APGD) with the averaging strategy on the attack performance of generated universal perturbations. As we can see, our method always achieves higher ASR gp for different values of K. 
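For clarity, the bookkeeping behind the success-rate metrics used in this section (ASR_all for model ensembles, and ASR_avg / ASR_gp for groups of images under a universal perturbation) can be sketched as follows; the array layouts and the toy success matrix are assumptions made for illustration only.

```python
# Success-rate metrics given boolean success indicators.
import numpy as np

def asr_all(success):
    """ASR_all: fraction of examples whose adversarial version fools *every* model.
    success: (num_models, num_examples) boolean array."""
    return success.all(axis=0).mean()

def asr_avg(success):
    """ASR_avg: success rate averaged over all images in all groups.
    success: (num_groups, group_size) boolean array."""
    return success.mean()

def asr_gp(success):
    """ASR_gp: a group counts as a success only if all of its images are fooled
    simultaneously by the shared (universal) perturbation."""
    return success.all(axis=1).mean()

S = np.array([[1, 1, 0, 1],
              [1, 0, 1, 1]], dtype=bool)
print(asr_all(S), asr_avg(S), asr_gp(S))
```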
The universal perturbation generated from APGD can successfully attack'hard' images (on which the average-based PGD attack fails) by self-adjusting domain weights, and thus leads to a higher ASR gp. Besides, the min-max universal perturbation also offers interpretability of "image robustness" by associating domain weights with image visualization; see Figure A9 and A10 (Appendix F) for an example in which the large domain weight corresponds to the MNIST letter with clear appearance (e.g., bold letter). Robust adversarial attack over data transformations EOT (a) achieves stateof-the-art performance in producing adversarial examples robust to data transformations. From, we could derive EOT as a special case when the weights satisfy w i = 1/K (average case). For each input sample (ori), we transform the image under a series of functions, e.g., flipping horizontally and γ = 4. (flh) or vertically (flv), adjusting brightness (bri), performing gamma correction (gam) and cropping (crop), and group each image with its transformed variants. Similar to universal perturbation, ASR avg and ASR gp are reported to measure the ASR over all transformed images and groups of transformed images (each group is successfully attacked signifies successfully attacking an example under all transformers). In Table 3, compared to EOT, our approach leads to 9.39% averaged lift in ASR gp over given models on CIFAR-10 by optimizing the weights for various transformations. Due to limited space, we leave the details of transformers in Append C.3 and the under randomness (e.g., flipping images randomly w.p. 0.8; randomly clipping the images at specific range) in Appendix D. Compared to vanilla AT, we show the generalized AT scheme produces models robust to multiple types of perturbation, thus leads to stronger "overall robustness". We measure the training performance using two types of Acc (%): Acc max adv and Acc avg adv, where Acc max adv denotes the test accuracy over examples with the strongest perturbation (∞ or 2), and Acc avg adv denotes the averaged test accuracy over examples with all types of perturbations (∞ and 2). Moreover, we measure the overall worst-case robustness S in terms of the area under the curve'Acc max adv vs.' (see Figure 3b). In Table 4, we present the test accuracy of MLP in different training schemes: a) natural training, b) single-norm: vanilla AT (∞ or 2), c) multi-norm: generalized AT (avg and min max), and d) generalized AT with diversity-promoting attack regularization (DPAR, λ = 0.1 in problem). If the adversary only performs single-type attack, training and testing on the same attack type leads to the best performance (diagonal of ∞ -2 block). However, when facing ∞ and 2 attacks simultaneously, multi-norm generalized AT achieves better Acc max adv and Acc avg adv than single-norm AT. In particular, the min-max strategy slightly outperforms the averaging strategy under multiple perturbation norms. Figure A2 (Appendix E). DPAR further boosts the adversarial test accuracy, which implies that the promotion of diversified p attacks is a beneficial supplement to adversarial training. In Figure 3, we offer deeper insights on the performance of generalized AT. During the training procedure we fix ∞ (for ∞ attack during training) as 0.2, and change 2 from 0.2 to 5.6 (∞ × √ d) so that the ∞ and 2 balls are not completely overlapped . 
In Figure 3a, as 2 increases, 2 -attack becomes stronger so the corresponding w also increases, which is consistent with min-max spirit -defending the strongest attack. We remark that min max or avg training does not always lead to the best performance on Acc max adv and Acc avg adv, especially when the strengths of two attacks diverge greatly (see Table A8). This can be explained by the large overlapping between ∞ and 2 balls (see Figure A3). However, Figure 3b and 3c show that AMPGD is able to achieve a rather robust model no matter how changes (red lines), which empirically verifies the effectiveness of our proposed training scheme. In terms of the area-under-the-curve measure S, AMPGD achieves the highest worst-case robustness: 6.27% and 17.64% improvement compared to the vanilla AT with ∞ and 2 attacks. Furthermore, we show in Figure A4a that our min-max scheme leads to faster convergence than the averaging scheme due to the benefit of self-adjusted domain weights. In this paper, we propose a general min-max framework applicable to both adversarial attack and defense settings. We show that many problem setups can be re-formulated under this general framework. Extensive experiments show that proposed algorithms lead to significant improvement on multiple attack and defense tasks compared with previous state-of-the-art approaches. In particular, we obtain 17.48%, 35.21% and 9.39% improvement on attacking model ensembles, devising universal perturbation to input samples, and data transformations under CIFAR-10, respectively. Our minmax scheme also generalizes adversarial training (AT) for multiple types of adversarial attacks, attaining faster convergence and better robustness compared to the vanilla AT and the average strategy. Moreover, our approach provides a holistic tool for self-risk assessment by learning domain weights. where w ∈ R K represent domain weights, and P has been defined in. Similar to, problem is equivalent to Recall that F i (θ):= maximize δi∈Xi f tr (θ, δ i ; x, y), problem can then be written as According to proof by contradiction, it is clear that problem is equivalent to B PROOF OF PROPOSITION 1 Proposition 1. Given a point a ∈ R d and a constraint set X = {δ| δ p ≤,č ≤ δ ≤ĉ}, the Euclidean projection δ * = proj X (a) has the closed-form solution when p ∈ {0, 1, 2}. 1) If p = 1, then δ * is given by where x i denotes the ith element of a vector x; P [či,ĉi] (·) denotes the clip function over the interval where λ 2 ∈ (0, a 2 / − 1] is the root of 3) If p = 0 and ∈ N +, then δ * is given by where [η] denotes the -th largest element of η, and δ i = P [či,ĉi] (a i). 1 norm When we find the Euclidean projection of a onto the set X, we solve where I [č,ĉ] (·) is the indicator function of the set [č,ĉ]. The Langragian of this problem is The minimizer δ * minimizes the Lagrangian, it is obtained by elementwise soft-thresholding where x i is the ith element of a vector x, P [či,ĉi] (·) is the clip function over the interval The primal, dual feasibility and complementary slackness are, where λ 1 is given by the root of the equation Bisection method can be used to solve the above equation for λ 1, starting with the initial interval (0, max 2 norm When we find the Euclidean projection of a onto the set X, we solve where I [č,ĉ] (·) is the indicator function of the set [č,ĉ]. 
The Langragian of this problem is The minimizer δ * minimizes the Lagrangian, it is The primal, dual feasibility and complementary slackness are λ2+1 a i, where λ 2 is given by the root of the equation Bisection method can be used to solve the above equation for λ 2, starting with the initial interval (0, 2 > 2 in this case, and 0 norm For 0 norm in X, it is independent to the box constraint. So we can clip a to the box constraint first, which is δ i = P [či,ĉi] (a i), and then project it onto 0 norm. We find the additional Euclidean distance of every element in a and zero after they are clipped to the box constraint, which is It can be equivalently written as To derive the Euclidean projection onto 0 norm, we find the -th largest element in η and call it [η]. We keep the elements whose corresponding η i is above or equals to -th, and set rest to zeros. The closed-form solution is given by.1) that the problem is convex and the solution can be derived using KKT conditions. However, Proposition 1 in our paper is different from (, Proposition 4.1). First, we place p norm as a hard constraint rather than minimizing it in the objective function. This difference will make our Lagrangian function more involved with a newly introduced nonnegative Lagrangian multiplier. Second, the problem of our interest is projection onto the intersection of box and p constraints. Such a projection step can then be combined with an attack loss (no need of linearization) for generating adversarial examples. Third, we cover the case of 0 norm. C. For the last four models, we use the exact same architecture as original papers and evaluate them only on CIFAR-10 dataset. The details for model architectures are provided in Table A1. For compatibility with our framework, we implement and train these models based on the strategies adopted in pytorch-cifar classifiers for 50 epochs with Adam and a constant learning rate of 0.001. For CIFAR-10 classifers, the models are trained for 250 epochs with SGD (using 0.8 nesterov momentum, weight decay 5e −4). The learning rate is reduced at epoch 100 and 175 with a decay rate of 0.1. The initial learning rate is set as 0.01 for models {A, B, C, D, H} and 0.1 for {E, F, G}. Note that no data augmentation is employed in the training. with a confidence parameter κ = 50. Cross-entropy loss is also supported in our implementation. The adversarial examples are generated by 20-step PGD/APGD unless otherwise stated (e.g., 50 steps for ensemble attacks). Note that proposed algorithms are robust and will not be affected largely by the choices of hyperparameters (α, β, γ). In consequence, we do not finely tune the parameters on the validation set. Specifically, The learning rates α, β and regularization factor γ for Table 1 Moreover, both deterministic and stochastic transformations are considered in our experiments. In particular, Table 3 and Table A6 are deterministic settings -rot: rotating images 30 degree clockwise; crop: cropping images in the center (0.8 × 0.8) and resizing them to 32 × 32; bri: adjusting the brightness of images with a scale of 0.1; gam: performing gamma correction with a value of 1.3. Differently, in Table A5, we introduce randomness for drawing samples from the distribution -rot: rotating images randomly from -10 to 10 degree; crop: cropping images in the center randomly (from 0.6 to 1.0); other transformations are done with a probability of 0.8. In experiments, we adopt tf.image API 7 for processing the images. 
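Complementing the derivations above, the following sketch implements the box-constrained $\ell_p$ projections of Proposition 1 for $p \in \{\infty, 1, 2, 0\}$. The bisection tolerances and the $\eta$-score used in the $\ell_0$ case follow our reading of the proof and should be treated as assumptions rather than as reference code.

```python
# Euclidean projection onto X = {delta : ||delta||_p <= eps, lo <= delta <= hi}.
import numpy as np

def _clip(a, lo, hi):
    return np.minimum(np.maximum(a, lo), hi)

def _bisect(fn, left, right, iters=60):
    # root finding for a monotonically decreasing fn with fn(left) > 0 > fn(right)
    for _ in range(iters):
        mid = 0.5 * (left + right)
        left, right = (mid, right) if fn(mid) > 0 else (left, mid)
    return 0.5 * (left + right)

def proj_linf(a, eps, lo, hi):
    return _clip(np.clip(a, -eps, eps), lo, hi)

def proj_l1(a, eps, lo, hi):
    if np.abs(_clip(a, lo, hi)).sum() <= eps:
        return _clip(a, lo, hi)
    soft = lambda lam: _clip(np.sign(a) * np.maximum(np.abs(a) - lam, 0.0), lo, hi)
    lam = _bisect(lambda lam: np.abs(soft(lam)).sum() - eps, 0.0, np.abs(a).max())
    return soft(lam)

def proj_l2(a, eps, lo, hi):
    if np.linalg.norm(_clip(a, lo, hi)) <= eps:
        return _clip(a, lo, hi)
    scale = lambda lam: _clip(a / (1.0 + lam), lo, hi)
    lam = _bisect(lambda lam: np.linalg.norm(scale(lam)) - eps, 0.0, np.linalg.norm(a) / eps)
    return scale(lam)

def proj_l0(a, eps, lo, hi):
    # clip to the box first, then keep the eps coordinates whose zeroing would cost most
    d = _clip(a, lo, hi)
    eta = a ** 2 - (a - d) ** 2          # extra squared distance incurred by zeroing each coordinate
    keep = np.argsort(-eta)[: int(eps)]
    out = np.zeros_like(a)
    out[keep] = d[keep]
    return out

x0 = np.full(5, 0.5); lo, hi = -x0, 1.0 - x0      # box keeps x0 + delta inside [0, 1]^d
a = np.array([0.9, -0.8, 0.3, -0.2, 0.05])
print(proj_linf(a, 0.4, lo, hi))
print(proj_l1(a, 1.0, lo, hi), proj_l2(a, 0.6, lo, hi), proj_l0(a, 2, lo, hi))
```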
Table A3 shows the performance of average (ensemble) and min-max (APGD) strategies for attacking model ensembles. Our min-max approach in 15.69% averaged improvement on ASR all over models {A, E, F, H} on CIFAR-10., γ = 6. The attack iteration for APGD is set as 50. Opt. To further demonstrate the effectiveness of self-adjusted weighting factors in proposed min-max framework, we compare with heuristic weighting schemes in Table A4 Table A4 shows that our min-max approach outperforms all static heuristic weighting schemes by a large margin. Specifically, our min-max APGD also achieves significant improvement compared to w static setting, where the converged optimal weights are statically (i.e., invariant w.r.t different images and attack procedure) adopted. It again verifies the benefits of proposed min-max approach by automatically learning the weights for different examples during the process of ensemble attack generation (see Figure 1c). Table A5 and A6 compare the performance of average (EOT Athalye et al. (2018a) ) and min-max (APGD) strategies. Our approach in 4.31% and 8.22% averaged lift over four models {A, B, C, D} on CIFAR-10 under given stochastic and deterministic transformation sets. and γ = 10. To further explore the utility of quadratic regularizer on the probability simplex in proposed min-max framework, we conducted sensitivity analysis on γ and show how the proposed regularization affects the eventual performance (Figure A1) taking ensemble attack as an example. The experimental setting is the same as Table 1 except for altering the value of γ from 0 to 10. Figure A1 shows that too small or too large γ leads to relative weak performance due to the unstable convergence and penalizing too much for average case. When γ is around 4, APGD will achieve the best performance so we adopted this value in the experiments (Table 1). Moreover, when γ → ∞, the regularizer term dominates the optimization objective and it becomes the average case. Figure A2 presents "overall robustness" comparison of our min-max generalized AT scheme and vanilla AT with single type of attacks (∞ and 2) on MNIST (LeNet). Similarly, our min-max training scheme leads to a higher "overall robustness" measured by S. In practice, due to the lacking knowledge of the strengths/types of the attacks used by adversaries, it is meaningful to enhance "overall robustness" of models under the worst perturbation (Acc max adv). Specifically, our min-max generalized AT leads to 6.27% and 17.63% improvement on S compared to single-type AT with ∞ and 2 attacks. Furthermore, weighting factor w of the probability simplex helps understand the behavior of AT under mixed types of attacks. Our AMPGD algorithm will adjust w automatically according to the min-max principle -defending the strongest attack. In Figure A2a, as 2 increases, 2 -attack becomes stronger so its corresponding w increases as well. When 2 ≥ 2.5, 2 -attack dominates the adversarial training process. That is to say, our AMPGD algorithm will put more weights on stronger attacks even if the strengths of attacks are unknown, which is a meritorious feature in practice. also propose a variant of adversarial training to defend universal perturbations over multiple images. To produce universal perturbations, they propose uSGD to conduct gradient descent on the averaged loss of one-batch images. In consequence, their approach can be regarded as a variant of our generalized AT in average case. 
The difference is that they do AT across multiple adversarial images under universal perturbation rather than mixed p -norm perturbations. We added UAT as one of our defense baselines in Table A7. The universal perturbation is generated by uSGD (∞ norm, = 0.3) with a batch size of 128 following. We find that a) our proposed approach outperforms UAT under per-image p attacks. Taking A7a as an example, our avg and min max generalized AT (with DPAR) in average 17.85% and 17.97% improvement in adversarial test accuracy (ATA), b) our approach has just 3.72% degradation in ATA when encountering universal attacks, and c) both methods yield very similar normal test accuracy. It is not surprising that our average and min-max training schemes can achieve better overall robustness while maintaining competitive performance on defending universal perturbation. This is because the defensed model is trained under more general (p norm) and diversity promoted perturbations. As a , proposed generalized AT is expected to obtain better overall robustness and higher transferability as shown in Table 4 and A7. As reported in Sec. 4.2, our min-max generalized AT does not always in the best performance on the success rate of defending the worst/strongest perturbation (Acc max adv) for given (∞, 2) pair, especially when the strengths of two attacks diverge greatly (e.g., for ∞ and 2 attacks are 0.2 and 0.5). In what follows, we provide explanation and analysis about this finding inspired by recent work. and inside 2 ball (right, red area). In particular, the red (blue) area in (a) (or (b)) represents the percentage of adversarial examples crafted by ∞ attack that also belong to 2 (∞) ball. We generate adversarial examples on 10,000 test images for each attack. (c): Average p norm of adversarial examples as a function of perturbation magnitude 2. The top (bottom) side represents the 2-norm (∞) of the adversarial examples generated by ∞ attack as 2 for generalized AT increases. Note that the same as the AT procedure is used while attacking trained robust models. Figure A3 shows the real overlap of ∞ and 2 norm balls in adversarial attacks for MLP model on MNIST. Ideally, if 2 satisfies ∞ < 2 < ∞ × √ d, ∞ and 2 balls will not cover each other completely. In other words, AT with ∞ and 2 attacks cannot interchange with each other. However, the real range of 2 for keeping 2 and ∞ balls intersected is not, because crafted adversarial examples are not uniformly distributed in p -norm balls. In Figure A3b, 99.98% adversarial examples devising using 2 attack are also inside ∞ ball, even if 0.2 < 2 = 0.5 < 5.6. In consequence, AT with ∞ attack is enough to handle 2 -attack in overwhelming majority cases, which in better performance than min-max optimization (Table A8a). Figure A3c presents the average p distance of adversarial examples with 2 increasing. The average 2 -norm (green line) of adversarial examples generated by ∞ attack remains around 2.0 with a slight rising trend. This is consistent to our setting -fixing 2 as 0.2. It also indicates model robustness may effect the behavior of attacks -as 2 increases, robustly trained MLP model becomes more robust against 2 examples, so the ∞ attacker implicitly increases 2 norm to attack the model more effectively. On the other hand, the average ∞ -norm increases substantially as 2 increases from 0.5 to 2.5. 
When 2 arriving at 0.85, the average ∞ norm gets close to 0.2, so around half adversarial examples generated by 2 -attack are also inside ∞ balls, which is consistent with Table A3b. Figure A4 shows the learning curves of model A under different AT schemes, where two setting are plotted: (a) (∞, 2) = (0.2, 0.5); (b) (∞, 2) = (0.2, 2.0). Apart from better worst-case robustness shown in Table A8, our min-max generalized AT leads to a faster convergence compared to average-based AT, especially when the strengths of two attacks diverge greatly. For instance, when 2 = 0.5 (Figure A4a Tracking domain weight w of the probability simplex from our algorithms is an exclusive feature of solving problem 1. In Sec. 4, we show the strength of w in understanding the procedure of optimization and interpreting the adversarial robustness. Here we would like to show the usage of w in measuring "image robustness" on devising universal perturbation to multiple input samples. Table A9 and A10 show the image groups on MNIST with weight w in APGD and two metrics (distortion of 2-C&W, minimum for ∞ -PGD) of measuring the difficulty of attacking single images. The binary search is utilized to searching for the minimum perturbation. Although adversaries need to consider a trade-off between multiple images while devising universal perturbation, we find that weighting factor w in APGD is highly correlated under different p norms. Furthermore, w is also highly related to minimum distortion required for attacking a single image Table A8: Adversarial training of MNIST models with single attacks (∞ and 2) and multiple attacks (avg. and min max). During the training process, the perturbation magnitude ∞ is fixed as 0.2, and 2 are changed from 0.5 to 3.0 with a step size of 0.5. For min-max scheme, the adversarial examples are crafted using 20-step ∞-APGD with α = 1 6, β = 1 50 and γ = 4. The ratio of adversarial and benign examples in adversarial training is set as 1.0. For diversity-promoting attack regularizer (DPAR) in generalized AT, the hyperparameter λ = 0.1. (a) (∞, 2) = (0.2, 0.5) successfully. It means the inherent "image robustness" exists and effects the behavior of generating universal perturbation. Larger weight w usually indicates an image with higher robustness (e.g., fifth 'zero' in the first row of Table A9), which usually corresponds to the MNIST letter with clear appearance (e.g., bold letter).
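The analysis above suggests a simple self-risk assessment recipe: compare the learned domain weights with an independent per-image hardness measure. The sketch below does this on synthetic stand-ins; in practice the weights $w$ would come from APGD and the hardness scores from per-image minimum-distortion attacks (e.g., C&W with binary search).

```python
# Correlate domain weights with a per-image hardness score (synthetic stand-ins).
import numpy as np

rng = np.random.default_rng(1)
hardness = rng.uniform(0.5, 3.0, size=20)        # e.g., minimum ell_2 distortion per image
w = rng.dirichlet(1.0 + 5.0 * hardness)          # synthetic domain weights, skewed toward hard images

def pearson(u, v):
    u, v = u - u.mean(), v - v.mean()
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

print("correlation between w and per-image hardness:", round(pearson(w, hardness), 3))
```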
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
S1eik6EtPB
A unified min-max optimization framework for adversarial attack and defense
Most deep learning models rely on expressive high-dimensional representations to achieve good performance on tasks such as classification. However, the high dimensionality of these representations makes them difficult to interpret and prone to over-fitting. We propose a simple, intuitive and scalable dimension reduction framework that takes into account the soft probabilistic interpretation of standard deep models for classification. When applying our framework to visualization, our representations more accurately reflect inter-class distances than standard visualization techniques such as t-SNE. We show experimentally that our framework improves generalization performance to unseen categories in zero-shot learning. We also provide a finite sample error upper bound guarantee for the method. Dimensionality reduction is an important problem in machine learning tasks to increase classification performance of learned models, improve computational efficiency, or perform visualization. In the context of visualization, high-dimensional representations are typically converted to two or threedimensional representations so that the underlying relations between data points can be observed and interpreted from a scatterplot. Currently, a major source of high-dimensional representations that machine learning practitioners have trouble understanding are those generated by deep neural networks. Techniques such as PCA or t-SNE (BID11 are typically used to visualize them, e.g., in ; BID5 . Moreover, BID4 proposed a visualization technique that represents examples based on their predicted category only. However, none of these techniques exploit the fact that deep models have soft probabilistic interpretations. For instance, the output of deep classifiers typically employs softmax regression, which optimizes classification scores across categories by minimizing cross entropy. This in soft probabilistic representations that reflect the confidence of the model in assigning examples to the different categories. Many other deep learning tasks such as semantic segmentation or boundary/skeleton detection BID16 also optimize for probability distributions. In this paper, we experimentally demonstrate that the soft probability representations learned by a neural network reveal key structure about the learned model. To this end, we propose a dimensionality reduction framework that transforms probability representations into a low-dimensional space for easy visualization. Furthermore, our approach improves generalization. In the context of zero-shot learning where novel categories are added at test time, deep learning approaches often learn high-dimensional representations that over-fit to training categories. By learning low-dimensional representations that match the classification scores of a high-dimensional pre-trained model, our approach takes into account inter-class similarities and generalizes better to unseen categories than standard approaches. Proposed approach: We propose to exploit as input representations the probability scores generated by a high-dimensional pre-trained model, called the teacher model or target, in order to train a lower-dimensional representation, called the student. In detail, our approach learns low-dimensional student representations of examples such that, when applying a specific soft clustering algorithm on the student representations, the predicted clustering scores are similar to the target probability scores. 
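As a minimal illustration of the inputs DRPR expects, the snippet below extracts soft probability targets (softmax outputs over the k training categories) from a stand-in pre-trained classifier; the tiny network and the random batch are placeholders, not the models used in our experiments.

```python
# Building the teacher targets: softmax scores of a (stand-in) pre-trained classifier.
import torch
import torch.nn as nn

teacher = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))   # stands in for a pre-trained classifier
x = torch.randn(32, 3, 32, 32)                                       # a batch of images (placeholder data)
with torch.no_grad():
    Y = torch.softmax(teacher(x), dim=1)                             # n x k target assignment matrix
print(Y.shape, Y.sum(dim=1)[:3])                                     # each row is a probability distribution
```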
Contributions: This paper makes the following contributions: We propose the first dimensionality reduction approach optimized to consider some soft target probabilistic representations as input. By exploiting the probability representations generated by a pre-trained model, our approach reflects the learned semantic structure better than standard visualization approaches. We experimentally show that our approach improves generalization performance in zero-shot learning. We theoretically analyze the statistical properties of the approach and provide a finite sample error upper bound guarantee for it. Our method, called Dimensionality Reduction of Probabilistic Representations (DRPR, pronounced "Derper"), is given probability distribution representations generated from high-dimensional data as target. Its goal is to learn a low-dimensional representation such that the soft clustering scores predicted by a soft clustering algorithm are similar to the target. If the targets are probability distributions generated by a pre-trained classifier, we want the low-dimensional space to reflect the relationships between categories interpreted by the classifier. The position of each example in the low-dimensional space should then reflect the ambiguity of the classifier for the example (see FIG1). We summarize in Section 2.1 the soft clustering algorithm that is used by DRPR in the low-dimensional space, the algorithm is detailed in. The general learning algorithm of DRPR is introduced in Section 2.2. Probability density: We consider that we are given a set of n vectors f 1, · · ·, f n ∈ V concatenated into a single matrix F = [f 1, · · ·, f n] ∈ V n. In the following, we consider V = R d, and V n = R n×d. The goal is to partition n examples into k soft clusters. Each cluster C c with c ∈ {1, · · ·, k} has a centerμ c ∈ V and its corresponding probability density is p c (f i) = exp(−d(f i,μ c)) b(f i), where d is a regular Bregman divergence and b: V → R + is a uniquely determined function that depends on d and ensures that the integral of the density over V is 1 (e.g., b(f i) = 1/(2π) d if d is the squared Euclidean distance). For simplicity, we consider that d is the squared Euclidean distance. The density p c (f i) decreases as the divergence between the example f i and the centerμ c increases. The BSCP is defined as that of learning the maximum likelihood parameters Γ = {μ c,π c} k c=1 of a mixture model p(f i |Γ) = k c=1π c exp(−d(f i,μ c)) b(f i) whereπ c is the prior probability that f i is generated by C c. To partition the n examples into k clusters, apply the EM algorithm to maximize the likelihood parameter estimation problem for mixture models formulated as: max Γ n i=1 p(f i |Γ) Assignment matrix: Partitioning the n observations in F into k soft clusters is equivalent to determining some soft assignment matrixŶ ∈ n×k in the set Y n×k of matrices whose rows are positive and sum to 1. Formally, Y n×k is written Y n×k:= {Ŷ ∈ n×k:Ŷ 1 k = 1 n } where 1 k ∈ {1} k is the k-dimensional vector containing only 1. For a given value of Γ, the element Y ic = p(C c |f i) ∈ is the posterior probability, or responsibility of C c for f i. The higher the value ofŶ ic, the more likely f i belongs to cluster C c.Local maximum condition: Once the BSCP has converged to a local maximum of max Γ n i=1 p(f i |Γ), the following equations are all satisfied:(E-step) ∀i, c, p(C c |f i) =Ŷ ic =π c exp(−d(f i,μ c)) DISPLAYFORM0 Eq. 
corresponds to the E-step of the EM algorithm, which computes $p(\mathcal{C}_c \mid f_i) = \hat{Y}_{ic}$ when the parameters $F$ and $\Gamma$ are given. The second optimality condition corresponds to the M-step, which has a simple form since the likelihood is a regular exponential family function. The M-step may be computationally expensive for other types of exponential family distributions (see Section 5.2 of the work introducing the BSCP). It is worth noting that these optimality conditions do not depend on the function $b$ used to define $p_c(f_i)$, so $b$ can be ignored. Section 2.1 explains how to perform soft clustering on some fixed representation $F$. We now describe how to learn $F$ so that the soft clustering scores predicted by the BSCP match those of the target. We assume that we are given a probability representation $y_i \in \Delta_k$ (the $k$-dimensional probability simplex) as target for each training example $f_i$. These representations are concatenated into a single matrix $Y = [y_1, \cdots, y_n] \in \mathcal{Y}_{n \times k}$, which is the target of our method for $F$. DRPR learns the representation $F \in \mathcal{V}^n$ such that the soft assignment matrix obtained from applying the BSCP to $F$ is close to $Y$. We first give the formulation of our prediction function, and then state our dimensionality reduction problem. Prediction function: Assume that we are given the dataset matrix $F = [f_1, \cdots, f_n] \in \mathcal{V}^n$, the cluster centers $M = [\mu_1, \cdots, \mu_k]$ and the priors $\pi = [\pi_1, \cdots, \pi_k]$. We define our prediction function $\psi(F, M, \pi) = \Psi \in \mathcal{Y}_{n \times k}$ as the soft assignment matrix predicted by the BSCP given $F$, $M$ and $\pi$. The elements of the matrix $\Psi$ are computed as follows: $\Psi_{ic} = \frac{\pi_c \exp(-d(f_i, \mu_c))}{\sum_{m=1}^{k} \pi_m \exp(-d(f_i, \mu_m))}$. Optimization problem: DRPR learns the representation $F$ so that the predicted assignment matrix $\psi(F, M, \pi) = \Psi$ is similar to $Y$. Given the optimality conditions of the BSCP stated in Section 2.1, the optimal values of $M$ and $\pi$ also depend on $\Psi$ and are therefore variables of our dimensionality reduction problem, which we formulate as $\min_{F, M, \pi} \Delta_n(\psi(F, M, \pi), Y)$. The function $\Delta_n(\Psi, Y)$ is an empirical discrepancy loss between the predicted assignment matrix $\Psi$ and the target assignment matrix $Y$. Since the rows of $\Psi$ and $Y$ represent probability distributions, we formulate $\Delta_n$ as the average KL-divergence between the rows of $Y$ and the rows of $\Psi$: letting $\psi_i$ and $y_i$ be the $i$th rows of $\Psi$ and $Y$, respectively, we define $\Delta_n(\Psi, Y) = \frac{1}{n} \sum_{i=1}^{n} D_{\mathrm{KL}}(y_i \,\|\, \psi_i)$. Note that the choice of the discrepancy loss $\Delta_n$ is independent of the chosen Bregman divergence $d$. Moreover, the number of classes $k$ has an impact on the number of clusters in the low-dimensional space but is not related to the dimensionality $d$ of the model. DRPR considers that each class $c \in \{1, \cdots, k\}$ is represented by one cluster prototype $\mu_c \in \mathbb{R}^d$. In our experiments, the target (or teacher) is the assignment matrix $Y \in \mathcal{Y}_{n \times k}$ that contains the probability scores generated by a pre-trained neural network. It corresponds to the output of a classifier trained with softmax regression in the visualization experiments, and to the matrices $Y_1$ and $Y_2$ described in Section 4.2. The goal is then to learn $F$, $M$ and $\pi$ so that they reach the BSCP optimality conditions given in Section 2.1 and $\Psi = \psi(F, M, \pi)$ is similar to $Y$. Visualization: DRPR can be used for visualization since many models (e.g., usual neural networks) have probabilistic interpretations w.r.t. the $k$ training categories. In our visualization task, the matrices $M$ and $\pi$ are not provided, whereas the target matrix $Y$ is given. By using the optimality conditions
k training categories), nonlinear mapping g θ parameterized by parameters θ, number of iterations t 1: for iteration 1 to t do 2: Randomly sample n training examples x1, · · ·, xn ∈ X and create target assignment matrix Y ∈ Y n×k containing the target probability scores y1, DISPLAYFORM0 −1 Y F and prior vector π ← 1 n Y 1n 5: Update the parameters θ by performing a gradient descent iteration of ∆n (ψ(F, M, π), Y ) (i.e., Eq.) 6: end for output: nonlinear mapping g θ in Eq., we can write the desired values of M and π as a function of F and Y: at each iteration, for some current value of F, the optimal values M = diag(Y 1 n) −1 Y F and π = 1 n Y 1 n are computed and F is updated via gradient descent. DRPR is illustrated in Algorithm 1 in the case where F is the output of a model g θ parameterized by θ, e.g., g θ can be a neural network. However, we represent F as non-parametric embeddings in our visualization experiments to have an equitable comparison to non-parametric baselines. The learning algorithm then modifies the matrix F at each iteration. If the priors π c are all equal, then the priors are updated in step 4 as follows: π ← 1 k 1 k. Zero-shot learning: DRPR can be used to improve zero-shot learning generalization since highdimensional models may overfit to training categories and the goal of zero-shot learning is to generalize to novel categories. In the considered zero-shot learning task, the variable F concatenates image representations (outputs of a neural network) in the same way as step 3 of Algorithm 1, and the variable M concatenates category representations extracted from text (outputs of another neural network). Both F and M are of different nature and are therefore computed as concatenating the outputs of two distinct neural networks taking different sources as input. To optimize Eq., both neural networks are trained jointly by fixing the other neural network during backpropagation. In our experiments, we consider the squared Euclidean distance d(f i,μ c) = f i −μ c 2 2. However, DRPR can be used with any regular Bregman divergence . The algorithm is then identical, with the exception of the chosen divergence d to compute the prediction in Eq..Convergence and scalability: Although our problem has 3 variables (F, M and π), we use the optimal properties of the BSCP to write them as a function of each other (see step 4 of Algo 1). Eq. is then an optimization problem w.r.t. only one variable F. Since the problem is differentiable wrt F and unconstrained, it is easy to optimize by gradient descent (e.g., back-propagation when training neural networks). Moreover, our loss is nonnegative, hence lower-bounded. It is worth noting that we do not apply multiple iterations of the EM algorithm at each gradient descent iteration, as we first use the optimal properties of BSCP to obtain closed-form formulations of M and π, and then compute ψ(F, M, π) = Ψ to minimize our problem in Eq.. Unlike t-SNE and many iterative DR problems, the complexity of DRPR is linear in n (instead of quadratic) and linear in k, which makes it efficient and scalable. Our visualization experiments take less than 5 minutes to do 10 5 iterations while t-SNE takes 1 hour to do 1000 iterations. PCA, which has an efficient closed-form solution, is still much faster. Our approach is thus simple to optimize, hence scalable, and it generalizes to a large family of Bregman divergences. We now interpret the gradient of our optimization problem w.r.t. examples. 
To simplify its formulation, we consider that all the priors π 1, · · ·, π k are equal and the matrix M does not depend on F, which is the case in our zero-shot learning task. When d is the squared Euclidean distance, all the priors are equal and M does not depend on F, the gradient of Eq. w.r.t. f i is: DISPLAYFORM0 One can observe that its magnitude depends on both the target scores Y ic ∈ and the predicted responsibilities Ψ ic. The gradient tries to make the vector f i closer to each centroid µ c while separating it from all the centroids µ m depending on their predicted scores Ψ im ∈. We analyze the statistical property of the algorithm in Appendix A. Theorem 1 provides a finite sample upper bound guarantee on the quality of the minimizer of the empirical discrepancy of Eq.. We show that it is upper bounded by the minimum of the true expected discrepancy, and an estimation error term of O(n −1/2) (under certain conditions and with high probability). We defer the detail to the appendix. This paper introduces a dimensionality reduction method that represents the relations in a dataset that has probabilistic interpretation. It can be seen as a metric learning approach. Metric Learning: The most related approaches try to solve the "supervised" hard clustering problem . During training, they are given the target hard assignment matrix Y ∈ {0, 1} n×k where Y ic = 1 if the training example f i has to belong to C c, and 0 otherwise. The goal is to learn a representation such that applying a hard clustering algorithm (e.g., kmeans) on the training dataset will return the desired assignment matrix Y. These approaches can be decomposed in 2 groups: the methods that optimize some regression problem (; ; BID5 and exploit orthogonal projection properties that hold only in the hard clustering context, and the methods that use exponential families (; BID5 to describe some probability score. In the latter case, the learning problem is written as a multiclass logistic regression problem where the probability of a category C c given an example DISPLAYFORM0 and δ W is the learned dissimilarity function. DRPR generalizes those approaches and can also be used for hard clustering learning. DISPLAYFORM1 n×k is a hard assignment matrix and ∆ n is the same ∆ n as ours (i.e., ∆ n (Ψ, Y) = 1 n n i=1 D KL (y i ψ i) by using the convention 0 log 0 = 0). Moreover, the optimal value of M is not implicitly written as a function of F and Y in BID5. The approach in is similar to BID5 but only considers linear models. In summary, both BID5 and do not exploit the BSCP formulation to its full potential as they consider some restricted hard clustering case. DRPR generalizes these approaches to the soft clustering context at no additional algorithmic complexity. Dimensionality reduction: Learning models by exploiting soft probability scores predicted by another pre-trained model as supervision was proposed in for classification. It was experimentally observed in that using the output of a large pre-trained neural network as supervision, instead of ground truth labels, improves classification performance of small neural networks. However, in , each dimension of the student representation describes the confidence score of the model for one training category, which is problematic in contexts such as zero-shot learning where categories can be added or removed at test time. Our approach can learn a representation with dimensionality different from the number of training categories. It can therefore be used in zero-shot learning. 
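Returning to the gradient of the DRPR objective discussed at the beginning of this section: for the squared Euclidean divergence, equal priors and fixed centers, one closed form consistent with the qualitative description above is grad_{f_i} KL(y_i || psi_i) = 2 * sum_c (Y_ic - Psi_ic) * (f_i - mu_c); the snippet below only checks this reconstruction against automatic differentiation and is not taken from the original implementation.

```python
# Autograd check of the reconstructed per-example gradient.
import torch

torch.manual_seed(0)
d, k = 3, 4
f = torch.randn(d, requires_grad=True)
M = torch.randn(k, d)                                   # fixed cluster centers mu_c
y = torch.softmax(torch.randn(k), dim=0)                # target scores y_i

sq_dist = ((f[None, :] - M) ** 2).sum(dim=1)            # ||f - mu_c||^2 for each c
psi = torch.softmax(-sq_dist, dim=0)                    # responsibilities (equal priors)
loss = (y * (torch.log(y) - torch.log(psi))).sum()      # KL(y || psi)
loss.backward()

manual = 2.0 * ((y - psi)[:, None] * (f.detach() - M)).sum(dim=0)
print(torch.allclose(f.grad, manual, atol=1e-5))        # expected: True
```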
Dimensionality reduction with neural networks has been proposed in unsupervised contexts (e.g., to maximize variance in the latent space ) and in supervised contexts (e.g., using ground truth labels as supervision BID3). Instead, our approach exploits probability scores generated by a teacher pre-trained model. These scores may be very different from ground truth labels if the pretrained model does not generalize well on the dataset. Our approach can then help understand what was learned by the teacher by visualizing groups of examples the teacher has trouble distinguishing. representation obtained with t-SNE by exploiting soft probability scores w.r.t. the 6 clusters; (d) 2D representation obtained by our method by exploiting using the same supervision as (c). The relative inter-cluster distances of the original dataset are preserved with our approach, unlike t-SNE. We evaluate the relevance of our method in two types of experiments. The first learns low-dimensional representations for visualization to better interpret pre-trained deep models. The second experiment exploits the probability scores generated by a pre-trained classifier in the zero-shot learning context; these probability scores are used as supervision to improve performance on novel categories. Interpreting deep models is a difficult task, and one of the most common tools to solve that task is visualization. Representations are most often visualized with t-SNE which does not account for the probabilistic interpretation of the learned models. We propose to exploit probability classification scores as input of our dimensionality reduction framework. In the visualization experiments of this section, DRPR learns non-parametric low-dimensional embeddings (i.e. our representations are not outputs of neural networks but vectors) as done by non-parametric baselines. Nonetheless, DRPR can also be learned with neural networks (e.g., as done in Section 4.2).Toy dataset: As an illustrative toy experiment, we compare the behavior of t-SNE and DRPR when applied to reduce a simple artificial 3-dimensional dataset as 2D representations. The 3D dataset illustrated in FIG3 (a) contains k = 6 clusters, each generated by a Gaussian model and containing 1,000 artificial points. To generate target soft clustering probability scores in the 3D space, we compute the relative distances of the examples to the different cluster centers and normalize them to obtain (probability) responsibilities as done in Eq.. In detail, let us note the original DISPLAYFORM0 (where n is the number of examples) plotted in FIG3 n×k where k = 6 is constructed by computing DISPLAYFORM0 where c c ∈ R 3 is the center of the c-th Gaussian model (defined by its color) in the original space, and priors are equal. We plot in FIG3 the 2D representation of our model when using Y as input/target, and the t-SNE representations obtained when using the original dataset X as input in Fig the global structure of the original dataset is better preserved with DRPR than with t-SNE; DRPR satisfies the relative distances between the different clusters better than t-SNE since DRPR tries to preserve the relative responsibilities of the different clusters. We quantitatively evaluate these two observations in the following. It is also worth noting that it is known that distances between clusters obtained with t-SNE may not be meaningful BID13 as t-SNE preserves local neighborhood instead of global similarities. In the following (i.e. 
in FIG6 and tables of ), we only consider the case where t-SNE takes as input the logits (i.e., classification scores before the softmax operation) instead of probability scores since the latter case returns bad artifacts such as the one in FIG3 (c). Other examples of bad artifacts obtained with t-SNE exploiting probability scores are provided in the supplementary material. We also provide in the supplementary material the visualizations obtained by t-SNE when replacing the 2 -norm in the input space by the KL-divergence and the Jensen-Shannon divergence to compare probabilistic representations that DRPR uses as input. These lead to worse visualizations than Quantitative evaluation metrics: We quantify the relevance of our approach with the two following evaluation metrics: FORMULA0 an adaptation of the Neighborhood Preservation Ratio (NPR) (Van der): for each image i, it counts the ratio of κ nearest neighbors of i (i.e. that have the closest probability scores w.r.t. the KL divergence) that are also in the set of κ nearest neighbors in the learned low-dimensional space (w.r.t. the Euclidean distance). It is averaged over all images i. This metric evaluates how much images that have similar probability scores are close to each other with the student representation. Clustering Distance Preservation Ratio (CDPR): we randomly sample 10 5 triplets of images (i, i +, i −) such that the 3 images all belong to different categories and i has closer probability score to i + than to i − w.r.t. the KL divergence. The metric counts the percentage of times that the learned representation of i is closer to i + than to i − w.r.t. the Euclidean distance in the low-dimensional representation. This evaluates how well inter-cluster distances are preserved. We evaluate our approach on the test sets of the MNIST , STL , CIFAR 10 and CIFAR 100 datasets with pre-trained models that are publicly available and optimized for cross entropy.2 The dimensionality of the high-dimensional representations is equal to the number of categories in the respective datasets (i.e. 10 except for CIFAR 100 that contains 100 categories). Our goal is to visualize the teacher representations with 2-dimensional representations by using their probability scores as target, not the ground truth labels. Quantitative comparisons with standard visualization techniques such as t-SNE, ISOMAP BID9, Locally Linear Embedding (LLE) BID2 and PCA using the 2 leading dimensions are provided in TAB0. We also report the scores obtained with the logit representations which are not dimensionality reduction representations but provide an estimate of the behavior of the original dataset. DRPR outperforms the dimensionality reduction baselines w.r.t. both evaluation metrics and is competitive with the logit representation. Examples that have similar probability-based representations are closer with our approach than with other dimensionality reduction baselines. DRPR also better preserves inter-cluster distances. It is worth noting that DRPR exploits as much supervision as the "unsupervised" visualization baselines. Indeed, all the compared methods use as input the same source of supervision which is included in the (classifier output) representations given as input. Qualitative : Visualizations of pre-trained models obtained with DRPR and t-SNE are illustrated in FIG6 for MNIST and STL. The visualizations for CIFAR 10 and 100 are in the supplementary material. 
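To make the two metrics concrete, the sketch below computes NPR and CDPR from teacher scores Y and learned embeddings Z. The KL-based neighbor definition and the triplet sampling follow the description above, while tie-breaking, the toy data, and the reduced number of triplets are assumptions made to keep the example small.

```python
# NPR and CDPR from teacher probabilities Y (n x k) and student embeddings Z (n x d).
import numpy as np

def kl_matrix(Y, eps=1e-12):
    logY = np.log(Y + eps)
    return (Y[:, None, :] * (logY[:, None, :] - logY[None, :, :])).sum(-1)  # D[i, j] = KL(y_i || y_j)

def npr(Y, Z, kappa=10):
    D_p = kl_matrix(Y)
    D_z = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(D_p, np.inf); np.fill_diagonal(D_z, np.inf)
    nn_p = np.argsort(D_p, axis=1)[:, :kappa]
    nn_z = np.argsort(D_z, axis=1)[:, :kappa]
    return np.mean([len(set(a) & set(b)) / kappa for a, b in zip(nn_p, nn_z)])

def cdpr(Y, Z, labels, n_triplets=2000, seed=0):
    rng, D_p = np.random.default_rng(seed), kl_matrix(Y)
    hits = trials = 0
    while trials < n_triplets:
        i, j, k = rng.integers(0, len(Y), size=3)
        if len({labels[i], labels[j], labels[k]}) < 3:      # require three different categories
            continue
        pos, neg = (j, k) if D_p[i, j] < D_p[i, k] else (k, j)
        hits += bool(np.sum((Z[i] - Z[pos]) ** 2) < np.sum((Z[i] - Z[neg]) ** 2))
        trials += 1
    return hits / trials

rng = np.random.default_rng(0)
Y = rng.dirichlet(np.ones(5), size=200)
Z = rng.normal(size=(200, 2))
labels = Y.argmax(1)
print(npr(Y, Z), cdpr(Y, Z, labels))
```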
DRPR representations contain spiked groups at the corners to better reflect examples that have high confidence scores for one category. Indeed, an example in a spike at the corner of a figure has a soft assignment score w.r.t. its closest center close to 1. This means that the pre-trained model has very high confidence to assign the example to the corresponding category (see One can observe that representations obtained with DRPR reflect the semantic structure between categories. On MNIST, categories that contain a curve at the bottom of the digit (i.e., 0, 3, 5, 6, 8 and 9) are in the bottom of FIG6 (left); some pairs of digits that are often hard to differentiate by classifiers (i.e., 4 and 9, 1 and 7, 3 and 8) are also adjacent. On STL and CIFAR 10, animal categories are illustrated on the right whereas machines are on the left. Semantically close categories such as airplane and bird, or car and truck are also adjacent in the figures. One main difference between the DRPR and t-SNE representations for STL is the distance between the clusters ship and airplane. These categories are actually hard for the model to differentiate since they contain blue s and relatively similar objects. In particular, the STL airplane category contains many images of seaplanes lying on the sea and can then be mistaken for ships. This ambiguity between both categories is not observed on the t-SNE representation. Due to lack of space, a detailed analysis for the CIFAR 100 and STL datasets is available in the supplementary material. A summary of the is that categories that belong to the same superclass (e.g., categories hamster, mouse, rabbit, shrew, squirrel are part of the superclass small mammals) are grouped together with DRPR. The DRPR visualization also reflects some semantical structure: plants and insects are on the top left; animals are on the bottom left and categories on the right are outdoor categories. Medium mammals are also represented between small mammals and large carnivores. In , the quantitative show that the representations of DRPR are meaningful since they better preserve the cluster structure and allow observation of ambiguities between categories. We consider the same zero-shot learning scenario as BID1 and BID5. In particular, we test our approach on the same datasets and splits as them. The main goal is to learn two mappings, one for image representations and one for category representations, in a common space V. The latter mapping takes some category descriptions as input (e.g., from text description or visual attributes). Image representations are then learned so that they are closer to the representative of their category than to the representative of any other category. At test time, categories that were unseen during training are considered, and their representative is obtained by using the second mapping. An image is assigned to the category with closest representative. Training datasets: We use the medium-scaled Caltech-UCSD Birds (CUB) dataset BID14 and Oxford Flowers-102 (Flowers) dataset . CUB contains 11,788 bird images from 200 different species categories split into disjoint sets: 100 categories for training, 50 for validation and 50 for test. Flowers contains 8,189 flower images from 102 different species categories: 62 categories are used for training, 20 for validation and 20 for test. To represent images, BID1 train a GoogLeNet BID8 model whose output dimensionality is 1,024. 
For each category, BID1 extract some text annotations from BID0 50.1% DS-SJE BID1 (Bag-of-words) 44.1 % DS-SJE BID1 (Char CNN-RNN) 54.0 % Ziming & Saligrama BID19 55.3 % DS-SJE BID1 (Word CNN-RNN) 56.8 % Prototypical Networks BID5 58.3 % Ours -using DS-SJE (Char CNN-RNN) as supervision 57.7 % Ours -using Prototypical Networks as supervision 60.3 % 59.6 % DS-SJE BID1 (Word CNN-RNN) 65.6 % Prototypical Networks BID5 63.9 % Ours -using DS-SJE (Char CNN-RNN) [publicly available] 62.4 % Ours -using Prototypical Networks as supervision 68.2 % which they learn a representative vector (e.g., based on Char CNN-RNN BID18). The image representations of examples and the text representations of categories are learned jointly so that each image is more similar to the representative vector of its own category than to any other. We now describe how we generate the supervision/target of our model from the models pre-trained by BID1 BID5 and provided by their respective authors. Once its training is over, Prototypical Network BID5 represents each image i by some vectorf i and each category c by some vectorμ c. By concatenating the different vectors into matrices DISPLAYFORM0 In the case of BID1, we consider the same preprocessing as BID5. Each image i is represented by some vectorf i, each category c is represented by some vectorμ c (that is 2 -normalized in BID5 . The target soft assignment matrix of DRPR is then DISPLAYFORM1 In the model that BID5 provide and that obtains 58.3% accuracy on CUB, ProtoNet trains two models that take as arguments the representations learned by BID1 . They train one modelgθ 1 for images such that ∀i,f i =gθ 1 (f i), and one modelgθ 2 for text representative vectors such that ∀c,μ c =gθ 2 (μ c). Following BID5, we train two (neural network) models: g θ1 for images, and g θ2 for categories. Both of them take as input the image and category representations used to create the target soft assignment matrix (i.e., we take the representations learned by BID1 when its probability scores Y 2 are used as supervision, and the representations learned by otherwise). In this context, we alternately optimize g θ1 by fixing M (which depends on g θ2) and optimize g θ2 by fixing F (which depends on g θ1).Implementation details: We consider that the learned models g θ1 and g θ2 have the same architecture and are multilayer perceptrons (MLP) with tanh activation functions. The number of hidden layers λ ∈ {0, 1, 2} and output dimensionality d are hyperparameters cross-validated from the accuracy on the validation set. More details on their architecture can be found in the supplementary material. Results: We report the performance of our approach on the test categories of the CUB and Flowers datasets in TAB2, respectively. The performance is measured as the average classification accuracy across all unseen classes. We use DS-SJE (Char CNN-RNN) and Prototypical Networks as supervision for our model because they are the only approaches whose pre-trained models are publicly available. Our approach obtains state-of-the-art on both CUB and Flowers datasets by significantly improving the classification performance of the different classifiers. For instance, it improves the scores of 63.9% obtained by ProtoNet of Flowers up to 68.2%. In general, it improves zero-shot learning performance of the different classifiers by 2% to 4.3%. 
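The training procedure described above can be summarized in a short PyTorch sketch. The exact forms of the prediction function and of the teacher's soft-assignment matrix are not recoverable from this extraction (they appear only as placeholders), so a softmax over negative squared distances is used as a stand-in, the optimizer (Adam), full-batch updates, and the random tensors standing in for the pre-trained DS-SJE/ProtoNet features are illustrative, and the mini-batching and stopping criteria of Appendix C.1 are omitted.

```python
import torch
import torch.nn as nn

def make_mlp(e, d, n_hidden):
    """MLP with tanh activations; hidden layers kept at the input width e
    (our reading of the "e-d" / "e-e-d" architectures described in the appendix)."""
    layers = []
    for _ in range(n_hidden):
        layers += [nn.Linear(e, e), nn.Tanh()]
    layers.append(nn.Linear(e, d))
    return nn.Sequential(*layers)

def soft_assign(F, M):
    """Assumed prediction: softmax over negative squared distances to the k representatives."""
    return torch.softmax(-torch.cdist(F, M) ** 2, dim=1)

def kl_loss(Y, P, eps=1e-12):
    """Mean KL divergence between target assignments Y and predicted assignments P."""
    return (Y * (torch.log(Y + eps) - torch.log(P + eps))).sum(dim=1).mean()

e, d = 1024, 64                                               # GoogLeNet feature size; d is cross-validated
f_pre, mu_pre = torch.randn(5894, e), torch.randn(100, e)     # stand-ins for pre-trained features
Y = soft_assign(f_pre, mu_pre).detach()                       # stand-in for the teacher's soft targets
g1, g2 = make_mlp(e, d, 2), make_mlp(e, d, 2)                 # image model and category model
opt1 = torch.optim.Adam(g1.parameters())
opt2 = torch.optim.Adam(g2.parameters())

for step in range(1000):
    # Update the image model g1 while the category representatives M are held fixed.
    M = g2(mu_pre).detach()
    opt1.zero_grad(); kl_loss(Y, soft_assign(g1(f_pre), M)).backward(); opt1.step()
    # Update the category model g2 while the image representations F are held fixed.
    F = g1(f_pre).detach()
    opt2.zero_grad(); kl_loss(Y, soft_assign(F, g2(mu_pre))).backward(); opt2.step()
```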
We report in the supplementary material the performance of our model on both the validation and test sets using different numbers of hidden layers, and ranging the output dimensionality d from 16 to the dimensionality e of the input representations. Except for linear models (i.e. λ = 0), reducing the dimensionality improves generalization. This shows that the zeroshot learning performance of a given model can be significantly improved by taking its prediction scores as supervision of our model. To study the impact of the dimensionality reduction generated DRPR, we also ran the codes of BID1; BID5 by learning representations with dimensionality smaller than those provided (using the same ranges as those in the tables of the supp. material). This decreased their generalization performance. Therefore, directly learning a low-dimensional representation is not a sufficient condition to generalize well. Our framework that learns representations so that examples with similar ambiguities (i.e. similar teacher predictions) are close to each other acts as a semantic regularizer. This is suggested by the fact that test accuracy is improved with DRPR even when e = d (as long as the MLPs contain hidden layers).It is worth mentioning that one test category of the CUB dataset (Indigo bunting) belongs to the ImageNet dataset that was used to pretrain GoogLeNet. By using the train/val/test category splits proposed by Xian et al., we did not observe a change of performance of the different models on CUB. We have proposed a dimensionality reduction approach such that the soft clustering scores obtained in the low-dimensional space are similar to those given as input. We experimentally show that our approach improves generalization performance in zero-shot learning on challenging datasets. It can also be used to complement t-SNE, as a visualization tool to better understand learned models. In particular, we show that we can give a soft clustering interpretation to models that have probabilistic interpretations. Real-world applications that can be used with DRPR include distillation. For instance, when the teacher model is too large to store on a device with small memory (e.g., mobile phone), the student model which has a smaller memory footprint is used instead. Low-dimensional representations can also speed up retrieval tasks. Future work includes applying our approach to the task of distillation in the standard classification task where training categories are also test categories. We thank Fartash Faghri and the anonymous reviewers for their helpful comments on early versions of this manuscript. This work was supported by Samsung and the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/Interior Business Center (DoI/IBC) contract number D16PC00003. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. AMF acknowledges funding from the Canada CIFAR AI Chairs Program. Disclaimer: The views and contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoI/IBC, or the U.S. Government. This section provides a statistical guarantee for the algorithm presented in Section 2.2. 
We show that the minimizer of the empirical discrepancy, which uses a finite number of data points x 1, · · ·, x n ∈ X, is close to the minimizer of the true expected discrepancy measure, to be defined shortly. Let us define the setup here. We are given data points D n = {X 1, . . ., X n}.3 We suppose that each X i ∈ X is independent and identically distributed (i.i.d.) with the distribution ν ∈ M(X), where M(X) is the space of all probability distributions defined over X. The teacher is a fixed function φ = [φ(·; 1),..., φ(·; k)] that maps points in X to a k-dimensional simplex, and provides the target probability distributions. That is, the target y i for X i is computed as y i = φ(X i).Consider a function space G whose domain is X and its range is a subset of R d. This function space might be represented by a DNN, but we do not make such an assumption in our statistical analysis. Given a function g ∈ G (called g θ in the main article) and the number of clusters k, we define ψ g (x) = [ψ g (x; 1),..., ψ g (x; k)] as DISPLAYFORM0, with the cluster centres DISPLAYFORM1 and the priors DISPLAYFORM2 for c ∈ {1, . . ., k} (cf. Section 2.1). Note that ψ g (x) defines a k-dimensional probability distribution, and µ c (g) is a mapping of the function g to a point in R d.Similarly, given D n, we define the empirical cluster centreŝ DISPLAYFORM3.Note that here for simplicity of analysis, we assume that the priors π c are exact, and not estimated from data. The student's goal is to find a g such that ψ g is close to φ. The closeness of the student to the teacher is defined based on their KL divergence. Specifically, Algorithm 1 minimizes the distorted empirical discrepancy, which can be written as DISPLAYFORM4 where X i s are from dataset D n. Notice that the distorted empirical discrepancy ∆ n (ψ, φ) is defined based onψ, which uses the empirical centresμ c, instead of the expected centres µ c. We also define an empirical discrepancy w.r.t. the true µ c as ∆ n (ψ, φ). This quantity is not accessible to the algorithm. We evaluate the quality of g, and its corresponding ψ g, based on how well, in average, it performs on new points x ∈ X. We consider the expected KL divergence between ψ and φ w.r.t. distribution ν as the measure of performance. Therefore, the discrepancy is DISPLAYFORM5 The output of Algorithm 1 is the minimizer 5 of ∆ n (ψ, φ), which we denote byĝ DISPLAYFORM6 We also define the minimizer of the discrepancy by g *: DISPLAYFORM7 We would like to compare the performance ofĝ when evaluated according to discrepancy, that is ∆(ψĝ, φ), and compare it with ∆(ψ g *, φ).Before stating our , we enlist our assumptions. We shall remark on them as we introduce. DISPLAYFORM8 consists of i.i.d. samples drawn from ν(X).The i.i.d. assumption simplifies the analysis. With extra effort, one can provide similar for some classes of dependent processes too. For example, if the dependent process comes from a time series and it gradually "forgets" its past, one may still obtain similar statistical guarantees. Forgetting can be formalized through the notion of "mixing" of the underlying stochastic process . One can then provide statistical guarantees for learning algorithms under various mixing conditions BID17; BID7; BID14 Farahmand & Szepesvári, 2012). DISPLAYFORM9 This is a mild and realistic assumption on the function space G, and is mainly here to simplify some steps of the analysis. 
Assumption A3 (Teacher) Part I) The output of the teacher φ is a probability distribution, i.e., for any x ∈ X and c = {1, . . ., k}, we have φ(x; c) ≥ 0 and k c=1 φ(x; c) = 1. Part II) We assume that π c = E [φ c (X)] is bounded away from zero for all c. We set π min = min c π c. This first part of the assumption explicitly expresses the fact that the algorithm expects to receive a probability distribution from the teacher. If it does not, for example if φ(x; c) is negative for some x and c but we treat it as a probability in the calculation of the KL divergence, the algorithm would not be well-defined. The second part of this assumption requires that prior probability for each cluster is bounded away from zero, and has the probability at least π min. This is a technical assumption used by the proof technique; it might be possible that one can relax this assumption. We need to make some assumptions about the function space G and its complexity, i.e., capacity. We use covering number (and its logarithm, i.e., metric entropy) as the characterizer of the complexity. The covering number at resolution ε is the minimum number of balls with radius ε required to cover the space M according to a particular metric. We use N (ε, G, ·) to denote the covering number of G w.r.t. the norm ·, which we shall explicitly specify. As ε decreases, the covering number increases (or more accurately, the covering number is non-decreasing). For example, the covering number for a p-dimensional linear function approximator with constraint on the magnitude of its Let us define a norm for the function space G: DISPLAYFORM10 This is a mixed-norm where we compute the 2 -norm for each g(x) ∈ R d, and then take the supremum norm over the ing 2 -norm. We use this norm to characterize the covering number of G.Assumption A4 (Metric Entropy) There exists constants B > 0 and 0 < α < 1 such that for any ε, the following covering number (i.e., metric entropy) condition is satisfied: DISPLAYFORM11 The logarithm of the covering number of G is O(1 ε 2α). It grows much faster than the metric entropy of linear models, which is O(p log( 1 ε)). This behaviour is suitable to capture the complexity of large function spaces such as the Sobolev space W k (R d) and many reproducing kernel Hilbert spaces (RKHS).6 Note that we use a mixed-norm to define the covering number. The use of supremum norm in the definition might be considered conservative. Using a more relaxed norm, for example based on the empirical L p (P X1:n)-norm for some 1 ≤ p < ∞, is an open technical question for future work. Finally let us define the pointwise loss function DISPLAYFORM12 Notice that ∆ n (ψ g, φ) = DISPLAYFORM13 We define the following function space: DISPLAYFORM14 We also define the entropy integral (Dudley integral) DISPLAYFORM15 We are now ready to state the main theoretical of this section. Theorem 1. Suppose that Assumptions A1, A2, and A3 hold. Considerĝ obtained by solving.There exists a finite c 1 > 0 such that for any δ > 0, with probability at least 1 − δ, we have DISPLAYFORM16 Furthermore, if the metric entropy satisfies Assumption A4, there exist constants c 2, c 4 > 0 and a function c 3 (α) > 0, which depends only on α, such that for any δ > 0, with probability at least 1 − δ, we have DISPLAYFORM17, the Sobolev space defined w.r.t. 
the L2-norm of the weak derivatives, we can DISPLAYFORM18 This theorem provides a finite sample error upper bound on the true discrepancy ofĝ, and relates it to the expressiveness and complexity of the function space G, the number of samples n, and some other properties. The term min g∈G ∆(ψ g, φ) = ∆(ψ g *, φ) is the function approximation error, and reflects the expressiveness of G. This is the minimum achievable error given the function space G. The other terms in the upper bound correspond to the estimation error caused by having a finite number of data points. Let us focus on the second part of the theorem, which is under the particular choice of covering number according to Assumption A4. In that case, the estimation error shows n −1/2 dependence on the number of samples, and hence decreases as we have more training data. We observe that the upper bound increases as the range L of the function space G, the dimension d of the low-dimensional space, and the number of clusters k increases. The effect of using the distorted empirical discrepancy ∆ n (ψ g, φ) instead of ∆ n (ψ g, φ) shows itself in the last term, i.e., the term with the constant multiplier of (equality under the uniform distribution over classes), so we have at least a linear dependence on k in the upper bound. This dependence might be due to our proof technique; it remains to be seen whether this can be improved. Proof. To simplify the notation, we denote DISPLAYFORM19, and ∆(g) = ∆(ψ g, φ). We want to relate ∆(ĝ) to ∆(g *), the supremum of the empirical process ∆(g) − ∆ n (g) and the supremum of the distortion of the empirical loss ∆ n (g) −∆ n (g). We have the following relations: DISPLAYFORM20 The first inequality is because of the the optimizer property ofĝ, i.e.,∆ n (ĝ) ≤∆ n (g) for any g ∈ G, including g *.We need to upper bound the supremum of the empirical process, that is sup g∈G |∆(g) − ∆ n (g)|, and the supremum of the distortion caused by usingψ in minimizing ∆ n instead of ψ, that is sup g∈G |∆ n (g) −∆ n (g)|, cf.Upper Bounding sup g∈G |∆(g) − ∆ n (g)|. We use Lemma 7 in Appendix B.3 in order to upper bound the supremum of the empirical process, sup g∈G |∆(g) − ∆ n (g)|, which is equivalent to sup l∈L DISPLAYFORM21. That lemma, which is originally Theorem 2.1 of , relates the supremum of the empirical process to the Rademacher complexity of L, defined in the same appendix. To apply the lemma, we first provide upper bound on l(x) and DISPLAYFORM22 for example, see the proof leading to. So DISPLAYFORM23 We evoke Proposition 4 with the choice of f (x; c) = g(x) − µ c 2 2 and L = 4dL 2 to obtain that l(x; g) ≤ 8dL 2 + log 2 k. Since l(x; g) is bounded, we have DISPLAYFORM24 By the choice of β = 1, B = (8dL 2 + log 2 k), and r = (8dL 2 + log 2 k) 2 in Lemma 7, we get that for any δ 1 > 0, DISPLAYFORM25 with probability at least 1 − δ 1.. This can be done by using Dudley's integral to relate the Rademacher complexity of L to the covering number of L. Afterwards, we use Lemma 3 in Appendix A.1 to relate the covering number of L to the covering number of G. We have DISPLAYFORM0 In the second inequality, we benefit from two observations: first, we use l(x) ≤ 8dL 2 + log 2 k for any l ∈ L to upper bound diam(L); second, the covering number w.r.t. L 2 (P X1:n) can be upper bounded by the covering number w.r.t. the supremum norm. Upper Bounding sup g∈G ∆ n (g) −∆ n (g). 
We use Proposition 5 in Appendix A.2, which states that for any δ 2 > 0, DISPLAYFORM0 with probability at least 1 − δ 2.Plugging FORMULA0 and FORMULA0 in FORMULA0 and using the entropy integral upper bound lead to the desired of the first part. To prove the second part of the theorem, we use log N ε, G, · ∞,2 ≤ B ε 2α to calculate J (ε), which in DISPLAYFORM1 By plugging in ε = 16dL 2 + 2 log 2 k, we get that DISPLAYFORM2 We upper bound J (2L) in FORMULA0 to obtain: DISPLAYFORM3 After some simplifications these lead to the desired of the second part. A.1 SOME TECHNICAL TOOLSWe develop some technical tools required in the proof of Theorem 1. Proposition 2 provides Lipschitz constant for some functions that are used in the proof of Lemma 3 to relate the covering number of the function space L, defined in, to that of G. This was a key step of the proof of the theorem. Proposition 4 provides an upper bound on the magnitude of l(x; f), shortly defined in.We introduce a few more notations to reduce the clutter. Let DISPLAYFORM4, so we can write DISPLAYFORM5 For a function f: X × {1, . . ., k} → R, we define DISPLAYFORM6 We overload the pointwise loss function l(x; g), defined in FORMULA27, and define a similar definition for l(x; f) as follows DISPLAYFORM7 It is clear that with the choice of f = d g, the probability distribution p f is the same as ψ g.The following proposition specifies the Lipschitz properties of d g and l(x; f). Proposition 2. Suppose that Assumption A3 hold. Part I) Consider g 1, g 2 ∈ G and let Assumption A2 hold. For any x ∈ X and c ∈ {1, . . ., k}, we have DISPLAYFORM8 Part II) Consider two functions f 1, f 2: X × {1, . . ., k} → R. For any x ∈ X we have DISPLAYFORM9 Proof. Part I) First notice that for any two vectors u and v, by the Cauchy-Schwarz inequality we have DISPLAYFORM10 Consider g 1, g 2 ∈ G, and their corresponding d g1 and d g2. We have DISPLAYFORM11 Note that DISPLAYFORM12 where we used Jensen's inequality and the fact that φ c (x) ≥ 0. As µ c = 0, we also obtain that DISPLAYFORM13 As DISPLAYFORM14 Therefore by FORMULA0, FORMULA0, and, DISPLAYFORM15 Part II) For functions f 1, f 2, using the definition of l(x; f), we get that DISPLAYFORM16 By substituting the definition and some simplifications, we get DISPLAYFORM17 with DISPLAYFORM18 We study the Lipschitz property of ρ(u) as a function of u in order to upper bound the second term on the right-hand side (RHS).We take the derivative of ρ(u) w.r.t. the c-th component of u to obtain that DISPLAYFORM19 Notice that q c (u) is a probability distribution. We denote (q 1 (u),..., q k (c)) by q(u).By Taylor's theorem, for u, u ∈ R k, we have DISPLAYFORM20 for someũ = (1 − λ)u + λu with 0 ≤ λ ≤ 1. By Hölder inequality, for any Hölder conjugate DISPLAYFORM21 where max over u ≤ũ ≤ u should be understood as the maximum over the line segment between u and u.In particular, DISPLAYFORM22 Here we used the fact that q c (u) is a probability distribution and its sum is equal to 1, for any choice ofũ. We substitute FORMULA0 in FORMULA1 and use the upper bound |ρ(−f 2 (x; ·)) − ρ(−f 1 (x; ·))| ≤ f 1 (x; ·) − f 2 (x; ·) ∞, which is just shown, to get that DISPLAYFORM23 as desired. The following lemma relates the covering number of the function space L to the covering number of the function space G. Lemma 3. Consider the function space G and its induced function space L. Let Assumptions A2 and A3 hold. 
For any ε > 0, we have DISPLAYFORM24 and using both parts of Proposition 2, we get that DISPLAYFORM25 If we have an ε-cover of G w.r.t. g ∞,2 = sup x∈X g(x) 2, it induces a 16L √ dε-cover on L w.r.t. the supremum norm. The following proposition upper bounds the magnitude of l(x; f). Proposition 4. Suppose that Assumption A3 hold and |f (x; c)| ≤ L for any x ∈ X and c ∈ {1, . . ., k}. Consider p f defined in. It holds that DISPLAYFORM26 Proof. For simplicity, we ignore the dependence on x. We use the definition of p f to get DISPLAYFORM27 Let us consider each term on the RHS.• As log φ(c) ≤ 0, we have c φ(c) log φ(c) ≤ 0.• The priors (π 1, . . ., π k) indeed defines a probability distribution, as each π c is nonnegative and c π c = c φ c (x)dν(x) = c φ c (x)dν(x) = 1 × dν(x) = 1. So − c φ(c) log π c is the entropy of a probability distribution over an alphabet with size k, which is at most log 2 k.• The summation c φ(c)f (c) is upper bounded by L because of the boundedness of f (c) ≤ L and the fact that φ is a probability distribution and sums to one.• Consider the term c φ(c) log (b π b exp(−f (b))). By the boundedness of f (b), we have DISPLAYFORM28 Collecting all these terms leads to the upper bound of 2L + log 2 k. This section provides an upper bound on the distortion of the empirical discrepancy, i.e., sup g∈G |∆ n (g) −∆ n (g)|.Proposition 5. Suppose that Assumptions A1, A2, and A3 hold. For any δ > 0, there exists a constant c 1 > 0 such that with probability at least 1 − δ, we have DISPLAYFORM0. Also from the definition of l(x, f), we can write DISPLAYFORM1 Therefore, for any g ∈ G and by the application of Proposition 2 (Part II), we have DISPLAYFORM2 where the supremum norm is taken over the centres c = {1, . . ., k}. For any x ∈ X and any c = {1, . . ., k}, we have DISPLAYFORM3 It was shown in that µ c (x) 2 is upper bounded by sup x∈X g(x) 2. One may similarly show the same for μ c (x) 2: DISPLAYFORM4 This together with FORMULA1 and FORMULA1 show that DISPLAYFORM5 Proposition 6, which we prove soon, upper bounds sup g∈G µ c (g) −μ c (g) 2. By a union bound argument over c ∈ {1, . . ., k}, we get that for any fixed δ > 0, there exists a constant c 1 > 0 such that DISPLAYFORM6 with probability at least 1 − δ. This proposition upper bounds the supremum of the 2 distance between cluster centres. Proposition 6. Suppose that Assumptions A1, A2, and A3 hold. Consider a fixed c ∈ {1, . . ., k}. For any δ > 0, there exists a constant c 1 > 0 such that with probability at least 1 − δ, we have DISPLAYFORM7 Proof. To shorten the formulae, we use the notation DISPLAYFORM8 f (X i) to denote the empirical expectation. We can decompose µ c (g) −μ c (g) as follows DISPLAYFORM9.In the rest of the proof, we provide an upper bound for Term (I) and Term (II).Term (I): Fix δ 1 > 0. We denote 1 d as the d-dimensional vector with all components equal to 1. As DISPLAYFORM10 where the comparison is dimension-wise. Therefore, DISPLAYFORM11 We provide a probabilistic upper bound on DISPLAYFORM12 Since φ c is a fixed 1-bounded function, we use Hoeffding's inequality to upper bound it. After some manipulations, we obtain that DISPLAYFORM13 with probability at least 1 − δ 1. We use this along and the assumption that DISPLAYFORM14 with probability at least 1 − δ 1. We want to upper bound the supremum of the norm of second term (II). Fix δ 2 > 0. To simplify the notation, let us first define function f (x) = φ c (x)g(x) corresponding to a function g. Functions f are mapping from X to R d. 
Furthermore, we define the function space DISPLAYFORM0, where g s is the s-th dimension of g. With this notation, we can write DISPLAYFORM1 Let us focus on the supremum over F term. We have DISPLAYFORM2 We take the square root of both sides and use the fact that a 2 1 +... DISPLAYFORM3 Notice that as φ c (x) is bounded by 1 and g is L-bounded dimension-wise, each f s (x) is also Lbounded. We use Lemma 7 (Appendix B.3) on each term to obtain a high probability upper bound. We choose the parameters of that lemma as β = 1, B = L, r = L 2, and δ = δ 2 /d. This leads to DISPLAYFORM4 with probability at least 1 − δ 2.In order to control E [R n (F s)], we relate it to the covering number of F s. We see that the covering number of F s can be upper bounded by the covering number of F, which in turn can be upper bounded by the covering number of G. First, we use Dudley's integral to get DISPLAYFORM5 The covering number argument is as follows. Choose any functions f s, f s ∈ F s. For any sequence x 1:n, the squared of the empirical 2 -norm is DISPLAYFORM6 where in the last step we used the fact that φ c (x) ≤ 1.An ε-cover of G w.r.t. · ∞,2 induces an ε-cover of G w.r.t. L 2 (P x1:n), which in turn induces an ε-cover on F s w.r.t. L 2 (P x1:n), as the inequality above shows. Notice that this holds for any choice of x 1:n, including X 1:n appearing in the Dudley's integral. Therefore, by and setting diam(F s) = 2L, we have DISPLAYFORM7 where we used the definition of the entropy integral of G.After plugging this upper bound in and use the upper bound, which was obtained for Term (I), we see that DISPLAYFORM8 with probability at least 1 − (δ 1 + δ 2). We set δ 1 = δ 2 = δ/2 and simplify the upper bound to obtain the desired . For the convenience of the reader, we collect some auxiliary definitions and that are used in the our proofs. Here we briefly define some of the notations that we use throughout the paper. Consider the domain X and a function space F: X → R. We do not deal with measure theoretic considerations, so we just assume that all functions involved are measurable w.r.t. an appropriate σ-algebra for that space. We use M(X) to refer to the set of all probability distributions defined over that space. We use symbols such as ν ∈ M(X) to refer to probability distributions defined over that space. We use f p,ν to denote the L p (ν)-norm (1 ≤ p < ∞) of a measurable function f: X → R, i.e., DISPLAYFORM0 The supremum norm is defined as DISPLAYFORM1.., x n be a sequence of points in X. We use x 1:n to refer to this sequence. The empirical measure P n is the probability measure that puts a mass of 1 n at each x i, i.e., DISPLAYFORM2 where δ x is the Dirac's delta function. For DISPLAYFORM3 We can also define other L p (P n)-norms similarly. 8 When there is no chance of confusion about D n, we may denote the empirical norm simply by f n. We quote the definition of the covering number from Györfi et al.. Definition 1 (Definition 9.3 of Györfi et al. 2002). Let ε > 0, F be a set of real-valued functions defined on X, and ν X be a probability measure on X. Every finite collection of N ε = {f 1, . . ., f Nε} defined on X with the property that for every f ∈ F, there is a function f ∈ N ε such that f − f p,ν X < ε is called an ε-cover of F w.r.t. · p,ν X.Let N (ε, F, · p,ν X) be the size of the smallest ε-cover of F w.r.t. · p,ν X. If no finite ε-cover exists, take N (ε, F, · p,ν X) = ∞. 
Then N (ε, F, · p,ν X) is called an ε-covering number of F and log N (ε, F, · p,ν X) is called the metric entropy of F w.r.t. the same norm. Given a x 1:n = (x 1, . . ., x n) ⊂ X and its corresponding empirical measure P n = P x1:n, we can define the empirical covering number of F w.r.t. the empirical norm · p,x1:n and is denoted by DISPLAYFORM0 We define Rademacher complexity and quote a from. For more information about Rademacher complexity, we refer the reader to;.Let σ 1,..., σ n be independent random variables with P {σ i = 1} = P {σ i = −1} = 1/2. For a function space F: DISPLAYFORM0, in which the expectation is w.r.t. both σ and X i. Rademacher complexity appears in the analysis of the supremum of an empirical process right after the application of the symmetrization technique. As such, its behaviour is closely related to the 8 Or maybe more clearly, Pn(A) = 1 n n i=1 I{xi ∈ A} for any measurable subset A ⊂ X. behaviour of the empirical process. One may interpret the Rademacher complexity as a complexity measure that quantifies the extent that a function from F can fit a noise sequence of length n .The following is a simplified (and slightly reworded) version of Theorem 2.1 of.Lemma 7. Let F: X → R be a measurable function space with B-bounded functions. Let X 1,..., X n ∈ X be independent random variables. Assume that for some r > 0, Var [f (X i)] ≤ r for every f ∈ F. Then for every δ > 0, with probability at least 1 − δ, DISPLAYFORM1 We give implementation details about the experiments of our submitted paper in Section C.1. We report the detailed performance of our model as a function of the output dimensionality and the number of hidden layers in the zero-shot learning context in Section C.2. We show in Section C.3 that our method can be generalized to hard clustering by using implicit centers. We give additional visualization in Section C.4. We coded our method in PyTorch and ran all our experiments on a single Nvidia GeForce GTX 1060 which has 6GB of RAM.PyTorch automatically calculates the gradient w.r.t. the mini-batch representation F. Nonetheless, it is worth mentioning that both the first and second arguments of our prediction function ψ(F, M, π) depend on F in the case where the centers are implicit (i.e., when we write M = diag(Y 1 n) −1 Y F ). In this case, the gradient of our loss function w.r.t. F depends on both the first and second arguments of ψ. We now give details specific to the zero-shot experiments. In the zero-shot learning experiment where F and M are computed from different sources (i.e., images and text) and are the output of two different networks, the optimization is performed by alternately optimizing one variable while fixing the other. Mini-batch size: The training datasets of CUB and Flowers contain 5894 and 5878 images, respectively. In order to fit into memory, we set our mini-batch sizes as 421 (= 5894/14) and 735 (≈ 5878/8) for CUB and Flowers, respectively.. In this case, we formulate: DISPLAYFORM0 Using a temperature of 10 made the optimization more stable as it avoided gradients with high values. We use a temperature of 2 when using the representations provided by BID1.Initial temperature of our model: To make our optimization framework stable, we start with a temperature of 50. We then formulate our Bregman divergence as: DISPLAYFORM1 where f i and µ c are the representations learned by our model. We decrease our temperature by 10% (i.e., temp t+1 = 0.9temp t) every 3000 epochs until the algorithm stops training. 
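The remark above about gradients flowing through both arguments of the prediction function can be made explicit with a small PyTorch sketch. The implicit-center expression is our reading of the extracted formula (interpreted as M = diag(Yᵀ1ₙ)⁻¹ Yᵀ F, i.e. per-cluster weighted means), and the exact form of ψ and of the temperature-scaled divergence is elided in the extraction, so a prior-weighted softmax of negative squared distances is used as an assumed stand-in.

```python
import torch

def implicit_centers(Y, F):
    """Centers written as a function of the mini-batch representation F (n, d) and the target
    assignment matrix Y (n, k): per-cluster weighted means of the rows of F.
    Because M depends on F, autograd propagates gradients through both arguments of psi."""
    return (Y.t() @ F) / Y.sum(dim=0)[:, None]

def psi(F, M, pi, temp=1.0):
    """Assumed form of the prediction function: prior-weighted softmax of negative squared
    distances, scaled by a temperature."""
    return torch.softmax(torch.log(pi)[None, :] - torch.cdist(F, M) ** 2 / temp, dim=1)

def drpr_loss(Y, F, pi, temp=1.0, eps=1e-12):
    """Empirical discrepancy: mean KL between the target assignments and psi(F, M, pi)."""
    P = psi(F, implicit_centers(Y, F), pi, temp)
    return (Y * (torch.log(Y + eps) - torch.log(P + eps))).sum(dim=1).mean()

# Per the text above, zero-shot training starts at temp = 50 and multiplies it by 0.9
# every 3000 epochs until training stops.
```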
We stop training at 10k epochs on CUB and 1k epochs on Flowers. We now give details specific to the visualization experiments. Dataset size: To be comparable to t-SNE, we directly learn two-dimensional embeddings instead of neural networks. Our mini-batch size is the size of the test set (i.e., the number of examples is n = 10 4 for most datasets except STL that contains n = 8000 test examples).Optimizer: We use the RMSprop optimizer with a learning rate of 10 −3, α = 0.99, = 10 −6, the weight decay and momentum are both 0, and the data is not centered. We also we formulate the empirical discrepancy loss DISPLAYFORM0 k the vector containing the logits of the learned representation of the i-th test example. We formulate y i = [y i,1, · · ·, y i,k] ∈ R k our target assignment vector for the i-th test example as follows: DISPLAYFORM1 where τ = 5 for all the dataset except CIFAR-100 for which τ = 4.We report the quantitative scores for τ = 1.Initial temperature of our model: We learned our representation by using a fixed temperature of 1 (i.e., using the standard squared Euclidean distance).We stop the algorithm after 8000 iterations. Tuning t-SNE: we tested different ranges of scaling (1/1, 1/10, 1/100) and perplexity (i.e., 1, 10, 30 (default) and 100) and reported the representations that obtained the best quantitative . Let e ∈ N be the dimensionality of the representations taken as input and d the output dimensionality of the models g θ1 and g θ2, the architecture of the models is e-d and e-e-d in the 1 and 2 hidden layer cases, respectively. The hyperparameter d ∈ {16, 32, 64, · · ·, e} is also a hyperparameter cross-validated on the validation set. We give the detailed accuracy performance of our model in TAB5.• TAB5 reports the test performance of our model on CUB when using the features provided by BID1 as supervision for different numbers of hidden layers and values of output dimensionality of our model.• TAB6 (resp. Table 7) reports the validation (resp. test) performance of our model on CUB when using the features provided by BID5 as supervision.• Table 8 (resp. Table 9) reports the validation (resp. test) performance of our model on Flowers when using the features provided by BID5 as supervision. Dimensionality reduction improves performance, though the optimal dimensionality is dataset specific. In general, increasing the number of hidden layers also helps. Table 8: Validation accuracy (in %) as a function of the output dimensionality when using ProtoNet BID5 Table 9: Test accuracy (in %) as a function of the output dimensionality when using ProtoNet BID5 as supervision on Flowers C.3 GENERALIZATION TO HARD CLUSTERING We validate that DRPR can be used to perform hard clustering as in BID5 but with implicit centers. To this end, we train a neural network with 2 convolutional layers on MNIST followed by a fully connected layer. Its output dimensionality is d = 2 or d = 3, the mini-batch size is n = 1000, the number of categories is k = 10 and the target hard assignment matrix Y ∈ {0, 1} n×k contains category membership information (i.e., Y ic is 1 if the example f i belongs to category c, 0 otherwise). We train the model on the training set of MNIST and plot in Fig. 6 the representations of the test set. By assigning each test example to the category with closest centroid (obtained from the training set), the model obtains 98% (resp. 99%) accuracy when d = 2 (resp. d = 3). 
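For the hard-clustering experiment just described, the encoder and the hard assignment matrix can be sketched as follows. The text only fixes the depth (two convolutional layers plus one fully connected layer), the output dimensionality d, and the one-hot structure of Y; channel counts, kernel sizes and pooling below are our illustrative choices. With a one-hot Y, the implicit centers reduce to per-class means of the mini-batch embeddings (here n = 1000, k = 10), so the same discrepancy loss as above applies.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MNISTEncoder(nn.Module):
    """Two convolutional layers followed by a fully connected layer mapping digits to d dimensions."""
    def __init__(self, d=2):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.fc = nn.Linear(32 * 7 * 7, d)

    def forward(self, x):                                 # x: (n, 1, 28, 28)
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)        # -> (n, 16, 14, 14)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)        # -> (n, 32, 7, 7)
        return self.fc(x.flatten(1))                      # -> (n, d)

def hard_assignment_matrix(labels, k=10):
    """Y in {0,1}^{n x k} with Y[i, c] = 1 iff example i belongs to category c."""
    return F.one_hot(labels, num_classes=k).float()
```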
DRPR can then be learned for hard clustering when the centers are implicitly written as a function of the mini-batch matrix representation F and the target hard assignment matrix Y. Figure 6: Visualization of the representation learned on MNIST by our approach in the supervised hard clustering setup. The left (resp. right) figure is the representation learned by our model when its output dimensionality is d = 2 (resp. d = 3). We now present visualization results. C.4.1 ARTIFACTS WITH T-SNE FIG11 illustrates the CIFAR 100 representation learned by t-SNE when its input data is the target probability distribution that we give as supervision/input of our algorithm. Following the recommendations mentioned in https://lvdmaaten.github.io/tsne/ when the representations form a strange ball with uniformly distributed points while reporting a very low error, we decreased the perplexity from 30 (the default value) to 10 and 1, and divided our data by 10, 100 and 1000. Nonetheless, we still obtained the same type of representation as in FIG11. This kind of artifact is the reason why we only report results obtained with logits. We plot in FIG12 the visualization obtained by t-SNE when using the KL or JS divergences to compare pairs of probability distribution representations. The representations obtained in this case are still worse than using the original 3-dimensional representations, as neither the cluster structures nor the inter-cluster distances are preserved. This suggests that comparing pairs of examples, as done by t-SNE, is less appropriate than our method, which considers similarities between examples and the different k = 6 clusters. C.4.3 ADDITIONAL RESULTS FIG13 illustrates the DRPR and t-SNE representations of CIFAR 10. Animal categories are illustrated on the right whereas machines are on the left. FIG14 illustrates the DRPR and t-SNE representations of CIFAR 100. We learned the representations by exploiting 100 clusters but plot only 20 colors (one for each superclass of CIFAR 100), which is why multiple spikes have the same color. Groups with the same color are better defined with our approach than with t-SNE; this means that different categories from the same superclass (e.g., hamster, mouse, rabbit, shrew, squirrel, which are small mammals) are grouped together with DRPR. One can observe a semantic structure in the 2D representation of DRPR: plants and insects are on the top left; animals are on the bottom left; and categories on the right are outdoor categories. Medium mammals are also represented between small mammals and large carnivores. Figures 10, 11, 12 and 13 illustrate the representations learned by our model for the STL, MNIST, CIFAR 100 and CIFAR 10 datasets, respectively. Instead of using colors that represent the categories of the embeddings as done in the submitted paper, we directly plot the images. In general, we observe that images towards the end of spikes consist of a clearly visible object in a standard viewpoint on a simple background. Those closer to the center often have objects with a non-standard viewpoint or a complex textured background. At a high level, the classes appear to be organized by their backgrounds. Taking the STL-10 visualization as an example, deer and horses are close together since they both tend to be found in the presence of green vegetation. These classes are far from boats and planes, which often have solid blue backgrounds. Looking more closely, the ordering of classes is sensible.
Planes are neighbors to both boats (similar background) and birds (similar silhouette). And trucks neighbor both cars (similar background) and horses, which appear visually similar, particularly for images in which the horse is pulling a cart. Taking the MNIST visualization as another example, one can observe that written characters in spikes are easy to recognize, as they correspond to examples for which the learned model has high confidence in its scores. On the other hand, ambiguous examples lie between multiple spikes (e.g., the characters 0 and 6 between spikes are more ambiguous than their neighbors in spikes).
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SygD-hCcF7
dimensionality reduction for cases where examples can be represented as soft probability distributions
Intrinsically motivated goal exploration algorithms enable machines to discover repertoires of policies that produce a diversity of effects in complex environments. These exploration algorithms have been shown to allow real world robots to acquire skills such as tool use in high-dimensional continuous state and action spaces. However, they have so far assumed that self-generated goals are sampled in a specifically engineered feature space, limiting their autonomy. In this work, we propose an approach using deep representation learning algorithms to learn an adequate goal space. This is a developmental 2-stage approach: first, in a perceptual learning stage, deep learning algorithms use passive raw sensor observations of world changes to learn a corresponding latent space; then goal exploration happens in a second stage by sampling goals in this latent space. We present experiments with a simulated robot arm interacting with an object, and we show that exploration algorithms using such learned representations can closely match, and even sometimes improve, the performance obtained using engineered representations. Spontaneous exploration plays a key role in the development of knowledge and skills in human children. For example, young children spend a large amount of time exploring what they can do with their body and external objects, independently of external objectives such as finding food or following instructions from adults. Such intrinsically motivated exploration BID6 BID17 BID31 ) leads them to make ratcheting discoveries, such as learning to locomote or climb in various styles and on various surfaces, or learning to stack and use objects as tools. Equipping machines with similar intrinsically motivated exploration capabilities should also be an essential dimension for lifelong open-ended learning and artificial intelligence. In the last two decades, several families of computational models have both contributed to a better understanding of such exploration processes in infants, and how to apply them efficiently for autonomous lifelong machine learning. One general approach taken by several research groups BID1 BID4 BID16 has been to model the child as intrinsically motivated to make sense of the world, exploring like a scientist that imagines, selects and runs experiments to gain knowledge and control over the world. These models have focused in particular on three kinds of mechanisms argued to be essential and complementary to enable machines and animals to efficiently explore and discover skill repertoires in the real world BID10: embodiment 1, intrinsic motivation and social guidance 3. This article focuses on challenges related to learning goal representations for intrinsically motivated exploration, but also leverages models of embodiment, through the use of parameterized Dynamic Movement Primitives controllers BID20 and social guidance, through the use of observations of another agent. Given an embodiment, intrinsically motivated exploration 4 consists in automatically and spontaneously conducting experiments with the body to discover both the world dynamics and how it can be controlled through actions. 
Computational models have framed intrinsic motivation as a family of mechanisms that self-organize agents exploration curriculum, in particular through generating and selecting experiments that maximize measures such as novelty BID0 BID48, predictive information gain BID25, learning progress BID43 BID21, compression progress BID44, competence progress, predictive information BID26 or empowerment BID41. When used in the Reinforcement Learning (RL) framework (e.g. BID48 BID43 BID21 BID4), these measures have been called intrinsic rewards, and they are often applied to reward the "interestingness" of actions or states that are explored. They have been consistently shown to enable artificial agents or robots to make discoveries and solve problems that would have been difficult to learn using a classical optimization or RL approach based only on the target reward (which is often rare or deceptive) BID11 BID47. Recently, they have been similarly used to guide exploration in difficult deep RL problems with sparse rewards, e.g. BID5 BID19 BID49 BID35.However, many of these computational approaches have considered intrinsically motivated exploration at the level of micro-actions and states (e.g. considering low-level actions and pixel level perception). Yet, children's intrinsically motivated exploration leverages abstractions of the environments, such as objects and qualitative properties of the way they may move or sound, and explore by setting self-generated goals BID53, ranging from objects to be reached, toy towers to be built, or paper planes to be flown. A computational framework proposed to address this higher-level form of exploration has been Intrinsically Motivated Goal Exploration Processes (IMGEPs) BID2 BID15, which is closely related to the idea of goal babbling BID39. Within this approach, agents are equipped with a mechanism enabling them to sample a goal in a space of parameterized goals 5, before they try to reach it by executing an experiment. Each time they sample a goal, they dedicate a certain budget of experiments time to improve the solution to reach this goal, using lower-level optimization or RL methods for example. Most importantly, in the same time, they take advantage of information gathered during this exploration to discover other outcomes and improve solutions to other goals 6.This property of cross-goal learning often enables efficient exploration even if goals are sampled randomly in goal spaces containing many unachievable goals. Indeed, generating random goals (including unachievable ones) will very often produce goals that are outside the convex hull of already discovered outcomes, which in turn leads to exploration of variants of known corresponding policies, pushing the convex hull further. Thus, this fosters exploration of policies that have a high probability to produce novel outcomes without the need to explicitly measure novelty. This explains why forms of random goal exploration are a form of intrinsically motivated exploration. However, more powerful goal sampling strategies exist. A particular one consists in using meta-learning algorithms to monitor the evolution of competences over the space of goals and to select the next goal to try, according to the expected competence progress ing from practicing it. 
This enables to automate curriculum sequences of goals of progressively increasing complexity, which has been shown to allow high-dimensional real world robots to acquire efficiently repertoires of locomotion skills or soft object manipulation, or advanced forms of nested tool use BID15 Here a goal is not necessarily an end state to be reached, but can characterize certain parameterized properties of changes of the world, such as following a parameterized trajectory. 6 E.g. while learning how to move an object to the right, they may discover how to move it to the left.sampling them randomly BID9 BID27 or adaptively BID13 ).Yet, a current limit of existing algorithms within the family of Intrinsically Motivated Goal Exploration Processes is that they have assumed that the designer 7 provides a representation allowing the autonomous agent to generate goals, together with formal tools used to measure the achievement of these goals (e.g. cost functions). For example, the designer could provide a representation that enables the agent to imagine goals as potential continuous target trajectories of objects BID15, or reach an end-state starting from various initial states defined in Euclidean space BID13, or realize one of several discrete relative configurations of objects BID9 ), which are high-level abstractions from the pixels. While this has allowed to show the power of intrinsically motivated goal exploration architectures, designing IMGEPs that sample goals from a learned goal representation remains an open question. There are several difficulties. One concerns the question of how an agent can learn in an unsupervised manner a representation for hypothetical goals that are relevant to their world before knowing whether and how it is possible to achieve them with the agent's own action system. Another challenge is how to sample "interesting" goals using a learned goal representation, in order to remain in regions of the learned goal parameters that are not too exotic from the underlying physical possibilities of the world. Finally, a third challenge consists in understanding which properties of unsupervised representation learning methods enable an efficient use within an IMGEP architecture so as to lead to efficient discovery of controllable effects in the environment. In this paper, we present one possible approach named IMGEP-UGL where aspects of these difficulties are addressed within a 2-stage developmental approach, combining deep representation learning and goal exploration processes:Unsupervised Goal space Learning stage (UGL): In the first phase, we assume the learner can passively observe a distribution of world changes (e.g. different ways in which objects can move), perceived through raw sensors (e.g. camera pixels or other forms of low-level sensors in other modalities). Then, an unsupervised representation learning algorithm is used to learn a lower-dimensional latent space representation (also called embedding) of these world configurations. After training, a Kernel Density Estimator (KDE) is used to estimate the distribution of these observations in the latent space. In the second phase, the embedding representation and the corresponding density estimation learned during the first stage are reused in a standard IMGEP. Here, goals are iteratively sampled in the embedding as target outcomes. 
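A minimal sketch of how these two products of the first stage become a goal policy is given below. The excerpt does not fix the density estimator or its bandwidth, so a scikit-learn Gaussian kernel density estimator is used as one concrete choice, and `encode` stands for the learned embedding function.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def fit_goal_policy(raw_observations, encode, bandwidth=0.05):
    """UGL outputs reused as a goal policy: embed the passively observed world changes with the
    learned representation, then estimate their density in the latent space."""
    latents = np.stack([encode(x) for x in raw_observations])   # o_i = R(x_i), shape (N, d)
    return KernelDensity(kernel="gaussian", bandwidth=bandwidth).fit(latents)

def sample_goal(goal_policy):
    """gamma = p_kde: goals are sampled from the estimated outcome distribution."""
    return goal_policy.sample(1)[0]
```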
Each time a goal is sampled, the current knowledge (forward model and meta-policy, see below) enables to guess the parameters of a corresponding policy, used to initialize a time-bounded optimization process to improve the cost of this policy for this goal. Crucially, each time a policy is executed, the observed outcome is not only used to improve knowledge for the currently selected goal, but for all goals in the embedding. This process enables the learner to incrementally discover new policy parameters and their associated outcomes, and aims at learning a repertoire of policies that produce a maximally diverse set of outcomes. A potential limit of this approach, as it is implemented and studied in this article, is that representations learned in the first stage are frozen and do not evolve in the second stage. However, we consider here this decomposition for two reasons. First, it corresponds to a well-known developmental progression in infant development: in their first few weeks, motor exploration in infants is very limited (due to multiple factors), while they spend a considerable amount of time observing what is happening in the outside world with their eyes (e.g. observing images of social peers producing varieties of effects on objects). During this phase, a lot of perceptual learning happens, and this is reused later on for motor learning (infant perceptual development often happens ahead of motor development in several important ways). Here, passive perceptual learning from a database of visual effects observed in the world in the first phase can be seen as a model of this stage where infants learn by passively observing what is happening around them 8. A second reason for this decomposi-tion is methodological: given the complexity of the underlying algorithmic components, analyzing the dynamics of the architecture is facilitated when one decomposes learning in these two phases (representation learning, then exploration).Main contribution of this article. Prior to this work, and to our knowledge, all existing goal exploration process architectures used a goal space representation that was hand designed by the engineer, limiting the autonomy of the system. Here, the main contribution is to show that representation learning algorithms can discover goal spaces that lead to exploration dynamics close to the one obtained using an engineered goal representation space. The proposed algorithmic architecture is tested in two environments where a simulated robot learns to discover how to move and rotate an object with its arm to various places (the object scene being perceived as a raw pixel map). The objective measure we consider, called KL-coverage, characterizes the diversity of discovered outcomes during exploration by comparing their distribution with the uniform distribution over the space of outcomes that are physically possible (which is unknown to the learner). We even show that the use of particular representation learning algorithms such as VAEs in the IMGEP-UGL architecture can produce exploration dynamics that match the one using engineered representations. 
• We show that the IMGEP-UGL architecture can be successfully implemented (in terms of exploration efficiency) using various unsupervised learning algorithms for the goal space learning component: AutoEncoders (AEs) BID8, Variational AE (VAE) BID38 BID22, VAE with Normalizing Flow , Isomap BID50, PCA BID36, and we quantitatively compare their performances in terms of exploration dynamics of the associated IMGEP-UGL architecture.• We show that specifying more embedding dimensions than needed to capture the phenomenon manifold does not deteriorate the performance of these unsupervised learning algorithms.• We show examples of unsupervised learning algorithms (Radial Flow VAEs) which produce less efficient exploration dynamics than other algorithms in our experiments, and suggest hypotheses to explain this difference. In this section, we first present an outline of intrinsically motivated goal exploration algorithmic architectures (IMGEPs) as originally developed and used in the field of developmental robotics, and where goal spaces are typically hand crafted. Then, we present a new version of this architecture (IMGEP-UGL) that includes a first phase of passive perceptual learning where goal spaces are learned using a combination of representation learning and density estimation. Finally, we outline a list of representation learning algorithms that can be used in this first phase, as done in the experimental section. Intrinsically Motivated Goal Exploration Processes (IMGEPs), are powerful algorithmic architectures which were initially introduced in BID2 and formalized in BID15. They can be used as heuristics to drive the exploration of high-dimensional continuous action spaces so as to learn forward and inverse control models in difficult robotic problems. To clearly understand the essence of IMGEPs, we must envision the robotic agent as an experimenter seeking information about an unknown physical phenomenon through sequential experiments. In this perspective, the main elements of an exploration process are:• A context c, element of a Context Space C. This context represents the initial experimental factors that are not under the robotic agent control. In most cases, the context is considered fully observable (e.g. state of the world as measured by sensors).have considered how stronger forms of social guidance, such as imitation learning BID42, could accelerate intrinsically motivated goal exploration BID28 ), but they did not consider the challenge of learning goal representations.• A parameterization θ, element of a Parameterization Space Θ. This parameterization represents the experimental factors that can be controlled by the robotic agent (e.g. parameters of a policy).• An outcome o, element of an Outcome Space O. The outcome contains information qualifying properties of the phenomenon during the execution of the experiment (e.g. measures characterizing the trajectory of raw sensor observations during the experiment).• A phenomenon dynamics D: C, Θ → O, which in most interesting cases is unknown. If we take the example of the Arm-Ball problem 9 in which a multi-joint robotic arm can interact with a ball, the context could be the initial state of the robot and the ball, the parameterization could be the parameters of a policy that generate a sequence of motor torque commands for N time steps, and the outcome could be the position of the ball at the last time step. 
Developmental roboticists are interested in developing autonomous agents that learn two models, the forward modelD: C × Θ → O which approximates the phenomenon dynamics, and the inverse modelĨ: C × O → Θ which allows to produce desired outcomes under given context by properly setting the parameterization. Using the aforementioned elements, one could imagine a simple strategy that would allow the agent to gather tuples {c, θ, o} to train those models, by uniformly sampling a random parameterization θ ∼ U(θ) and executing the experiment. We refer to this strategy as Random Parameterization Exploration. The problem for most interesting applications in robotics, is that only a small subspace of Θ is likely to produce interesting outcomes. Indeed, considering again the Arm-Ball problem with time-bounded action sequences as parameterizations, very few of those will lead the arm to touch the object and move it. In this case, a random sampling in Θ would be a terrible strategy to yield interesting samples allowing to learn useful forward and inverse models for moving the ball. To overcome this difficulty, one must come up with a better approach to sample parameterizations that lead to informative samples. Intrinsically Motivated Goal Exploration Strategies propose a way to address this issue by giving the agent a set of tools to handle this situation:• A Goal Space T whose elements τ represent parameterized goals that can be targeted by the autonomous agent. In the context of this article, and of the IMGEP-UGL architecture, we consider the simple but important case where the Goal Space is equated with the Outcome space. Thus, goals are simply vectors in the outcome space that describe target properties of the phenomenon that the learner tries to achieve through actions.• A Goal Policy γ(τ), which is a probability distribution over the Goal Space used for sampling goals (see Algorithmic Architecture 2). It can be stationary, but in most cases, it will be updated over time following an intrinsic motivation strategy. Note that in some cases, this Goal Policy can be conditioned on the context γ(τ |c).• A set of Goal-parameterized Cost Functions C τ: O → R defined over all O, which maps every outcome with a real number representing the goodness-of-fit of the outcome o regarding the goal τ. As these cost functions are defined over O, this enables to compute the cost of a policy for a given goal even if the goal is imagined after the policy roll-out. Thus, as IMGEPs typically memorize the population of all executed policies and their outcomes, this enables reuse of experimentations across multiple goals.• A Meta-Policy Π: T, C → Θ which is a mechanism to approximately solve the minimization problem Π(τ, c) = arg min θ C τ (D(θ, c)), whereD is a running forward model (approximating D), trained on-line during exploration. In some applications, a de-facto ensemble of such tools can be used. For example, in the case where O is an Euclidean space, we can allow the agent to set goals in the Outcome Space T = O, in which case for every goal τ we can consider a Goal-parameterized cost function DISPLAYFORM0. is a similarity metric. 
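The goal-parameterized cost function and a crude meta-policy can be sketched as follows. The environment-specific distance, the omission of context matching, and the names are our illustrative choices; the subsequent optimization over the running forward model is left out.

```python
import numpy as np

def goal_cost(tau, o):
    """C_tau(o): distance between a goal tau and an observed outcome o (Euclidean here,
    e.g. goal position vs. final ball position)."""
    return np.linalg.norm(np.asarray(tau) - np.asarray(o))

def meta_policy(tau, c, history):
    """Crude meta-policy Pi(tau, c): return the stored parameterization whose outcome best fits
    the goal. Context matching and the refinement over a learned forward model are omitted."""
    costs = [goal_cost(tau, o) for (_, _, o) in history]
    _, theta_init, _ = history[int(np.argmin(costs))]
    return theta_init
```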
In the case of the Arm-Ball problem, the final position of the ball can be used as Outcome Space, hence the Euclidean distance between the goal position and the final ball position at the end of the episode can be used as Goal-parameterized cost function (but one could equally choose the full trajectories of the ball as outcomes and goals, and an associated similarity metric).Algorithmic architecture 2 describes the main steps of Intrinsically Motivated Goal Exploration Processes using these tools 10:Bootstrapping phase: Sampling a few policy parameters (called Random Parametrization Exploration, RPE), observing the starting context and the ing outcome, to initialize a memory of experiments (H = {(c i, θ i, o i)}) and a regressorD running approximating the phenomenon dynamics. Goal exploration phase: Stochastically mixing random policy exploration with goal exploration. In goal exploration, one first observes the context c and then samples a goal τ using goal policy γ (this goal policy can be a random stationary distribution, as in experiments below, or a contextual multi-armed bandit maximizing information gain or competence progress, see). Then, a meta-policy algorithm Π is used to search the parameterization θ minimizing the Goal-parameterized cost function C τ, i.e. it computes θ = arg min θ C τ (D running (θ, c)). This process is typically initialized by searching the parameter θ init in H such that the corresponding c init is in the neighborhood of c and C τ (o init) is minimized. Then, this initial guess is improved using an optimization algorithm (e.g. L-BFGS) over the regressorD running. The ing policy θ is executed, and the outcome o is observed. The observation (c, θ, o) is then used to update H andD running.This procedure has been experimentally shown to enable sample efficient exploration in highdimensional continuous action robotic setups, enabling in turn to learn repertoires of skills in complex physical setups with object manipulations using tools BID14 BID15 or soft deformable objects BID28.Nevertheless, two issues arise when it comes to using these algorithms in real-life setups, and within a fully autonomous learning approach. First, there are many real world cases where providing an Outcome Space (in which to make observations and sample goals, so this is also the Goal Space) to the agent is difficult, since the designer may not himself understand well the space that the robot is learning about. The approach taken until now BID15, was to create an external program which extracted information out of images, such as tracking all objects positions. This information was presented to the agent as a point in n, which was hence considered as an Outcome Space. In such complex environments, the designer may not know what is actually feasible or not for the robot, and the Outcome space may contain many unfeasible goals. This is the reason why advanced mechanisms for sampling goals and discovering which ones are actually feasible have been designed BID15. Second, a system where the engineer designs the representation of an Outcome Space space is limited in its autonomy. A question arising from this is: can we design a mechanism that allows the agent to construct an Outcome Space that leads to efficient exploration by the mean of examples? Representation Learning methods, in particular Deep Learning algorithms, constitute a natural approach to this problem as it has shown outstanding performances in learning representations for images. 
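The goal-exploration loop just described can be sketched as follows. Here `env` is assumed to wrap the simulator (θ ↦ o), `sample_goal` implements the stationary goal policy γ, the context is ignored (as in the experiments below), and the meta-policy is the simple nearest-neighbour-plus-noise scheme mentioned above rather than a full optimization over a learned forward model.

```python
import numpy as np

def rge_explore(env, sample_goal, theta_dim=21,
                n_bootstrap=50, n_epochs=1000, noise=0.05, seed=0):
    """Random Goal Exploration sketch.
    Bootstrapping phase: random parameterizations build the memory H.
    Goal phase: sample a goal tau, find the stored parameterization whose
    outcome minimizes the Euclidean goal cost C_tau(o) = ||tau - o||,
    perturb it, execute it, and store the new (theta, o) pair."""
    rng = np.random.default_rng(seed)
    thetas, outcomes = [], []
    for _ in range(n_bootstrap):                       # bootstrapping (RPE)
        theta = rng.uniform(-1.0, 1.0, theta_dim)
        thetas.append(theta)
        outcomes.append(env(theta))
    for _ in range(n_epochs):                          # goal exploration
        tau = sample_goal()
        costs = [np.linalg.norm(tau - o) for o in outcomes]
        best = int(np.argmin(costs))                   # nearest achieved outcome
        theta = thetas[best] + noise * rng.normal(size=theta_dim)
        thetas.append(theta)
        outcomes.append(env(theta))
    return np.array(thetas), np.array(outcomes)
```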
In the next two sections, we present an update of the IMGEP architecture that includes a goal space representation learning stage, as well as various Deep Representation Learning algorithms tested: Autoencoders along with their more recent Variational counter-parts. In order to enable goal space representation learning within the IMGEP framework, we propose to add a first stage of unsupervised perceptual learning (called UGL) before the goal exploration stage, leading to the new IMGEP-UGL architecture described in Algorithmic Architecture 1. In the passive perceptual learning stage (UGL, lines 2-8), the learner passively observes the unknown phenomenon by collecting samples x i of raw sensor values as the world changes. The architecture is neutral with regards to how these world changes are produced, but as argued in the introduction, one can see them as coming from actions of other agents in the environment. Then, this database of x i observations is used to train an unsupervised learning algorithm (e.g. VAE, Isomap) to learn an embedding functionR which maps the high-dimensional raw sensor observations onto a lower-10 IMGEPs characterize an architecture and not an algorithm as several of the steps of this architecture can be implemented in multiple ways, for e.g. depending on which regression or meta-policy algorithms are implemented dimensional representation o. Also, a kernel density estimator KDE estimates the distribution p kde (o) of observed world changes projected in the embedding. Then, in the goal exploration stage (lines 9-26), this lower-dimensional representation o is used as the outcome and goal space, and the distribution p kde (o) is used as a stochastic goal policy, within a standard IMGEP process (see above). Learn an embedding functionR: DISPLAYFORM0 Estimate the outcome distribution p kde (o) from {R(x i)} i∈ using algorithm KDE 9 Set the Goal Policy γ = p kde to be the estimated outcome distribution As IMGEP-UGL is an algorithmic architecture, it can be implemented with several algorithmic variants depending on which unsupervised learning algorithm is used in the UGL phase. We experimented over different deep and classical Representation Learning algorithms for the UGL phase. We rapidly outline these algorithms here. For a more in-depth introduction to those models, the reader can refer to Appendix B which contains details on the derivations of the different Cost Functions and Architectures of the Deep Neural Networks based models. Auto-Encoders (AEs) are a particular type of Feed-Forward Neural Networks that were introduced in the early hours of neural networks BID8. They are trained to output a reconstructionx of the input vector x of dimension D, through a representation layer of size d < D. They can be trained in an unsupervised manner using a large dataset of unlabeled samples D = {x (i) } i∈{0... N}. Their main interest lies in their ability to model the statistical regularities existing in the data. Indeed, during training, the network learns the regularities allowing to encode most of the information existing in the input in a more compact representation. Put differently, AEs can be seen as learning a non-linear compression for data coming from an unknown distribution. 
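A minimal sketch of such an auto-encoder, together with the variational variant introduced in the next paragraphs, is given below. The layer sizes are illustrative and are not those used in the experiments.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AutoEncoder(nn.Module):
    """Deterministic AE: compress a flattened 70x70 image to a d-dimensional
    code and reconstruct it; trained by penalizing the reconstruction error."""
    def __init__(self, input_dim=70 * 70, latent_dim=10, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, input_dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def ae_loss(model, x):
    # Bernoulli reconstruction cost (binary cross-entropy on pixel values in [0, 1])
    return F.binary_cross_entropy_with_logits(model(x), x, reduction='sum')

class VAE(nn.Module):
    """Variational counterpart (see the next paragraphs): the encoder outputs
    the mean and log-variance of a diagonal Gaussian posterior, a code is
    sampled with the reparameterization trick, and the loss adds a KL term."""
    def __init__(self, input_dim=70 * 70, latent_dim=10, hidden=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(input_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, input_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(z), mu, logvar

def vae_loss(model, x):
    logits, mu, logvar = model(x)
    rec = F.binary_cross_entropy_with_logits(logits, x, reduction='sum')
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl
```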
Those models can be trained using different algorithms, the most simple being Stochastic Gradient Descent (SGD), to minimize a loss function J (D) that penalizes differences betweenx and x for all samples in D.Variational Auto-Encoders (VAEs) are a recent alternative to classic AEs BID38 BID22, that can be seen as an extension to a stochastic encoding. The argument underlying this model is slightly more involved than the simple approach taken for AEs, and relies on a statistical standpoint presented in Appendix B. In practice, this model simplifies to an architecture very similar to an AE, differing only in the fact that the encoder f θ outputs the parameters µ and σ of a multivariate Gaussian distribution N (µ, diag(σ 2)) with diagonal covariance matrix, from which the representation z is sampled. Moreover, an extra term is added to the Cost Function, to condition the distribution of z in the representation space. Under the restriction that a factorial Gaussian is used, the neural network can be made fully differentiable thanks to a reparameterization trick, making it possible to use SGD for training. In practice VAEs tend to yield smooth representations of the data, and are faster to converge than AEs from our experiments. Despite these interesting properties, the derivation of the actual cost function relies mostly on the assumption that the factors can be described by a factorial Gaussian distribution. This hypothesis can be largely erroneous, for example if one of the factors is periodic, multi-modal, or discrete. In practice our experiments showed that even if training could converge for non-Gaussian factors, it tends to be slower and to yield poorly conditioned representations. Normalizing Flow proposes a way to overcome this restriction on distribution, by allowing more expressive ones . It uses the classic rule of change of variables for random variables, which states that considering a random variable z 0 ∼ q(z 0), and an invertible transformation t: DISPLAYFORM1 Using this, we can chain multiple transformations t 1, t 2,..., t K to produce a new random variable z K = t K • · · · • t 2 • t 1 (z 0). One particularly interesting transformation is the Radial Flow, which allows to radially contract and expand a distribution as can be seen in FIG5 in Appendix. This transformation seems to give the required flexibility to encode periodic factors. Isomap is a classical approach of Multi-Dimensional Scaling BID24 a procedure allowing to embed a set of N -dimensional points in a n dimensional space, with N > n, minimizing the Kruskal Stress, which measures the distortion induced by the embedding in the pairwise Euclidean distances. This algorithm in an embedding whose pairwise distances are roughly the same as in the initial space. Isomap BID50 goes further by assuming that the data lies in the vicinity of a lower dimensional manifold. Hence, it replaces the pairwise Euclidean distances in the input space by an approximate pairwise geodesic distance, computed by the Dijkstra's Shortest Path algorithm on a κ nearest-neighbors graph. Principal Component Analysis is an ubiquitous procedure BID36 which, for a set of data points, allows to find the orthogonal transformation that yields linearly uncorrelated data. This transformation is found by taking the principal axis of the covariance matrix of the data, leading to a representation whose variance is in decreasing order along dimensions. 
This procedure can be used to reduce dimensionality, by taking only the first n dimensions of the transformed data. Estimation of sampling distribution: Since the Outcome Space O was learned by the agent, it had no prior knowledge of p(o) for o ∈ O. We used a Gaussian Kernel Density Estimation (KDE) BID34 BID40 to estimate this distribution from the projection of the images observed by the agent, into the learned goal space representation. Kernel Density Estimation allows to estimate the continuous density function (cdf) f (o) out of a discrete set of samples {o i} i∈{1,...,n} drown from distribution p(o). The estimated cdf is computed using the following equation: DISPLAYFORM2 with K(·) a kernel function and H a bandwidth d × d matrix (d the dimension of O). In our case, we used a Gaussian Kernel: DISPLAYFORM3 with the bandwidth matrix H equaling the covariance matrix of the set of points, rescaled by factor n − 1 d+4, with n the number of samples, as proposed in. We conducted experiments to address the following questions in the context of two simulated environments:• Is it possible for an IMGEP-UGL implementation to produce a Goal Space representation yielding an exploration dynamics as efficient as the dynamics produced by an IMGEP implementation using engineered goal space representations? Here, the dynamics of exploration is measured through the KL Coverage defined thereafter.• What is the impact of the target embedding dimensionality provided to these algorithms? We now present in depth the experimental campaign we performed 11.Environments: We experimented on two different Simulated Environments derived from the Arm-Ball benchmark represented in FIG1, namely the Arm-Ball and the Arm-Arrow environments, in which a 7-joint arm, controlled by a 21 continuous dimension Dynamic Movement Primitives (DMP) BID20 controller, evolves in an environment containing an object it can handle and move around in the scene. In the case of IMGEP-UGL learners, the scene is perceived as a 70x70 pixel image. For the UGL phase, we used the following mechanism to generate the distribution of samples x i: the object was moved randomly uniformly over [−1, 1] 2 for ArmBall, and over [−1, 1] 2 × [0, 2π] for ArmArrow, and the corresponding images were generated and provided as an observable sample to IMGEP-UGL learners. Note that the physically reachable space (i.e. the largest space the arm can move the object to) is the disk centered on 0 and of radius 1: this means that the distribution of object movements observed by the learner is slightly larger than the actual space of moves that learners can produce themselves (and learners have no knowledge of which subspace corresponds to physically feasible outcomes). 
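The sampling-distribution estimate described above can be sketched as follows. The bandwidth matrix follows the covariance rescaling stated in the text, and drawing a goal from the estimated density amounts to picking a stored outcome at random and perturbing it with the kernel covariance; this is a sketch, not the exact implementation used in the experiments.

```python
import numpy as np

def make_goal_sampler(latent_points, seed=0):
    """Gaussian KDE over the outcomes observed in the learned goal space,
    used as the stationary goal policy p_kde(o)."""
    o = np.asarray(latent_points)                  # shape (n, d)
    n, d = o.shape
    H = np.cov(o, rowvar=False) * n ** (-1.0 / (d + 4))   # bandwidth matrix
    H_inv = np.linalg.inv(H)
    norm = 1.0 / (n * np.sqrt((2 * np.pi) ** d * np.linalg.det(H)))
    rng = np.random.default_rng(seed)

    def density(query):
        # f(query) = (1/n) * sum_i K_H(query - o_i) with a Gaussian kernel
        diff = o - query
        return norm * np.exp(-0.5 * np.einsum('ij,jk,ik->i', diff, H_inv, diff)).sum()

    def sample_goal():
        # sample from the KDE mixture: pick a data point, add kernel noise
        center = o[rng.integers(n)]
        return rng.multivariate_normal(center, H)

    return density, sample_goal
```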
The environments are presented in depth in Appendix C.Algorithmic Instantiation of the IMGEP-UGL Architecture: We experimented over the following Representation Learning Algorithms for the UGL component: Auto-Encoders with KDE (RGE-AE), Variational Auto-Encoders with KDE (RGE-VAE), Variational Auto-Encoders using the associated Gaussian prior for sampling goal instead of KDE (RGE-VAE-GP), Radial Flow Variational Auto-Encoders with KDE (RGE-RFVAE), Radial Flow Variational Auto-Encoders using the associated Gaussian prior for sampling goal (RGE-RFVAE-GP), Isomap (RGE-Isomap) BID50 and Principal Component Analysis (RGE-Isomap).Regarding the classical IMGEP components, we considered the following elements:• Context Space C = ∅: In the implemented environments, the initial positions of the arm and the object were reset at each episode 12. Consequently, the context was not observed nor accounted for by the agent. • Parameterization Space Θ = 21: During the experiments, we used DMP controllers as parameterized policies to generate time-bounded motor actions sequences. Since the DMP controller was parameterized by 3 basis functions for each joint of the arm FORMULA12, the parameterization of the controller was represented by a point in DISPLAYFORM0 The Outcome Space is the subspace of R l spanned by the embedding representations of the ensemble of images observed in the first phase of learning. For the RGE-EFR algorithm, l = 2 in ArmBall and l = 3 in ArmArrow. For IMGEP-UGL algorithms, as the representation learning algorithms used in the UGL stage require a parameter specifying the maximum dimensionality of the target embedding, we considered two cases in experiments: 1) l = 10, which is 5 times larger than the true manifold dimension for ArmBall, and 3.3 times larger for ArmArrow (the algorithm is not supposed to know this, so testing the performance with larger embedding dimension is key); 2) l = 2 for ArmBall, and l = 3 for ArmArrow, which is the same dimensionality as the true dimensions of these manifolds.• Goal Space T = O: The Goal Space was taken to equate the Outcome Space.• Goal-Parameterized Cost function C τ (·) = τ − · 2: Sampling goals in the Outcome Space allows us to use the Euclidean distance as Goal-parameterized cost function. Considering those elements, we used the instantiation of the IMGEP architecture represented in Appendix D in Algorithm 3. We implemented a goal sampling strategy known as Random Goal Exploration (RGE), which consists, given a stationary distribution over the Outcome Space p(o), in sampling a random goal o ∼ p(o) each time (note that this stationary distribution p(o) is learnt in the UGL stage for IMGEP-UGL implementations). We used a simple k-neighbors regressor to implement the running forward modelD, and the Meta-Policy mechanism consisted in returning the nearest achieved outcome in the outcome space, and taking the same parameterization perturbed by an exploration noise (which has proved to be a very strong baseline in IMGEP architectures in previous works BID14).Exploration Performance Measure: In this article, the central property we are interested in is the dynamics and quality of exploration of the outcome space, characterizing the evolution of the distribution of discovered outcomes, i.e. the diversity of effects that the learner discovers how to produce. In order to characterize this exploration dynamics quantitatively, we monitored a measure which we refer to as Kullback-Leibler Coverage (KLC). 
At a given point in time during exploration, this measure computes the KL-divergence between the distribution of the outcomes produced so far, with a uniform distribution of outcomes in the space of physically possible outcomes (which is known by the experimenter, but unknown by the learner). To compute it, we use a normalized histogram of the explored outcomes, with 30 bins per dimension, which we refer to as E, and we compute its Kullback Leibler Divergence with the normalized histogram of attainable points which we refer to as A: DISPLAYFORM1 We emphasize that, when computed against a uniform distribution, the KLC measure is a proxy for the (opposite) Entropy of the E distribution. Nevertheless, we prefer to keep it under the divergence form, as the A distribution allows to define what the experimenter considers to be a good exploration distribution. In the case of this study, we consider a uniform distribution of explored locations over the attainable domain, to be the best exploration distribution achievable. Baseline algorithms: We are using two natural baseline algorithms for evaluating the exploration dynamics of our IMGEP-UGL algorithmic implementations: This is an IMGEP implementation using a goal/outcome space with handcrafted features that directly encode the underlying structure of environments: for Arm-Ball, this is the 2D position of the ball in 2, and for Arm-Arrow this is the 2D position and the 1D orientation of the arrow in 3. This algorithm is also given the prior knowledge of p(o) = U(O). All other aspects of the IMGEP (regressor, meta-policy, other parameters) are identical to IMGEP-UGL implementations. This algorithm is known to provide highly efficient exploration dynamics in these environments BID14.• Random Parameterization Exploration (RPE): The Random Parameterization Exploration approach does not use an Outcome Space, nor a Goal Policy, and only samples a random parameterization θ ∼ U(Θ) at each episode. We expected this algorithm to lower bound the performances of our novel architecture. We first study the exploration dynamics of all IMGEP-UGL algorithms, comparing them to the baselines and among themselves. Then, we study specifically the impact of the target embedding dimension (latent space) for the UGL implementations, by observing what exploration dynamics is produced in two cases:• Using a target dimension larger than the true dimension (l = 10)• Providing the true embedding dimension to the UGL implementations (l = 2, 3)Finally, we specifically study RGE-VAE, using the intrinsic Gaussian prior of these algorithms to replace the KDE estimator of p(O) in the UGL part. Exploration Performances: In FIG2, we can see the evolution of the KLC through exploration epochs (one exploration epoch is defined as one experimentation/roll-out of a parameter θ). We can see that for both environments, and all values of latent spaces, all IMGEP-UGL algorithms, except RGE-RFVAE, achieve similar or better performance (both in terms of asymptotic KLC and speed to reach it) than the RGE-EFR algorithm using engineered Goal Space features, and much better performance than the RPE algorithm. Figure 3 (see also Figure 8 and 9 in Appendix) show details of the evolution of discovered outcomes in ArmBall (final ball positions after the end of a policy roll-out) and corresponding KLC measures for individual runs with various algorithms. 
It also shows the evolution of the number of times learners managed to move the ball, which is considered in the KLC measure but not easily visible in the displayed set of outcomes in FIG3. For instance, we observe that both RPE FIG3 ) and RGE-RFVAE FIG3 ) algorithms perform poorly: they discover very few policies moving the ball at all (pink curves), and these discovered ball moves cover only a small part of the physically possible outcome space. On the contrary, both RGE-EFR (handcrafted features) and RGE-VAE (learned goal space representation with VAE) perform very well, and the KLC of RGE-VAE is even better than the KLC of RGE-EFR, due to the fact that RGE-VAE has discovered more policies (around 2400) that move the ball than RGE-EFR (around 1600, pink curve). Impact of target latent space size in IMGEP-UGL algorithms On the ArmBall problem, we observe that if one provides the true target embedding dimension (l = 2) to IMGEP-UGL implementations, RGE-Isomap is slightly improving (getting quasi-identical to RGE-EFR), RGE-AE does not change (remains quasi-identical to RGE-EFR), but the performance of RGE-PCA and RGE-VAE is degraded. For ArmArrow, the effect is similar: IMGEP-UGL algorithms with a larger target embedding dimension (l = 10) than the true dimensionality all perform better than RGE-EFR (except RGE-RFVAE which is worse in all cases), while when l = 2 only RGE-VAE is significantly better than RGE-EFR. In Appendix F, more examples of exploration curves with attached exploration scatters are shown. For most example runs, increasing the target embedding dimension enables learners to discover more policies moving the ball and, in these cases, the discovered outcomes are more concentrated towards the external boundary of the discus of physically possible outcomes. This behavior, where increasing the target embedding dimension improves the KLC while biasing the discovered outcome towards the boundary the feasible goals, can be understood as a consequence of the following well-known general property of IMGEPs: if goals are sampled outside the convex hull of outcomes already discovered, this has the side-effect of biasing exploration towards policies that will produce outcomes beyond this convex hull (until the boundary of feasible outcomes is reached). Here, as observations in the UGL phase were generated by uniformly moving the objects on the square [−1, 1] 2, while the feasible outcome space was the smaller discus of radius 1, goal sampling happened in a distribution of outcomes larger than the feasible outcome space. As one increases the embedding space dimensionality, the ratio between the volume of the corresponding hyper-cube and hyper-discus increases, in turn increasing the probability to sample goals outside the feasible space, which has the side effect of fostering the discovery of novel outcomes and biasing exploration towards the boundaries. Impact of Sampling Kernel Density Estimation Another factor impacting the exploration assessed during our experiments was the importance of the distribution used as stationary Goal Policy. If, in most cases, the representation algorithm gives no particular prior knowledge of p(o), in the case of Variational Auto-Encoders, it is assumed in the derivation that p(o) = N (0, I). Hence, the isotropic Gaussian distribution is a better candidate stationary Goal Policy than Kernel Density Estimation. FIG4 shows a comparison between exploration performances achieved with RGE-VAE using a KDE distribution or an isotropic Gaussian as Goal Policy. 
The performance is not significantly different from the isotropic Gaussian case. Our experiments showed that convergence on the KL term of the loss can be more or less quick depending on the initialization. Since we used a number of iterations as stopping criterion for training (based on early experiments), we found that sometimes, at stop, the divergence was still pretty high despite achieving a low reconstruction error. In those cases the representation was not be perfectly matching an isotropic Gaussian, which could lead to a goal sampling bias when using the isotropic Gaussian Goal Policy. In this paper, we proposed a new Intrinsically Motivated Goal Exploration architecture with Unsupervised Learning of Goal spaces (IMGEP-UGL). Here, the Outcome Space (also used as Goal Space) representation is learned using passive observations of world changes through low-level raw sensors (e.g. movements of objects caused by another agent and perceived at the pixel level).Within the perspective of research on Intrinsically Motivated Goal Exploration started a decade ago, and considering the fundamental problem of how AI agents can autonomously explore environments and skills by setting their own goals, this new architecture constitutes a milestone as it is to our knowledge the first goal exploration architecture where the goal space representation is learned, as opposed to hand-crafted. Furthermore, we have shown in two simulated environments (involving a high-dimensional continuous action arm) that this new architecture can be successfully implemented using multiple kinds of unsupervised learning algorithms, including recent advanced deep neural network algorithms like Variational Auto-Encoders. This flexibility opens the possibility to benefit from future advances in unsupervised representation learning research. Yet, our experiments have shown that all algorithms we tried (except RGE-RFVAE) can compete with an IMGEP implementation using engineered feature representations. We also showed, in the context of our test environments, that providing to IMGEP-UGL algorithms a target embedding dimension larger than the true dimensionality of the phenomenon can be beneficial through leveraging exploration dynamics properties of IMGEPs. Though we must investigate more systematically the extent of this effect, this is encouraging from an autonomous learning perspective, as one should not assume that the learner initially knows the target dimensionality. Limits and future work. The experiments presented here were limited to a fairly restricted set of environments. Experimenting over a larger set of environments would improve our understanding of IMGEP-UGL algorithms in general. In particular, a potential challenge is to consider environments where multiple objects/entities can be independently controlled, or where some objects/entities are not controllable (e.g. animate entities). In these cases, previous work on IMGEPs has shown that random Goal Policies should be either replaced by modular Goal Policies (considering a modular goal space representation, see BID15), or by active Goal Policies which adaptively focus the sampling of goals in subregions of the Goal Space where the competence progress is maximal. 
For learning modular representations of Goal Spaces, an interesting avenue of investigations could be the use of the Independently Controllable Factors approach proposed in BID51.Finally, in this paper, we only studied a learning scenario where representation learning happens first in a passive perceptual learning stage, and is then fixed during a second stage of autonomous goal exploration. While this was here motivated both by analogies to infant development and to facilitate evaluation, the ability to incrementally and jointly learn an outcome space representation and explore the world is a stimulating topic for future work. Bernouilli distribution of ξ parameters 13, and the log likelihood of the dataset D is expressed as: DISPLAYFORM0 with DISPLAYFORM1. For a binary valued input vector x (i), the unitary Cost Function to minimize is: DISPLAYFORM2 provided that f θ is the encoder part of the architecture and g φ is the decoding part of the architecture. This Cost Function can be minimized using Stochastic Gradient Descent BID7, or more advanced optimizers such as Adagrad BID12 ) or Adam (.Depending on the depth of the network 14, those architectures can prove difficult to train using vanilla Stochastic Gradient Descent. A particularly successful procedure to overcome this difficulty is to greedily train each pairs of encoding-decoding layers and stacking those to sequentially form the complete network. This procedure, known as stacked AEs, accelerates convergence. But it has shown bad with our problem, and thus was discarded for the sake of clarity. Variational Auto-Encoders (VAEs) If we assume that the observed data are realizations of a random variable x ∼ p(x|ψ), we can hypothesize that they are conditioned by a random vector of independent factors z ∼ p(z|ψ). In this setting, learning the model would amount to searching the parameters ψ of both distributions. We might use the same principle of maximum likelihood as before to find the best parameters by computing the likelihood log L(D) = N i=1 log p(x (i) |ψ) by using the fact that p(x|ψ) = p(x, z|ψ)dz = p(x|z, ψ)p(z|ψ)dz. Unfortunately, in most cases, this integral is intractable and cannot be approximated by Monte-Carlo sampling in reasonable time. To overcome this problem, we can introduce an arbitrary distribution q(z|x, χ) and remark that the following holds: DISPLAYFORM3 with the Evidence Lower Bound being: DISPLAYFORM4 Looking at Equation FORMULA10, we can see that since the KL divergence is non-negative, L(q, ψ) ≤ log p(x|ψ) − D KL ([q(z|x, χ) p(z|x, ψ)] whatever the q distribution, hence the name of Evidence Lower Bound (ELBO). Consequently, maximizing the ELBO have the effect to maximize the log likelihood, while minimizing the KL-Divergence between the approximate q(z|x) distribution, and the true unknown posterior p(z|x, ψ). The approach taken by VAEs is to learn the parameters of both conditional distributions p(x|z, ψ) and q(z|x, χ) as non-linear functions. Under some restricted conditions, Equation can be turned into a valid cost function to train a neural network. First, we hypothesize that q(z|x, χ) and p(z|ψ) follow Multivariate Gaussian distributions with diagonal covariances, which allows us to compute the b term in closed form. Second, using the Gaussian assumption on q, we can reparameterize the inner sampling operation by z = µ + σ 2 with ∼ N (0, I). Using this trick, the Path-wise Derivative estimator can be used for the a member of the ELBO. 
Under those conditions, and assuming that p(x|ψ) follows a Multivariate Bernouilli distribution, we can write the cost function used to train the neural network as: DISPLAYFORM5 where f χ represents the encoding and sampling part of the architecture and g ψ represents the decoding part of the architecture. In essence, this derivation simplifies to the initial cost function used in AEs augmented by a term penalizing the divergence between q(z|x, χ) and the assumed prior that p(x|ψ) = N (0, I).Normalizing Flow overcomes the problem stated earlier, by permitting more expressive prior distributions . It is based on the classic rule of change of variables for random variables. Considering a random variable z 0 ∼ q(z 0), and an invertible transformation t: DISPLAYFORM6 We can then directly chain different invertible transformations t 1, t 2,..., t K to produce a new ran- DISPLAYFORM7. In this case, we have: DISPLAYFORM8 This formulation is interesting because the Law Of The Unconscious Statistician allows us to compute expectations over q(z k) without having a precise knowledge of it: DISPLAYFORM9 provided that h does not depends on q(z k). Using this principle on the ELBO allows us to derive the following: DISPLAYFORM10 This is nothing more than the regular ELBO with an additional term concerning the log-determinant of the transformations. In practice, as before, we use p(z 0) = N (z 0 ; 0, I), and q(z 0 |x) = N (z 0 ; µ(x), diag(σ(x)2 )). We only have to find out parameterized transformations t, whose parameters can be learned and have a defined log-determinant. Using radial flow, which is expressed as: DISPLAYFORM11 where r = |z − c|, h(α, r) = 1 α+r and α, β, c are learnable parameters of the transformation, our cost function can be written as: DISPLAYFORM12 (1 + log(σ( DISPLAYFORM13 provided that f χ represents the encoding, sampling ad transforming part of the architecture, g ψ represents the decoding part of the architecture, and β k, α k, c k are the parameters of the different transformations. Other types of transformations have been proposed lately. The Householder flow BID52) is a volume preserving transformation, meaning that its log determinant equals 1, with the consequence that it can be used with no modifications of the loss function. A more convoluted type of transformations based on a masked autoregressive auto-encoder, the Inverse Autoregressive Flow, was proposed in BID23. We did not explore those two last approaches. The following environments were considered:• Arm-Ball: A 7 joints arm, controlled in angular position, can move around in an environment containing a ball. The environment state is perceived visually as a 50x50 pixels image. The arm has a sticky arm tip: if the tip of the arm touches the ball, the ball sticks to the arm until the end of the movement. The underlying state of the environment is hence parameterized by two bounded continuous factors which represent the coordinates of the ball. A situation can be sampled by the experimenter by taking a random point in 2.• Arm-Arrow: The same arm can manipulate an arrow in a plane, an arrow being considered as an object with a single symmetry that can be oriented in space. Consequently, the underlying state of the environment is parameterized by two bounded continuous factors representing the coordinates of the arrow, and one periodic continuous factor representing its orientation. 
A particular situation can hence be sampled by taking a random point in 3.The physical situations were represented by small 70x70 images very similar to the dSprites dataset proposed by BID18 15. The arm was not depicted in the field of view of the (virtual) camera used to gather images for representation learning. We used a robotic arm composed of 7 joints, whose motions were parameterized by DMPs using 3 basis functions (hence action policies have 21 continuous parameters), during 50 time-steps. An example of such a DMP executed in the environment is represented in FIG6. The first phase, where the learner observes changes of the environment (= ball moves) caused by another agent, is modeled by a process which samples iteratively a random state in the underlying state space, e.g. in the case of Arm-Ball s ∼ U ( 2 ), and then generating the corresponding image x = f (s) that is observed by the learner. For the experiments, we instantiated the Algorithmic Architecture 1 into Algorithm 3.In the text, Algorithm 3 is denoted (RGE-), where denotes any representation learning algorithm: (RGE-AE) for Auto-Encoders, (RGE-VAE) for Variational Auto-Encoders, (RGE-RFVAE) Image, 70x70
S1DWPP1A-
We propose a novel Intrinsically Motivated Goal Exploration architecture with unsupervised learning of goal space representations, and evaluate how various implementations enable the discovery of a diversity of policies.
One of the main challenges of deep learning methods is the choice of an appropriate training strategy. In particular, additional steps, such as unsupervised pre-training, have been shown to greatly improve the performances of deep structures. In this article, we propose an extra training step, called post-training, which only optimizes the last layer of the network. We show that this procedure can be analyzed in the context of kernel theory, with the first layers computing an embedding of the data and the last layer a statistical model to solve the task based on this embedding. This step makes sure that the embedding, or representation, of the data is used in the best possible way for the considered task. This idea is then tested on multiple architectures with various data sets, showing that it consistently provides a boost in performance. One of the main challenges of the deep learning methods is to efficiently solve the highly complex and non-convex optimization problem involved in the training step. Many parameters influence the performances of trained networks, and small mistakes can drive the algorithm into a sub-optimal local minimum, ing into poor performances BID0. Consequently, the choice of an appropriate training strategy is critical to the usage of deep learning models. The most common approach to train deep networks is to use the stochastic gradient descent (SGD) algorithm. This method selects a few points in the training set, called a batch, and compute the gradient of a cost function relatively to all the layers parameter. The gradient is then used to update the weights of all layers. Empirically, this method converges most of the time to a local minimum of the cost function which have good generalization properties. The stochastic updates estimate the gradient of the error on the input distribution, and several works proposed to use variance reduction technique such as Adagrap BID2, RMSprop BID8 or Adam , to achieve faster convergence. While these algorithms converge to a local minima, this minima is often influenced by the properties of the initialization used for the network weights. A frequently used approach to find a good starting point is to use pre-training BID12 BID5. This method iteratively constructs each layer using unsupervised learning to capture the information from the data. The network is then fine-tuned using SGD to solve the task at hand. Pretraining strategies have been applied successfully to many applications, such as classification tasks BID0 BID17, regression BID6, robotics BID4 or information retrieval BID19. The influence of different pre-training strategies over the different layers has been thoroughly studied in BID13. In addition to improving the training strategies, these works also shed light onto the role of the different layers BID3 BID15. The first layers of a deep neural network, qualified as general, tend to learn feature extractors which can be reused in other architectures, independently of the solved task. Meanwhile, the last layers of the network are much more dependent of the task and data set, and are said to be specific. Deep Learning generally achieves better than shallow structures, but the later are generally easier to train and more stable. For convex models such as logistic regression, the training problem is also convex when the data representation is fixed. The separation between the representation and the model learning is a key ingredient for the model stability. 
When the representation is learned simultaneously, for instance with dictionary learning or with EM algorithms, the problem often become non-convex. But this coupling between the representation and the model is critical for end-to-end models. For instance, showed that for networks trained using pretraining, the fine-tuning step -where all the layers are trained together -improves the performances of the network. This shows the importance of the adaptation of the representation to the task in end-to-end models. Our contribution in this chapter is an additional training step which improves the use of the representation learned by the network to solve the considered task. This new step is called post-training. It is based on the idea of separating representation learning and statistical analysis and it should be used after the training of the network. In this step, only the specific layers are trained. Since the general layers -which encode the data representation -are fixed, this step focuses on finding the best usage of the learned representation to solve the desired task. In particular, we chose to study the case where only the last layer is trained during the post-training, as this layer is the most specific one BID22. In this setting, learning the weights of the last layer corresponds to learning the weights for the kernel associated to the feature map given by the previous layers. The post-training scheme can thus be interpreted in light of different from kernel theory. To summarize our contributions:• We introduce a post-training step, where all layers except the last one are frozen. This method can be applied after any traditional training scheme for deep networks. Note that this step does not replace the end-to-end training, which co-adapts the last layer representation with the solver weights, but it makes sure that this representation is used in the most efficient way for the given task.• We show that this post-training step is easy to use, that it can be effortlessly added to most learning strategies, and that it is computationally inexpensive.• We highlight the link existing between this method and the kernel techniques. We also show numerically that the previous layers can be used as a kernel map when the problem is small enough.• We experimentally show that the post-training does not overfit and often produces improvement for various architectures and data sets. The rest of this article is organized as follows: Section 2 introduces the post-training step and discusses its relation with kernel methods. Section 4 presents our numerical experiments with multiple neural network architectures and data sets and Section 5 discusses these . In this section, we consider a feedforward neural network with L layers, where X 1,..., X L denote the input space of the different layers, typically R d l with d l > 0 and Y = X L+1 the output space of our network. Let φ l: X l → X l+1 be the applications which respectively compute the output of the l-th layer of the network, for 1 ≤ l ≤ L, using the output of the l−1-th layer and Φ L = φ L • · · · • φ 1 be the mapping of the full network from X 1 to Y. Also, for each layer l, we denote W W W l its weights matrix and ψ l its activation function. The training of our network is done using a convex and continuous loss function: Y × Y → R +. The objective of the neural network training is to find weights parametrizing Φ L that solves the following problem: DISPLAYFORM0, drawn from this input distribution. 
Using these notations, the training objective can then be rewritten DISPLAYFORM1 This reformulation highlights the special role of the last layer in our network compared to the others. When Φ L−1 is fixed, the problem of finding W W W L is simple for several popular choices of activation function ψ L and loss. For instance, when the activation function ψ L is the softmax function and the loss is the cross entropy, is a multinomial logistic regression. In this case, training the last layer is equivalent to a regression of the labels y using the embedding of the data x in X L by the mapping Φ L−1. Since the problem is convex in W W W L (see Appendix A), classical optimization techniques can efficiently produce an accurate approximation of the optimal weights W W W L -and this optimization given the mapping Φ L−1 is the idea behind post-training. Indeed, during the regular training, the network tries to simultaneously learn suitable representation for the data in the space X L through its L − 1 first layer and the best use of this representation with W W W L. This joint minimization is a strongly non-convex problem, therefore ing in a potentially sub-optimal usage of the learned data representation. The post-training is an additional step of learning which takes place after the regular training and proceeds as follows:1. Regular training: This step aims to obtain interesting features to solve the initial problem, as in any usual deep learning training. Any training strategy can be applied to the network, optimizing the empirical loss DISPLAYFORM2 The stochastic gradient descent explores the parameter space and provides a solution for Φ L−1 and W W W L. This step is non restrictive: any type of training strategy can be used here, including gradient bias reduction techniques, such as Adagrad BID2, or regularization strategies, for instance using Dropout BID1. Similarly, any type of stopping criterion can be used here. The training might last for a fixed number of epochs, or can stop after using early stopping BID16. Different combinations of training strategies and stopping criterion are tested in Section 4.2. Post-training: During this step, the first L − 1 layers are fixed and only the last layer of the network, φ L, is trained by minimizing over W W W L the following problem DISPLAYFORM3 where˜ (x, y):= (ψ L (x), y). This extra learning step uses the mapping Φ L−1 as an embedding of the data in X L and learn the best linear predictor in this space. This optimization problem takes place in a significantly lower dimensional space and since there is no need for back propagation, this step is computationally faster. To reduce the risk of overfitting with this step, a 2 -regularization is added. FIG0 illustrates the post-training step. We would like to emphasize the importance of the 2 -regularization used during the post-training. This regularization is added regardless of the one used in the regular training, and for all the network architectures. The extra term improves the strong convexity of the minimization problem, making post-training more efficient, and promotes the generalization of the model. The choice of the 2 -regularization is motivated from the comparison with the kernel framework discussed in Section 3 and from our experimental . Remark 1 (Dropout.). It is important to note that Dropout should not be applied on the previous layers of the network during the post-training, as it would lead to changes in the feature function Φ L−1. 
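The post-training step itself is straightforward to implement. The following is a minimal PyTorch sketch (the experiments below use TensorFlow), assuming a model whose final module `last_layer` is the linear layer to be re-optimized; the ℓ2 penalty is implemented through the optimizer's weight decay and, per Remark 1, dropout in the frozen feature layers is disabled.

```python
import torch
import torch.nn as nn

def post_train(model, last_layer, loader, l2=1e-3, steps=100, lr=1e-3):
    """Post-training sketch: freeze every parameter except those of the last
    layer, then run a few optimization steps on the last-layer problem with an
    explicit L2 penalty (via weight decay)."""
    model.eval()                         # Remark 1: no dropout in frozen layers
    for p in model.parameters():
        p.requires_grad = False
    for p in last_layer.parameters():
        p.requires_grad = True
    opt = torch.optim.SGD(last_layer.parameters(), lr=lr, weight_decay=l2)
    loss_fn = nn.CrossEntropyLoss()
    it = iter(loader)
    for _ in range(steps):
        try:
            x, y = next(it)
        except StopIteration:
            it = iter(loader)
            x, y = next(it)
        opt.zero_grad()
        loss = loss_fn(model(x), y)      # only the last layer receives gradients
        loss.backward()
        opt.step()
    return model
```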
In this section, we show that for the case where DISPLAYFORM0 can be approximated using kernel methods. We define the kernel k as follows, DISPLAYFORM1 Then k is the kernel associated with the feature function Φ L−1. It is easy to see that this kernel is continuous positive definite and that for DISPLAYFORM2 belongs by construction to the Reproducing Kernel Hilbert Space (RKHS) H k generated by k. The post-training problem is therefore related to the problem posed in the RKHS space H k, defined by DISPLAYFORM3 This problem is classic for the kernel methods. With mild hypothesis on˜, the generalized representer theorem can be applied BID20. As a consequence, there exists α * ∈ R N such that DISPLAYFORM4 Rewriting FORMULA8 with g * of the form, we have that g * = g W W W *, with DISPLAYFORM5 We emphasize that W W W * gives the optimal solution for the problem and should not be confused with W W W * L, the optimum of. However, the two problems differ only in their regularization, which are closely related (see the next paragraph). Thus W W W * can thus be seen as an approximation of the optimal value W W W * L. It is worth noting that in our experiments, W W W * appears to be a nearly optimal estimator of W W W * L (see Subsection 4.3).Relation between · H and · 2. The problems and only differ in the choice of the regularization norm. By definition of the RKHS norm, we have DISPLAYFORM6 Consequently, we have that g W H ≤ W 2, with equality when Vect(Φ L−1 (X 1)) spans the entire space X L. In this case, the norm induced by the RKHS is equal to the 2 -norm. This is generally the case, as the input space is usually in a far higher dimensional space than the embedding space, and since the neural network structure generally enforces the independence of the features. Therefore, while both norms can be used in, we chose to use the 2 -norm for all our experiments as it is easier to compute than the RKHS norm. Close-form Solution. In the particular case where (y 1, y 2) = y 1 − y 2 2 and f (x) = x, can be reduced to a classical Kernel Ridge Regression problem. In this setting, W * can be computed by combining FORMULA9 and DISPLAYFORM7 where DISPLAYFORM8 Y is the matrix of the output data y 1,..., y N and I I I N is the identity matrix in R N. This is experimentally illustrated in Subsection 4.3. Although data sets are generally too large for to be computed in practice, it is worth noting that some kernel methods, such as Random Features BID18, can be applied to compute approximations of the optimal weights during the post-training. Multidimensional Output. Most of the previously discussed related to kernel theory hold for multidimensional output spaces, i.e. dim(X L+1) = d > 1, using multitask or operator valued kernels BID9. Hence the previous remarks can be easily extended to multidimensional outputs, encouraging the use of post-training in most settings. This section provides numerical arguments to study post-training and its influence on performances, over different data sets and network architectures. All the experiments were run using python and Tensorflow. The code to reproduce the figures is available online 1. The of all the experiments are discussed in depth in Section 5. The post-training method can be applied easily to feedforward convolutional neural network, used to solve a wide class of real world problems. To assert its performance, we apply it to three classic benchmark datsets: CIFAR10 BID11, MNIST and FACES BID5.CIFAR10. 
This data set is composed of 60, 000 images 32 × 32, representing objects from 10 classes. We use the default architecture proposed by Tensorflow for CIFAR10 in our experiments, based on the original architecture proposed by BID11. It is composed of 5 layers described in FIG1. The first layers use various common tools such as local response normalization (lrn), max pooling and RELU activation. The last layer have a softmax activation function Test Error (-0.1) Figure 3: Evolution of the performances of the neural network on the CIFAR10 data set, (dashed) with the usual training and (solid) with the post-training phase. For the post-training, the value of the curve at iteration q is the error for a network trained for q − 100 iterations with the regular training strategy and then trained for 100 iterations with post-training. The top figure presents the classification error on the training set and the bottom figure displays the loss cost on the test set. The curves have been smoothed to increase readability. and the chosen training loss was the cross entropy function. The network is trained for 90k iterations, with batches of size 128, using stochastic gradient descent (SGD), dropout and an exponential weight decay for the learning rate. Figure 3 presents the performance of the network on the training and test sets for 2 different training strategies. The dashed line present the classic training with SGD, with performance evaluated every 100 iterations and the solid line present the performance of the same network where the last 100 iterations are done using post-training instead of regular training. To be clearer, the value of this curve at iteration q is the error of the network, trained for q − 100 iterations with the regular training strategy, and then trained for 100 iterations with post-training. The regularization parameter λ for post-training is set to 1 × 10 −3.The show that while the training cost of the network mildly increases due to the use of post-training, this extra step improves the generalization of the solution. The gain is smaller at the end of the training as the network converges to a local minimum, but it is consistent. Also, it is interesting to note that the post-training iterations are 4× faster than the classic iterations, due to their inexpensiveness. Additional Data Sets. We also evaluate post-training on the MNIST data set (65000 images 27 × 27, with 55000 for train and 10000 for test; 10 classes) and the pre-processed FACES data set (400 images 64 × 64, from which 102400 sub-images, 32 × 32, are extracted, with 92160 for training and 10240 for testing; 40 classes). For each data set, we train two different convolutional neural networks -to assert the influence of the complexity of the network over post-training:• a small network, with one convolutional layer (5 × 5 patches, 32 channels), one 2 × 2 max pooling layer, and one fully connected hidden layer with 512 neurons,• a large network, with one convolutional layer (5 × 5 patches, 32 channels), one 2 × 2 max pooling layer, one convolutional layer (5 × 5 patches, 64 channels), one 2 × 2 max pooling layer and one fully connected hidden layer with 1024 neurons. We use dropout for the regularization, and set λ = 1 × 10 −2. We compare the performance gain ing of the application of post-training (100 iterations) at different epochs of each of these networks. 
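For reference, the two architectures can be sketched as follows; padding, strides, the 28×28 single-channel input resolution, and the final classification layer are assumptions not fully specified above.

```python
import torch.nn as nn

def small_net(n_classes=10):
    # one conv layer (5x5, 32 channels), 2x2 max pooling, 512-unit hidden layer
    return nn.Sequential(
        nn.Conv2d(1, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(32 * 14 * 14, 512), nn.ReLU(),
        nn.Linear(512, n_classes),
    )

def large_net(n_classes=10):
    # two conv layers (32 then 64 channels), each followed by 2x2 max pooling,
    # then a 1024-unit hidden layer
    return nn.Sequential(
        nn.Conv2d(1, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(32, 64, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(64 * 7 * 7, 1024), nn.ReLU(),
        nn.Linear(1024, n_classes),
    )
```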
The are reported in TAB0.As seen in TAB0, post-training improves the test performance of the networks with as little as 100 iterations -which is negligible compared to the time required to train the network. While the improvement varies depending on the complexity of the network, of the data set, and of the time spent training the network, it is important to remark that it always provides an improvement. While the kernel framework developed in Section 2 does not apply directly to Recurrent Neural Network, the idea of post-training can still be applied. In this experiment, we test the performances of post-training on Long Short-Term Memory-based networks (LSTM), using PTB data set BID14.Penn Tree Bank (PTB). This data set is composed of 929k training words and 82k test word, with a 10000 words vocabulary. We train a recurrent neural network to predict the next word given the word history. We use the architecture proposed by , composed of 2 layers of 1500 LSTM units with tanh activation, followed by a fully connected softmax layer. The network is trained to minimize the average per-word perplexity for 100 epochs, with batches of size 20, using gradient descent, an exponential weight decay for the learning rate, and dropout for regularization. The performances of the network after each epoch are compared to the obtained if the 100 last steps (i.e. 100 batches) are done using post-training. The regularization parameter for posttraining, λ, is set to 1 × 10 −3. The are reported in Figure 4, which presents the evolution of the training and testing perplexity. Similarly to the previous experiments, post-training improves the test performance of the networks, even after the network has converged. In this subsection we aim to empirically evaluate the close-form solution discussed in Section 2 for regression tasks. We set the activation function of the last layer to be the identity f L (x) = x, and consider the loss function to be the least-squared error (x, y) = x − y 2 2 in. In in each experiment, and FORMULA9 are used to compute W * for the kernel learned after the regular training of Test Perplexity (-80.5) Figure 4: Evolution of the performances of the Recurrent network on the PTB data set. The top figure presents the train perplexity and the bottom figure displays the test perplexity. For the posttraining, the value of the curve at iteration q is the error for a network trained for q − 100 iterations with the regular training strategy and then trained for 100 iterations with post-training.the neural network, which learn the embedding Φ L−1 and an estimate W L. In order to illustrate this , and to compare the performances of the weights W * with respect to the weights W L, learned either with usual learning strategies or with post-training, we train a neural network on two regression problems using a real and a synthetic data set. 70% of the data are used for training, and 30% for testing. Real Data Set Regression. For this experiment, we use the Parkinson Telemonitoring data set BID21. The input consists in 5, 875 instances of 17 dimensional data, and the output are one dimensional real number. For this data set, a neural network made of two fully connected hidden layers of size 17 and 10 with respectively tanh and RELU activation, is trained for 250, 500 and 750 iterations, with batches of size 50. The layer weights are all regularized with the 2 -norm and a fixed regularization parameter λ = 10 −3. 
Then, starting from each of the trained networks, 200 iterations of post-training are used with the same regularization parameter λ and the performances are compared to the closed-form solutions computed using for each saved network. The are presented in TAB1.Simulated Data Set Regression. For this experiment, we use a synthetic data set. The inputs were generated using a uniform distribution on 0, 1 10. The outputs are computed as follows: DISPLAYFORM0 where W 1 ∈ −1, 1 10×5 and W 2 ∈ −1, 1 5 are randomly generated using a uniform law. In total, the data set is composed of 10, 000 pairs (x i, y j). For this data set, a neural network with two fully connected hidden layers of size 10 with activation tanh for the first layer and RELU for the second layer is trained for 250, 500 and 750 iterations, with batches of size 50. We use the same protocol with 200 extra post-training iterations. The are presented in TAB1.For these two experiments, the post-training improves the performances toward these of the optimal solution, for several choices of stopping times. It is worth noting that the performance of the optimal solution is better when the first layers are not fully optimized with Parkinson Telemonitoring data set. This effect denotes an overfitting tendency with the full training, where the first layers become overly specified for the training set. The experiments presented in Section 4 show that post-training improves the performances of all the networks considered -including recurrent, convolutional and fully connected networks. The gain is significant, regardless of the time at which the regular training is stopped and the posttraining is done. In both the CIFAR10 and the PTB experiment, the gap between the losses with and without post-training is more pronounced if the training is stopped early, and tends to be smaller as the network converges to a better solution (see Figure 4 and Figure 3). The reduction of the gap between the test performances with and without post-training is made clear in TAB0. For the MNIST data set, with a small-size convolutional neural network, while the error rate drops by 1.5% when post-training is applied after 5000 iterations, this same error rate only drops by 0.2% when it is applied after 20000 iterations. This same observation can be done for the other reported in TAB0. However, while the improvement is larger when the network did not fully converge prior to the post-training, it is still significant when the network has reached its minimum: for example in PTB the final test perplexity is 81.7 with post-training and 82.4 without; in CIFAR10 the errors are respectively 0.147 and 0.154.If the networks are allowed to moderately overfit, for instance by training them with regular algorithm for a very large number of iterations, the advantage provided by post-training vanishes: for example in PTB the test perplexity after 2000 iterations (instead of 400) is 83.2 regardless of posttraining. This is coherent with the intuition behind the post-training: after overfitting, the features learned by the network become less appropriate to the general problem, therefore their optimal usage obtained by post-training no longer provide an advantage. It is important to note that the post-training computational cost is very low compared to the full training computations. For instance, in the CIFAR10 experiment, each iteration for post-training is 4× faster on the same GPU than an iteration using the full gradient. 
Also, in the different experiments, post-training produces a performance gap after using as little as 100 batches. There are multiple reasons behind this efficiency: first, the system reaches a local minimum relatively rapidly for post-training as the problem FORMULA3 has a small number of parameters compared to the dimensionality of the original optimization problem. Second, the iterations used for the resolution of are computationally cheaper, as there is no need to chain high dimensional linear operations, contrarily to regular backpropagation used during the training phase. Finally, since the post-training optimization problem is generally convex, the optimization is guaranteed to converge rapidly to the optimal weights for the last layer. Another interesting point is that there is no evidence that the post-training step leads to overfitting. In CIFAR10, the test error is improved by the use of post-training, although the training loss is similar. The other experiments do not show signs of overfitting either as the test error is mostly improved by our additional step. This stems from the fact that the post-training optimization is much simpler than the original problem as it lies in a small-dimensional space -which, combined with the added 2 -regularization, efficiently prevents overfitting. The regularization parameter λ plays an important role in post-training. Setting λ to be very large reduces the explanatory capacity of the networks whereas if λ is too small, the capacity can become too large and lead to overfitting. Overall, our experiments highlighted that the post-training produces significant for any choice of λ reasonably small (i.e 10 −5 ≤ λ ≤ 10 −2). This parameter is linked to the regularization parameter of the kernel methods, as stated in Section 3.Overall, these show that the post-training step can be applied to most trained networks, without prerequisites about how optimized they are since post-training does not degrade their performances, providing a consistent gain in performances for a very low additional computational cost. In Subsection 4.3, numerical experiments highlight the link between post-training and kernel methods. As illustrated in TAB1, using the optimal weights derived from kernel theory immediately a performance boost for the considered network. The post-training step estimate numerically this optimal layer with the gradient descent optimizer. However, computing the optimal weights for the last layer is only achievable for small data set due to the required matrix inversion. Moreover, the closed form solution is known only for specific problems, such as kernelized least square regression. But post-training approaches the same performance in these cases solving with gradient-based methods. The post-training can be linked very naturally to the idea of pre-training, developed notably by BID12, and BID5. The unsupervised pre-training of a layer is designed to find a representation that captures enough information from the data to be able to reconstruct it from its embedding. The goal is thus to find suitable parametrization of the general layers to extract good features, summarizing the data. Conversely, the goal of the posttraining is, given a representation, to find the best parametrization of the last layer to discriminate the data. These two steps, in contrast with the usual training, focus on respectively the general or specific layers. 
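The closed-form last-layer solution mentioned above -- available for kernelized least-squares regression but requiring a matrix inversion -- can be sketched as a ridge regression on the frozen penultimate-layer features. The snippet below is an illustration rather than the paper's exact formula: the regularization scaling (λ·n·I) and the helper names are assumptions, with `phi` standing for the learned embedding Φ_{L−1}.

```python
import numpy as np

def last_layer_closed_form(phi_train, y_train, lam=1e-3):
    """Closed-form (ridge) weights W* for the last linear layer.

    phi_train : (n, d) features from the frozen lower layers (Phi_{L-1})
    y_train   : (n, k) regression targets
    lam       : l2 regularization strength (assumed scaling lam * n * I)
    """
    n, d = phi_train.shape
    gram = phi_train.T @ phi_train + lam * n * np.eye(d)
    return np.linalg.solve(gram, phi_train.T @ y_train)      # (d, k)

def mse(phi, y, w):
    return float(np.mean(np.sum((phi @ w - y) ** 2, axis=1)))

# Hypothetical usage: phi_* would be penultimate-layer activations of a trained net.
rng = np.random.default_rng(0)
phi_tr, y_tr = rng.normal(size=(700, 10)), rng.normal(size=(700, 1))
phi_te, y_te = rng.normal(size=(300, 10)), rng.normal(size=(300, 1))
w_star = last_layer_closed_form(phi_tr, y_tr)
print("test MSE with closed-form last layer:", mse(phi_te, y_te, w_star))
```

The d × d system solved here is exactly the matrix inversion that becomes impractical for large data sets or feature dimensions, which is why the gradient-based post-training step is used instead.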
In this work, we studied the concept of post-training, an additional step performed after the regular training, where only the last layer is trained. This step is intended to take full advantage of the data representation learned by the network. We empirically showed that post-training is computationally inexpensive and provides a non-negligible increase of performance on most neural network structures. While we chose to focus on post-training solely the last layer -- as it is the most specific layer in the network and the resulting problem is strongly convex under reasonable prerequisites -- the relationship between the number of layers frozen in the post-training and the resulting improvements might be an interesting direction for future work.

Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization. arXiv preprint, arXiv:1409, 2014.

We show here, for the sake of completeness, that the post-training problem is convex for the softmax activation in the last layer and the cross-entropy loss. This is proved by showing that the Hessian of the function is positive semidefinite, as it is a diagonally dominant matrix.

Proposition 2 (convexity). For all N, M ∈ ℕ, X ∈ ℝ^N, and j ∈ {1, ..., M}, the following function F is convex in W ∈ ℝ^{M×N}:

F(W) = − Σ_{i=1}^{M} δ_{ij} log( exp(XW_i) / Σ_{m=1}^{M} exp(XW_m) ),

where δ is the Dirac function and W_i denotes the i-th row of W.

Proof 1. Let P_i(W) = exp(XW_i) / Σ_{m=1}^{M} exp(XW_m), so that F(W) = − Σ_i δ_{ij} log P_i(W) = log Σ_m exp(XW_m) − XW_j. Differentiating twice, we have

∂²F / (∂W_{m,n} ∂W_{p,q}) = x_n ∂P_m/∂W_{p,q} = x_n x_q P_m(W)(δ_{m,p} − P_p(W)),

so the Hessian can be written as XX^T ⊗ P(W), where ⊗ is the Kronecker product and the matrix P(W) is defined by P_{m,p} = P_m(W)(δ_{m,p} − P_p(W)). Now since 0 ≤ P_m(W) ≤ 1 for all 1 ≤ m ≤ M and Σ_p P_p(W) = 1, each diagonal entry satisfies P_{m,m} = P_m(W)(1 − P_m(W)) = Σ_{p≠m} |P_{m,p}|, so P(W) is diagonally dominant with nonnegative diagonal and thus positive semidefinite. Since XX^T is positive semidefinite too, their Kronecker product is also positive semidefinite, hence the result.
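As a quick numerical companion to Proposition 2, one can check with automatic differentiation that the Hessian of this last-layer loss has no negative eigenvalues at randomly drawn points. The dimensions and target index below are arbitrary choices for illustration.

```python
import torch

M, N = 5, 7            # number of classes and feature dimension (arbitrary)
x = torch.randn(N)     # a single feature vector X
j = 2                  # arbitrary target class index

def F(W_flat):
    W = W_flat.view(M, N)
    logits = W @ x                          # XW_i for every row i
    return -torch.log_softmax(logits, dim=0)[j]

W0 = torch.randn(M * N)
H = torch.autograd.functional.hessian(F, W0)
eigvals = torch.linalg.eigvalsh(H)
print("smallest Hessian eigenvalue:", eigvals.min().item())  # ~0 up to numerical error
```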
We propose an additional training step, called post-training, which computes optimal weights for the last layer of the network.
Natural language processing (NLP) models often require a massive number of parameters for word embeddings, ing in a large storage or memory footprint. Deploying neural NLP models to mobile devices requires compressing the word embeddings without any significant sacrifices in performance. For this purpose, we propose to construct the embeddings with few basis vectors. For each word, the composition of basis vectors is determined by a hash code. To maximize the compression rate, we adopt the multi-codebook quantization approach instead of binary coding scheme. Each code is composed of multiple discrete numbers, such as, where the value of each component is limited to a fixed range. We propose to directly learn the discrete codes in an end-to-end neural network by applying the Gumbel-softmax trick. Experiments show the compression rate achieves 98% in a sentiment analysis task and 94% ~ 99% in machine translation tasks without performance loss. In both tasks, the proposed method can improve the model performance by slightly lowering the compression rate. Compared to other approaches such as character-level segmentation, the proposed method is language-independent and does not require modifications to the network architecture. Word embeddings play an important role in neural-based natural language processing (NLP) models. Neural word embeddings encapsulate the linguistic information of words in continuous vectors. However, as each word is assigned an independent embedding vector, the number of parameters in the embedding matrix can be huge. For example, when each embedding has 500 dimensions, the network has to hold 100M embedding parameters to represent 200K words. In practice, for a simple sentiment analysis model, the word embedding parameters account for 98.8% of the total parameters. As only a small portion of the word embeddings is selected in the forward pass, the giant embedding matrix usually does not cause a speed issue. However, the massive number of parameters in the neural network in a large storage or memory footprint. When other components of the neural network are also large, the model may fail to fit into GPU memory during training. Moreover, as the demand for low-latency neural computation for mobile platforms increases, some neural-based models are expected to run on mobile devices. Thus, it is becoming more important to compress the size of NLP models for deployment to devices with limited memory or storage capacity. In this study, we attempt to reduce the number of parameters used in word embeddings without hurting the model performance. Neural networks are known for the significant redundancy in the connections BID8. In this work, we further hypothesize that learning independent embeddings causes more redundancy in the embedding vectors, as the inter-similarity among words is ignored. Some words are very similar regarding the semantics. For example, "dog" and "dogs" have almost the same meaning, except one is plural. To efficiently represent these two words, it is desirable to share information between the two embeddings. However, a small portion in both vectors still has to be trained independently to capture the syntactic difference. Following the intuition of creating partially shared embeddings, instead of assigning each word a unique ID, we represent each word w with a code C w = (C an integer number in [1, K]. Ideally, similar words should have similar codes. For example, we may desire C dog = and C dogs =. 
Once we have obtained such compact codes for all words in the vocabulary, we use embedding vectors to represent the codes rather than the unique words. More specifically, we create M codebooks E 1, E 2,..., E M, each containing K codeword vectors. The embedding of a word is computed by summing up the codewords corresponding to all the components in the code as DISPLAYFORM0 where DISPLAYFORM1 w -th codeword in the codebook E i. In this way, the number of vectors in the embedding matrix will be M × K, which is usually much smaller than the vocabulary size. FIG0 gives an intuitive comparison between the compositional approach and the conventional approach (assigning unique IDs). The codes of all the words can be stored in an integer matrix, denoted by C. Thus, the storage footprint of the embedding layer now depends on the total size of the combined codebook E and the code matrix C.Although the number of embedding vectors can be greatly reduced by using such coding approach, we want to prevent any serious degradation in performance compared to the models using normal embeddings. In other words, given a set of baseline word embeddingsẼ(w), we wish to find a set of codesĈ and combined codebookÊ that can produce the embeddings with the same effectiveness asẼ(w). A safe and straight-forward way is to minimize the squared distance between the baseline embeddings and the composed embeddings as DISPLAYFORM2 where |V | is the vocabulary size. The baseline embeddings can be a set of pre-trained vectors such as word2vec BID29 or GloVe BID34 embeddings. In Eq. 3, the baseline embedding matrixẼ is approximated by M codewords selected from M codebooks. The selection of codewords is controlled by the code C w. Such problem of learning compact codes with multiple codebooks is formalized and discussed in the research field of compressionbased source coding, known as product quantization BID16 and additive quantization BID1 BID28. Previous works learn compositional codes so as to enable an efficient similarity search of vectors. In this work, we utilize such codes for a different purpose, that is, constructing word embeddings with drastically fewer parameters. Due to the discreteness in the hash codes, it is usually difficult to directly optimize the objective function in Eq. 3. In this paper, we propose a simple and straight-forward method to learn the codes in an end-to-end neural network. We utilize the Gumbel-softmax trick BID27 BID15 to find the best discrete codes that minimize the loss. Besides the simplicity, this approach also allows one to use any arbitrary differentiable loss function, such as cosine similarity. The contribution of this work can be summarized as follows:• We propose to utilize the compositional coding approach for constructing the word embeddings with significantly fewer parameters. In the experiments, we show that over 98% of the embedding parameters can be eliminated in sentiment analysis task without affecting performance. In machine translation tasks, the loss-free compression rate reaches 94% ∼ 99%.• We propose a direct learning approach for the codes in an end-to-end neural network, with a Gumbel-softmax layer to encourage the discreteness.• The neural network for learning codes will be packaged into a tool 1. With the learned codes and basis vectors, the computation graph for composing embeddings is fairly easy to implement, and does not require modifications to other parts in the neural network. 
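As noted in the last contribution above, the composition of embeddings from learned codes is straightforward to implement. The sketch below is a minimal PyTorch illustration of E(C_w) = Σ_i E_i[C_w^i]; the class name, sizes (M = 32, K = 16, 300 dimensions, a 75K vocabulary) and the assumption that the codes have already been learned are ours.

```python
import torch
import torch.nn as nn

class CompositionalEmbedding(nn.Module):
    """Compose each word vector from M codebooks: E(C_w) = sum_i E_i[C_w^i]."""

    def __init__(self, codes: torch.LongTensor, M=32, K=16, dim=300):
        super().__init__()
        # codes: (|V|, M) integer matrix with entries in [0, K)
        self.register_buffer("codes", codes)
        # One parameter matrix holding M codebooks of K codewords each.
        self.codebooks = nn.Embedding(M * K, dim)
        self.register_buffer("offsets", torch.arange(M) * K)   # shift into codebook i

    def forward(self, word_ids):                    # word_ids: (batch, seq)
        idx = self.codes[word_ids] + self.offsets   # (batch, seq, M)
        return self.codebooks(idx).sum(dim=-2)      # (batch, seq, dim)

# Hypothetical usage with random codes for a 75K-word vocabulary:
codes = torch.randint(0, 16, (75_000, 32))
emb = CompositionalEmbedding(codes, M=32, K=16, dim=300)
vectors = emb(torch.tensor([[1, 42, 7]]))           # (1, 3, 300)
```

The embedding layer then stores only M × K = 512 real-valued vectors plus |V| × M small integers (M log2 K = 128 bits per word for a 32 × 16 scheme), instead of |V| full vectors.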
Existing works for compressing neural networks include low-precision computation BID38 BID14 BID7 BID0, quantization BID6 BID10 BID44, network pruning BID22 BID11 BID9 BID40 and knowledge distillation BID13. Network quantization such as HashedNet BID6 forces the weight matrix to have few real weights, with a hash function to determine the weight assignment. To capture the non-uniform nature of the networks, DeepCompression BID10 groups weight values into clusters based on pre-trained weight matrices. The weight assignment for each value is stored in the form of Huffman codes. However, as the embedding matrix is tremendously big, the number of hash codes a model need to maintain is still large even with Huffman coding. Network pruning works in a different way that makes a network sparse. Iterative pruning BID9 prunes a weight value if its absolute value is smaller than a threshold. The remaining network weights are retrained after pruning. Some recent works BID36 BID43 ) also apply iterative pruning to prune 80% of the connections for neural machine translation models. In this paper, we compare the proposed method with iterative pruning. The problem of learning compact codes considered in this paper is closely related to learning to hash BID39 BID21 BID25, which aims to learn the hash codes for vectors to facilitate the approximate nearest neighbor search. Initiated by product quantization BID16, subsequent works such as additive quantization BID1 explore the use of multiple codebooks for source coding, ing in compositional codes. We also adopt the coding scheme of additive quantization for its storage efficiency. Previous works mainly focus on performing efficient similarity search of image descriptors. In this work, we put more focus on reducing the codebook sizes and learning efficient codes to avoid performance loss. BID17 utilizes an improved version of product quantization to compress text classification models. However, to match the baseline performance, much longer hash codes are required by product quantization. This will be detailed in Section 5.2. Concurrent to this work, also explores the similar idea and obtained positive in language modeling tasks. Also, BID35 tried to reduce dimension of embeddings using PCA.To learn the codebooks and code assignment, additive quantization alternatively optimizes the codebooks and the discrete codes. The learning of code assignment is performed by Beam Search algorithm when the codebooks are fixed. In this work, we propose a straight-forward method to directly learn the code assignment and codebooks simutaneously in an end-to-end neural network. Some recent works BID41 BID24 BID42 in learning to hash also utilize neural networks to produce binary codes by applying binary constrains (e.g., sigmoid function). In this work, we encourage the discreteness with the Gumbel-Softmax trick for producing compositional codes. As an alternative to our approach, one can also reduce the number of unique word types by forcing a character-level segmentation. BID18 proposed a character-based neural language model, which applies a convolutional layer after the character embeddings. BID3 propose to use char-gram as input features, which are further hashed to save space. Generally, using characterlevel inputs requires modifications to the model architecture. Moreover, some Asian languages such as Japanese and Chinese retain a large vocabulary at the character level, which makes the character-based approach difficult to be applied. 
In contrast, our approach does not suffer from these limitations. In this section, we formally describe the compositional coding approach and analyze its merits for compressing word embeddings. The coding approach follows the scheme in additive quantization BID1. We represent each word w with a compact code C w that is composed of M components such that DISPLAYFORM0, which also indicates that M log 2 K bits are required to store each code. For convenience, K is selected to be a number of a multiple of 2, so that the codes can be efficiently stored. If we restrict each component C i w to values of 0 or 1, the code for each word C w will be a binary code. In this case, the code learning problem is equivalent to a matrix factorization problem with binary components. Forcing the compact codes to be binary numbers can be beneficial, as the learning problem is usually easier to solve in the binary case, and some existing optimization algorithms in learning to hash can be reused. However, the compositional coding approach produces shorter codes and is thus more storage efficient. As the number of basis vectors is M × K regardless of the vocabulary size, the only uncertain factor contributing to the model size is the size of the hash codes, which is proportional to the vocabulary size. Therefore, maintaining short codes is cruicial in our work. Suppose we wish the model to have a set of N basis vectors. Then in the binary case, each code will have N/2 bits. For the compositional coding approach, if we can find a M × K decomposition such that M × K = N, then each code will have M log 2 K bits. For example, a binary code will have a length of 256 bits to support 512 basis vectors. In contrast, a 32 × 16 compositional coding scheme will produce codes of only 128 bits. To support N basis vectors, a binary code will have N/2 bits and the embedding computation is a summation over N/2 vectors. For the compositional approach with M codebooks and K codewords in each codebook, each code has M log 2 K bits, and the computation is a summation over M vectors. A comparison of different coding approaches is summarized in TAB0. We also report the number of basis vectors required to compute an embedding as a measure of computational cost. For the conventional approach, the number of vectors is identical to the vocabulary size and the computation is basically a single indexing operation. In the case of binary codes, the computation for constructing an embedding involves a summation over N/2 basis vectors. For the compositional approach, the number of vectors required to construct an embedding vector is M. Both the binary and compositional approaches have significantly fewer vectors in the embedding matrix. The compositional coding approach provides a better balance with shorter codes and lower computational cost. LetẼ ∈ R |V |×H be the original embedding matrix, where each embedding vector has H dimensions. By using the reconstruction loss as the objective function in Eq. 3, we are actually finding DISPLAYFORM0 Therefore, the problem of learning discrete codes C w can be converted to a problem of finding a set of optimal one-hot vectors d DISPLAYFORM1 where G k is a noise term that is sampled from the Gumbel distribution − log(− log(Uniform)), whereas τ is the temperature of the softmax. In our model, the vector α i w is computed by a simple neural network with a single hidden layer as DISPLAYFORM2 DISPLAYFORM3 In our experiments, the hidden layer h w always has a size of M K/2. 
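A condensed sketch of this code-learning network is given below: a small encoder produces the logits α for each of the M components, the Gumbel-softmax yields near-one-hot vectors d, and the reconstruction is a sum of selected codewords. The layer choices (tanh hidden activation), initialization, and variable names are our own assumptions; the remaining hyperparameters, such as the temperature, are discussed next.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CodeLearner(nn.Module):
    """Auto-encoder that learns M discrete codes (K values each) per embedding."""

    def __init__(self, dim=300, M=32, K=16):
        super().__init__()
        self.M, self.K = M, K
        self.encode = nn.Sequential(
            nn.Linear(dim, M * K // 2), nn.Tanh(),        # hidden layer of size MK/2
            nn.Linear(M * K // 2, M * K),
        )
        self.codebooks = nn.Parameter(0.01 * torch.randn(M, K, dim))   # A_1..A_M

    def forward(self, e, tau=1.0):
        logits = self.encode(e).view(-1, self.M, self.K)
        d = F.gumbel_softmax(logits, tau=tau, hard=False, dim=-1)   # soft one-hot d_i^w
        recon = torch.einsum("bmk,mkd->bd", d, self.codebooks)      # sum of codewords
        return recon, logits

    def codes(self, e):
        """Discrete codes C_w obtained by argmax after training."""
        return self.encode(e).view(-1, self.M, self.K).argmax(dim=-1)

# Training sketch: minimize the squared reconstruction error of the baseline embeddings.
# model = CodeLearner(); opt = torch.optim.Adam(model.parameters(), lr=1e-4)
# recon, _ = model(batch_of_baseline_embeddings)
# loss = 0.5 * ((recon - batch_of_baseline_embeddings) ** 2).sum(dim=1).mean()
# loss.backward(); opt.step()
```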
We found that a fixed temperature of τ = 1 just works well. The Gumbel-softmax trick is applied to α i w to obtain d i w. Then, the model reconstructs the embedding E(C w) with Eq. 5 and computes the reconstruction loss with Eq. 3. The model architecture of the end-to-end neural network is illustrated in FIG1, which is effectively an auto-encoder with a Gumbel-softmax middle layer. The whole neural network for coding learning has five parameters (θ, b, θ, b, A).Once the coding learning model is trained, the code C w for each word can be easily obtained by applying argmax to the one-hot vectors d For general NLP tasks, one can learn the compositional codes from publicly available word vectors such as GloVe vectors. However, for some tasks such as machine translation, the word embeddings are usually jointly learned with other parts of the neural network. For such tasks, one has to first train a normal model to obtain the baseline embeddings. Then, based on the trained embedding matrix, one can learn a set of task-specific codes. As the reconstructed embeddings E(C w) are not identical to the original embeddingsẼ(w), the model parameters other than the embedding matrix have to be retrained again. The code learning model cannot be jointly trained with the machine translation model as it takes far more iterations for the coding layer to converge to one-hot vectors. In our experiments, we focus on evaluating the maximum loss-free compression rate of word embeddings on two typical NLP tasks: sentiment analysis and machine translation. We compare the model performance and the size of embedding layer with the baseline model and the iterative pruning method BID9. Please note that the sizes of other parts in the neural networks are not included in our . For dense matrices, we report the size of dumped numpy arrays. For the sparse matrices, we report the size of dumped compressed sparse column matrices (csc matrix) in scipy. All float numbers take 32 bits storage. We enable the "compressed" option when dumping the matrices, without this option, the file size is about 1.1 times bigger. To learn efficient compact codes for each word, our proposed method requires a set of baseline embedding vectors. For the sentiment analysis task, we learn the codes based on the publicly available GloVe vectors. For the machine translation task, we first train a normal neural machine translation (NMT) model to obtain task-specific word embeddings. Then we learn the codes using the pre-trained embeddings. We train the end-to-end network described in Section 4 to learn the codes automatically. In each iteration, a small batch of the embeddings is sampled uniformly from the baseline embedding matrix. The network parameters are optimized to minimize the reconstruction loss of the sampled embeddings. In our experiments, the batch size is set to 128. We use Adam optimizer BID19 ) with a fixed learning rate of 0.0001. The training is run for 200K iterations. Every 1,000 iterations, we examine the loss on a fixed validation set and save the parameters if the loss decreases. We evenly distribute the model training to 4 GPUs using the nccl package, so that one round of code learning takes around 15 minutes to complete. Dataset: For sentiment analysis, we use a standard separation of IMDB movie review dataset BID26, which contains 25k reviews for training and 25K reviews for testing purpose. We lowercase and tokenize all texts with the nltk package. 
We choose the 300-dimensional uncased GloVe word vectors (trained on 42B tokens of Common Crawl data) as our baseline embeddings. The vocabulary for the model training contains all words appears both in the IMDB dataset and the GloVe vocabulary, which in around 75K words. We truncate the texts of reviews to assure they are not longer than 400 words. Model architecture: Both the baseline model and the compressed models have the same computational graph except the embedding layer. The model is composed of a single LSTM layer with 150 hidden units and a softmax layer for predicting the binary label. For the baseline model, the embedding layer contains a large 75K × 300 embedding matrix initialized by GloVe embeddings. For the compressed models based on the compositional coding, the embedding layer maintains a matrix of basis vectors. Suppose we use a 32 × 16 coding scheme, the basis matrix will then have a shape of 512 × 300, which is initialized by the concatenated weight matrices [A 1 ; A 2 ; ...; A M] in the code learning model. The embedding parameters for both models remain fixed during the training. For the models with network pruning, the sparse embedding matrix is finetuned during training. Training details: The models are trained with Adam optimizer for 15 epochs with a fixed learning rate of 0.0001. At the end of each epoch, we evaluate the loss on a small validation set. The parameters with lowest validation loss are saved. Results: For different settings of the number of components M and the number of codewords K, we train the code learning network. The average reconstruction loss on a fixed validation set is summarized in the left of TAB2. For reference, we also report the total size (MB) of the embedding layer in the right table, which includes the sizes of the basis matrix and the hash table. We can see that increasing either M or K can effectively decrease the reconstruction loss. However, setting M to a large number will in longer hash codes, thus significantly increase the size of the embedding layer. Hence, it is important to choose correct numbers for M and K to balance the performance and model size. We also show the using normalized product quantization (NPQ) BID17. We quantize the filtered GloVe embeddings with the codes provided by the authors, and train the models based on the quantized embeddings. To make the comparable, we report the codebook size in numpy format. For our proposed methods, the maximum loss-free compression rate is achieved by a 16 × 32 coding scheme. In this case, the total size of the embedding layer is 1.23 MB, which is equivalent to a compression rate of 98.4%. We also found the classification accuracy can be substantially improved with a slightly lower compression rate. The improved model performance may be a byproduct of the strong regularization. Dataset: For machine translation tasks, we experiment on IWSLT 2014 German-to-English translation task BID4 and ASPEC English-to-Japanese translation task BID31. The IWSLT14 training data contains 178K sentence pairs, which is a small dataset for machine translation. We utilize moses toolkit BID20 to tokenize and lowercase both sides of the texts. Then we concatenate all five TED/TEDx development and test corpus to form a test set containing 6750 sentence pairs. We apply byte-pair encoding BID37 to transform the texts to subword level so that the vocabulary has a size of 20K for each language. 
For evaluation, we report tokenized BLEU using "multi-bleu.perl".The ASPEC dataset contains 300M bilingual pairs in the training data with the automatically estimated quality scores provided for each pair. We only use the first 150M pairs for training the models. The English texts are tokenized by moses toolkit whereas the Japanese texts are tokenized by kytea BID33. The vocabulary size for each language is reduced to 40K using byte-pair encoding. The evaluation is performed using a standard kytea-based post-processing script for this dataset. Model architecture: In our preliminary experiments, we found a 32 × 16 coding works well for a vanilla NMT model. As it is more meaningful to test on a high-performance model, we applied several techniques to improve the performance. The model has a standard bi-directional encoder composed of two LSTM layers similar to BID2. The decoder contains two LSTM layers. Residual connection BID12 with a scaling factor of 1/2 is applied to the two decoder states to compute the outputs. All LSTMs and embeddings have 256 hidden units in the IWSLT14 task and 1000 hidden units in ASPEC task. The decoder states are firstly linearly transformed to 600-dimensional vectors before computing the final softmax. Dropout with a rate of 0.2 is applied everywhere except the recurrent computation. We apply Key-Value Attention BID30 to the first decoder, where the query is the sum of the feedback embedding and the previous decoder state and the keys are computed by linear transformation of encoder states. Training details: All models are trained by Nesterov's accelerated gradient BID32 with an initial learning rate of 0.25. We evaluate the smoothed BLEU BID23 ) on a validation set composed of 50 batches every 7,000 iterations. The learning rate is reduced by a factor of 10 if no improvement is observed in 3 validation runs. The training ends after the learning rate is reduced three times. Similar to the code learning, the training is distributed to 4 GPUs, each GPU computes a mini-batch of 16 samples. We firstly train a baseline NMT model to obtain the task-specific embeddings for all in-vocabulary words in both languages. Then based on these baseline embeddings, we obtain the hash codes and basis vectors by training the code learning model. Finally, the NMT models using compositional coding are retrained by plugging in the reconstructed embeddings. Note that the embedding layer is fixed in this phase, other parameters are retrained from random initial values. The experimental are summarized in TAB5. All translations are decoded by the beam search with a beam size of 5. The performance of iterative pruning varies between tasks. The loss-free compression rate reaches 92% on ASPEC dataset by pruning 90% of the connections. However, with the same pruning ratio, a modest performance loss is observed in IWSLT14 dataset. For the models using compositional coding, the loss-free compression rate is 94% for the IWSLT14 dataset and 99% for the ASPEC dataset. Similar to the sentiment analysis task, a significant performance improvement can be observed by slightly lowering the compression rate. Note that the sizes of NMT models are still quite large due to the big softmax layer and the recurrent layers, which are not reported in the table. Please refer to existing works such as BID43 TAB6, we show some examples of learned codes based on the 300-dimensional uncased GloVe embeddings used in the sentiment analysis task. 
We can see that the model learned to assign similar codes to the words with similar meanings. Such a code-sharing mechanism can significantly reduce the redundancy of the word embeddings, thus helping to achieve a high compression rate. Besides the performance, we also care about the storage efficiency of the codes. In the ideal situation, all codewords shall be fully utilized to convey a fraction of meaning. However, as the codes are automatically learned, it is possible that some codewords are abandoned during the training. In extreme cases, some "dead" codewords can be used by none of the words. To analyze the code efficiency, we count the number of words that contain a specific subcode in each component. FIG5 gives a visualization of the code balance for three coding schemes. Each column shows the counts of the subcodes of a specific component. In our experiments, when using a 8 × 8 coding scheme, we found 31% of the words have a subcode "0" for the first component, while the subcode "1" is only used by 5% of the words. The assignment of codes is more balanced for larger coding schemes. In any coding scheme, even the most unpopular codeword is used by about 1000 words. This indicates that the code learning model is capable of assigning codes efficiently without wasting a codeword. The show that any codeword is assigned to more than 1000 words without wasting. In this work, we propose a novel method for reducing the number of parameters required in word embeddings. Instead of assigning each unique word an embedding vector, we compose the embedding vectors using a small set of basis vectors. The selection of basis vectors is governed by the hash code of each word. We apply the compositional coding approach to maximize the storage efficiency. The proposed method works by eliminating the redundancy inherent in representing similar words with independent embeddings. In our work, we propose a simple way to directly learn the discrete codes in a neural network with Gumbel-softmax trick. The show that the size of the embedding layer was reduced by 98% in IMDB sentiment analysis task and 94% ∼ 99% in machine translation tasks without affecting the performance. Our approach achieves a high loss-free compression rate by considering the semantic inter-similarity among different words. In qualitative analysis, we found the learned codes of similar words are very close in Hamming space. As our approach maintains a dense basis matrix, it has the potential to be further compressed by applying pruning techniques to the dense matrix. The advantage of compositional coding approach will be more significant if the size of embedding layer is dominated by the hash codes. A APPENDIX: SHARED CODESIn both tasks, when we use a small code decomposition, we found some hash codes are assigned to multiple words. Table 6 lists some samples of shared codes with their corresponding words from the sentiment analysis task. This phenomenon does not cause a problem in either task, as the words only have shared codes when they have almost the same sentiments or target translations.shared code words 4 7 7 0 4 7 1 1 homes cruises motel hotel resorts mall vacations hotels 6 6 7 1 4 0 2 0 basketball softball nfl nascar baseball defensive ncaa tackle nba 3 7 3 2 4 3 3 0 unfortunately hardly obviously enough supposed seem totally... 4 6 7 0 4 7 5 0 toronto oakland phoenix miami sacramento denver minneapolis... 
7 7 6 6 7 3 0 0: yo ya dig lol dat lil bye

Table 6: Examples of words sharing the same codes when using an 8 × 8 code decomposition.

B APPENDIX: SEMANTICS OF CODES. In order to see whether each component captures semantic meaning, we learned a set of codes using a 3 × 256 coding scheme, which forces the model to decompose each embedding into 3 vectors. In order to maximize the compression rate, the model must make these 3 vectors as independent as possible. Table 7: Some code examples using a 3 × 256 coding scheme. As we can see from Table 7, we can transform "man/king" to "woman/queen" by changing the subcode "210" in the first component to "232". So we can think of "210" as a "male" code and "232" as a "female" code. The same phenomenon can also be observed for other words, such as city names.
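A small helper like the one below can reproduce the kind of analysis in Tables 6 and 7 by grouping vocabulary entries whose learned codes are identical; the file name and variable names are illustrative only.

```python
from collections import defaultdict
import numpy as np

def shared_code_groups(codes: np.ndarray, vocab: list, min_size=2):
    """Group words whose learned codes (rows of `codes`) are exactly equal."""
    groups = defaultdict(list)
    for word, code in zip(vocab, codes):
        groups[tuple(code.tolist())].append(word)
    return {c: ws for c, ws in groups.items() if len(ws) >= min_size}

# Hypothetical usage with an 8 x 8 code matrix for a 75K-word vocabulary:
# codes = np.load("codes_8x8.npy")            # shape (75000, 8), values in [0, 8)
# for code, words in shared_code_groups(codes, vocab).items():
#     print(" ".join(map(str, code)), "->", " ".join(words))
```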
Compressing the word embeddings by over 94% without hurting the performance.
It is important to detect anomalous inputs when deploying machine learning systems. The use of larger and more complex inputs in deep learning magnifies the difficulty of distinguishing between anomalous and in-distribution examples. At the same time, diverse image and text data are available in enormous quantities. We propose leveraging these data to improve deep anomaly detection by training anomaly detectors against an auxiliary dataset of outliers, an approach we call Outlier Exposure (OE). This enables anomaly detectors to generalize and detect unseen anomalies. In extensive experiments on natural language processing and small- and large-scale vision tasks, we find that Outlier Exposure significantly improves detection performance. We also observe that cutting-edge generative models trained on CIFAR-10 may assign higher likelihoods to SVHN images than to CIFAR-10 images; we use OE to mitigate this issue. We also analyze the flexibility and robustness of Outlier Exposure, and identify characteristics of the auxiliary dataset that improve performance. Machine Learning systems in deployment often encounter data that is unlike the model's training data. This can occur in discovering novel astronomical phenomena, finding unknown diseases, or detecting sensor failure. In these situations, models that can detect anomalies are capable of correctly flagging unusual examples for human intervention, or carefully proceeding with a more conservative fallback policy. Behind many machine learning systems are deep learning models which can provide high performance in a variety of applications, so long as the data seen at test time is similar to the training data. However, when there is a distribution mismatch, deep neural network classifiers tend to give high confidence predictions on anomalous test examples . This can invalidate the use of prediction probabilities as calibrated confidence estimates , and makes detecting anomalous examples doubly important. Several previous works seek to address these problems by giving deep neural network classifiers a means of assigning anomaly scores to inputs. These scores can then be used for detecting outof-distribution (OOD) examples (; ;). These approaches have been demonstrated to work surprisingly well for complex input spaces, such as images, text, and speech. Moreover, they do not require modeling the full data distribution, but instead can use heuristics for detecting unmodeled phenomena. Several of these methods detect unmodeled phenomena by using representations from only in-distribution data. In this paper, we investigate a complementary method where we train models to detect unmodeled data by learning cues for whether an input is unmodeled. While it is difficult to model the full data distribution, we can learn effective heuristics for detecting out-of-distribution inputs by exposing the model to OOD examples, thus learning a more conservative concept of the inliers and enabling the detection of novel forms of anomalies. We propose leveraging diverse, realistic datasets for this purpose, with a method we call Outlier Exposure (OE). OE provides a simple and effective way to consistently improve existing methods for OOD detection. Through numerous experiments, we extensively evaluate the broad applicability of Outlier Exposure. 
For multiclass neural networks, we provide thorough on Computer Vision and Natural Language Processing tasks which show that Outlier Exposure can help anomaly detectors generalize to and perform well on unseen distributions of outliers, even on large-scale images. We also demonstrate that Outlier Exposure provides gains over several existing approaches to out-of-distribution detection. Our also show the flexibility of Outlier Exposure, as we can train various models with different sources of outlier distributions. Additionally, we establish that Outlier Exposure can make density estimates of OOD samples significantly more useful for OOD detection. Finally, we demonstrate that Outlier Exposure improves the calibration of neural network classifiers in the realistic setting where a fraction of the data is OOD. Our code is made publicly available at https://github.com/hendrycks/outlier-exposure. Out-of-Distribution Detection with Deep Networks. demonstrate that a deep, pre-trained classifier has a lower maximum softmax probability on anomalous examples than in-distribution examples, so a classifier can conveniently double as a consistently useful outof-distribution detector. Building on this work, attach an auxiliary branch onto a pre-trained classifier and derive a new OOD score from this branch. present a method which can improve performance of OOD detectors that use a softmax distribution. In particular, they make the maximum softmax probability more discriminative between anomalies and in-distribution examples by pre-processing input data with adversarial perturbations . Unlike in our work, their parameters are tailored to each source of anomalies. train a classifier concurrently with a GAN , and the classifier is trained to have lower confidence on GAN samples. For each testing distribution of anomalies, they tune the classifier and GAN using samples from that out-distribution, as discussed in Appendix B of their work.; , in this work we train our method without tuning parameters to fit specific types of anomaly test distributions, so our are not directly comparable with their . Many other works (de ; ; ;) also encourage the model to have lower confidence on anomalous examples. provide theoretical guarantees for detecting out-of-distribution examples under the assumption that a suitably powerful anomaly detector is available. Utilizing Auxiliary Datasets. Outlier Exposure uses an auxiliary dataset entirely disjoint from test-time data in order to teach the network better representations for anomaly detection. train on adversarial examples to increased robustness. pre-train unsupervised deep models on a database of web images for stronger features. train an unsupervised network on a corpus of Amazon reviews for a month in order to obtain quality sentiment representations. find that pre-training a network on the large ImageNet database endows the network with general representations that are useful in many fine-tuning applications.; show that representations learned from images scraped from the nigh unlimited source of search engines and photo-sharing websites improve object detection performance. We consider the task of deciding whether or not a sample is from a learned distribution called D in. Samples from D in are called "in-distribution," and otherwise are said to be "out-of-distribution" (OOD) or samples from D out. In real applications, it may be difficult to know the distribution of outliers one will encounter in advance. 
Thus, we consider the realistic setting where D out is unknown. Given a parametrized OOD detector and an Outlier Exposure (OE) dataset D OE out, disjoint from D test out, we train the model to discover signals and learn heuristics to detect whether a query is sampled from D in or D OE out. We find that these heuristics generalize to unseen distributions D out. Deep parametrized anomaly detectors typically leverage learned representations from an auxiliary task, such as classification or density estimation. Given a model f and the original learning objective L, we can thus formalize Outlier Exposure as minimizing the objective DISPLAYFORM0 over the parameters of f. In cases where labeled data is not available, then y can be ignored. Outlier Exposure can be applied with many types of data and original tasks. Hence, the specific formulation of L OE is a design choice, and depends on the task at hand and the OOD detector used. For example, when using the maximum softmax probability baseline detector , we set L OE to the cross-entropy from f (x) to the uniform distribution . When the original objective L is density estimation and labels are not available, we set L OE to a margin ranking loss on the log probabilities f (x) and f (x). We evaluate OOD detectors with and without OE on a wide range of datasets. Each evaluation consists of an in-distribution dataset D in used to train an initial model, a dataset of anomalous examples D OE out, and a baseline detector to which we apply OE. We describe the datasets in Section 4.2. The OOD detectors and L OE losses are described on a case-by-case basis. In the first experiment, we show that OE can help detectors generalize to new text and image anomalies. This is all accomplished without assuming access to the test distribution during training or tuning, unlike much previous work. In the confidence branch experiment, we show that OE is flexible and complements a binary anomaly detector. Then we demonstrate that using synthetic outliers does not work as well as using real and diverse data; previously it was assumed that we need synthetic data or carefully selected close-to-distribution data, but real and diverse data is enough. We conclude with experiments in density estimation. In these experiments we find that a cutting-edge density estimator unexpectedly assigns higher density to out-of-distribution samples than in-distribution samples, and we ameliorate this surprising behavior with Outlier Exposure. We evaluate out-of-distribution detection methods on their ability to detect OOD points. For this purpose, we treat the OOD examples as the positive class, and we evaluate three metrics: area under the receiver operating characteristic curve (AUROC), area under the precision-recall curve (AUPR), and the false positive rate at N % true positive rate (FPRN). The AUROC and AUPR are holistic metrics that summarize the performance of a detection method across multiple thresholds. The AUROC can be thought of as the probability that an anomalous example is given a higher OOD score than a in-distribution example (Whereas the previous two metrics represent the detection performance across various thresholds, the FPRN metric represents performance at one strict threshold. By observing performance at a strict threshold, we can make clear comparisons among strong detectors. The FPRN metric (; ;) is the probability that an in-distribution example (negative) raises a false alarm when N % of anomalous examples (positive) are detected, so a lower FPRN is better. 
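These three metrics can be computed directly from the OOD scores, with the anomalies treated as the positive class. The sketch below uses scikit-learn for the curve-based metrics and a simple quantile threshold for FPR at N% TPR; it is an illustration rather than the exact evaluation code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

def ood_metrics(scores_in: np.ndarray, scores_out: np.ndarray, tpr_level=0.95):
    """AUROC, AUPR and FPR at N% TPR, with OOD examples as the positive class."""
    y_true = np.concatenate([np.zeros_like(scores_in), np.ones_like(scores_out)])
    y_score = np.concatenate([scores_in, scores_out])

    auroc = roc_auc_score(y_true, y_score)
    aupr = average_precision_score(y_true, y_score)

    # Threshold above which tpr_level of the anomalies are detected,
    # then measure how many in-distribution points raise a false alarm.
    thresh = np.quantile(scores_out, 1.0 - tpr_level)
    fpr_at_tpr = float(np.mean(scores_in >= thresh))
    return auroc, aupr, fpr_at_tpr
```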
Capturing nearly all anomalies with few false alarms can be of high practical value. In what follows, we use Outlier Exposure to enhance the performance of existing OOD detection techniques with multiclass classification as the original task. Throughout the following experiments, we let x ∈ X be a classifier's input and y ∈ Y = {1, 2, . . ., k} be a class. We also represent the classifier with the function f: X → R k, such that for any x, 1 T f (x) = 1 and f (x) 0.Maximum Softmax Probability (MSP). Consider the maximum softmax probability baseline which gives an input x the OOD score − max c f c (x). Out-ofdistribution samples are drawn from various unseen distributions (Appendix A). For each task, we test with approximately twice the number of D test out distributions compared to most other papers, and we also test on NLP tasks. The quality of the OOD example scores are judged with the metrics described in Section 4.1. For this multiclass setting, we perform Outlier Exposure by fine-tuning a pre-trained classifier f so that its posterior is more uniform on D OE out samples. Specifically, the finetuning objective is DISPLAYFORM0, where H is the cross entropy and U is the uniform distribution over k classes. When there is class imbalance, we could encourage f (x) to match (P (y = 1),..., P (y = k)); yet for the datasets we consider, matching U works well enough. Also, note that training from scratch with OE can in even better performance than fine-tuning (Appendix C). This approach works on different architectures as well (Appendix D). TAB14 [log b(x)] to the network's original optimization objective. In TAB5, the baseline values are derived from the maximum softmax probabilities produced by the classifier trained with's publicly available training code. The confidence branch improves over this MSP detector, and after OE, the confidence branch detects anomalies more effectively. TAB7 shows the large gains from using OE with a real and diverse dataset over using synthetic samples from a GAN. DISPLAYFORM1 DISPLAYFORM2 In-Distribution Density estimators learn a probability density function over the data distribution D in. Anomalous examples should have low probability density, as they are scarce in D in by definition . Consequently, density estimates are another means by which to score anomalies . We show the ability of OE to improve density estimates on low-probability, outlying data. PixelCNN++. Autoregressive neural density estimators provide a way to parametrize the probability density of image data. Although sampling from these architectures is slow, they allow for evaluating the probability density with a single forward pass through a CNN, making them promising candidates for OOD detection. We use PixelCNN++ as a baseline OOD detector, and we train it on CIFAR-10. The OOD score of example x is the bits per pixel (BPP), defined as nll(x)/num_pixels, where nll is the negative log-likelihood. With this loss we fine-tune for 2 epochs using OE, which we find is sufficient for the training loss to converge. Here OE is implemented with a margin loss over the log-likelihood difference between in-distribution and anomalous examples, so that the loss for a sample x in from D in and point x out from D OE out is max{0, num_pixels + nll(x in) − nll(x out)}.Results are shown in greatly simplify the task of OOD detection. Accordingly, the OOD detection task is to provide a score for 70-or 150-token sequences in the unseen D test out datasets. 
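Before returning to the language-modeling and calibration results, the two OE objectives used above can be written down compactly. The sketch below is illustrative: the weight `lam` and the pairing of in-distribution and outlier batches are assumptions about the training setup, while `H(U; f(x'))` is implemented as the average negative log-probability over the k classes.

```python
import torch
import torch.nn.functional as F

def oe_classifier_loss(model, x_in, y_in, x_out, lam=0.5):
    """Cross-entropy on D_in plus lam * H(U; f(x')) on the outlier batch."""
    ce_in = F.cross_entropy(model(x_in), y_in)
    log_probs_out = F.log_softmax(model(x_out), dim=1)
    h_uniform = -log_probs_out.mean(dim=1).mean()     # H(U; f(x')), batch average
    return ce_in + lam * h_uniform

def oe_density_margin_loss(nll_in, nll_out, num_pixels):
    """max{0, num_pixels + nll(x_in) - nll(x_out)}, averaged over the batch."""
    return torch.clamp(num_pixels + nll_in - nll_out, min=0).mean()
```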
We train word-level models for 300 epochs, and character-level models for 50 epochs. We then fine-tune using OE on WikiText-2 for 5 epochs. For the character-level language model, we create a character-level version of WikiText-2 by converting words to lowercase and leaving out characters which do not appear in PTB. OOD detection for the word-level and character-level language models are shown in Extensions to Multilabel Classifiers and the Reject Option. Outlier Exposure can work in more classification regimes than just those considered above. For example, a multilabel classifier trained on CIFAR-10 obtains an 88.8% mean AUROC when using the maximum prediction probability as the OOD score. By training with OE to decrease the classifier's output probabilities on OOD samples, the mean AUROC increases to 97.1%. This is slightly less than the AUROC for a multiclass model tuned with OE. An alternative OOD detection formulation is to give classifiers a "reject class" . Outlier Exposure is also flexible enough to improve performance in this setting, but we find that even with OE, classifiers with the reject option or multilabel outputs are not as competitive as OOD detectors with multiclass outputs. In addition to size and realism, we found diversity of D OE out to be an important factor. Concretely, a CIFAR-100 classifier with CIFAR-10 as D OE out hardly improves over the baseline. A CIFAR-10 classifier exposed to ten CIFAR-100 outlier classes corresponds to an average AUPR of 78.5%. Exposed to 30 such classes, the classifier's average AUPR becomes 85.1%. Next, 50 classes corresponds to 85.3%, and from thereon additional CIFAR-100 classes barely improve performance. This suggests that dataset diversity is important, not just size. In fact, experiments in this paper often used around 1% of the images in the 80 Million Tiny Images dataset since we only briefly fine-tuned the models. We also found that using only 50,000 examples from this dataset led to a negligible degradation in detection performance. Additionally, D OE Improves Calibration. When using classifiers for prediction, it is important that confidence estimates given for the predictions do not misrepresent empirical performance. A calibrated classifier gives confidence probabilities that match the empirical frequency of correctness. That is, if a calibrated model predicts an event with 30% probability, then 30% of the time the event transpires. Existing confidence calibration approaches consider the standard setting where data at test-time is always drawn from D in. We extend this setting to include examples from D test out at test-time since systems should provide calibrated probabilities on both in-and out-of-distribution samples. The classifier should have low-confidence predictions on these OOD examples, since they do not have a class. Building on the temperature tuning method of , we demonstrate that OE can improve calibration performance in this realistic setting. Summary are shown in FIG3. Detailed and a description of the metrics are in Appendix G. In this paper, we proposed Outlier Exposure, a simple technique that enhances many current OOD detectors across various settings. It uses out-of-distribution samples to teach a network heuristics to detect new, unmodeled, out-of-distribution examples. We showed that this method is broadly applicable in vision and natural language settings, even for large-scale image tasks. OE can improve model calibration and several previous anomaly detection techniques. 
Further, OE can teach density estimation models to assign more plausible densities to out-of-distribution samples. Finally, Outlier Exposure is computationally inexpensive, and it can be applied with low overhead to existing systems. In summary, Outlier Exposure is an effective and complementary approach for enhancing out-of-distribution detection systems. Expanded mutliclass out-of-distribution detection are in TAB14 Table 8: NLP OOD example detection for the maximum softmax probability (MSP) baseline detector and the MSP detector after fine-tuning with Outlier Exposure (OE). All are percentages and the of 10 runs. Values are rounded so that 99.95% rounds to 100%.Anomalous Data. For each in-distribution dataset D in, we comprehensively evaluate OOD detectors on artificial and real anomalous distributions D test out following. For each learned distribution D in, the number of test distributions that we compare against is approximately double that of most previous works. Gaussian anomalies have each dimension i.i.d. sampled from an isotropic Gaussian distribution. Rademacher anomalies are images where each dimension is −1 or 1 with equal probability, so each dimension is sampled from a symmetric Rademacher distribution. Bernoulli images have each pixel sampled from a Bernoulli distribution if the input range is. Blobs data consist in algorithmically generated amorphous shapes with definite edges. Icons-50 is a dataset of icons and emojis ; icons from the "Number" class are removed. Textures is a dataset of describable textural images . Places365 consists in images for scene recognition rather than object recognition . LSUN is another scene understanding dataset with fewer classes than Places365 . ImageNet anomalous examples are taken from the 800 ImageNet-1K classes disjoint from Tiny ImageNet's 200 classes, and when possible each image is cropped with bounding box information as in Tiny ImageNet. For the Places365 experiment, ImageNet is ImageNet-1K with all 1000 classes. With CIFAR-10 as D in, we use also CIFAR-100 as D test out and vice versa; recall that the CIFAR-10 and CIFAR-100 classes do not overlap. Chars74K is a dataset of photographed characters in various styles; digits and letters such as "O" and "l" were removed since they can look like numbers. Places69 has images from 69 scene categories not found in the Places365 dataset. SNLI is a dataset of predicates and hypotheses for natural language inference. We use the hypotheses for D OE out. IMDB is a sentiment classification dataset of movie reviews, with similar statistics to those of SST. Multi30K is a dataset of English-German image descriptions, of which we use the English descriptions. WMT16 is the English portion of the test set from WMT16. Yelp is a dataset of restaurant reviews. English Web Treebank (EWT) consists of five individual datasets: Answers (A), Email (E), Newsgroups (N), Reviews (R), and Weblog (W). Each contains examples from the indicated domain. Validation Data. For each experiment, we create a set of validation distributions D Elsewhere we show for pre-trained networks that are fine-tuned with OE. However, a network trained from scratch which simultaneously trains with OE tends to give superior . For example, a CIFAR-10 Wide ResNet trained normally obtains a classification error rate of 5.16% and an FPR95 of 34.94%. Fine-tuned, this network has an error rate of 5.27% and an FPR95 of 9.50%. 
Yet if we instead train the network from scratch and expose it to outliers as it trains, then the error rate is 4.26% and the FPR95 is 6.15%. This architecture corresponds to a 9.50% RMS calibration error with OE fine-tuning, but by training with OE from scratch the RMS calibration error is 6.15%. Compared to fine-tuning, training a network in tandem with OE tends to produce a network with a better error rate, calibration, and OOD detection performance. The reason why we use OE for fine-tuning is because training from scratch requires more time and sometimes more GPU memory than fine-tuning. Outlier Exposure also improves vision OOD detection performance for more than just Wide ResNets. Table 9 shows that Outlier Exposure also improves vision OOD detection performance for "All Convolutional Networks" (While − max c f c (x) tends to be a discriminative OOD score for example x, models with OE can do better by using −H(U; f (x)) instead. This alternative accounts for classes with small probability mass rather than just the class with most mass. Additionally, the model with OE is trained to give anomalous examples a uniform posterior not just a lower MSP. This simple change roundly aids performance as shown in TAB16: Comparison between the maximum softmax probability (MSP) and H(U; p) OOD scoring methods on a network fine-tuned with OE. Results are percentages and an average of 10 runs. For example, CIFAR-10 are averaged over "Gaussian," "Rademacher,"..., or "CIFAR-100" measurements. Detailed OOD detection with language modeling datasets are shown in TAB18. DISPLAYFORM0 Models integrated into a decision making process should indicate when they are trustworthy, and such models should not have inordinate confidence in their predictions. In an effort to combat a false sense of certainty from overconfident models, we aim to calibrate model confidence. A model is calibrated if its predicted probabilities match empirical frequencies. Thus if a calibrated model predicts an event with 30% probability, then 30% of the time the event transpires. Prior research (; ;) considers calibrating systems where test-time queries are samples from D in, but systems also encounter samples from D test out and should also ascribe low confidence to these samples. Hence, we use OE to control the confidence on these samples. In order to evaluate a multiclass classifier's calibration, we present three metrics. First we establish context. For input example X ∈ X, let Y ∈ Y = {1, 2, . . ., k} be the ground truth class. Let Y be the model's class prediction, and let C be the corresponding model confidence or prediction probability. Denote the set of prediction-label pairs made by the model with S = {( y 1, c 1), (y 2, c 2),..., (y n, c n)}. DISPLAYFORM0 Along similar lines, the MAD Calibration Error-which is an improper scoring rule due to its use of absolute differences rather than squared differences-is estimated with DISPLAYFORM1 Soft F1 Score. If a classifier makes only a few mistakes, then most examples should have high confidence. But if the classifier gives all predictions high confidence, including its mistakes, then the previous metrics will indicate that the model is calibrated on the vast majority of instances, despite having systematic miscalibration. The Soft F1 score is suited for measuring the calibration of a system where there is an acute imbalance between mistaken and correct decisions. 
Since we treat mistakes as positive examples, we can write the model's confidence that the examples are anomalous with c_a = (1 − c_1, 1 − c_2, ..., 1 − c_n). To indicate that an example is positive (mistaken), we use the vector m ∈ {0, 1}^n such that m_i = 1(ŷ_i ≠ y_i) for 1 ≤ i ≤ n. Then the Soft F1 score is 2(c_a · m) / (||c_a||_1 + ||m||_1). There are many ways to estimate a classifier's confidence. One way is to bind a logistic regression branch onto the network, so that confidence values are in [0, 1]. Other confidence estimates use the model's logits l ∈ R^k, such as the estimate σ(max_i l_i) ∈ [0, 1], where σ is the logistic sigmoid. Another common confidence estimate is max_i exp(l_i) / Σ_{j=1}^k exp(l_j). A modification of this estimate is our baseline. Softmax Temperature Tuning. Prior work shows that good calibration can be obtained by including a tuned temperature parameter in the softmax: p(y = i | x) = exp(l_i/T) / Σ_{j=1}^k exp(l_j/T). We tune T to maximize the log likelihood on a validation set after the network has been trained on the training set. Results. In this calibration experiment, the baseline is confidence estimation with softmax temperature tuning. Therefore, we train SVHN, CIFAR-10, CIFAR-100, and Tiny ImageNet classifiers with 5000, 5000, 5000, and 10000 training examples held out, respectively. A copy of this classifier is fine-tuned with Outlier Exposure. Then we determine the optimal temperatures of the original and OE-fine-tuned classifiers on the held-out examples. To measure calibration, we take equally many examples from a given in-distribution dataset D test in and from D test out. Out-of-distribution points are understood to be incorrectly classified since their label is not in the model's output space, so calibrated models should assign these out-of-distribution points low confidence. Results are in TAB3. TAB3: Calibration results for the temperature-tuned baseline and temperature tuning + OE. Outlier Exposure noticeably improves model calibration. While temperature tuning improves calibration, the confidence estimate p(y = i | x) cannot be less than 1/k, with k the number of classes. For an out-of-distribution example like Gaussian Noise, a good model should have no confidence in its prediction over the k classes. One possibility is to add a reject option, or a (k + 1)st class, which we cover in Section 5. A simpler option we found is to perform an affine transformation of p(y = i | x) ∈ [1/k, 1] with the formula (p(y = i | x) − 1/k)/(1 − 1/k) ∈ [0, 1]. This simple transformation makes it possible for a network to express no confidence on an out-of-distribution input and improves calibration performance. As TAB5 shows, this simple 0-1 posterior rescaling technique consistently improves calibration, and the model fine-tuned with OE using temperature tuning and posterior rescaling achieved large calibration improvements. In FIG8, we show additional PR and ROC curves using the Tiny ImageNet dataset and various anomalous distributions.
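The scoring and calibration procedures above reduce to a few lines of code. Below is a minimal NumPy sketch of temperature tuning, the 0-1 posterior rescaling, and the −H(U; f(x)) anomaly score; the function names, the optimizer bounds, and other implementation details are our own choices and not taken from the text.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def _softmax(logits, t=1.0):
    z = logits / t
    z = z - z.max(axis=1, keepdims=True)          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def tune_temperature(val_logits, val_labels):
    """Pick T maximizing validation log-likelihood of softmax(logits / T)."""
    def nll(t):
        probs = _softmax(val_logits, t)
        return -np.log(probs[np.arange(len(val_labels)), val_labels] + 1e-12).mean()
    return minimize_scalar(nll, bounds=(0.05, 20.0), method="bounded").x

def rescaled_confidence(logits, t):
    """Max softmax probability, affinely rescaled from [1/k, 1] to [0, 1]."""
    probs = _softmax(logits, t)
    k = logits.shape[1]
    conf = probs.max(axis=1)
    return (conf - 1.0 / k) / (1.0 - 1.0 / k)

def oe_anomaly_score(logits):
    """-H(U; f(x)) scoring: the mean log-probability over classes.

    Close to -log(k) when the posterior is uniform (as OE encourages on
    outliers) and strongly negative when the model is confident, so larger
    values indicate more anomalous inputs."""
    probs = _softmax(logits)
    return np.log(probs + 1e-12).mean(axis=1)
```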
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HyxCxhRcY7
OE teaches anomaly detectors to learn heuristics for detecting unseen anomalies; experiments are in classification, density estimation, and calibration in NLP and vision settings; we do not tune on test distribution samples, unlike previous work
While generative neural networks can learn to transform a specific input dataset into a specific target dataset, they require having just such a paired set of input/output datasets. For instance, to fool the discriminator, a generative adversarial network (GAN) exclusively trained to transform images of black-haired *men* to blond-haired *men* would need to change gender-related characteristics as well as hair color when given images of black-haired *women* as input. This is problematic, as often it is possible to obtain *a* pair of (source, target) distributions but then have a second source distribution where the target distribution is unknown. The computational challenge is that generative models are good at generation within the manifold of the data that they are trained on. However, generating new samples outside of the manifold or extrapolating "out-of-sample" is a much harder problem that has been less well studied. To address this, we introduce a technique called *neuron editing* that learns how neurons encode an edit for a particular transformation in a latent space. We use an autoencoder to decompose the variation within the dataset into activations of different neurons and generate transformed data by defining an editing transformation on those neurons. By performing the transformation in a latent trained space, we encode fairly complex and non-linear transformations to the data with much simpler distribution shifts to the neuron's activations. Our technique is general and works on a wide variety of data domains and applications. We first demonstrate it on image transformations and then move to our two main biological applications: removal of batch artifacts representing unwanted noise and modeling the effect of drug treatments to predict synergy between drugs. A common situation arises when we have two datasets and seek to learn a transformation that is a mapping from one (the source) to the other (the target). While much existing work has been done on this, less studied is the case where we want to learn a transformation from this source/target pair of datasets and apply it to a second source dataset for which there is no known target. If the second source distribution differs from the first source distribution, any transformation naively learned with a neural network on only the first source/target pair will suffer from problems including domain shift (the second source distribution systematically differs from the first) and overfitting (which aspects of the target only exist because it started as the first source distribution, and shouldn't be part of the learned transformation?). This problem is important to address, as learning transformations is a common task in many different contexts, and often it is infeasible or impossible to obtain known source/target pairing information for every distribution needing to be transformed. For example, many experiments in biology are conducted to study the effect of a treatment on a set of samples, such as tissues from different patients. However, due to their expense and difficulty, clinical trials are often performed on only a small subset of the samples. The challenge is isolating treatment-induced variation from the confounding sample-specific variation. We propose a neural network-based method for learning a transformation that is general enough to be used across a wide range of data modalities and applications, from image-to-image translation to treatment in the biological setting. 
Popular neural network architectures like GANs pose the problem as one of learning to output data in the region of the space occupied by the target distribution, no matter where the input data is coming from. To fool the discriminator, the generator's output must end up in the same part of the space as the target distribution. The discriminator does not take into account the input points into the generator in any way. Instead, we reframe the problem as learning a transformation towards the target distribution that is more sensitive to where the input data starts. Thus, we could learn an edit between one source and target pair, and apply it to a second source without needing to assume it has no systematic differences from the first source. We propose to learn such an edit, which we term neuron editing, in the latent space of an autoencoder neural network with non-linear activations. First we train an autoencoder on the entire population of data which we are interested in transforming. This includes both the paired source/target data and the second source data. Neuron editing then involves extracting observed differences between the source/target activation distributions for neurons in this layer and then applying them to the second source data to generate a synthetic second target dataset. Performing the edit node-by-node in this space actually encodes complex multivariate edits in the ambient space, performed on denoised and meaningful features, owing to the fact that these features themselves are complex non-linear combinations of the input features. Neuron editing is a general technique that could be applied to the latent space of any neural network, even GANs themselves. We focus exclusively on the autoencoder in this work, however, to leverage its denoising ability, robustness to mode dropping, and superior training stability as compared to GANs. We demonstrate that neuron editing can work on a variety of architectures, while offering the advantages of introducing no new hyperparameters to tune and being stable across multiple runs. While latent space manipulation has been explored in previous work, ours differs in several ways. For example, represents a transformation between two distributions as a single constant shift in latent space. In addition to assuming the latent transformation is the same for all points in the distribution, also uses an off-the-shelf pre-trained Imagenet classifier network. Our work, on the other hand, does not require a richly supervised pre-trained model; also, we model the shift between two distributions as a complex, non-constant function that learns different shifts for different parts of the space. We compare to this "constant-shift" approach and demonstrate empirically why it is necessary to model the transformation more complexly. Some neurons are not heavily edited but still influence the output jointly with those neurons that are edited due to their integration in the decoding layers, propagating their effect into the output space. Thus, even a relatively simple transformation in the internal layer of an autoencoder allows for modeling complex transformations in the ambient data space. This aspect of neuron editing draws close connections with the field of domain adaptation, where the goal is to learn features on one labeled dataset that meaningfully separate points in another, unlabeled dataset . 
Similarly to that task, we want to learn a transformation from the known source to the known target samples that will also apply to the second source dataset where the target is unknown. Thus, neuron editing represents an extension of domain adaptation, where instead of learning a classifier that can be used on the unlabeled data, we are learning a distribution transformation that can be used on the unlabeled data. Further differences include that while domain adaptation attempts to make features for the unlabeled dataset overlap with those of the labeled dataset, neuron editing transforms the second source dataset without first trying to align it to the first source dataset. Also, different from many domain adaptation techniques, we do not need any sort of pre-trained classifier to yield an informative feature map for the data, as we learn our autoencoder de novo. Given the near exclusive focus of the domain adaptation community on learning classifiers on labeled data and applying them to unlabeled data, we are excited to expand the field to also learning transformations on data with known targets and applying them to data with unknown targets. We demonstrate that neuron editing extrapolates better than generative models on two important criteria. First, as to the original goal, the predicted change on the second source dataset more closely resembles the predicted change on the original source dataset. Second, the editing process produces more complex variation, since it simply preserves the existing variation in the data rather than needing a generator to learn to create it. We compare to standard GAN approaches, dedicated parametric statistical methods used by computational biologists, and alternative autoencoder frameworks. In each case, we see that they stumble on one or more of several hurdles: out-of-sample input, desired output that differs from the target of the training data, and data with complex variation. Figure 1: (a) Neuron editing interrupts the standard feedforward process, editing the neurons of a trained encoder/decoder to include the source-to-target variation, and letting the trained decoder cascade the resulting transformation back into the original data space. (b) The neuron editing process. The transformation is learned on the distribution of neuron activations for the source and applied to the distribution of neuron activations for the extrapolation data. In the following section, we detail the neuron editing method. Then, we motivate the extrapolation problem by trying to perform natural image domain transfer on the canonical CelebA dataset. We then move to two biological applications where extrapolation is essential: correcting the artificial variability introduced by measuring instruments (batch effects), and predicting the combined effects of multiple drug treatments (combinatorial drug effects). Let S, T, and X denote the source, target, and second source distributions with n_S, n_T, and n_X observations, respectively. We seek a transformation such that: 1. when applied to S it produces a distribution equivalent to T; 2. when applied to T it is the identity function; and 3. when applied to X it does not necessarily produce T if S is different from X. While GANs learn a transformation with the first two properties, they fail at the third property due to the fact that T is the only target data we have for training, and thus the generator only learns to output data like T.
Therefore, instead of learning such a transformation parameterized by a neural network, we learn a simpler transformation on a space learned by a neural network (summarized in Figure 1). We first train an encoder/decoder pair E/D to map the data into an abstract neuron space decomposed into high-level features such that it can also decode from that space, i.e., the standard autoencoder objective L = MSE(D(E(x)), x), where MSE is the mean-squared error. The autoencoder is trained on all three data distributions S, T, and X and thus learns to model their joint manifold. Then, without further training, we separately extract the activations of an n-dimensional internal layer of the network for inputs from S and from T, denoted by a_S: S → R^n and a_T: T → R^n. We define a piecewise linear transformation, called NeuronEdit, which we apply to these distributions of activations, where a ∈ R^n consists of the n activations for a single network input, p_j^S, p_j^T ∈ R^n consist of the j-th percentiles of activations (i.e., for each of the n neurons) over the distributions of a_S and a_T correspondingly, and all operations are taken pointwise, i.e., independently on each of the n neurons in the layer. Then, we define NeuronEdit(a_S): S → R^n given by x → NeuronEdit(a_S(x)), and equivalently for a_T and any other distribution (or collection) of activations over a set of network inputs. Therefore, the NeuronEdit function operates on distributions, represented via activations over network input samples, and transforms the input activation distribution based on the difference between the source and target distributions (considered via their percentile discretization). We note that the NeuronEdit function has the three properties we stated above. The last property is crucial since learning to generate distributions like T, with a GAN for example, would produce a discriminator that encourages the output to be funneled as close to T as possible no matter where in the support we start from. To apply the learned transformation to X, we first extract the activations of the internal layer computed by the encoder, a_X. Then, we edit the activations with the neuron editing function to obtain â_X. Finally, we cascade the transformations applied to the neuron activations through the decoder without any further training. Thus, the transformed output X̂ is obtained by decoding the edited activations: X̂ = D(â_X). We emphasize that at this point, since we do no further training of the encoder and decoder, and since the neuron editing transformation has no weights to learn, there is no further objective term to minimize and the transformation is fully defined. Crucially, the nomenclature of an autoencoder no longer strictly applies. If we allowed the encoder or decoder to train with the transformed neuron activations, the network could learn to undo these transformations and still produce the identity function. However, since we freeze training and apply these transformations exclusively at inference, we turn an autoencoder into a generative model that need not be close to the identity. Training a GAN in this setting could exclusively utilize the data in S and T, since we have no real examples of the output for X to feed to the discriminator. Neuron editing, on the other hand, is able to model the variation intrinsic to X in an unsupervised manner despite not having real post-transformation data for X. Since we know a priori that X will differ substantially from S, this provides significantly more information. Furthermore, GANs are notoriously tricky to train.
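As a concrete illustration of the edit itself, before returning to those GAN training difficulties, the following is a minimal NumPy sketch of one reasonable reading of the percentile-based, piecewise-linear NeuronEdit operation; the function name, the number of percentile knots, and the use of linear interpolation between knots are our assumptions rather than details taken from the text.

```python
import numpy as np

def neuron_edit(acts, source_acts, target_acts, n_percentiles=100):
    """Piecewise-linear percentile edit applied independently to each neuron.

    acts, source_acts, target_acts: arrays of shape (n_samples, n_neurons)
    holding internal-layer activations for X, S, and T respectively.
    Each activation is shifted by the target-minus-source percentile gap,
    interpolated at the rank the activation occupies in the source distribution.
    """
    qs = np.linspace(0.0, 100.0, n_percentiles + 1)
    p_src = np.percentile(source_acts, qs, axis=0)   # shape (P + 1, n_neurons)
    p_tgt = np.percentile(target_acts, qs, axis=0)
    edited = np.empty_like(acts, dtype=float)
    for j in range(acts.shape[1]):
        # Shift for neuron j, interpolated along the source percentile curve.
        shift = np.interp(acts[:, j], p_src[:, j], p_tgt[:, j] - p_src[:, j])
        edited[:, j] = acts[:, j] + shift
    return edited

# Usage sketch: a_X = encoder(X); X_hat = decoder(neuron_edit(a_X, a_S, a_T))
```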
Adversarial discriminators suffer from oscillating optimization dynamics, uninterpretable losses (;, and most debilitatingly, mode collapse (; ;). Under mode collapse, significant diversity that should exist in the output of the generator is lost, instead producing synthetic data that is a severely degenerated version of the true target distribution. Neuron editing avoids all of these traps by learning an unsupervised model of the data space with the easier-to-train autoencoder. The essential step that facilitates generation is the isolation of the variation in the neuron activations that characterizes the difference between source and target distributions. There is a relationship between neuron editing and the well-known word2vec embeddings in natural language processing . There, words are embedded in a latent space where a meaningful transformation such as changing the gender of a word is a constant vector in this space. This vector can be learned on one example, like transforming man to woman, and then extrapolated to another example, like king, to predict the location in the space of queen. Neuron editing is an extension in complexity of word2vec's vector arithmetic, because instead of transforming a single point into another single point, it transforms an entire distribution into another distribution. We compare the predictions from neuron editing to those of several generation-based approaches: a traditional GAN, a GAN implemented with residual blocks (ResnetGAN) to show generating residuals is not the same as editing , and a CycleGAN. While in other applications, like natural images, GANs have shown an impressive ability to generate plausible individual points, we illustrate that they struggle with these two criteria. We also motivate why neuron editing is performed on inference by comparing against a regularized autoencoder that performs the internal layer transformations during training, but the decoder learns to undo the transformation and reconstruct the input unchanged . Lastly, we motivate why the more complex neuron editing transformation is necessary by comparing against a naive "latent vector arithmetic" approach. We find the constant vector between the mean of the source and the mean of the target in the internal layer of our pre-trained autoencoder, and apply this single shift to all neurons in the target (Constant Shift). For the regularized autoencoder, the regularization penalized differences in the distributions of the source and target in a latent layer using maximal mean discrepancy . The image experiment used convolutional layers with stride-two filters of size four, with 64-128-256-128-64 filters in the layers. All other models used fully connected layers of size 500-250-50-250-500. Leaky ReLU activation was used with 0.2 leak. Training was done with minibatches of size 100, with the Adam optimizer , and learning rate 0.001. We first consider a motivational experiment on the canonical image dataset of CelebA. If we want to learn a transformation that turns a given image of a person with black hair to that same person except with blond hair, a natural approach would be to collect two sets of images, one with all black haired people and another with all blond haired people, and teach a generative model to map between them. The problem with this approach is that the learned model may perform worse on input images that differ from those it trained on. 
This has troubling consequences for the growing concern of socially unbiased neural networks, as we would want model performance to go unchanged for these different populations . This is illustrated in Figure 2a, where we collect images that have the attribute male and the attribute black hair and try to map to the set of images with the attribute male and the attribute blond hair. Then, after training on this data, we extrapolate and apply the transformation to females with black hair, which had not been seen during training. The GAN models are less successful at modeling this transformation on out-of-sample data. In the parts of the image that should stay the same (everything but the hair color), they do not always generate a recreation of the input. In the hair color, only sometimes is the color changed. The regular GAN model especially has copious artifacts that are a of the difficulty in training these models. This provides further evidence of the benefits of avoiding these complications when possible, for example by using the stable training of an autoencoder and editing it as we do in neuron editing. We quantify the success of neuron editing by using the common metric of Frechet Inception Distance (FID) that measures how well the generated distribution matches the distribution targeted for extrapolation. These scores are reported in Table 1, where we see neuron editing achieve the best on an average of three runs. Notably, due to the autoencoder's more stable training, the standard deviation across multiple runs is also lower than the GAN-based methods. In Figure 2b, we motivate why we need to perform the N euronEdit transformation on the internal layer of a neural network, as opposed to applying it on some other latent space like PCA. Only in the neuron space has this complex and abstract transformation of changing the hair color (and only the hair color) been decomposed into a relatively simple and piecewise linear shift. Beyond hair color transformation, neuron editing is able to learn general transformations on CelebA males and apply them to females. In Figure 3, we learn to transform between having/not having the mustache attribute and having/not having the glasses attribute. The latter transformation on glasses demonstrates the importance of learning a non-constant transformation. The glasses attribute is bimodal, with both examples of sunglasses and reading glasses in the dataset. With neuron editing, we are able to learn to map to each of these different parts of the latent space, as opposed to the constant shift which adds dark sunglasses to the entire distribution. We next demonstrate another application of neuron editing's ability to learn to transform a distribution based on a separate source/target pair: biological batch correction. Many biological experiments involve using an instrument to measure different populations of cells and then characterizing the features that distinguish between them. However, these complex instruments can be difficult to calibrate and use consistently, and thus can introduce technical artifacts into the data they are used to measure. In fact, we can even measure the same population of cells twice and get two very different datasets back. When we measure different populations, these technical artifacts (batch effects) get confounded with the true differences between the populations. Batch effects are a ubiquitous problem in biological experimental data can lead to incorrect in downstream analysis. 
Addressing batch effects is a goal of many new models (; ; ;), including some deep learning methods . Table 2: Correlation between observed change in spike-ins and applied change to samples. Neuron editing most accurately applies just the transformation observed as batch effect and not true biological variation. One method for grappling with this issue is to repeatedly measure an unvarying control (called a spike-in) set of cells with each population of interest (called a sample) . Because we know any observed differences in the spike-in are technical artifacts, we can model and then remove this artifact in the population of interest. In our previous terminology, the two spike-in distributions are our known source/target pair while the actual population of interest is our second source that lacks a known target. Existing methods of batch correction based on spike-ins work directly in the data space, operate independently on each dimension, and only do crude matching of distribution statistics. The most common approach is to simply subtract the difference in means between the spike-ins from the sample. We believe this is natural opportunity for deep learning, where the same concept can be extended to an abstract feature space, composed of combinations of features, and a more powerful transformation. Moreover, we expect neuron editing to shine as the spike-ins likely differ drastically from the sample. The dataset we investigate in this section comes from a mass cytometry experiment which measures the amount of particular proteins in each cell in two different individuals infected with dengue virus . We note that these data are in a drastically different format from the images of the previous experiment, as they are in tabular form with cell i being row i and the amount of protein j in column j. We believe a key strength to neuron editing is its general applicability to a wide range of data types and modalities. In this particular experiment, there are four datasets, each consisting of measurements of 35 proteins: the two spike-ins we refer to as Control1 and Control2 are shape 18919 × 35 and 22802 × 35, respectively, while the two populations we actually want to study, called Sample1 and Sample2, are shape 94556 × 35 and 55594 × 35. To better grasp the problem of batch effects, we visualize a biaxial plot with two of the proteins where there is a batch effect in one dimension and a true underlying biological difference in the other dimension (Figure 4). By using the controls, we seek to correct the artificially low readings of the protein IFNg in Sample1 (along the x-axis) without removing the biologically accurate readings of higher amounts of protein CCR6 (along the y-axis). We would like our model to identify this source of variation and compensate for the lower values of IFNg without losing other true biological variation in Sample1. For example, Sample1 also has higher values of the protein CCR6, and as the controls show, this is a true biological difference, not a batch effect (the y-axis in Figure 4a). We quantify the performance of the models at this goal by measuring the correlation between the change in median marker values observed in the spike-in with the change applied to the sample. If this correlation is high, we know the transformation applied to the samples only removes the variation where we have evidence, coming from the spike-ins, that it is a technical artifact. 
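This evaluation reduces to a short computation. The following is a minimal NumPy sketch of how we read it; the function name and the cells-by-markers array layout are our own assumptions.

```python
import numpy as np

def spike_in_correlation(control1, control2, sample1, sample1_corrected):
    """Correlation between the per-marker median shift observed in the
    spike-in controls and the median shift actually applied to the sample.

    All inputs are (n_cells, n_markers) arrays; a high correlation means the
    correction only removes variation the controls identify as batch effect.
    """
    control_shift = np.median(control2, axis=0) - np.median(control1, axis=0)
    applied_shift = np.median(sample1_corrected, axis=0) - np.median(sample1, axis=0)
    return np.corrcoef(control_shift, applied_shift)[0, 1]
```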
This data is presented in Table 2, where we compare to not only the deep generative models we have already introduced, but also dedicated batch correction methods commonly used by practitioners. We see that neuron editing outperforms all of the alternatives at extrapolating from the spike-ins to the samples. This is unsurprising, as the GAN methods are only trained to produce data like Control2, and thus will not preserve much of the variation in the sample. The traditional batch correction methods make specific parametric distributional assumptions on the data that do not hold in practice, and thus also perform poorly. The regularized autoencoder, since the transformation is performed during training rather than after training like neuron editing, just reproduces its input unchanged. In Figure 5a, a PCA embedding of the data space is visualized for Control1 (light blue), Control2 (light red), Sample1 (dark blue), and post-transformation Sample1 (dark red). The transformation from Control1 to Control2 mirrors the transformation applied to Sample1. Notably, the other variation (intra-sample variation) is preserved. In Figure 5b, we see that for every dimension, the variation between the controls corresponds accurately to the variation introduced by neuron editing into the sample. These global assessments across the full data space offer additional corroboration that the transformations produced by neuron editing reasonably reflect the transformation as evidenced by the controls. Finally, we consider biological data from a combinatorial drug experiment on cells from patients with acute lymphoblastic leukemia. The dataset we analyze consists of cells under four treatments: no treatment (basal), BEZ-235 (Bez), Dasatinib (Das), and both Bez and Das (Bez+Das). These measurements also come from mass cytometry, this time on 41 dimensions, with the four datasets consisting of 19925, 20078, 19843, and 19764 observations, respectively. In this setting, we define the source to be the basal cells, the target to be the Das cells, and then extrapolate to the Bez cells. We hold out the true Bez+Das data and attempt to predict the effects of applying Das to cells that have already been treated with Bez. Predicting the effects of drug combinations is an application which is typically approached through regression, fitting coefficients to an interaction term in a multiple linear regression model. This limitation of only fitting linear relationships and treating each protein independently greatly restricts the model in a biological context where we know nonlinearity and protein regulatory networks exist and play a large role in cellular function. Using neuron editing in this context facilitates learning a much richer transformation than previous, non-deep-learning methods. We quantitatively evaluate whether neuron editing produces a meaningful transformation in Table 3, where we calculate the correlation between the real and generated means and variances of each dimension. Table 3: Correlation between real and predicted means/variances on the combinatorial drug prediction data. The GANs generate data that is less accurate (means are off) and less diverse (variances are smaller) than the real data, while neuron editing best models the true distribution. Neuron editing more accurately predicts the principal direction and magnitude of transformation across all dimensions than any other model. Furthermore, neuron editing better preserves the variation in the real data.
The GANs have trouble modeling the diversity in the data, as manifested by their generated data having significantly less variance than really exists. We see an example of the learned transformation by looking at a characteristic effect of applying Das: a decrease in p4EBP1 (seen on the x-axis of Figure 4c). No change in another dimension, pSTATS, is associated with the treatment (the y-axis of Figure 4c). Neuron editing accurately models this change in p4EBP1, without introducing any change in pSTATS or losing variation within the extrapolation dataset (Figure 4d). We note that since much of the variation in the target distribution already exists in the source distribution and the shift is a relatively small one, we might expect the ResnetGAN to be able to easily mimic the target. However, despite the residual connections, it still suffers from the same problems as the other models that use the generating approach: namely, the GAN objective encourages all output to be like the target it trained on. This leaves it unable to produce the correct distribution if it differs from the target of the learned transformation, as we see in this case. In this work, we have only considered learning from a single pair of distributions and applying it to another single distribution. We consider it an interesting direction for future work to extend this to multiple distributions, either for learning from or for applying to. Additional future work along these lines could include training parallel encoders with the same decoder, or training to generate conditionally.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
H1lDSCEYPH
A method for learning a transformation between one pair of source/target datasets and applying it to a separate source dataset for which there is no target dataset
In this paper, we propose to combine imitation and reinforcement learning via the idea of reward shaping using an oracle. We study the effectiveness of the near- optimal cost-to-go oracle on the planning horizon and demonstrate that the cost- to-go oracle shortens the learner’s planning horizon as function of its accuracy: a globally optimal oracle can shorten the planning horizon to one, leading to a one- step greedy Markov Decision Process which is much easier to optimize, while an oracle that is far away from the optimality requires planning over a longer horizon to achieve near-optimal performance. Hence our new insight bridges the gap and interpolates between imitation learning and reinforcement learning. Motivated by the above mentioned insights, we propose Truncated HORizon Policy Search (THOR), a method that focuses on searching for policies that maximize the total reshaped reward over a finite planning horizon when the oracle is sub-optimal. We experimentally demonstrate that a gradient-based implementation of THOR can achieve superior performance compared to RL baselines and IL baselines even when the oracle is sub-optimal. Reinforcement Learning (RL), equipped with modern deep learning techniques, has dramatically advanced the state-of-the-art in challenging sequential decision problems including high-dimensional robotics control tasks as well as video and board games BID13 BID23. However, these approaches typically require a large amount of training data and computational resources to succeed. In response to these challenges, researchers have explored strategies for making RL more efficient by leveraging additional information to guide the learning process. Imitation learning (IL) is one such approach. In IL, the learner can reference expert demonstrations BID0, or can access a cost-to-go oracle BID19, providing additional information about the long-term effects of learner decisions. Through these strategies, imitation learning lowers sample complexity by reducing random global exploration. For example, BID25 shows that, with access to an optimal expert, imitation learning can exponentially lower sample complexity compared to pure RL approaches. Experimentally, researchers also have demonstrated sample efficiency by leveraging expert demonstrations by adding demonstrations into a replay buffer BID28 BID14, or mixing the policy gradient with a behavioral cloning-related gradient BID18.Although imitating experts can speed up the learning process in RL tasks, the performance of the learned policies are generally limited to the performance of the expert, which is often sub-optimal in practice. Previous imitation learning approaches with strong theoretical guarantees such as Data Aggregation (DAgger) BID20 and Aggregation with Values (AGGREVATE) BID19 can only guarantee a policy which performs as well as the expert policy or a one-step deviation improvement over the expert policy.1 Unfortunately, this implies that imitation learning with a sub-optimal expert will often return a sub-optimal policy. Ideally, we want the best of both IL and RL: we want to use the expert to quickly learn a reasonable policy by imitation, while also exploring how to improve upon the expert with RL. This would allow the learner to overcome the sample inefficiencies inherent in a pure RL strategy while also allowing the learner to eventually surpass a potentially sub-optimal expert. Combining RL and IL is, in fact, not new. 
BID5 attempted to combine IL and RL by stochastically interleaving incremental RL and IL updates. By doing so, the learned policy will either perform as well as the expert policy-the property of IL BID19, or eventually reach a local optimal policy-the property of policy iteration-based RL approaches. Although, when the expert policy is sub-optimal, the learned locally optimal policy could potentially perform better than the expert policy, it is still difficult to precisely quantify how much the learner can improve over the expert. In this work, we propose a novel way of combining IL and RL through the idea of Reward Shaping BID16. Throughout our paper we use cost instead of reward, and we refer to the concept of reward shaping with costs as cost shaping. We assume access to a cost-to-go oracle that provides an estimate of expert cost-to-go during training. The key idea is that the cost-to-go oracle can serve as a potential function for cost shaping. For example, consider a task modeled by a Markov Decision Process (MDP). Cost shaping with the cost-to-go oracle produces a new MDP with an optimal policy that is equivalent to the optimal policy of the original MDP BID16. The idea of cost shaping naturally suggests a strategy for IL: pick a favourite RL algorithm and run it on the new MDP reshaped using expert's cost-to-go oracle. In fact, BID16 demonstrated that running SARSA BID26 on an MDP reshaped with a potential function that approximates the optimal policy's value-to-go, is an effective strategy. We take this idea one step further and study the effectiveness of the cost shaping with the expert's cost-to-go oracle, with a focus on the setting where we only have an imperfect estimatorV e of the cost-to-go of some expert policy π e, i.e.,V e = V *, where V * is the optimal policy's cost-to-go in the original MDP. We show that cost shaping with the cost-to-go oracle shortens the learner's planning horizon as a function of the accuracy of the oracleV e compared to V *. Consider two extremes. On one hand, when we reshape the cost of the original MDP with V * (i.e.,V e = V *), the reshaped MDP has an effective planning horizon of one: a policy that minimizes the one-step cost of the reshaped MDP is in fact the optimal policy (hence the optimal policy of the original MDP). On the other hand, when the cost-to-go oracle provides no information regarding V *, we have no choice but simply optimize the reshaped MDP (or just the original MDP) using RL over the entire planning horizon. With the above insight, we propose the high-level strategy for combining IL and RL, which we name Truncated HORizon Policy Search with cost-to-go oracle (THOR). The idea is to first shape the cost using the expert's cost-to-go oracleV e, and then truncate the planning horizon of the new MDP and search for a policy that optimizes over the truncated planning horizon. For discrete MDPs, we mathematically formulate this strategy and guarantee that we will find a policy that performs better than the expert with a gap that can be exactly quantified (which is missing in the previous work of BID5). In practice, we propose a gradient-based algorithm that is motivated from this insight. The practical algorithm allows us to leverage complex function approximators to represent policies and can be applied to continuous state and action spaces. 
We verify our approach on several MDPs with continuous state and action spaces and show that THOR can be much more sample efficient than strong RL baselines (we compared to Trust Region Policy Optimization with Generalized Advantage Estimation (TRPO-GAE) ), and can learn a significantly better policy than AGGREVATE (we compared to the policy gradient version of AGGREVATE from BID25) with access only to an imperfect cost-to-go oracle. Previous work has shown that truncating the planning horizon can in a tradeoff between accuracy and computational complexity. BID8 proposed a model-based RL approach that focuses on a search for policies that maximize a sum of k-step rewards with a termination value that approximates the optimal value-to-go. Their algorithm focuses on the model-based setting and the discrete state and action setting, as the algorithm needs to perform k-step value iteration to compute the policy. Another use of the truncated planning horizon is to trade off bias and variance. When the oracle is an approximation of the value function of the agent's current policy, by using k-step rollouts bottomed up by the oracle's return, truncating the planning horizon trades off bias and variance of the estimated reward-to-go. The bias-variance tradeoff has been extensively studied in Temporal Difference Learning literature BID27 and policy iteration literature as well BID9 BID15 is perhaps the closest to our work. In Theorem 5 in the Appendix of Ng's dissertation, Ng considers the setting where the potential function for reward shaping is close to the optimal value function and suggests that if one performs reward shaping with the potential function, then one can decrease the discount factor of the original MDP without losing the optimality that much. Although in this work we consider truncating the planning steps directly, Theorem 5 in Ng's dissertation and our work both essentially considers trading off between the hardness of the reshaped MDP (the shorter the planning horizon, the easier the MDP to optimize) and optimality of the learned policy. In addition to this tradeoff, our work suggests a path toward understanding previous imitation learning approaches through reward shaping, and tries to unify IL and RL by varying the planning horizon from 1 to infinity, based on how close the expert oracle is to the optimal value function. Another contribution of our work is a lower bound analysis that shows that performance limitation of AGGREVATE with an imperfect oracle, which is missing in previous work BID19. The last contribution of our work is a model-free, actor-critic style algorithm that can be used for continuous state and action spaces. We consider the problem of optimizing Markov Decision Process defined as M 0 = (S, A, P, C, γ). Here, S is a set of S states and A is a set of A actions; P is the transition dynamics at such that for any s ∈ S, s ∈ S, a ∈ A, P (s |s, a) is the probability of transitioning to state s from state s by taking action a. For notation simplicity, in the rest of the paper, we will use short notation P sa to represent the distribution P (·|s, a). The cost for a given pair of s and a is c(s, a), which is sampled from the cost distribution C(s, a) with mean valuec(s, a). A stationary stochastic policy π(a|s) computes the probability of generating action a given state s. 
The value function V π M0 and the state action cost-to-go Q π M0,h (s, a) of π on M 0 are defined as: DISPLAYFORM0 where the expectation is taken with respect to the randomness of M 0 and the stochastic policy π. DISPLAYFORM1 The objective is to search for the optimal policy π * such that π * = arg min π V π (s), ∀s ∈ S.Throughout this work, we assume access to an cost-to-go oracleV e (s): S → R. Note that we do not requireV e (s) to be equal to V * M0. For example,V e (s) could be obtained by learning from trajectories demonstrated by the expert π e (e.g., Temporal Difference Learning ), orV e could be computed by near-optimal search algorithms via access to ground truth information BID7 BID5 BID24 or via access to a simulator using Dynamic Programming (DP) techniques BID6 BID17. In our experiment, we focus on the setting where we learn aV e (s) using TD methods from a set of expert demonstrations. Given the original MDP M 0 and any potential functions Φ: S → R, we can reshape the cost c(s, a) sampled from C(s, a) to be: DISPLAYFORM0 Denote the new MDP M as the MDP obtained by replacing c by c in M 0: M = (S, A, P, c, γ). BID16 showed that the optimal policy π * M on M and the optimal policy π * M0 on the original MDP are the same: π * M (s) = π * M0 (s), ∀s. In other words, if we can successfully find π * M on M, then we also find π * M0, the optimal policy on the original MDP M 0 that we ultimately want to optimize. In IL, when given a cost-to-go oracle V e, we can use it as a potential function for cost shaping.. As cost shaping does not change the optimal policy, we can rephrase the original policy search problem using the shaped cost: DISPLAYFORM0 for all s ∈ S. Though Eq. 2 provides an alternative objective for policy search, it could be as hard as the original problem as DISPLAYFORM1, which can be easily verified using the definition of cost shaping and a telescoping sum trick. As directly optimizing Eq 2 is as difficult as policy search in the original MDP, previous IL algorithms such as AGGREVATE essentially ignore temporal correlations between states and actions along the planning horizon and directly perform a policy iteration over the expert policy at every state, i.e., they are greedy with respect to A e asπ(s) = arg min a A e (s, a), ∀s ∈ S. The policy iteration theorem guarantees that such a greedy policyπ performs at least as well as the expert. Hence, when the expert is optimal, the greedy policyπ is guaranteed to be optimal. However when V e is not the optimal value function, the greedy policyπ over A e is a one-step deviation improvement over the expert but is not guaranteed to be close to the optimal π *. We analyze in detail how poor the policy ing from such a greedy policy improvement method could be when V e is far away from the optimal value function in Sec. 3. In this section we study the dependency of effective planning horizon on the cost-to-go oracle. We focus on the setting where we have access to an oracleV e (s) which approximates the cost-to-go of some expert policy π e (e.g., V e could be designed by domain knowledge BID16 or learned from a set of expert demonstrations). We assume the oracle is close to V * M0, but imperfect: |V e − V * M0 | = for some ∈ R +. We first show that with such an imperfect oracle, previous IL algorithms AGGREVATE and AGGREVATE D BID19 BID25 are only guaranteed to learn a policy that is γ /(1−γ) away from the optimal. 
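For concreteness, the shaping operation and the induced one-step greedy policy just discussed can be sketched for a small tabular MDP as follows; this is a minimal NumPy sketch, and the function names and the matrix representations of P and c are our own choices.

```python
import numpy as np

def shape_costs(C, P, phi, gamma):
    """Potential-based cost shaping: c'(s, a) = c(s, a) + gamma * E_{s'~P_sa}[phi(s')] - phi(s).

    C: (S, A) mean-cost matrix, P: (S, A, S) transition tensor,
    phi: length-S potential (here the cost-to-go oracle V^e).
    """
    return C + gamma * (P @ phi) - phi[:, None]

def greedy_policy_from_oracle(C, P, v_e, gamma):
    """One-step greedy policy pi_hat(s) = argmin_a c(s, a) + gamma * E[V^e(s')],
    i.e. the best policy a one-step (AGGREVATE-style) improvement over the oracle can return."""
    q_e = C + gamma * (P @ v_e)
    return q_e.argmin(axis=1)
```

The next result quantifies how far this induced greedy policy can be from optimal when the oracle is imperfect.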
Let us define the expected total cost for any policy π as J(π) = E s0∼v V π M0 (s 0), measured under some initial state distribution v and the original MDP M 0.Theorem 3.1. There exists an MDP and an imperfect oracleV e (s) with |V e (s) − V * M0,h (s)| =, such that the performance of the induced policy from the cost-to-go oracleπ * = arg min a c(s, a) + γE s ∼Psa [V e (s)] is at least Ω(γ /(1 − γ)) away from the optimal policy π *: DISPLAYFORM0 The proof with the constructed example can be found in Appendix A. DenoteQ DISPLAYFORM1, in high level, we construct an example whereQ e is close to Q * in terms of Q e − Q * ∞, but the order of the actions induced byQ e is different from the order of the actions from Q *, hence forcing the induced policyπ * to make mistakes. As AGGREVATE at best computes a policy that is one-step improvement over the oracle, i.e.,π * = arg min a c(s, a) + γE s ∼Psa [V e (s)], it eventually has to suffer from the above lower bound. This gap in fact is not surprising as AGGREVATE is a one-step greedy algorithm in a sense that it is only optimizing the one-step cost function c from the reshaped MDP M. To see this, note that the cost of the reshaped DISPLAYFORM2, and we havê π * (s) = arg min a E[c (s, a)]. Hence AGGREVATE can be regarded as a special algorithm that aims to optimizing the one-step cost of MDP M that is reshaped from the original MDP M 0 using the cost-to-go oracle. Though when the cost-to-go oracle is imperfect, AGGREVATE will suffer from the above lower bound due to being greedy, when the cost-to-go oracle is perfect, i.e.,V e = V *, being greedy on one-step cost makes perfect sense. To see this, use the property of the cost shaping BID16, we can verify that whenV e = V *: DISPLAYFORM3 Namely the optimal policy on the reshaped MDP M only optimizes the one-step cost, which indicates that the optimal cost-to-go oracle shortens the planning horizon to one: finding the optimal policy on M 0 becomes equivalent to optimizing the immediate cost function on M at every state s. When the cost-to-go oracle is away from the optimality, we lose the one-step greedy property shown in Eq. 4. In the next section, we show that how we can break the lower bound Ω(/(1 − γ)) only with access to an imperfect cost-to-go oracleV e, by being less greedy and looking head for more than one-step. Given the reshaped MDP M withV e as the potential function, as we mentioned in Sec. 2.2, directly optimizing Eq. 2 is as difficult as the original policy search problem, we instead propose to minimize the total cost of a policy π over a finite k ≥ 1 steps at any state s ∈ S: DISPLAYFORM0 Using the definition of cost shaping and telescoping sum trick,we can re-write Eq. 5 in the following format, which we define as k-step disadvantage with respect to the cost-to-go oracle: DISPLAYFORM1 We assume that our policy class Π is rich enough that there always exists a policyπ * ∈ Π that can simultaneously minimizes the k−step disadvantage at every state (e.g., policies in tabular representation in discrete MDPs). Note that when k = 1, minimizing Eq. 6 becomes the problem of finding a policy that minimizes the disadvantage A e M0 (s, a) with respect to the expert and reveals AGGREVATE.The following theorem shows that to outperform expert, we can optimize Eq. 6 with k > 1. Let us denote the policy that minimizes Eq. 6 in every state asπ *, and the value function ofπ * as Vπ *. Theorem 3.2. Assumeπ * minimizes Eq. 6 for every state s ∈ S with k > 1 and |V e (s) − V * (s)| = Θ, ∀s. 
We have: DISPLAYFORM2 Compare the above theorem to the lower bound shown in Theorem 3.1, we can see that when k > 1, we are able to learn a policy that performs better than the policy induced by the oracle (i.e.,π * (s) = arg min aQ e (s, a)) by at least (DISPLAYFORM3 The proof can be found in Appendix B. Theorem 3.2 and Theorem 3.1 together summarize that when the expert is imperfect, simply computing a policy that minimizes the one-step disadvantage (i.e., (k = 1)) is not sufficient to guarantee near-optimal performance; however, optimizing a k-step disadvantage with k > 1 leads to a policy that guarantees to outperform the policy induced by the oracle (i.e., the best possible policy that can be learnt using AGGREVATE and AGGREVATED). Also our theorem provides a concrete performance gap between the policy that optimizes Eq. 6 for k > 1 and the policy that induced by the oracle, which is missing in previous work (e.g., BID5).As we already showed, if we set k = 1, then optimizing Eq. 6 becomes optimizing the disadvantage over the expert A e M0, which is exactly what AGGREVATE aims for. When we set k = ∞, optimizing Eq. 6 or Eq. 5 just becomes optimizing the total cost of the original MDP. Optimizing over a shorter Reset system. Execute π θn to generate a set of trajectories {τ i} N i=1. Reshape cost c (s t, a t) = c(s t, a t) + V e t+1 (s t+1) − V e t (s t), for every t ∈ [1, |τ i |] in every trajectory τ i, i ∈ [N]. Compute gradient: DISPLAYFORM0 8:Update disadvantage estimator to πn,k M using {τ i} i with reshaped cost c. Update policy parameter to θ n+1. 10: end for finite horizon is easier than optimizing over the entire infinite long horizon due to advantages such as smaller variance of the empirical estimation of the objective function, less temporal correlations between states and costs along a shorter trajectory. Hence our main theorem essentially provides a tradeoff between the optimality of the solutionπ * and the difficulty of the underlying optimization problem. Given the original MDP M 0 and the cost-to-go oracleV e, the reshaped MDP's cost function c is obtained from Eq. 1 using the cost-to-go oracle as a potential function. Instead of directly applying RL algorithms on M 0, we use the fact that the cost-to-go oracle shortens the effective planning horizon of M, and propose THOR: Truncated HORizon Policy Search summarized in Alg. 1. The general idea of THOR is that instead of searching for policies that optimize the total cost over the entire infinitely long horizon, we focus on searching for polices that minimizes the total cost over a truncated horizon, i.e., a k−step time window. Below we first show how we derive THOR from the insight we obtained in Sec. 3.Let us define a k-step truncated value function V π,k M and similar state action value function Q π,k M on the reshaped MDP M as: DISPLAYFORM0 At any time state s, V π,k M only considers (reshaped) cost signals c from a k-step time window. We are interested in searching for a policy that can optimizes the total cost over a finite k-step horizon as shown in Eq. 5. For MDPs with large or continuous state spaces, we cannot afford to enumerate all states s ∈ S to find a policy that minimizes the k−step disadvantage function as in Eq. 5. Instead one can leverage the approximate policy iteration idea and minimize the weighted cost over state space using a state distribution ν BID11 BID2: DISPLAYFORM1 For parameterized policy π (e.g., neural network policies), we can implement the minimization in Eq. 
10 using gradient-based update procedures (e.g., Stochastic Gradient Descent, Natural Gradient BID10 BID1) in the policy's parameter space. In the setting where the system cannot be reset to any state, a typical choice of exploration policy is the currently learned policy (possibly mixed with a random process BID12 to futher encourage exploration). Denote π n as the currently learned policy after iteration n and P r πn (·) as the average state distribution induced by executing π n (parameterized by θ n) on the MDP. Replacing the exploration distribution by P r πn (·) in Eq. 10, and taking the derivative with respect to the policy parameter θ, the policy gradient is: DISPLAYFORM2 where τ k ∼ π n denotes a partial k−step trajectory τ k = {s 1, a 1, ..., s k, a k |s 1 = s} sampled from executing π n on the MDP from state s. Replacing the expectation by empirical samples from π n, replacing Q π,k M by a critic approximated by Generalized disadvantage Estimator (GAE) π,k M, we get back to the gradient used in Alg. 1: DISPLAYFORM3 where |τ | denotes the length of the trajectory τ. If using the classic policy gradient formulation on the reshaped MDP M we should have the following expression, which is just a re-formulation of the classic policy gradient BID29: DISPLAYFORM0 which is true since the cost c i (we denote c i (s, a) as c i for notation simplicity) at time step i is correlated with the actions at time step t = i all the way back to the beginning t = 1. In other words, in the policy gradient format, the effectiveness of the cost c t is back-propagated through time all the way back the first step. Our proposed gradient formulation in Alg. 1 shares a similar spirit of Truncated Back-Propagation Through Time BID30, and can be regarded as a truncated version of the classic policy gradient formulation: at any time step t, the cost c is back-propagated through time at most k-steps: DISPLAYFORM1 In Eq. 13, for any time step t, we ignore the correlation between c t and the actions that are executed k-step before t, hence elimiates long temporal correlations between costs and old actions. In fact, AGGREVATE D BID25, a policy gradient version of AGGREVATE, sets k = 1 and can be regarded as No Back-Propagation Through Time. The above gradient formulation provides a natural half-way point between IL and RL. When k = 1 andV e = V * M0 (the optimal value function in the original MDP M 0): DISPLAYFORM0 where, for notation simplicity, we here use E τ to represent the expectation over trajectories sampled from executing policy π θ, and A π * M0 is the advantage function on the original MDP M 0. The fourth expression in the above equation is exactly the gradient proposed by AGGREVATED BID25. AGGREVATED performs gradient descent with gradient in the format of the fourth expression in Eq. 14 to discourage the log-likelihood of an action a t that has low advantage over π * at a given state s t.On the other hand, when we set k = ∞, i.e., no truncation on horizon, then we return back to the classic policy gradient on the MDP M obtained from cost shaping withV e. As optimizing M is the same as optimizing the original MDP M 0 , our formulation is equivalent to a pure RL approach on M 0. In the extreme case when the oracleV e has nothing to do with the true optimal oracle V *, as there is no useful information we can distill from the oracle and RL becomes the only approach to solve M 0. We evaluated THOR on robotics simulators from OpenAI Gym BID4. 
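Before the experimental details, the core computation behind Alg. 1, reshaping costs with the oracle and weighting action log-likelihoods by a k-step truncated cost-to-go, can be sketched as follows. This is a simplified Monte-Carlo version in PyTorch, without the GAE critic or trust-region update used in the full algorithm; the function names and the default gamma = 1 in the reshaping are our assumptions.

```python
import torch

def reshape_costs(costs, v_e, gamma=1.0):
    """c'(s_t, a_t) = c(s_t, a_t) + gamma * V^e(s_{t+1}) - V^e(s_t) along one trajectory.

    costs: tensor of length T; v_e: tensor of length T + 1 holding oracle values."""
    return costs + gamma * v_e[1:] - v_e[:-1]

def k_step_cost_to_go(shaped_costs, k):
    """Truncated cost-to-go: Q_t = sum of the next k reshaped costs from step t."""
    T = shaped_costs.shape[0]
    return torch.stack([shaped_costs[t:t + k].sum() for t in range(T)])

def thor_surrogate_loss(log_probs, costs, v_e, k, gamma=1.0):
    """Each log pi(a_t | s_t) is weighted by the k-step reshaped cost-to-go from step t;
    minimizing this surrogate follows the truncated-horizon policy gradient."""
    q = k_step_cost_to_go(reshape_costs(costs, v_e, gamma), k).detach()
    return (log_probs * q).mean()
```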
We evaluated THOR on robotics simulators from OpenAI Gym (BID4). Throughout this section we report reward instead of cost, since OpenAI Gym by default uses reward. The baselines we compare against are TRPO-GAE and AGGREVATED (BID25). To simulate oracles, we first train TRPO-GAE until convergence and use the resulting policy as an expert π^e. We then collect a batch of trajectories by executing π^e. Finally, we use TD learning to train a value function V̂^e that approximates V^e. In all our experiments we ignore π^e and only use the pre-trained V̂^e for reward shaping. Hence our experimental setting simulates the situation where we only have a batch of expert demonstrations available, and not the expert itself. This is a much harder setting than the interactive setting considered in previous work (BID20, BID25, BID5). Note that π^e is not guaranteed to be an optimal policy, and V̂^e is only trained on demonstrations from π^e; therefore the oracle V̂^e is only a coarse estimate of V*_{M_0}. Our goal is to show that, compared to AGGREVATED, THOR with k > 1 results in significantly better performance, and that, compared to TRPO-GAE, THOR with some k ≪ H converges faster and is more sample efficient. For a fair comparison to RL approaches, we do not pre-train the policy or the critic using demonstration data, although initialization from demonstration data is suggested in theory and has been used in practice to boost performance (BID20, BID3). For all methods we report statistics (mean and standard deviation) over 25 independently generated random seeds. For trust-region optimization on the actor π_θ and GAE on the critic, we simply use the recommended parameters from the TRPO-GAE code base; we did not tune any parameters except the truncation length k. We consider three discrete-action control tasks with sparse rewards: Mountain Car, Acrobot, and a modified sparse-reward version of CartPole. All simulations have sparse reward in the sense that no reward signal is given until the policy succeeds (e.g., Acrobot swings up). In these settings, pure RL approaches that rely on random exploration strategies suffer from the reward sparsity. Note that in our setting, where V̂^e is imperfect, THOR with k > 1 works much better than AGGREVATED (THOR with k = 1) on Acrobot. On Mountain Car, we observe that AGGREVATED achieves good performance in terms of the mean, but THOR with k > 1 (especially k = 10) results in a much higher mean+std, which means that once THOR receives the reward signal, it can leverage it to perform better than the oracle. We also show that THOR with k > 1 (but much smaller than H) can perform better than TRPO-GAE; in general, as k increases, we get better performance. We make the Acrobot setting even harder by setting H = 200, to further reduce the chance that a random policy receives a reward signal. From FIG1 we can see that THOR with different settings of k always learns faster than TRPO-GAE, and that THOR with k = 50 and k = 100 significantly outperforms TRPO-GAE in both mean and mean+std. This indicates that THOR can leverage both the reward signal (to perform better than AGGREVATED) and the oracle (to learn faster than, or even outperform, TRPO-GAE).
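Before moving to the continuous-control tasks, the oracle-construction step described above, fitting V̂^e by TD learning on a batch of expert demonstrations, can be sketched as follows; the tabular value table over hashable states, the learning rate, and the function name are illustrative simplifications rather than the function approximator actually used in the experiments.

```python
from collections import defaultdict

def fit_oracle_td0(expert_trajectories, gamma=0.99, lr=0.05, epochs=20):
    """Fit an approximate cost-to-go oracle V^e from expert demonstrations with TD(0).

    expert_trajectories: list of trajectories, each a list of
                         (state, cost, next_state, done) tuples collected by
                         executing the expert policy pi^e.
    Returns a dict mapping (hashable) states to estimated values.
    """
    v = defaultdict(float)
    for _ in range(epochs):
        for traj in expert_trajectories:
            for state, cost, next_state, done in traj:
                target = cost + (0.0 if done else gamma * v[next_state])
                v[state] += lr * (target - v[state])
    return v
```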
We tested our approach on simulators with continuous states and actions from MuJoCo: a modified sparse-reward Inverted Pendulum, a modified sparse-reward Inverted Double Pendulum, Hopper, and Swimmer. Note that, compared to the sparse-reward setting, Hopper and Swimmer do not suffer from reward sparsity, and policy gradient methods have shown great results on them (BID21, BID23). Also, due to the much larger and more complex state and control spaces compared to the simulations considered in the previous section, the value function estimator V̂^e is much less accurate as an estimate of V*_{M_0}, since the trajectories demonstrated by the expert may only cover a very small part of the state and control space. FIG2 shows the results of our approach. For all simulations, we require k to be around 20% to 30% of the original planning horizon H to achieve good performance. AGGREVATED (k = 1) learned very little due to the imperfect value function estimator V̂^e. We also tested k = H, where we observe that reward shaping with V̂^e gives better performance than TRPO-GAE. This empirical observation is consistent with the observation from BID16 (BID16 used SARSA, not policy-gradient-based methods). It indicates that even when V̂^e is not close to V*, policy gradient methods can still exploit the oracle V̂^e simply via reward shaping. Finally, we also observed that our approach significantly reduces the variance of the performance of the learned policies (e.g., Swimmer in FIG2) in all experiments, including the sparse-reward setting. This is because truncation can significantly reduce the variance of the policy gradient estimate when k is small compared to H. We propose a novel way of combining IL and RL through the idea of cost shaping with an expert oracle. Our theory indicates that cost shaping with the oracle shortens the learner's planning horizon as a function of how accurate the oracle is compared to the optimal policy's value function. Specifically, when the oracle is the optimal value function, we show that setting k = 1 recovers the previous imitation learning algorithm AGGREVATED. On the other hand, we show that when the oracle is imperfect, using a planning horizon k > 1 can produce a policy that outperforms the policy that would have been learned by AGGREVATE and AGGREVATED (i.e., k = 1). With this insight, we propose THOR (Truncated HORizon Policy Search), a gradient-based policy search algorithm that explicitly focuses on minimizing the total cost over a finite planning horizon. Our formulation provides a natural half-way point between IL and RL, and experimentally we demonstrate that, with a reasonably accurate oracle, our approach can outperform both RL and IL baselines. We believe our high-level idea of shaping the cost with the oracle and then optimizing over a shorter planning horizon is not limited to the practical algorithm proposed in this work. In fact, the idea can be combined with other RL techniques such as Deep Deterministic Policy Gradient (DDPG, BID12), which has the additional potential advantage of storing extra information from the expert, such as offline demonstrations, in its replay buffer (BID28). Though in our experiments we simply used the expert's demonstrations to pre-train V̂^e with TD learning, there are other possible ways to learn a more accurate V̂^e. For instance, if an expert is available during training (BID20), one can update V̂^e online by querying the expert for feedback.

A PROOF OF THEOREM 3.1

Figure 3: The special MDP constructed for Theorem 3.1.

Proof. We prove the theorem by constructing the special MDP shown in Fig. 3, where H = ∞. The MDP has deterministic transitions, 2H + 2 states, and each state has two actions a_1 and a_2, as shown in Fig. 3. Every episode starts at state s_0.
For the states s_i on the top line we have c(s_i) = 0, and for the states s′_i on the bottom line we have c(s′_i) = 1. It is clear that for any i ≥ 1 we have Q*(s_i, a_1) = 0, Q*(s_i, a_2) = γ, Q*(s′_i, a_1) = 1, and Q*(s′_i, a_2) = 1 + γ. Let us assume that we have an oracle V̂^e such that V̂^e(s_i) = 0.5 + δ and V̂^e(s′_i) = 0.5 − δ for some positive real number δ. Hence |V̂^e(s) − V*(s)| = 0.5 + δ for all s. Denoting Q̂^e(s, a) = c(s, a) + γ E_{s′∼P_{sa}}[V̂^e(s′)], we have Q̂^e(s_i, a_1) = γ(0.5 + δ), Q̂^e(s_i, a_2) = γ(0.5 − δ), Q̂^e(s′_i, a_1) = 1 + γ(0.5 + δ), and Q̂^e(s′_i, a_2) = 1 + γ(0.5 − δ). It is clear that the optimal policy π* has cost J(π*) = 0. Now let us compute the cost of the policy induced by the oracle Q̂^e, namely π̂(s) = argmin_a Q̂^e(s, a). As we can see, π̂ makes a mistake at every state, since argmin_a Q̂^e(s, a) ≠ argmin_a Q*(s, a). Hence J(π̂) = γ/(1 − γ). Recall that in our constructed example we have ε = 0.5 + δ. Now let δ → 0+ (by δ → 0+ we mean that δ approaches zero from the right); then ε → 0.5, and hence J(π̂) = γ/(1 − γ) = 2εγ/(1 − γ), which completes the proof.

Proof of Theorem 3.2. In this proof, for notational simplicity, we denote V^π_{M_0} as V^π for any π. Using the definition of the value function V^π, for any state s_1 ∈ S we have: DISPLAYFORM0
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
ryUlhzWCZ
Combining Imitation Learning and Reinforcement Learning to learn to outperform the expert
Recently, Generative Adversarial Networks (GANs) have emerged as a popular alternative for modeling complex high dimensional distributions. Most of the existing works implicitly assume that the clean samples from the target distribution are easily available. However, in many applications, this assumption is violated. In this paper, we consider the observation setting in which the samples from a target distribution are given by the superposition of two structured components, and leverage GANs for learning of the structure of the components. We propose a novel framework, demixing-GAN, which learns the distribution of two components at the same time. Through extensive numerical experiments, we demonstrate that the proposed framework can generate clean samples from unknown distributions, which further can be used in demixing of the unseen test images. In this paper, we consider the classical problem of separating two structured signals observed under the following superposition model: DISPLAYFORM0 where X ∈ X and N ∈ N are the constituent signals/components, and X, N ⊆ R p denote the two structured sets. In general the separation problem is inherently ill-posed; however, with enough structural assumption on X and N, it has been established that separation is possible. Depending on the application one might be interested in estimating only X (in this case, N is considered as the corruption), which is referred to as denoising, or in recovering (estimating) both X and N which is referred to as demixing. Both denoising and demixing arise in a variety of important practical applications in the areas of signal/image processing, computer vision, machine learning, and statistics BID7 BID11 BID3 BID6. Most of the existing demixing techniques assume some prior knowledge on the structures of X and N in order to recover the desired component signals. Prior knowledge about the structure of X and N can only be obtained if one has access to the generative mechanism of the signals and clean samples from the probability distribution defined over sets X and N. In many practical settings, none of these may be feasible. In this paper, we consider the problem of separating constituent signals from superposed observations when we do not have access to the clean samples from any of the distributions (fully unsupervised approach). In particular, we are given a set of superposed observations DISPLAYFORM1 where X i ∈ X and Y i ∈ N are i.i.d samples from their respective (unknowns) distributions. In this setup, we explore two questions: First, How can one learn the prior knowledge about the individual components from superposition samples?; hence, we concern with a learning problem. Second, Can we leverage the implicitly learned constituent distributions for tasks such as demixing of a test image?. As a , in the latter question, we deal with a inference task. Motivated by the recent success of generative models in high-dimensional statistical inference tasks such as compressed sensing in BID4, in this paper, we focus on Generative Adversarial Network (GAN) based generative models to implicitly learn an unknown distribution, i.e., generate samples from the learned distribution. 
Most of the existing works on GANs typically assume that clean samples from the target distribution are readily available.

2 PRIOR ART

To overcome the inherent ambiguity in problem (1), many existing methods assume that the structure of the sets X and N (e.g., low-rank matrices, or signals with a sparse representation in some domain, BID23) is known a priori, and also that the signals from X and N are "distinguishable" (BID10, BID27, BID9, BID12). Knowledge of the structures is a big restriction in many real-world applications. Recently, there have been some attempts to automate this hard-coding approach; among them, structured sparsity (BID17), dictionary learning (BID10), and, more generally, manifold learning are the prominent ones. While these approaches have been successful to some extent, they still cannot fully remove the need for prior structural knowledge. Over the last decade, deep neural networks have demonstrated that they can learn useful representations of real-world signals such as natural images, and have thus helped us understand the structure of high-dimensional signals, e.g., via deep generative models (BID30). In this paper, we focus on Generative Adversarial Networks (GANs) (BID14) as the generative models for implicitly learning the distribution of the constituent components. GANs have been established as a very successful tool for generating structured high-dimensional signals (BID2, BID31), as they do not directly learn a probability distribution; instead, they generate samples from the target distribution(s) (BID13). In particular, if we assume that the structured signals are drawn from a distribution lying on a low-dimensional manifold, GANs generate points in the high-dimensional space that resemble those coming from the true underlying distribution. Since their inception, there has been a flurry of work on GANs (BID35, BID33, BID29, to name a few). In most of the existing works on GANs, with a few notable exceptions such as BID32, BID5, BID20, BID16, and BID34, it is implicitly assumed that one has access to clean samples of the desired signal. However, in many practical scenarios, the desired signal is often accompanied by unwanted components. Recently, GANs have also been used for capturing the structure of high-dimensional signals, specifically for solving inverse problems such as sparse recovery, compressive sensing, and phase retrieval (BID4, BID20, BID16). For instance, BID4 showed that generative models provide a good prior for structured signals, e.g., natural images, in compressive sensing settings, compared to sparsity-based recovery methods. They rigorously analyze the statistical properties of generative-model-based compressed sensing and provide theoretical guarantees and experimental evidence to support their claims. However, they do not explicitly propose an optimization procedure for solving the recovery problem; they simply suggest using stochastic gradient-based methods in the low-dimensional latent space to recover the signal of interest. This has been addressed in BID26, where the authors propose a projected gradient descent algorithm that solves the recovery problem directly in the ambient space (the space of the desired signal). They provide theoretical guarantees for the convergence of their algorithm and also demonstrate improved empirical results over BID4. While GANs have found many applications, most of them need direct access to clean samples from the unknown distribution, which is not the case in many real applications such as medical imaging. The AmbientGAN framework (BID5) partially addresses this problem.
In particular, they studied various measurement models and showed that their GAN model can produce samples of clean signals from corrupted observations. However, AmbientGAN assumes that the observation model and its parameters are known. That is, it assumes access to samples of the corruption part, which is a strong restriction in real-world applications. One of our main contributions is addressing this limitation by studying the demixing problem: if we can learn the distribution of both components (e.g., generate samples from each of them), then we can use samples of the second component (the corruption part) for downstream tasks such as denoising, without explicitly needing samples from the corruption process. This is why our framework is a purely unsupervised approach. In addition, AmbientGAN only learns the distribution of the clean images; it has not been used for the task of image denoising itself (i.e., denoising an unseen corrupted image). Our framework addresses this issue in the more general scenario of demixing unseen test images.

3 AND PROPOSED IDEA

3.1 PRELIMINARIES

Generative Adversarial Networks (GANs), one of the most successful generative models in practice, were first introduced by BID14 for generating samples from an unknown target distribution. As opposed to other approaches to density estimation, such as Variational AutoEncoders (VAEs) (BID21), which try to learn the distribution itself, GANs are designed to generate samples from the target probability density function. This is done through a zero-sum game between two players, a generator G and a discriminator D, in which the generator G produces fake samples and the discriminator D plays the role of a cop trying to tell fake samples from genuine ones. Mathematically, this is accomplished through the following min-max optimization problem:

min_{θ_g} max_{θ_d}  E_{x∼D_x}[log D_{θ_d}(x)] + E_{z∼D_z}[log(1 − D_{θ_d}(G_{θ_g}(z)))],

where θ_g and θ_d are the parameters of the generator and discriminator networks, respectively, D_x denotes the target probability distribution, and D_z represents the probability distribution of the hidden variables z ∈ R^h, which is assumed to be either uniform on [−1, 1]^h or standard normal. One can also use the identity function instead of the log in the above expression; the resulting formulation is called WGAN (Arjovsky et al.). It has been shown that if G and D have enough capacity, then solving the above optimization problem by alternating stochastic gradient descent guarantees that the distribution D_g at the output of the generator converges to D_x. Having discussed the basic setup of GANs, we next present the proposed modifications to this setup that allow GANs to be used as generative models for demixing structured signals. In this section, we discuss our main contribution. First, we start with the learning problem (the first question in the introduction) and explain our GAN framework for learning the distributions of the two constituent components. Next, we move to the inference part (the second question in the introduction) and show how we can leverage the learned generative process for demixing a test mixed image. Finally, we provide some theoretical intuition about the success/failure of the demixing-GAN. FIG0 shows the GAN architecture we use for separating (demixing) two structured signals from their superposition.
As illustrated, we have used two generators and have fed them with two random noise vectors z 1 ∈ R h1 and z 2 ∈ R h2 according to a uniform distribution defined on a hyper-cube, where h 1, h 2 are less than the dimension of the input images. We also assume that they are independent of each other. Next, the output of generators are summed up and the is fed to the discriminator along with the superposition samples, y i s. In FIG0, we just show the output of each generator after training for an experiment case in which the mixed images consist of 64 MNIST binary image (for X part) and a second component constructed by random sinusoidal (for N part) (please see the experiment section for more details). Somewhat surprisingly, the architecture based on two generators can produce samples from the distribution of each component after enough number of training iterations. We note that this approach is fully unsupervised as we only have access to the mixed samples and nothing from the samples of constituent components is known. As mentioned above, this is in sharp contrast with the AmbientGAN. As a , the demixing-GAN framework can generate samples from both components (If for example the second component is drawn from a random sinusoidal, then the generated samples can be used in the task of denoising where the corruption components are sampled from highly structured sinusoidal waves). Now, we can use the trained generators in FIG0 for demixing of the constituent components for a given test mixed image which has not been used in training. To this end, we use our assumption that the components have some structure and the representation of this structure is given by the last layer of the trained generator. This observation together with this fact that in GANs, the low-dimension random vector z is representing the hidden variables, leads us to this point: in order to demix a new test mixed image, we have to find a hidden representations corresponding to each component which give the smallest distance to the constituent images in the space of G θg 1 and G θg 2 BID26 BID4. In other words, we have to solve the following optimization problem: DISPLAYFORM0 where u denotes the test mixed image. Now, each component can be estimated by evaluating DISPLAYFORM1 and G θg 2 (z 2) 1. While the optimization problem in is non-convex, we can still solve it through an alternative minimization fashion. We note that in optimization problem, we did not project on the box sets on which z 1 and z 2 lie on. Instead we have used regularizer terms in the objective functions (which are not meant as projection step). We empirically have observed that imposing these regularizers can help to obtain good quality images in our experiment; plus, they may help that the gradient flow to be close in the region of interest by generators. This is also used in BID4. Now, we provided some theoretical intuitions for the demixing-GAN. Recall that the superposition model is given by Y = X + N, and D y, D x and D n denote the distribution of Y, X, and N, respectively. Let DISPLAYFORM0 denotes the joint distribution of the hidden random vectors 1 G θg 1 and G θg 2 denote the first and second trained generator with parameter θg 1 and θg 2, respectively.with marginal probability as D zi for i = 1, 2. We note that in demxing setting there are not samples from the component N as opposed to the typical denoising scenarios. 
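Before turning to the theoretical analysis, the inference step described above, i.e., the optimization over (z_1, z_2) with 2-norm regularizers in place of box projections, can be made concrete with a small sketch; the alternating Adam updates, the latent dimensions, the regularization weights, and the initialization are illustrative assumptions, with the trained generators g1 and g2 treated as frozen PyTorch modules.

```python
import torch

def demix(u, g1, g2, h1=100, h2=100, lam1=1e-2, lam2=1e-2,
          outer_iters=50, inner_iters=5, lr=0.05):
    """Alternating minimization of
        ||g1(z1) + g2(z2) - u||^2 + lam1 ||z1||^2 + lam2 ||z2||^2
    over the latent codes z1, z2, for a test mixed image u.
    g1 and g2 are the trained (frozen) generators; returns the estimated components."""
    z1 = (0.1 * torch.randn(1, h1)).requires_grad_()
    z2 = (0.1 * torch.randn(1, h2)).requires_grad_()
    opt1 = torch.optim.Adam([z1], lr=lr)
    opt2 = torch.optim.Adam([z2], lr=lr)

    def loss():
        residual = g1(z1) + g2(z2) - u
        return (residual ** 2).sum() + lam1 * (z1 ** 2).sum() + lam2 * (z2 ** 2).sum()

    for _ in range(outer_iters):
        for _ in range(inner_iters):   # update z1 with z2 held fixed
            opt1.zero_grad(); l = loss(); l.backward(); opt1.step()
        for _ in range(inner_iters):   # update z2 with z1 held fixed
            opt2.zero_grad(); l = loss(); l.backward(); opt2.step()

    with torch.no_grad():
        return g1(z1), g2(z2)          # estimated constituent components
```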
Now we have the following mini-max loss:

min_{θ_g1, θ_g2} max_{θ_d}  E_{y∼D_y}[log D_{θ_d}(y)] + E_{z_1∼D_{z_1}, z_2∼D_{z_2}}[log(1 − D_{θ_d}(G_{θ_g1}(z_1) + G_{θ_g2}(z_2)))].

Following the standard GAN framework, for fixed G_1 and G_2 the optimal discriminator is D*(y) = D_y(y) / (D_y(y) + D_g(y)), where D_g denotes the distribution of the generated sum G_{θ_g1}(z_1) + G_{θ_g2}(z_2). Hence, the optimal discriminator can be written as D* = (D_x ∗ D_n) / ((D_x ∗ D_n) + (D_{g_1} ∗ D_{g_2})), since Y = X + N implies D_y = D_x ∗ D_n and, likewise, D_g = D_{g_1} ∗ D_{g_2} (∗ denotes the convolution operator). This means that the global optimum of the above optimization problem is achieved iff D_x ∗ D_n = D_{g_1} ∗ D_{g_2}. However, this condition is generally an ill-posed equation: it does not imply that D_x = D_{g_1} and D_n = D_{g_2}. In the best case, we can hope to determine the distributions uniquely only up to a permutation (a similar ambiguity arises in the ICA method). This is exactly where we need some notion of incoherence between the two constituent structures X and N. The question, then, is: under what incoherence condition do we obtain a well-posed equation? Taking the Fourier transform of this optimality condition (equivalently, using characteristic functions Φ), we obtain Φ_x Φ_n = Φ_{g_1} Φ_{g_2}, and, from Y = X + N, Φ_y = Φ_x Φ_n. For Φ_x = Φ_y / Φ_n to be well-defined, a straightforward condition is that Φ_n be non-zero almost everywhere. As a result, even if we somehow figure out the right incoherence condition, D_x is uniquely determined (given D_y and D_n) only if the Fourier transform of D_n is non-zero. While we currently do not have a complete answer to the above question, we conjecture that, in addition to the incoherence issue in the signal domain, the hidden space (z-space) of the two generators plays an important role in making the demixing problem solvable. We investigate this idea empirically in the experiments section. In this section, we present various experiments showing the efficacy of the proposed framework (depicted in FIG0) in two different setups. First, we focus on learning the structured distributions from the superposed observation samples. Next, we explore the use of the generative models obtained from the proposed GAN framework in an inference task. In all the following experiments, we did our best to choose reasonable hyper-parameters. In this section, we present the results of our experiments on learning the distributions of the constituent components on different datasets. We first present our experiments with the MNIST dataset, and then show a similar set of experiments with the Fashion-MNIST (F-MNIST) dataset (BID15). Next, we illustrate the performance of the demixing-GAN on Quick-Draw (Qui, a). Finally, we present an experimental investigation of the conditions under which the demixing-GAN fails. We start the experiments by considering four sets of constituent components. We use network architectures for the discriminator and the generators similar to those proposed in DCGAN (BID25). DCGAN is a CNN-based GAN consisting of convolutional layers followed by batch normalization (except for the last layer of the generator and the first layer of the discriminator). In the first two experiments in this section, we use the superposition of MNIST digits and some random corruption signals. That is, we mix the MNIST digits with two (structured) corruption signals: random sinusoidal waves, and random vertical and horizontal lines. In particular, we first generate random sinusoidal waves in which the amplitude, frequency, and phase are random, and second we construct the random vertical and horizontal lines.
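For reference, one alternating training update for the two-generator architecture and mini-max loss above, as used throughout the experiments that follow, might look like the sketch below; the non-saturating generator loss, the logit-output discriminator, the single optimizer over both generators' parameters, and the module names are illustrative assumptions rather than the exact training code.

```python
import torch
import torch.nn.functional as F

def demixing_gan_step(y_real, g1, g2, d, opt_g, opt_d, h1=100, h2=100):
    """One alternating update of the two-generator demixing GAN.

    y_real : batch of observed superposition images Y = X + N
    g1, g2 : generator networks; d: discriminator returning a (batch, 1) logit
    opt_g  : optimizer over the parameters of both g1 and g2; opt_d: for d
    Fake superpositions are formed as g1(z1) + g2(z2) and compared against y_real.
    """
    b = y_real.shape[0]
    z1 = 2 * torch.rand(b, h1) - 1            # uniform on [-1, 1]^h, as in the paper
    z2 = 2 * torch.rand(b, h2) - 1

    # Discriminator update: real superpositions vs. summed generator outputs.
    y_fake = (g1(z1) + g2(z2)).detach()
    d_loss = F.binary_cross_entropy_with_logits(d(y_real), torch.ones(b, 1)) + \
             F.binary_cross_entropy_with_logits(d(y_fake), torch.zeros(b, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update (non-saturating loss): both generators receive the same signal.
    y_fake = g1(z1) + g2(z2)
    g_loss = F.binary_cross_entropy_with_logits(d(y_fake), torch.ones(b, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```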
In FIG1, we show the training evolution of two fixed random vectors, z 1 and z 2 in R 100 in the output of two generators. In the top left panel, we have added one random sinusoidal wave to the clean digits. As we can see, our proposed GAN architecture can learn two distributions and generate samples from each of them. In the bottom left panel, we repeat the same experiment with random vertical and horizontal lines as the second component (two random vertical and two random horizontal lines are added to the clean digits). While there is some notion of mode collapse, still two generators can produce the samples from the distribution of the constituent components. For the third experiment in this section, our mixed images comprise of two MNIST digits from 0 to 9. In this case, we are interested in learning the distribution from which each of the digits is drawn. Similar to the previous cases, the top right panel in FIG1 shows the evolution of two fixed random vectors, z 1 and z 2. As we can see, after 32 epochs, the output of the generators would be the samples of MNIST digits. Finally, in the last experiment of this section, we generate the mixed images as the superposition of digits 1 and 2. In the training set of MNIST dataset, there are around 6000 samples from each digit of 1 and 2. We have used these digits to form the set of superposition images. The bottom right panel of FIG1 shows the output of two generators, which can learn the distribution of two digits. The interesting point in these experiments is that each GAN can learn the different variety of existing digits in MNIST training dataset, and we typically do not see mode collapse, which is a major problem in the training of GANs BID13. In this section, we illustrate the performance of the proposed demixing-GAN for F-MNIST dataset. The training dataset in F-MNIST includes 60000 gray-scale images with size of 28 × 28 classified in 10 classes. The different labels denote objects, which include T-shirt/top, Trouser, Pullover, Dress, Coat, Sandal, Shirt, Sneaker, Bag, and Ankle boot. Similar to the experiments with MNIST dataset being illustrated in FIG1, we train the demixing-GAN where we have used InfoGAN architecture for the generators. The architecture of the generators in InfoGAN is very similar to the DCGAN discussed above with the same initialization procedure. The dimension of input noise to the generators is set to 62. We have also used the same discriminator in DCGAN. Left panel in Figure 3 shows the output of two generators, which can learn the distribution of dress and bag images during 21 epochs from mixed dress and bag images. Figure 3: Left Panel: evolution of output samples by two generators for fixed z1 and z2. The mixed images comprise only two objects, dress, and bag in the training F-MNIST dataset. One generator produces the samples from dress distribution, while the other one outputs the samples from the bag distribution. Right panel: the performance of the trained generators for demixing of two constituent components (inference). The first two columns are the ground-truth components. The third column is the ground-truth mixed image and the last two columns denote the recovered components. The first row uses the same generator trained for only one digit (drawn from MNIST test dataset) and a random sinusoidal. The second row uses the generator trained only for digits 1 and 2. The last row shows the of demixing with ICA method. 
We now test the performance of two trained generators in a demixing scenario for the test mixed images, which have not been seen in the training time. Right panel in Figure 3 shows our experiment in which we have illustrated the demixing on three different test mixed images. Here, we have compared the performance of the demixing-GAN with Independent component analysis (ICA) method BID18 ) (We have implemented ICA using Scikit-learn module BID24). In the top and middle rows of Figure 3 (right panel), we consider the mixed images generated by adding a digit (drawn from MNIST test dataset) and a random sinusoidal. Then the goal is to separate (demix) these two from their given sum. To do this, we use the GAN trained for learning the distribution of digits and sinusoidal waves (the top left panel of FIG1) and solve the optimization problem in through an alternative minimization approach. As a , we obtain z 1 and z 2. The corresponding constituent components is then obtained by evaluating G θg 1 (z 1) and G θg 2 (z 2). In the right panel of Figure 3, the first two columns denote the ground-truth of the constituent components. The middle one is the mixed ground-truth, and the last two show the recovered components using demixing-GAN and ICA. In the last row, digits 1 and 2 drawn from the MNIST test dataset, are added to each other and we apply the GAN trained for learning the distribution of digits 1 and 2 (bottom right panel in FIG1). As we can see, our proposed GAN can separate two digits; however, ICA method fails in demixing of two components. In addition, Table 1 has compared numerically the quality of recovered components with the corresponding ground-truth ones through mean square error (MSE) and Peak Signal-to-Noise Ratio (PSNR) criteria. Now, we evaluate the performance of the trained demixing-GAN on the F-MNIST dataset. For each panel in Figure 4, the first two columns denote two objects from F-MNIST test dataset as the ground-truth components. The third column is the ground-truth mixed image, and the last two columns show the recovered constituent components similar to the previous case. The top left uses the generator trained for only two objects for 20 epochs. The top right uses the generator trained for all 10 objects for 20 epochs. The bottom left uses the same generator trained for only two objects for 30 epochs. The bottom right shows the of demixing with ICA method. As we can see, ICA fails to separate the components (images of F-MNIST) from each other, while the proposed demixing-GAN can separate the mixed images from each other. While the estimated image components are not exactly matched to the ground-truth ones (first two columns), they are semantically similar to the ground-truth components. In this section, we explore the performance of the demixing-GAN when the superposed images is the sum of digits 8 from MNIST dataset and dresses from the F-MNIST dataset. The experiment for this setup has been illustrated in the left panel of FIG2. Since our goal is to separate dress from the digit 8, for the first generator, we have used the InfoGAN architecture being used in the experiment demixing-GAN demixing-GAN demixing-GAN ICA Method Figure 4: Performance of the trained generators for demixing of two constituent components. In all panels, the first two columns are the ground-truth components. The third column is the ground-truth mixed image and the last two columns denote the recovered components. The top left uses the generator trained for only two objects for 20 epochs. 
The top right uses the generator trained for all 10 objects for 20 epochs. The bottom left uses the same generator trained for only two objects for 30 epochs. The bottom right shows the of demixing with ICA method.in section 4.1.2 and similarly the DCGAN architecture for the second generator as section 4.1.1. As a , the input noise to the first generator is drawn uniformly from [−1, 1] 62, and uniformly from [−1, 1] 100 for the second generator. The left panel of FIG2 shows the evolution of output samples by two generators for fixed z 1 and z 2. As we can see, after 21 epoch, the first generator is able to generate dress samples and the second one outputs samples of digit 8. Similar to the previous testing scenarios, we now evaluate the performance of the demixing-GAN in comparison with ICA for separating a test image which is a superposition of a digit 8 drawn randomly from MNIST test dataset and dress object drawn randomly from F-MNIST test dataset. Right panel in FIG2 shows the performance of the demixng-GAN and ICA method. As we can see, ICA fails to demix two images from their superposition, whereas the demixing-GAN is able to separate digit 8 very well and to some extend the dress object from the input superposed image. MSE and PSNR values for the first component using ICA recovery method is given by 0.40364 and 3.94005, respectively. Also, MSE and PSNR for the first component using ICA recovery method is given by 0.15866 and 7.99536, respectively. In this section, we present our demixing framework for another dataset, Quick-Draw dataset (Qui, a) released by Google recently. The Quick Draw Dataset is a collection of 50 million drawings categorized in 345 classes, contributed by players of the game Quick, Draw! Qui (b). For the the experiment in the left panel of FIG3, we consider only two objects, face, and flower in the Quick Draw Dataset (training set includes 16000 images of size 28 × 28 for each class). As a , the input mixed images are the superposition of different faces and flowers. The left panel of FIG3 shows the evolution of the random vectors z 1 and z 2 (drawn uniformly from [−1, 1] 64 ). As we can see, after 31 epochs, one generator can produce various kind of faces, while the other one generates different shapes of flowers. Now, we consider a more challenging scenario in which the constituent components in the mixed images are just airplane shapes. That is, we randomly select the airplane shapes from 16000 images in the training set, and add them together to construct the input mixed images. We have been noticed that in the 16000 images of the airplane shapes, in general, there are two structures. One is related to the airplanes having been drawn by the players in more simple and somehow flat manner (they are mostly similar to an ellipse with or without wings) in the Quick, Draw game, while the second one consists of the more detailed shapes (they have a tail and maybe with different orientation). Right panel of FIG3 depicts the performance of the demixing-GAN for this setup. One surprising point is that while both components in the superposition are drawn from one class (e.g., airplane shapes), the demixing-GAN is still able to demix the hidden structure in the airplane distribution. Thus, we think that just having the same distribution for both of the constituent components is not necessarily a barrier for demixing performance. 
We conjecture that somehow different features of the shapes drawn from the same distribution makes demixing possible by forcing the enouth incoherence between the components. As we can see, after 31 epochs, both generators can learn two mentioned structures, and regarding two structures, they can cluster the shape of airplanes into two types. In this section, we empirically explore our observation about the failure of the demixing-GAN. We focus on two spaces, hidden space (z-space) and signal or generator space (the output of generators) in discovering the failure of demixing-GAN.Our first observation concerns the z-space. We observe that if the hidden vectors form z-space of two generators are aligned to each other, then the two generators cannot output the samples in the signal space, representing the distribution of the constituent components. To be more precise, in the left panel of Figure 7, we consider separating digits 8 and 2 from their superpositions. However, here, we feed both generators with the same vector, i.e., z 1 = z 2 in each batch (this is considered as the extreme case where precisely the hidden variables equal to each other) and track the evolution of the output samples generated by both generators. As we can see, even after 21 epochs, the generated samples by both generators are an unclear combination of both digits 2 and 8, and they are not separated clearly as opposed to the case when we feed the generators with i.i.d random vectors. We also repeat the same experiment with two aligned vectors z 1 and z 2, i.e., z 2 = 0.1z 1, the right panel of Figure 7 shows the evolution of the output samples generated by both generators for this setup. As shown in this experiment, two generators cannot learn the distribution of digits 8 and 2. While we do not currently have a mathematical argument for this observation, we conjecture that the hidden space (z-space) is one of the essential pieces in the demixing performance of the proposed demixing-GAN. We think that having (random) independent or close orthogonal vector z's for the input of each generator is a necessary condition for the success of learning the distribution of the constituent components, and consequently demixing of them. Further investigation of this line of study is indeed an interesting research direction, and we defer it for future research. In addition to the hidden space, here we design some experiments in the generator space that reveals the condition under which the demixing is failed. In particular, we consider the airplane images in Quick-Draw dataset. To construct the input mixed images, we consider randomly chosen images of the airplane from 16000 images as the first component. Then, the second component is constructed by rotating exactly the same one in the first components in a counterclockwise direction. We consider 4 different rotations, 0 DISPLAYFORM0 •. This experiment is sort of similar to the one in the right panel of FIG3 in which we have seen that demixing-GAN can capture the internal structure in the airplane dataset by clustering them into two types. Now we perform the demixing-GAN on these datasets. FIG4 illustrated the the evolution of the generators for various rotation degrees. The top left panel shows the case in which exactly both components are the same. Obviously, the demixing, in this case, is impossible as there is no hope to distinguish the components from each other. Moving on, in the other panels of FIG4, we have different rotation settings. 
As we can see, once we move to 90°, both generators can capture samples from the airplane distribution, though not as clearly as in the case where we added randomly chosen airplane shapes to construct the input mixed images. We conjecture that changing the orientation of one component makes it incoherent, to some extent, from the other component, and consequently makes demixing possible. In other words, we again see that when the two images show some distinguishable structure (here, the first component is an unrotated object and the other is the same object rotated 90° counterclockwise), the demixing-GAN can capture these structures. In this paper, we considered a GAN framework for learning the structure of the constituent components in a superposition observation model. We empirically showed that it is possible to implicitly learn the underlying distribution of each component and to use the learned generators in downstream tasks such as demixing of a test mixed image. We also investigated, through extensive experiments, the conditions under which the proposed demixing framework fails, and provided some theoretical insights.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
BygbVL8KO4
An unsupervised learning approach for separating two structured signals from their superposition