source (sequence) | source_labels (sequence) | rouge_scores (sequence) | paper_id (string, 9–11 chars) | ic (unknown) | target (sequence) |
---|---|---|---|---|---|
[
"This paper explores the scenarios under which\n",
"an attacker can claim that ‘Noise and access to\n",
"the softmax layer of the model is all you need’\n",
"to steal the weights of a convolutional neural network\n",
"whose architecture is already known.",
"We\n",
"were able to achieve 96% test accuracy using\n",
"the stolen MNIST model and 82% accuracy using\n",
"stolen KMNIST model learned using only\n",
"i.i.d. Bernoulli noise inputs.",
"We posit that this\n",
"theft-susceptibility of the weights is indicative\n",
"of the complexity of the dataset and propose a\n",
"new metric that captures the same.",
"The goal of\n",
"this dissemination is to not just showcase how far\n",
"knowing the architecture can take you in terms of\n",
"model stealing, but to also draw attention to this\n",
"rather idiosyncratic weight learnability aspects of\n",
"CNNs spurred by i.i.d. noise input.",
"We also disseminate\n",
"some initial results obtained with using\n",
"the Ising probability distribution in lieu of the i.i.d.\n",
"Bernoulli distribution",
"In this paper, we consider the fate of an adamant attacker who is adamant about only using noise as input to a convolutional neural network (CNN) whose architecture is known and whose weights are the target of theft.",
"We assume that the attacker has earned access to the softmax layer and is not restricted in terms of the number of inputs to be used to carry out the attack.",
"At the outset, we'd like to emphasize that our goal in disseminating these results is not to convince the reader on the real-world validity of the attacker-scenario described above or to showcase a novel attack.",
"This paper contains our initial explorations after a chance discovery that we could populate the weights of an MNIST-trained CNN model by just using noise as input into the framework described below.Preliminary work.",
"Under review by the International Conference on Machine Learning (ICML).",
"Do not distribute.Through a set of empirical experiments, which we are duly open sourcing to aid reproducibility, we seek to draw the attention of the community on the following two issues:1.",
"This risk of model weight theft clearly entails an interplay between the dataset as well as the architecture.",
"Given a fixed architecture, can we use the level of susceptibility as a novel metric of complexity of the dataset?2",
".",
"Given the wide variations in success attained by varying the noise distribution, how do we formally characterize the relationship between the input noise distribution being used by the attacker and the true distribution of the data, while considering a specific CNN architecture?",
"What aspects of the true data distribution are actually important for model extraction?The",
"rest of the paper is structured as follows:In Section 2, we provide a brief literature survey of the related work. In",
"Section 3, we describe the methodology used to carry out the attack. In",
"Section 4, we cover the main results obtained and conclude the paper in Section 5.",
"In this paper, we demonstrated a framework for extracting model parameters by training a new model on random impulse response pairs gleaned from the softmax output of the victim neural network.",
"We went on to demonstrate the variation in model extractability based on the dataset which the original model was trained on.",
"Finally, we proposed our framework as a method for which relative dataset complexity can be measured."
] | [
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.11764705181121826,
0,
0.21052631735801697,
0.31578946113586426,
0,
0,
0.1111111044883728,
0.1249999925494194,
0.13333332538604736,
0,
0.25,
0.11764705181121826,
0.1249999925494194,
0,
0,
0.10526315122842789,
0,
0,
0.11764705181121826,
0,
0,
0.09999999403953552,
0.1860465109348297,
0.11428570747375488,
0.04999999701976776,
0.1395348757505417,
0.09999999403953552,
0.054054051637649536,
0.07692307233810425,
0.07692307233810425,
0.09302325546741486,
0.08695651590824127,
0.06896550953388214,
0.09090908616781235,
0.08695651590824127,
0.10526315122842789,
0.07692307233810425,
0
] | H1le3y356N | true | [
"Input only noise , glean the softmax outputs, steal the weights"
] |
[
"We propose a new form of an autoencoding model which incorporates the best properties of variational autoencoders (VAE) and generative adversarial networks (GAN).",
"It is known that GAN can produce very realistic samples while VAE does not suffer from mode collapsing problem.",
"Our model optimizes λ-Jeffreys divergence between the model distribution and the true data distribution.",
"We show that it takes the best properties of VAE and GAN objectives.",
"It consists of two parts.",
"One of these parts can be optimized by using the standard adversarial training, and the second one is the very objective of the VAE model.",
"However, the straightforward way of substituting the VAE loss does not work well if we use an explicit likelihood such as Gaussian or Laplace which have limited flexibility in high dimensions and are unnatural for modelling images in the space of pixels.",
"To tackle this problem we propose a novel approach to train the VAE model with an implicit likelihood by an adversarially trained discriminator.",
"In an extensive set of experiments on CIFAR-10 and TinyImagent datasets, we show that our model achieves the state-of-the-art generation and reconstruction quality and demonstrate how we can balance between mode-seeking and mode-covering behaviour of our model by adjusting the weight λ in our objective.",
"Variational autoencoder (VAE) (Kingma et al., 2014; Rezende et al., 2014; Titsias & Lázaro-Gredilla, 2014 ) is one of the most popular approaches for modeling complex high-dimensional distributions.",
"It has been applied successfully to many practical problems.",
"It has several nice properties such as learning low-dimensional representations for the objects and ability to conditional generation.",
"Due to an explicit reconstruction term in its objective, one may ensure that VAE can generate all objects from the training set.",
"These advantages, however, come at a price.",
"It is a known fact that VAE tends to generate unrealistic objects, e.g., blurred images.",
"Such behaviour can be explained by the properties of a maximum likelihood estimation (MLE) which is used to fit a restricted VAE model p θ (x) in data that comes from a complex distribution p with an equiprobable mixture of two Gaussians with learnable location and scale.",
"Plots a)-c) show pairwise comparisons of optimal log-densities, the plot d) compares optimal densities themselves.",
").",
"This way, we encourage our model to be mode-seeking while still having relatively high values of p θ (x) on all objects from a training set, thus preventing the mode-collapse.",
"We note that J λ (p θ (x) p * (x)) is not symmetric with respect to p θ (x) and p * (x) and by the weight λ we can balance between mode-seeking and mass-covering behaviour.",
"However, the straightforward way of substituting each KL term with GAN and VAE losses does not work well in practice if we use an explicit likelihood for object reconstruction in VAE objective.",
"Such simple distributions as Gaussian or Laplace that are usually used in VAE have limited flexibility and are unnatural for modelling images in the space of pixels.",
"To tackle this problem we propose a novel approach to train the VAE model in an adversarial manner.",
"We show how we can estimate the implicit likelihood in our loss function by an adversarially trained discriminator.",
"We theoretically analyze the introduced loss function and show that under assumptions of optimal discriminators, our model minimizes the λ-Jeffreys divergence J λ (p θ (x) p * (x)) and we call our method as Implicit λ-Jeffreys Autoencoder (λ-IJAE).",
"In an extensive set of experiments, we evaluate the generation and reconstruction ability of our model on CIFAR10 (Krizhevsky et al., 2009) and TinyImagenet datasets.",
"It shows the state-of-the-art trade-off between generation and reconstruction quality.",
"We demonstrate how we can balance between the ability of generating realistic images and the reconstruction ability by changing the weight λ in our objective.",
"Based on our experimental study we derive a default choice for λ that establishes a reasonable compromise between mode-seeking and mass-covering behaviour of our model and this choice is consistent over these two datasets.",
"In the paper, we considered a fusion of VAE and GAN models that takes the best of two worlds: it has sharp and coherent samples and can encode observations into low-dimensional representations.",
"We provide a theoretical analysis of our objective and show that it is equivalent to the Jeffreys divergence.",
"In experiments, we demonstrate that our model achieves a good balance between generation and reconstruction quality.",
"It confirms our assumption that the Jeffreys divergence is the right choice for learning complex high-dimensional distributions in the case of the limited capacity of the model.",
"Proof.",
"Now we will show the second term is equal to zero given our assumptions:",
"Where we have used the (1) and (2) properties of the likelihoods r(x|y) (Definition 1):",
"To generate the plot 1 we considered the following setup: a target distribution was a mixture:",
"While the model as an equiprobable mixture of two learnable Gaussians:",
"The optimal θ was found by making 10,000 stochastic gradient descent iterations on Monte Carlo estimations of the corresponding divergences with a batch size of 1000.",
"We did 50 independent runs for each method to explore different local optima and chose the best one based on a divergence estimate with 100,000 samples Monte Carlo samples."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
1,
0,
0.1818181723356247,
0.34285715222358704,
0.07407407462596893,
0.23255813121795654,
0.1666666567325592,
0.22727271914482117,
0.17241378128528595,
0.1249999925494194,
0,
0.14999999105930328,
0.09090908616781235,
0.06896551698446274,
0.05128204822540283,
0.25,
0.1111111044883728,
0.1538461446762085,
0.11999999731779099,
0.1538461446762085,
0.12765957415103912,
0.29999998211860657,
0.14999999105930328,
0.17543859779834747,
0.21739129722118378,
0.1249999925494194,
0.1818181723356247,
0.1538461446762085,
0.19999998807907104,
0.25,
0.15789473056793213,
0.13636362552642822,
0.0555555522441864,
0.2222222238779068,
0.1111111044883728,
0.24242423474788666,
0.12765957415103912,
0.19999998807907104
] | Syxc1yrKvr | true | [
"We propose a new form of an autoencoding model which incorporates the best properties of variational autoencoders (VAE) and generative adversarial networks (GAN)"
] |
[
"Learning-to-learn or meta-learning leverages data-driven inductive bias to increase the efficiency of learning on a novel task.",
"This approach encounters difficulty when transfer is not mutually beneficial, for instance, when tasks are sufficiently dissimilar or change over time.",
"Here, we use the connection between gradient-based meta-learning and hierarchical Bayes to propose a mixture of hierarchical Bayesian models over the parameters of an arbitrary function approximator such as a neural network.",
"Generalizing the model-agnostic meta-learning (MAML) algorithm, we present a stochastic expectation maximization procedure to jointly estimate parameter initializations for gradient descent as well as a latent assignment of tasks to initializations.",
"This approach better captures the diversity of training tasks as opposed to consolidating inductive biases into a single set of hyperparameters.",
"Our experiments demonstrate better generalization on the standard miniImageNet benchmark for 1-shot classification.",
"We further derive a novel and scalable non-parametric variant of our method that captures the evolution of a task distribution over time as demonstrated on a set of few-shot regression tasks.",
"Meta-learning algorithms aim to increase the efficiency of learning by treating task-specific learning episodes as examples from which to generalize BID39 .",
"The central assumption of a meta-learning algorithm is that some tasks are inherently related and so inductive transfer can improve generalization and sample efficiency BID4 BID2 .",
"Recent metalearning algorithms have encoded this assumption by learning global hyperparameters that provide a task-general inductive bias.",
"In learning a single set of hyperparameters that parameterize, for example, a metric space BID46 or an optimizer for gradient descent BID31 BID8 , these meta-learning algorithms make the assumption that tasks are equally related and therefore mutual transfer is appropriate.",
"This assumption has been cemented in recent few-shot learning benchmarks, which consist of a set of tasks generated in a systematic manner (e.g., BID8 BID46 .However",
", the real world often presents scenarios in which an agent must decide what degree of transfer is appropriate. In the",
"case of positive transfer, a subset of tasks may be more strongly related to each other and so non-uniform transfer poses a strategic advantage. Negative",
"transfer in the presence of dissimilar or outlier tasks worsens generalization performance BID34 . Moreover",
", when the underlying task distribution is non-stationary, inductive transfer to initial tasks should exhibit graceful degradation to address the catastrophic forgetting problem BID16 . However",
", the consolidation of all inductive biases into a single set of hyperparameters cannot flexibly account for variability in the task distribution. In contrast",
", in order to deal with this degree of task heterogeneity, extensive task-switching literature reveals that people detect and readily adapt even in the face of significantly novel contexts (see BID5 , for a review).In this work",
", we learn a mixture of hierarchical models that allows the meta-learner to adaptively select over a set of learned parameter initializations for gradient-based fast adaptation BID8 to a new task. The method",
"is equivalent to clustering task-specific parameters in the hierarchical model induced by recasting gradient-based meta-learning as hierarchical Bayes BID13 and generalizes the model-agnostic meta-learning (MAML) algorithm introduced in BID8 .By treating",
"the assignment of task-specific parameters to clusters as latent variables in a probabilistic model, we can directly detect similarities between tasks on the basis of the task-specific likelihood, which may be parameterized by a black-box model such as a neural network. Our approach",
"therefore alleviates the need for explicit geometric or probabilistic modelling assumptions about the weights of a parametric model and provides a scalable method to regulate information transfer between episodes.We extend our latent variable model to the non-parametric setting and leverage stochastic point estimation for scalable inference in a Dirichlet process mixture model (DPMM) BID30 . To the best",
"of our knowledge, no previous work has considered a scalable stochastic point estimation in a non-parametric mixture model. Furthermore",
", we are not aware of prior work applying nonparametric mixture modelling techniques to high-dimensional parameter spaces such as those of deep neural networks. The non-parametric",
"extension allows the complexity of a meta-learner to evolve by introducing or removing clusters in alignment with the changing composition of the dataset and preserves performance on previously encountered tasks better than a parametric counterpart.",
"Meta-learning is a source of learned inductive bias.",
"Occasionally, the inductive bias is harmful because the experience gained from solving one task does not transfer well to another.",
"On the other hand, if tasks are closely related, they can benefit from a greater amount of inductive transfer.",
"Here, we present an approach that allows a gradient-based meta-learner to explicitly modulate the amount of transfer between tasks, as well as to adapt its parameter dimensionality when the underlying task distribution evolves.",
"We formulate this as probabilistic inference in a mixture model that defines a clustering of task-specific parameters.",
"To ensure scalability, we make use of the recent connection between gradient-based meta-learning and hierarchical Bayes BID13 to perform approximate maximum a posteriori (MAP) inference in both a finite and an infinite mixture model.",
"This approach admits non-conjugate likelihoods parameterised with a black-box function approximator such as a deep neural network, and therefore learns to identify underlying genres of tasks using the standard gradient descent learning rule.",
"We demonstrate that this approach allows the model complexity to grow along with the evolving complexity of the observed tasks in both a few-shot regression and a few-shot classification problem.",
"BID31 43.44 ± 0.77 60.60 ± 0.71 SNAIL BID20 b 45.1 ± --55.2 ± --prototypical networks BID42 c 46.61 ± 0.78 65.77 ± 0.70 MAML BID8 48.70 ± 1.84 63.11 ± 0.92 LLAMA BID13 49.40 BID45 49.82 ± 0.78 63.70 ± 0.67 KNN + GNN embedding BID10 49.44 ± 0.28 64.02 ± 0.51 GNN BID10 50.33 ± 0.36 66.41 ± 0.63 fwCNN (Hebb) BID22 50.21 ± 0.37 64.75 ± 0.49 fwResNet (Hebb) BID22 56.84 ± 0.52 71.00 ± 0.34 SNAIL BID20 55.",
"BID23 57.10 ± 0.70 70.04 ± 0.63 MAML BID8 Figure 7: An evolving dataset of miniImageNet few-shot classification tasks where for the first 20k iterations we train on the standard dataset, then switch to a \"pencil\" effect set of tasks for 10k iterations before finally switching to a \"blurred\" effect set of tasks until 40k.",
"Responsibilities γ ( ) for each cluster are plotted over time.",
"Note the change in responsibilities as the dataset changes at iterations 20k and 30k."
] | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.2926829159259796,
0.09090908616781235,
0.5,
0.23529411852359772,
0.1818181723356247,
0.10810810327529907,
0.31372547149658203,
0.1395348757505417,
0.2448979616165161,
0.09756097197532654,
0.29032257199287415,
0.08163265138864517,
0.1818181723356247,
0.1702127605676651,
0.10526315122842789,
0.21276594698429108,
0.260869562625885,
0.27586206793785095,
0.4150943458080292,
0.31372547149658203,
0.1666666567325592,
0.2535211145877838,
0.1428571343421936,
0.12244897335767746,
0.1818181723356247,
0.1875,
0.1860465109348297,
0.1395348757505417,
0.3333333432674408,
0.25,
0.4642857015132904,
0.178571417927742,
0.3265306055545807,
0,
0.17391303181648254,
0.05714285373687744,
0.10810810327529907
] | HyxpNnRcFX | true | [
"We use the connection between gradient-based meta-learning and hierarchical Bayes to learn a mixture of meta-learners that is appropriate for a heterogeneous and evolving task distribution."
] |
[
"We introduce a new routing algorithm for capsule networks, in which a child capsule is routed to a parent based only on agreement between the parent's state and the child's vote.",
"Unlike previously proposed routing algorithms, the parent's ability to reconstruct the child is not explicitly taken into account to update the routing probabilities.",
"This simplifies the routing procedure and improves performance on benchmark datasets such as CIFAR-10 and CIFAR-100.",
"The new mechanism 1) designs routing via inverted dot-product attention; 2) imposes Layer Normalization as normalization; and 3) replaces sequential iterative routing with concurrent iterative routing.",
"Besides outperforming existing capsule networks, our model performs at-par with a powerful CNN (ResNet-18), using less than 25% of the parameters. ",
"On a different task of recognizing digits from overlayed digit images, the proposed capsule model performs favorably against CNNs given the same number of layers and neurons per layer. ",
"We believe that our work raises the possibility of applying capsule networks to complex real-world tasks.",
"Capsule Networks (CapsNets) represent visual features using groups of neurons.",
"Each group (called a \"capsule\") encodes a feature and represents one visual entity.",
"Grouping all the information about one entity into one computational unit makes it easy to incorporate priors such as \"a part can belong to only one whole\" by routing the entire part capsule to its parent whole capsule.",
"Routing is mutually exclusive among parents, which ensures that one part cannot belong to multiple parents.",
"Therefore, capsule routing has the potential to produce an interpretable hierarchical parsing of a visual scene.",
"Such a structure is hard to impose in a typical convolutional neural network (CNN).",
"This hierarchical relationship modeling has spurred a lot of interest in designing capsules and their routing algorithms (Sabour et al., 2017; Hinton et al., 2018; Wang & Liu, 2018; Zhang et al., 2018; Li et al., 2018; Rajasegaran et al., 2019; .",
"In order to do routing, each lower-level capsule votes for the state of each higher-level capsule.",
"The higher-level (parent) capsule aggregates the votes, updates its state, and uses the updated state to explain each lower-level capsule.",
"The ones that are well-explained end up routing more towards that parent.",
"This process is repeated, with the vote aggregation step taking into account the extent to which a part is routed to that parent.",
"Therefore, the states of the hidden units and the routing probabilities are inferred in an iterative way, analogous to the M-step and E-step, respectively, of an Expectation-Maximization (EM) algorithm.",
"Dynamic Routing (Sabour et al., 2017) and EMrouting (Hinton et al., 2018) can both be seen as variants of this scheme that share the basic iterative structure but differ in terms of details, such as their capsule design, how the votes are aggregated, and whether a non-linearity is used.",
"We introduce a novel routing algorithm, which we called Inverted Dot-Product Attention Routing.",
"In our method, the routing procedure resembles an inverted attention mechanism, where dot products are used to measure agreement.",
"Specifically, the higher-level (parent) units compete for the attention of the lower-level (child) units, instead of the other way around, which is commonly used in attention models.",
"Hence, the routing probability directly depends on the agreement between the parent's pose (from the previous iteration step) and the child's vote for the parent's pose (in the current iteration step).",
"We also propose two modifications for our routing procedure -(1) using Layer Normalization (Ba et al., 2016) as normalization, and (2) doing inference of the latent capsule states and routing probabilities jointly across multiple capsule layers (instead of doing it layer-wise).",
"These modifications help scale up the model to more challenging datasets.",
"Our model achieves comparable performance as the state-of-the-art convolutional neural networks (CNNs), but with much fewer parameters, on CIFAR-10 (95.14% test accuracy) and CIFAR-100 (78.02% test accuracy).",
"We also introduce a challenging task to recognize single and multiple overlapping objects simultaneously.",
"To be more precise, we construct the DiverseMultiMNIST dataset that contains both single-digit and overlapping-digits images.",
"With the same number of layers and the same number of neurons per layer, the proposed CapsNet has better convergence than a baseline CNN.",
"Overall, we argue that with the proposed routing mechanism, it is no longer impractical to apply CapsNets on real-world tasks.",
"We will release the source code to reproduce the experiments.",
"In this work, we propose a novel Inverted Dot-Product Attention Routing algorithm for Capsule networks.",
"Our method directly determines the routing probability by the agreements between parent and child capsules.",
"Routing algorithms from prior work require child capsules to be explained by parent capsules.",
"By removing this constraint, we are able to achieve competitive performance against SOTA CNN architectures on CIFAR-10 and CIFAR-100 with the use of a low number of parameters.",
"We believe that it is no longer impractical to apply capsule networks to datasets with complex data distribution.",
"Two future directions can be extended from this paper:",
"• In the experiments, we show how capsules layers can be combined with SOTA CNN backbones.",
"The optimal combinations between SOTA CNN structures and capsules layers may be the key to scale up to a much larger dataset such as ImageNet.",
"• The proposed concurrent routing is as a parallel-in-time and weight-tied inference process.",
"The strong connection with Deep Equilibrium Models (Bai et al., 2019) can potentially lead us to infinite-iteration routing.",
"Suofei Zhang, Quan Zhou, and Xiaofu Wu.",
"Fast dynamic routing based on weighted kernel density estimation.",
"In International Symposium on Artificial Intelligence and Robotics, pp. 301-309.",
"Springer, 2018.",
"A MODEL CONFIGURATIONS FOR CIFAR-10/CIFAR-100",
"The configuration choices of Dynamic Routing CapsNets and EM Routing CapsNets are followed by prior work (Sabour et al., 2017; Hinton et al., 2018) .",
"We empirically find their configurations perform the best for their routing mechanisms (instead of applying our network configurations to their routing mechanisms).",
"The optimizers are chosen to reach the best performance for all models.",
"We list the model specifications in Table 2 , 3, 4, 5, 6, 7, 8, and 9.",
"We only show the specifications for CapsNets with a simple convolutional backbone.",
"When considering a ResNet backbone, two modifications are performed.",
"First, we replace the simple feature backbone with ResNet feature backbone.",
"Then, the input dimension of the weights after the backbone is set as 128.",
"A ResNet backbone contains a 3 × 3 convolutional layer (output 64-dim.), three 64-dim.",
"residual building block (He et al., 2016) with stride 1, and four 128-dim.",
"residual building block with stride 2.",
"The ResNet backbone returns a 16 × 16 × 128 tensor.",
"For the optimizers, we use stochastic gradient descent with learning rate 0.1 for our proposed method, baseline CNN, and ResNet-18 (He et al., 2016) .",
"We use Adam (Kingma & Ba, 2014) with learning rate 0.001 for Dynamic Routing CapsNets and Adam with learning rate 0.01 for EM Routing CapsNets.",
"We decrease the learning rate by 10 times when the model trained on 150 epochs and 250 epochs, and there are 350 epochs in total."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.35555556416511536,
0.05405404791235924,
0.24242423474788666,
0.19512194395065308,
0.25,
0.1304347813129425,
0.05882352590560913,
0.0714285671710968,
0.13333332538604736,
0.08163265138864517,
0,
0.11764705181121826,
0.06451612710952759,
0.12244897335767746,
0.0624999962747097,
0.0555555522441864,
0.06896550953388214,
0.10526315122842789,
0.09756097197532654,
0.06451612710952759,
0.19354838132858276,
0.05405404791235924,
0.04999999329447746,
0.20512819290161133,
0.1818181723356247,
0,
0.17777776718139648,
0.1875,
0.05882352590560913,
0.10810810327529907,
0.21052631735801697,
0.07407406717538834,
0.1818181723356247,
0.1875,
0,
0.2222222238779068,
0.17142856121063232,
0,
0.05882352590560913,
0.0952380895614624,
0.19354838132858276,
0.10810810327529907,
0.07999999821186066,
0.14814814925193787,
0.1428571343421936,
0,
0.051282044500112534,
0.1666666567325592,
0.06666666269302368,
0.11428570747375488,
0.2666666507720947,
0.07407406717538834,
0.07407406717538834,
0,
0.06451612710952759,
0.1249999925494194,
0.0833333283662796,
0.07407406717538834,
0.1818181723356247,
0.21621620655059814,
0.14999999105930328
] | HJe6uANtwH | true | [
"We present a new routing method for Capsule networks, and it performs at-par with ResNet-18 on CIFAR-10/ CIFAR-100."
] |
[
" We introduce Doc2Dial, an end-to-end framework for generating conversational data grounded in business documents via crowdsourcing.",
"Such data can be used to train automated dialogue agents performing customer care tasks for the enterprises or organizations.",
"In particular, the framework takes the documents as input and generates the tasks for obtaining the annotations for simulating dialog flows.",
"The dialog flows are used to guide the collection of utterances produced by crowd workers.",
"The outcomes include dialogue data grounded in the given documents, as well as various types of annotations that help ensure the quality of the data and the flexibility to (re)composite dialogues.",
"There has been growing interest in using automated dialogue agents to assist customers through online chat.",
"However, despite recent effort in training automated agents with human-human dialogues, it often faces the bottleneck that a large number of chat logs or simulated dialogues with various scenarios are required.",
"Meanwhile, enterprises and organizations often own a large number of business documents that could address customers' requests, such as technical documentation, policy guidance and Q&A webpages.",
"However, customers would still prefer having interactive conversations with agents instead of searching and reading through lengthy documents.",
"Taken together, a promising solution is to build machine assisted agents that could perform task-oriented dialogues that are based on the content of the business documents.",
"In task-oriented dialogues for customer care, a recurrent theme is a diagnostic process -identifying the contextual conditions that apply to the customer to retrieve the most relevant solutions.",
"Meanwhile, business documents often contain similar information, with prior conditions, in for example if-clauses or subtitles, followed by corresponding solutions.",
"Therefore, these documents can be used to guide diagnostic dialogues-we call them document-grounded dialogues.",
"For example, the sample business document in Figure 1 contains information for an agent to perform the dialogue on the right, where P-n (S-n) denotes text span n labeled a precondition (solution) and \"O-D\" denotes \"out of domain\".",
"The preconditions are expressed in various ways such as subtitles or if-clauses, followed by corresponding solution if that precondition applies.",
"In this work, we hypothesize that an essential capability for a dialogue agent to perform goal-oriented information retrieval tasks should be to recognize the preconditions and their associated solutions covered by the given documents and then use them to carry out the diagnostic interactions.",
"Towards this goal, we introduce DOC2DIAL, an end-to-end framework for generating conversational data grounded in business documents via crowdsourcing.",
"It aims to minimize the effort for handcrafting dialog flows that is specific to the document but still introduce dynamic dialog scenes.",
"It also provides quality control over the data collection process.",
"We guide our investigation with the following principles: 1) We aim to identify the document content that provides solution(s) to a user's request as well as describes the prerequisites required.",
"2) The generated dialog flows should be specific to the given document without relying on heavily supervised or handcrafted work.",
"3) The generated data tasks should be easy to scale -feasible to crowdsourcing platforms and could be updated with respect to changes in the document.",
"Thus, we propose a pipeline of three interconnected tasks for dialogue composition based on business documents: (1) labeling text spans as preconditions or solutions in a given documents (TextAnno); (2) identifying the relation(s) between these preconditions or solutions (RelAnno; (3) simulating dialog flows based on the linked preconditions/solutions and applying them to guide the collection of human generated utterances(DialAnno).",
"For the dialogue collection, we could deploy it via both synchronized and asynchronized processes.",
"An asychronized process allows crowd workers to work on the production and evaluation of individual turns without the constraints of timing or having a dialog partner.",
"The outcome includes the document grounded dialogues as well as various types of implicit and explicit annotations that help ensure the quality of the data.",
"Such data sets can be used to develop various types of dialogue agent technologies in the context of customer care.",
"Our primary contributions can be summarized as follows: (1) we introduce DOC2DIAL, an end-to-end framework 1 to generate task-oriented conversational data from business documents; (2) we propose a novel pipeline approach of three interconnected tasks that minimizes the dialog flow crafting manual effort and enables comprehensive cross-task validation to ensure the quality of dialogue data; (3) the system supports both synchronized and asynchronized dialogue collection processes.",
"We demonstrate that such setting allows flexible dialogue composition by utterances, which are guided by the given document content and its annotations."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.8888888955116272,
0.307692289352417,
0.1621621549129486,
0,
0.17777776718139648,
0.2222222238779068,
0.11999999731779099,
0.08888888359069824,
0.10526315122842789,
0.13636362552642822,
0.04651162400841713,
0.19999998807907104,
0.05882352590560913,
0.1818181723356247,
0.04999999329447746,
0.1355932205915451,
0.7179487347602844,
0.10256409645080566,
0.06666666269302368,
0.04444443807005882,
0,
0.1428571343421936,
0.1428571343421936,
0.11764705181121826,
0,
0.09756097197532654,
0.1538461446762085,
0.202531635761261,
0.09756097197532654
] | S1eTMp59LB | true | [
"We introduce Doc2Dial, an end-to-end framework for generating conversational data grounded in business documents via crowdsourcing for train automated dialogue agents"
] |
[
"Capturing high-level structure in audio waveforms is challenging because a single second of audio spans tens of thousands of timesteps. ",
"While long-range dependencies are difficult to model directly in the time domain, we show that they can be more tractably modelled in two-dimensional time-frequency representations such as spectrograms. ",
"By leveraging this representational advantage, in conjunction with a highly expressive probabilistic model and a multiscale generation procedure, we design a model capable of generating high-fidelity audio samples which capture structure at timescales which time-domain models have yet to achieve. ",
"We demonstrate that our model captures longer-range dependencies than time-domain models such as WaveNet across a diverse set of unconditional generation tasks, including single-speaker speech generation, multi-speaker speech generation, and music generation.",
"Audio waveforms have complex structure at drastically varying timescales, which presents a challenge for generative models.",
"Local structure must be captured to produce high-fidelity audio, while longrange dependencies spanning tens of thousands of timesteps must be captured to generate audio which is globally consistent.",
"Existing generative models of waveforms such as WaveNet (van den Oord et al., 2016a) and SampleRNN (Mehri et al., 2016) are well-adapted to model local dependencies, but as these models typically only backpropagate through a fraction of a second, they are unable to capture high-level structure that emerges on the scale of several seconds.",
"We introduce a generative model for audio which captures longer-range dependencies than existing end-to-end models.",
"We primarily achieve this by modelling 2D time-frequency representations such as spectrograms rather than 1D time-domain waveforms ( Figure 1 ).",
"The temporal axis of a spectrogram is orders of magnitude more compact than that of a waveform, meaning dependencies that span tens of thousands of timesteps in waveforms only span hundreds of timesteps in spectrograms.",
"In practice, this enables our spectrogram models to generate unconditional speech and music samples with consistency over multiple seconds whereas time-domain models must be conditioned on intermediate features to capture structure at similar timescales.",
"Modelling spectrograms can simplify the task of capturing global structure, but can weaken a model's ability to capture local characteristics that correlate with audio fidelity.",
"Producing high-fidelity audio has been challenging for existing spectrogram models, which we attribute to the lossy nature of spectrograms and oversmoothing artifacts which result from insufficiently expressive models.",
"To reduce information loss, we model high-resolution spectrograms which have the same dimensionality as their corresponding time-domain signals.",
"To limit oversmoothing, we use a highly expressive autoregressive model which factorizes the distribution over both the time and frequency dimensions.",
"Modelling both fine-grained details and high-level structure in high-dimensional distributions is known to be challenging for autoregressive models.",
"To capture both local and global structure in spectrograms with hundreds of thousands of dimensions, we employ a multiscale approach which generates spectrograms in a coarse-to-fine manner.",
"A low-resolution, subsampled spectrogram that captures high-level structure is generated initially, followed by an iterative upsampling procedure that adds high-resolution details.",
"(1x, 5x, 25x, 125x)",
"Figure 1: Spectrogram and waveform representations of the same 4 second audio signal.",
"The waveform spans nearly 100,000 timesteps whereas the temporal axis of the spectrogram spans roughly 400.",
"Complex structure is nested within the temporal axis of the waveform at various timescales, whereas the spectrogram has structure which is smoothly spread across the time-frequency plane.",
"Combining these representational and modelling techniques yields a highly expressive and broadly applicable generative model of audio.",
"Our contributions are are as follows:",
"• We introduce MelNet, a generative model for spectrograms which couples a fine-grained autoregressive model and a multiscale generation procedure to jointly capture local and global structure.",
"• We show that MelNet is able to model longer-range dependencies than existing time-domain models.",
"Additionally, we include an ablation to demonstrate that multiscale modelling is essential for modelling long-range dependencies.",
"• We demonstrate that MelNet is broadly applicable to a variety of audio generation tasks, including unconditional speech and music generation.",
"Furthermore, MelNet is able to model highly multimodal data such as multi-speaker and multilingual speech.",
"We have introduced MelNet, a generative model for spectral representations of audio.",
"MelNet combines a highly expressive autoregressive model with a multiscale modelling scheme to generate high-resolution spectrograms with realistic structure on both local and global scales.",
"In comparison to previous works which model time-domain signals directly, MelNet is particularly well-suited to model long-range temporal dependencies.",
"Experiments show promising results across a diverse set of audio generation tasks.",
"Furthermore, we believe MelNet provides a foundation for various directions of future work.",
"Two particularly promising directions are text-to-speech synthesis and representation learning:",
"• Text-to-Speech Synthesis: MelNet utilizes a more flexible probabilistic model than existing end-to-end text-to-speech models, making it well-suited to model expressive, multi-modal speech data.",
"• Representation Learning: MelNet is able to uncover salient structure from large quantities of unlabelled audio.",
"Large-scale, pre-trained autoregressive models for language modelling have demonstrated significant benefits when fine-tuned for downstream tasks.",
"Likewise, representations learned by MelNet could potentially aid downstream tasks such as speech recognition."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0,
0.1395348757505417,
0.1538461446762085,
0.3181818127632141,
0.12903225421905518,
0.052631575614213943,
0.13114753365516663,
0.3333333134651184,
0.1111111044883728,
0.04999999701976776,
0.1702127605676651,
0.10256409645080566,
0.1904761791229248,
0.12121211737394333,
0.17142856121063232,
0.24242423474788666,
0.10526315122842789,
0.05714285373687744,
0,
0.0714285671710968,
0,
0,
0.19354838132858276,
0,
0.5263158082962036,
0.19999998807907104,
0.2666666507720947,
0.4000000059604645,
0.2666666507720947,
0.29629629850387573,
0.2631579041481018,
0.1249999925494194,
0.07407406717538834,
0.0714285671710968,
0.07999999821186066,
0.15789473056793213,
0.06451612710952759,
0.13333332538604736,
0.06896550953388214
] | r1gIa0NtDH | true | [
"We introduce an autoregressive generative model for spectrograms and demonstrate applications to speech and music generation"
] |
[
"Deep convolutional network architectures are often assumed to guarantee generalization for small image translations and deformations.",
"In this paper we show that modern CNNs (VGG16, ResNet50, and InceptionResNetV2) can drastically change their output when an image is translated in the image plane by a few pixels, and that this failure of generalization also happens with other realistic small image transformations.",
"Furthermore, we see these failures to generalize more frequently in more modern networks.",
"We show that these failures are related to the fact that the architecture of modern CNNs ignores the classical sampling theorem so that generalization is not guaranteed.",
"We also show that biases in the statistics of commonly used image datasets makes it unlikely that CNNs will learn to be invariant to these transformations.",
"Taken together our results suggest that the performance of CNNs in object recognition falls far short of the generalization capabilities of humans.",
"Deep convolutional neural networks (CNNs) have revolutionized computer vision.",
"Perhaps the most dramatic success is in the area of object recognition, where performance is now described as \"superhuman\" (He et al., 2015) .",
"A key to the success of any machine learning method is the inductive bias of the method, and clearly the choice of architecture in a neural network significantly affects the inductive bias.",
"In particular, the choice of convolution and pooling in CNNs is motivated by the desire to endow the networks with invariance to irrelevant cues such as image translations, scalings, and other small deformations (Fukushima & Miyake, 1982; BID33 .",
"This motivation was made explicit in the 1980s by Fukushima in describing the \"neocognitron\" architecture, which served as inspiration for modern CNNs (LeCun et al., 1989) , \"After finishing the process of learning, pattern recognition is performed on the basis of similarity in shape between patterns, and is not affected by deformation, nor by changes in size, nor by shifts in the position of the input patterns.\" (Fukushima, 1988) Despite the excellent performance of CNNs on object recognition, the vulnerability to adversarial attacks suggests that superficial changes can result in highly non-human shifts in prediction (e.g. BID1 BID27 .",
"In addition, filtering the image in the Fourier domain (in a way that does not change human prediction) also results in a substantial drop in prediction accuracy BID13 .",
"These and other results BID20 indicate that CNNs are not invariant to cues that are irrelevant to the object identity.An argument against adversarial attacks on CNNs is that they often involve highly unnatural transformations to the input images, hence in some sense we would not expect CNNs to be invariant to these transformations.",
"When considering more natural transformations, there is preliminary evidence that AlexNet BID15 ) is robust to some of them BID33 .",
"On the other hand, there is also preliminary evidence for lack of robustness in the more modern networks for object classification BID2 and detection BID21 along with studies suggesting that with small CNNs and the MNIST data, data augmentation is the main feature affecting CNN invariance BID14 ).",
"An indirect method to probe for invariances measures the linearity of the learned representations under natural transformations to the input image (Lenc Figure 1 : Examples of jagged predictions of modern deep convolutional neural networks.",
"Top: A negligible vertical shift of the object (Kuvasz) results in an abrupt decrease in the network's predicted score of the correct class.",
"Middle: A tiny increase in the size of the object (Lotion) produces a dramatic decrease in the network's predicted score of the correct class.",
"Bottom: A very small change in the bear's posture results in an abrupt decrease in the network's predicted score of the correct class.",
"Colored dots represent images chosen from interesting x-axis locations of the graphs on the right.",
"These dots illustrate sensitivity of modern neural networks to small, insignificant (to a human), and realistic variations in the image.",
"BID17 BID12 Fawzi & Frossard, 2015; BID6 .",
"The recent work of BID10 investigates adversarial attacks that use only rotations and translations.",
"They find that \"simple transformations, namely translations and rotations alone, are sufficient to fool neural network-based vision models on a significant fraction of inputs\" and show that advanced data augmentation methods can make the networks more robust.In this paper, we directly ask \"why are modern CNNs not invariant to natural image transformations despite the architecture being explicitly designed to provide such invariances?\".",
"Specifically, we systematically examine the invariances of three modern deep CNNs: VGG-16 BID26 , ResNet-50 (He et al., 2016) , and InceptionResNet-V2 BID28 .",
"We find that modern deep CNNs are not invariant to translations, scalings and other realistic image transformations, and this lack of invariance is related to the subsampling operation and the biases contained in image datasets.",
"Figure 1 contains examples of abrupt failures following tiny realistic transformations for the InceptionResNet-V2 CNN.",
"Shifting or scaling the object by just one pixel could result in a sharp change in prediction.",
"In the top row, we embed the original image in a larger image and shift it in the image plane (while filling in the rest of the image with a simple inpainting procedure).",
"In the middle row, we repeat this protocol with rescaling.",
"In the bottom row, we show frames from a BBC film in which the ice bear moves almost imperceptibly between frames and the network's output changes dramatically 1 .",
"In order to measure how typical these failures are, we randomly chose images from the ImageNet validation set and measured the output of three modern CNNs as we embedded these images in a larger image and systematically varied the vertical translation.",
"As was the case in figure 1, we used a simple inpainting procedure to fill in the rest of the image."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0
] | [
0.1860465109348297,
0.3030303120613098,
0.10256409645080566,
0.3199999928474426,
0.3529411852359772,
0.17391303181648254,
0,
0.16326530277729034,
0.23529411852359772,
0.3606557250022888,
0.15686273574829102,
0.15686273574829102,
0.29411762952804565,
0.17391303181648254,
0.2647058665752411,
0.17543859779834747,
0.1304347813129425,
0.1304347813129425,
0.1304347813129425,
0.09756097197532654,
0.2978723347187042,
0,
0.09756097197532654,
0.2619047462940216,
0.16326530277729034,
0.9122806787490845,
0.1428571343421936,
0.09302324801683426,
0.19999998807907104,
0.10810810327529907,
0.11538460850715637,
0.22580644488334656,
0.2222222238779068
] | HJxYwiC5tm | true | [
"Modern deep CNNs are not invariant to translations, scalings and other realistic image transformations, and this lack of invariance is related to the subsampling operation and the biases contained in image datasets."
] |
[
"We present a real-time method for synthesizing highly complex human motions using a novel training regime we call the auto-conditioned Recurrent Neural Network (acRNN).",
"Recently, researchers have attempted to synthesize new motion by using autoregressive techniques, but existing methods tend to freeze or diverge after a couple of seconds due to an accumulation of errors that are fed back into the network.",
"Furthermore, such methods have only been shown to be reliable for relatively simple human motions, such as walking or running.",
"In contrast, our approach can synthesize arbitrary motions with highly complex styles, including dances or martial arts in addition to locomotion.",
"The acRNN is able to accomplish this by explicitly accommodating for autoregressive noise accumulation during training.",
"Our work is the first to our knowledge that demonstrates the ability to generate over 18,000 continuous frames (300 seconds) of new complex human motion w.r.t. different styles.",
"The synthesis of realistic human motion has recently seen increased interest BID12 BID39 BID4 BID14 BID1 BID25 with applications beyond animation and video games.",
"The simulation of human looking virtual agents is likely to become mainstream with the dramatic advancement of Artificial Intelligence and the democratization of Virtual Reality.",
"A challenge for human motion synthesis is to automatically generate new variations of motions while preserving a certain style, e.g., generating large numbers of different Bollywood dances for hundreds of characters in an animated scene of an Indian party.",
"Aided by the availability of large human-motion capture databases, many database-driven frameworks have been employed to this end, including motion graphs BID18 BID33 BID27 , as well as linear BID34 BID2 BID36 and kernel methods BID29 BID31 BID8 BID28 BID42 , which blend key-frame motions from a database.",
"It is hard for these methods, however, to add new variations to existing motions in the database while keeping the style consistent.",
"This is especially true for motions with a complex style such as dancing and martial arts.",
"More recently, with the rapid development in deep learning, people have started to use neural networks to accomplish this task BID13 .",
"These works have shown promising results, demonstrating the ability of using high-level parameters (such as a walking-path) to synthesize locomotion tasks such as jumping, running, walking, balancing, etc.",
"These networks do not generate new variations of complex motion, however, being instead limited to specific use cases.In contrast, our paper provides a robust framework that can synthesize highly complex human motion variations of arbitrary styles, such as dancing and martial arts, without querying a database.",
"We achieve this by using a novel deep auto-conditioned RNN (acRNN) network architecture.Recurrent neural networks are autoregressive deep learning frameworks which seek to predict sequences of data similar to a training distribution.",
"Such a framework is intuitive to apply to human motion, which can be naturally modeled as a time series of skeletal joint positions.",
"We are not the first to leverage RNNs for this task BID4 BID14 BID1 BID25 , and these works produce reasonably realistic output at a number of tasks such as sitting, talking, smoking, etc.",
"However, these existing methods also have a critical drawback: the motion becomes unrealistic within a couple of seconds and is unable to recover.This issue is commonly attributed to error accumulation due to feeding network output back into itself BID13 .",
"This is reasonable, as the network during training is given ground-truth input sequences to condition its subsequent guess, but at run time, must condition this guess on its own output.",
"As the output distribution of the network will not be identical to that of the ground-truth, it is in effect encountering a new situation at test-time.",
"The acRNN structure compensates for this by linking the network's own predicted output into its future input streams during training, a similar approach to the technique proposed in BID0 .",
"Our method is light-weight and can be used in conjunction with any other RNN based learning scheme.",
"Though straightforward, this technique fixes the issue of error accumulation, and allows the network to output incredibly long sequences without failure, on the order of hundreds of seconds (see Figure 5 ).",
"Though we are yet as unable to prove the permanent stability of this structure, it seems empirically that motion can be generated without end.",
"In summary, we present a new RNN training method capable for the first time of synthesizing potentially indefinitely long sequences of realistic and complex human motions with respect to different styles.",
"We have shown the effectiveness of the acLSTM architecture to produce extended sequences of complex human motion.",
"We believe our work demonstrates qualitative state-of-the-art results in motion generation, as all previous work has focused on synthesizing relatively simple human motion for extremely short time periods.",
"These works demonstrate motion generation up to a couple of seconds at most while acLSTM does not fail even after over 300 seconds.",
"Though we are as of yet unable to prove indefinite stability, it seems empirically that acLSTM can generate arbitrarily long sequences.",
"Current problems that exist include choppy motion at times, self-collision of the skeleton, and unrealistic sliding of the feet.",
"Further developement of GAN methods, such as BID19 , could result in increased realism, though these models are notoriously hard to train as they often result in mode collapse.",
"Combining our technique with physically based simulation to ensure realism after synthesis is also a potential next step.",
"Finally, it is important to study the effects of using various condition lengths during training.",
"We begin the exploration of this topic in the appendix, but further analysis is needed.",
"Figure 9 might imply some sort of trade off between motion change over time and short-term motion prediction error when training with different condition lengths.",
"However, it is also possible that limiting motion magnitude on this particular dataset might correspond to lower error.",
"Further experiments of various condition lengths on several motion styles need to be conducted to say anything meaningful about the effect.C VISUAL DIAGRAM OF AUTO-CONDITIONED LSTM"
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.29411762952804565,
0.1304347813129425,
0.06666666269302368,
0.1249999925494194,
0,
0.10256409645080566,
0.11428570747375488,
0.12121211737394333,
0.12765957415103912,
0.07017543911933899,
0.06451612710952759,
0.2222222238779068,
0,
0.052631575614213943,
0.1111111044883728,
0.1463414579629898,
0.0624999962747097,
0.04444444179534912,
0.08510638028383255,
0.052631575614213943,
0.05882352590560913,
0,
0.0714285671710968,
0.10256409645080566,
0,
0.19512194395065308,
0.23076923191547394,
0.054054051637649536,
0,
0,
0.0714285671710968,
0,
0,
0.07692307233810425,
0,
0.05714285373687744,
0,
0.054054051637649536
] | r11Q2SlRW | true | [
"Synthesize complex and extended human motions using an auto-conditioned LSTM network"
] |
[
" {\\em Saliency methods} attempt to explain a deep net's decision by assigning a {\\em score} to each feature/pixel in the input, often doing this credit-assignment via the gradient of the output with respect to input. \n",
"Recently \\citet{adebayosan} questioned the validity of many of these methods since they do not pass simple {\\em sanity checks}, which test whether the scores shift/vanish when layers of the trained net are randomized, or when the net is retrained using random labels for inputs.",
"% for the inputs.",
" %Surprisingly, the tested methods did not pass these checks: the explanations were relatively unchanged",
". \n\nWe propose a simple fix to existing saliency methods that helps them pass sanity checks, which we call {\\em competition for pixels}.",
"This involves computing saliency maps for all possible labels in the classification task, and using a simple competition among them to identify and remove less relevant pixels from the map.",
"Some theoretical justification is provided for it and its performance is empirically demonstrated on several popular methods.",
"Saliency methods attempt to explain a deep net's decision to humans by assigning a score to each feature/pixel in the input, often doing this credit-assignment via the gradient of the output with respect to input (from now on refered to as just \"gradient\").",
"Here we are interested in tasks involving multiclass classification, and for simplicity the exposition will assume the input is an image.",
"Then a saliency method assigns scores to input pixels, which are presented as a heat map.",
"(Extensions of these ideas to higher-level features of the net will not be discussed here.)",
"While gradient alone is often too noisy, it as well as related notions are the basis of other more successful methods.",
"In Gradient Input (Shrikumar et al., 2017 ) the pixel score is the product of the corresponding coordinate of gradient vector with the pixel value.",
"Layer-wise Relevance Propagation (LRP) (Bach et al., 2015) uses a back-propagation technique where every node in the deep net receives a share of the output which it distributes to nodes below it.",
"This happens all the way to the input layer, whereby every pixel gets assigned a share of the output, which is its score.",
"Another rule Deep-Lift (Shrikumar et al., 2016) does this in a different way and is related to Shapley values of cooperative game theory.",
"DASP Ancona et al. (2019) is a state of the art method that performs an efficient approximation of the Shapley values.",
"The perceived limitations of these methods in turn motivated a long list of new ones.",
"Montavon et al. (2018) provides a survey of existing methods, and brief discussion is presented in Section 2.",
"The focus of the current paper is an evaluation of saliency methods called sanity checks in Adebayo et al. (2018) .",
"This involves randomizing the model parameters or the data labels (see Section 2 for details).",
"The authors show that maps produced using corrupted parameters and data are often difficult to visually distinguish from those produced using the original parameters and data.",
"The authors concluded that \"...widely deployed saliency methods are independent of both the data the model was trained on, and the model parameters.\"",
"The current paper shows how to pass sanity checks via a simple modification to existing methods: Competition for pixels.",
"Section 3 motivates this idea by pointing out a significant issue with previous methods: they produce saliency maps for a chosen output (label) node using gradient information only for that node while ignoring the gradient information from the other (non-chosen) outputs.",
"To incorporate information from non-chosen labels/outputs in the multiclass setting we rely on an axiom called completeness satisfied by many saliency methods, according to which the sum of pixel scores in a map is equal to the value of the chosen node (see Section 3).",
"Existing methods design saliency maps for all outputs and the map for each label satisfies completeness.",
"One can then view the various scores assigned to a single pixel as its \"votes\" for different labels.",
"The competition idea is roughly to zero out any pixel whose vote for the chosen label was lower than for another (nonchosen) label.",
"Section 4 develops theory to explain why this modification helps pass sanity checks in the multi-class setting, and yet produces maps not too different from existing saliency maps.",
"It also introduces a notion called approximate completeness and suggests that it is both a reasonable alternative to completeness in practice, and also allows our analysis of the competition idea to go through.",
"We the present an new empirical finding that saliency methods that were not designed to satisfy completeness in practice seem to satisfy approximate completeness anyway.",
"This may be relevant for future research in this area.",
"Section 5 reports experiments applying the competition idea to three well-regarded methods, Gradient Input, LRP, and DASP, and shows that they produce sensible saliency maps while also passing the sanity checks.",
"List of testbeds and methods is largely borrowed from Adebayo et al. (2018) , except for inclusion of DASP, which draws inspiration from cooperative game theory.",
"Adebayo et al. (2018) and Montavon et al. (2018) provide surveys of saliency methods.",
"Brief descriptions of some methods used in our experiments appear in Appendix Section 7.1.",
"Here we briefly discuss the issue most relevant to the current paper, which is the interplay between tests/evaluations of saliency methods and principled design of new methods.",
"Competition among labels is a simple modification to existing saliency methods that produces saliency maps by combining information from maps from all labels, instead of just the chosen label.",
"Our modification keeps existing methods relevant for visual evaluation (as shown on three wellknown methods Gradient Input, LRP, and DASP) while allowing them to pass sanity checks of Adebayo et al. (2018) , which had called into question the validity of saliency methods.",
"Possibly our modification even improves the quality of the map, by zero-ing out irrelevant features.",
"We gave some theory in Section 4 to justify the competition idea for methods which satisfy approximate completeness.",
"Many methods satisfy completeness by design, and experimentally we find other methods satisfy approximate completeness.",
"We hope the simple analysis of Section 4-modeling the saliency map as \"noisy signal\" mixed with \"white noise\"-will inspire design of other new saliency maps.",
"We leave open the question of what is the optimum way to design saliency maps by combining information from all labels 3 .",
"When pixel values are spatially correlated it is natural to involve that in designing the competition.",
"This is left for future work.",
"The sanity checks of Adebayo et al. (2018) randomize the net in a significant way, either by randomizing a layer or training on corrupted data.",
"It is an interesting research problem to devise sanity checks that are less disruptive.",
"Sundararajan et al. (2017) also computes the gradient of the chosen class's logit.",
"However, instead of evaluating this gradient at one fixed data point, integrated gradients consider the path integral of this value as the input varies from a baseline,x, to the actual input, x along a straight line.",
"Bach et al. (2015) proposed an approach for propagating importance scores called Layerwise Relevance Propagation (LRP).",
"LRP decomposes the output of the neural network into a sum of the relevances of coordinates of the input.",
"Specifically, if a neural network computes a function f (x) they attempt to find relevance scores R"
] | [
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.12244897335767746,
0.1428571343421936,
0.08695651590824127,
0.1875,
0.4390243887901306,
0.2978723347187042,
0.05714285373687744,
0.145454540848732,
0.051282044500112534,
0.1764705777168274,
0.11764705181121826,
0.10256409645080566,
0.04999999329447746,
0.12244897335767746,
0.14999999105930328,
0.09302324801683426,
0.15789473056793213,
0.12121211737394333,
0.05405404791235924,
0.31578946113586426,
0.060606054961681366,
0.14999999105930328,
0.19999998807907104,
0.3243243098258972,
0.14814814925193787,
0.17241378128528595,
0.1764705777168274,
0.1621621549129486,
0.14999999105930328,
0.260869562625885,
0.2978723347187042,
0.29999998211860657,
0,
0.2916666567325592,
0.04651162400841713,
0.13333332538604736,
0.060606054961681366,
0.1904761791229248,
0.31111109256744385,
0.2711864411830902,
0.060606054961681366,
0.2702702581882477,
0.06451612710952759,
0.1463414579629898,
0.19999998807907104,
0.22857142984867096,
0,
0.1860465109348297,
0.3030303120613098,
0.06451612710952759,
0.11999999731779099,
0.05714285373687744,
0.1249999925494194,
0.11428570747375488
] | BJeGZxrFvS | true | [
"We devise a mechanism called competition among pixels that allows (approximately) complete saliency methods to pass the sanity checks."
] |
[
"Classification systems typically act in isolation, meaning they are required to implicitly memorize the characteristics of all candidate classes in order to classify.",
"The cost of this is increased memory usage and poor sample efficiency.",
"We propose a model which instead verifies using reference images during the classification process, reducing the burden of memorization.",
"The model uses iterative non-differentiable queries in order to classify an image.",
"We demonstrate that such a model is feasible to train and can match baseline accuracy while being more parameter efficient.",
"However, we show that finding the correct balance between image recognition and verification is essential to pushing the model towards desired behavior, suggesting that a pipeline of recognition followed by verification is a more promising approach towards designing more powerful networks with simpler architectures."
] | [
0,
0,
0,
0,
0,
1
] | [
0.1428571343421936,
0.060606054961681366,
0.20512819290161133,
0.12121211737394333,
0.1463414579629898,
0.21052631735801697
] | HygF59JVo7 | false | [
"Image classification via iteratively querying for reference image from a candidate class with a RNN and use CNN to compare to the input image"
] |
[
"To reduce memory footprint and run-time latency, techniques such as neural net-work pruning and binarization have been explored separately.",
" However, it is un-clear how to combine the best of the two worlds to get extremely small and efficient models",
". In this paper, we, for the first time, define the filter-level pruning problem for binary neural networks, which cannot be solved by simply migrating existing structural pruning methods for full-precision models",
". A novel learning-based approach is proposed to prune filters in our main/subsidiary network frame-work, where the main network is responsible for learning representative features to optimize the prediction performance, and the subsidiary component works as a filter selector on the main network",
". To avoid gradient mismatch when training the subsidiary component, we propose a layer-wise and bottom-up scheme",
". We also provide the theoretical and experimental comparison between our learning-based and greedy rule-based methods",
". Finally, we empirically demonstrate the effectiveness of our approach applied on several binary models, including binarizedNIN, VGG-11, and ResNet-18, on various image classification datasets",
". For bi-nary ResNet-18 on ImageNet, we use 78.6% filters but can achieve slightly better test error 49.87% (50.02%-0.15%) than the original model",
"Deep neural networks (DNN), especially deep convolution neural networks (DCNN), have made remarkable strides during the last decade.",
"From the first ImageNet Challenge winner network, AlexNet, to the more recent state-of-the-art, ResNet, we observe that DNNs are growing substantially deeper and more complex.",
"These modern deep neural networks have millions of weights, rendering them both memory intensive and computationally expensive.",
"To reduce computational cost, the research into network acceleration and compression emerges as an active field.A family of popular compression methods are the DNN pruning algorithms, which are not only efficient in both memory and speed, but also enjoy relatively simple procedure and intuition.",
"This line of research is motivated by the theoretical analysis and empirical discovery that redundancy does exist in both human brains and several deep models BID7 BID8 .",
"According to the objects to prune, we can categorize existing research according to the level of the object, such as connection (weights)-level pruning, unit/channel/filter-level pruning, and layer-level pruning BID28 .",
"Connection-level pruning is the most widely studied approach, which produces sparse networks whose weights are stored as sparse tensors.",
"Although both the footprint memory and the I/O consumption are reduced BID12 , Such methods are often not helpful towards the goal of computation acceleration unless specifically-designed hardware is leveraged.",
"This is because the dimensions of the weight tensor remain unchanged, though many entries are zeroed-out.",
"As a wellknown fact, the MAC operations on random structured sparse matrices are generally not too much faster than the dense ones of the same dimension.",
"In contrast, structural pruning techniques BID28 , such as unit/channel/filter-level pruning, are more hardware friendly, since they aim to produce tensors of reduced dimensions or having specific structures.",
"Using these techniques, it is possible to achieve both computation acceleration and memory compression on general hardware and is common for deep learning frameworks.We consider the structural network pruning problem for a specific family of neural networks -binary neural networks.",
"A binary neural network is a compressed network of a general deep neural network through the quantization strategy.",
"Convolution operations in DCNN 1 inherently involve matrix multiplication and accumulation (MAC).",
"MAC operations become much more energy efficient if we use low-precision (1 bit or more) fixed-point number to approximate weights and activation functions (i.e., to quantify neurons) BID3 .",
"To the extreme extent, the MAC operation can even be degenerated to Boolean operations, if both weights and activation are binarized.",
"Such binary networks have been reported to achieve ∼58x computation saving and ∼32x memory saving in practice.",
"However, the binarization operation often introduces noises into DNNs , thus the representation capacity of DNNs will be impacted significantly, especially if we also binarize the activation function.",
"Consequently, binary neural networks inevitably require larger model size (more parameters) to compensate for the loss of representation capacity.Although Boolean operation in binary neural networks is already quite cheap, even smaller models are still highly desired for low-power embedded systems, like smart-phones and wearable devices in virtual reality applications.",
"Even though quantization (e.g., binarization) has significantly reduced the redundancy of each weight/neuron representation, our experiment shows that there is still heavy redundancy in binary neural networks, in terms of network topology.",
"In fact, quantization and pruning are orthogonal strategies to compress neural networks: Quantization reduces the precision of parameters such as weights and activations, while pruning trims the connections in neural networks so as to attain the tightest network topology.",
"However, previous studies on network pruning are all designed for full-precision models and cannot be directly applied for binary neural networks whose both weights and activations are 1-bit numbers.",
"For example, it no longer makes any sense to prune filters by comparing the magnitude or L 1 norm of binary weights, and it is nonsensical to minimize the distance between two binary output tensors.We, for the first time, define the problem of simplifying binary neural networks and try to learn extremely efficient deep learning models by combining pruning and quantization strategies.",
"Our experimental results demonstrate that filters in binary neural networks are redundant and learning-based pruning filter selection is constantly better than those existing rule-based greedy pruning criteria (like by weight magnitude or L 1 norm).We",
"propose a learning-based method to simplify binary neural network with a main-subsidiary framework, where the main network is responsible for learning representative features to optimize the prediction performance, whereas the subsidiary component works as a filter selector on the main network to optimize the efficiency. The",
"contributions of this paper are summarized as follows:• We propose a learning-based structural pruning method for binary neural networks to significantly reduce the number of filters/channels but still preserve the prediction performance on large-scale problems like the ImageNet Challenge.• We",
"show that our non-greedy learning-based method is superior to the classical rule-based methods in selecting which objects to prune. We design",
"a main-subsidiary framework to iteratively learn and prune feature maps. Limitations",
"of the rule-based methods and advantages of the learning-based methods are demonstrated by theoretical and experimental results. In addition",
", we also provide a mathematical analysis for L 1 -norm based methods.• To avoid",
"gradient mismatch of the subsidiary component, we train this network in a layerwise and bottom-up scheme. Experimentally",
", the iterative training scheme helps the main network to adopt the pruning of previous layers and find a better local optimal point.2 RELATED WORK",
"2.1 PRUNING Deep Neural Network pruning has been explored in many different ways for a long time. BID13 proposed",
"Optimal Brain Surgeon (OBS) to measure the weight importance using the second-order derivative information of loss function by Taylor expansion. BID9 further adapts",
"OBS for deep neural networks and has reduced the retraining time. Deep Compression BID12",
"prunes connections based on weight magnitude and achieved great compression ratio. The idea of dynamic masks",
"BID10 is also used for pruning. Other approaches used Bayesian",
"methods and exploited the diversity of neurons to remove weights BID23 BID22 . However, these methods focus on",
"pruning independent connection without considering group information. Even though they harvest sparse",
"connections, it is still hard to attain the desired speedup on hardware.To address the issues in connection-level pruning, researchers proposed to increase the groupsparsity by applying sparse constraints to the channels, filters, and even layers BID28 BID0 BID25 BID1 . used LASSO constraints and reconstruction",
"loss to guide network channel selection. introduced L 1 -Norm rank to prune filters",
", which reduces redundancy and preserves the relatively important filters using a greedy policy. BID21 leverages a scaling factor from batch",
"normalization to prune channels. To encourage the scaling factor to be sparse",
", a regularization term is added to the loss function. On one hand, methods mentioned above are all",
"designed for full-precision models and cannot be trivially transferred to binary networks. For example, to avoid introducing any non-Boolean",
"operations, batch normalization in binary neural networks (like XNOR-Net) typically doesn't have scaling (γ) and shifting (β) parameters BID3 . Since all weights and activation only have two possible",
"values {1, −1}, it is also invalid to apply classical tricks such as ranking filters by their L 1 -Norms, adding a LASSO constraint, or minimizing the reconstruction error between two binary vectors. On the other hand, greedy policies that ignore the correlations",
"between filters cannot preserve all important filters."
] | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1666666567325592,
0.2222222238779068,
0.4000000059604645,
0.1538461446762085,
0.23529411852359772,
0.1249999925494194,
0.19512194395065308,
0.09090908616781235,
0.1764705777168274,
0.24390242993831635,
0.17142856121063232,
0.10344827175140381,
0.09090908616781235,
0.2380952388048172,
0.1666666567325592,
0.08888888359069824,
0.060606054961681366,
0.0476190410554409,
0.08695651590824127,
0.3333333432674408,
0.1875,
0.06666666269302368,
0.12765957415103912,
0.15789473056793213,
0.23529411852359772,
0.09302324801683426,
0.2222222238779068,
0.12244897335767746,
0.23999999463558197,
0.27272728085517883,
0.3478260934352875,
0.18867924809455872,
0.26923075318336487,
0.3272727131843567,
0.15789473056793213,
0.13793103396892548,
0.12121211737394333,
0.11764705181121826,
0.17142856121063232,
0.1904761791229248,
0.15789473056793213,
0.09999999403953552,
0.375,
0.05882352590560913,
0.14814814925193787,
0.1764705777168274,
0.06666666269302368,
0.1428571343421936,
0.06451612710952759,
0.10526315122842789,
0.13793103396892548,
0.1111111044883728,
0.277777761220932,
0.1818181723356247,
0.1355932205915451,
0
] | ryxfHnCctX | true | [
"we define the filter-level pruning problem for binary neural networks for the first time and propose method to solve it."
] |
[
"Wide adoption of complex RNN based models is hindered by their inference performance, cost and memory requirements.",
"To address this issue, we develop AntMan, combining structured sparsity with low-rank decomposition synergistically, to reduce model computation, size and execution time of RNNs while attaining desired accuracy.",
"AntMan extends knowledge distillation based training to learn the compressed models efficiently.",
"Our evaluation shows that AntMan offers up to 100x computation reduction with less than 1pt accuracy drop for language and machine reading comprehension models.",
"Our evaluation also shows that for a given accuracy target, AntMan produces 5x smaller models than the state-of-art.",
"Lastly, we show that AntMan offers super-linear speed gains compared to theoretical speedup, demonstrating its practical value on commodity hardware.",
"Remarkable advances in deep learning (DL) have produced great models across a wide variety of tasks such as computer vision, machine reading, speech generation and image recognition BID7 .",
"However, wide adoption of these models is still limited by their inference performance, cost and memory requirements.",
"On the client side, all pervasive devices like smart-phones, tablets and laptops have limited memory and computational resources to handle large complex DL models.",
"On the server side, intensive computation can render the models too slow to meet responsiveness requirements and too expensive to scale, preventing their deployment in production.Model Compression is a flourishing area that aims to reduce the computational and memory complexity of DL models to address the aforementioned problems without significantly affecting accuracy.",
"Compressing Convolution Neural Networks (CNNs) have already been widely explored in the past few years BID3 , while our work focuses on Recurrent Neural Networks (RNNs), which are broadly used among various natural language processing tasks BID17 BID24 BID29 .",
"It is well known that large RNN models are computation and memory intensive (Zhang et al.) .",
"In particular, their computation increases linearly with sequence length, and their recurrent unit has to be computed sequentially, one step at a time with limited parallelism, both of which makes long execution time a crucial issue for RNN inference computation.",
"Compressing RNNs, however, is challenging, because a recurrent unit is shared across all the time steps in sequence, compressing the unit will aggressively affect all the steps.Inducing sparsity is one of the prominent approaches used for RNN compression.",
"BID18 proposed a pruning approach that deletes up to 90% connections in RNNs.",
"The obtained sparse matrices, however, have an irregular/non-structured pattern of non-zero weights, which is unfriendly for efficient computation in modern hardware systems BID12 BID25 .",
"To address this issue, BID19 proposed inducing block-sparsity in RNNs via pruning or group lasso regularization.",
"Similarly, BID26 introduces ISS, intrinsic structured sparsity for LSTMs BID9 , a type of RNN , such that a sparse LSTM can be transformed into a dense one but with smaller size.",
"ISS conveniently turns sparsity into efficient execution, but as its sparse structure is quite coarse-grained, it is hard to push out high sparsity without degrading accuracy, especially in RNNs where the hidden dimension is smaller than input dimension (elaborated in Section 5.1).Our",
"work explores a new line of structured sparsity on RNNs, using predefined compact structures as opposed to pruning and regularization based approaches. We",
"take inspiration from predefined compact CNN structures such as group convolutions BID11 and depth-wise separable convolutions BID4 . Specifically",
", we replace matrix-vector multiplications (MVs), the dominant part of RNN computations, with localized group projections (LGP).LGP divides",
"the input and output vectors into groups where the elements of the output group is computed as a linear combination of those from the corresponding input group. In addition",
", to empower the information flow across multiple groups along the steps of RNN computation, we use a permutation matrix or a dense-square matrix to combine outputs across groups, helping the compact structure to retain accuracy.Furthermore, we combine LGP with low-rank matrix decomposition in order to further reduce the computations. This is possible",
"as low rank and sparsity are complimentary to each other. Low-rank decomposition",
"such as SVD approximates a low-rank multiplication Ax as P Qx, where P and Q are dense. By imposing LGP-based",
"sparsity on P and Q, we reduce the computation further. For a given rank reduction",
"factor of r, we reduce the computation cost and model size by O(r 2 ), compared to O(r) by using low-rank decomposition methods like SVD BID6 alone.We call our compression approach AntMan -'shrink in scale' by synergistically combining structured sparsity and low-rank decomposition, but 'increase in strength' by enabling the flow across structured groups along RNN sequence to retain accuracy.To train RNN models with AntMan, we use teacher-student training paradigm BID1 by combining the label loss with teacher-MSE-loss and teacher-KL-divergence-loss. To improve the training efficiency",
", we develop a new technique to decide proper coefficients to obtain high accuracy efficiently with minimal trials.We evaluate AntMan on multiple RNN based models for machine reading comprehension and language modeling. For a well-known MRC model BID24",
", we reduce the computational complexity and model size of LSTMs (a particular type of RNN) by up to 25x with less than 1pt drop in F1 score. For PTB BID29 language model, we",
"achieve a computational reduction of 50x with no drop in perplexity, and 100x with just a single point drop in perplexity. We also construct language models",
"for PTB with perplexities ranging from 64 to 70, but with 3x to 5x fewer overall model weights (5x to 25x reduction in RNN weights) than the state-of-art.Last but not least, we develop efficient implementations of inference kernels on CPUs to serve models compressed by AntMan. We show that unlike computation with",
"unstructured sparsity, AntMan offers significant performance improvement for large RNN models even with modest levels of sparsity. Our evaluations show that a 2x to 10x",
"theoretical reduction in computation can result in up to 2x to 30x actual speedup, respectively, for moderate to large RNNs, demonstrating attractive practical value of AntMan on commodity hardware.",
"We develop AntMan, combining structured sparsity and low-rank decomposition, to reduce the computation, size and execution time of RNN models by order(s) of magnitude while achieving similar accuracy.",
"We hope its compression efficiency and effectiveness would help unblock and enable many great RNN-based models deployed in practice.",
"We discuss and compare AntMan with several compression techniques as below.Quantization: 16 and 8-bit quantization (original 32-bit) can be supported fairly easily on commodity hardware, resulting in a maximum compression of 4x.",
"Even more aggressive quantization (e.g., 2-7 bit) hardly provides additional computational benefit because commodity hardware does not support those in their instruction set, while 1-bit quantization does not offer comparable accuracy.In comparison, we demonstrate that AntMan achieves up to 100x reduction in computation without loss in accuracy.",
"Moreover, quantization can be applied to AntMan to further reduce the computation, and vice versa, as quantization and AntMan are complementary techniques.Pruning: Pruning can be used to generate both unstructured and structured sparsity.",
"The former is not computationally efficient while the latter requires specialized implementation for efficient execution.While we did not present pruning results in the paper, we did try out techniques on both PTB and BiDAF models to generate random sparsity as well as blocked sparsity.",
"In both cases, we were able to get more that 10x reduction in computation even in the absence of Knowledge distillation.",
"Therefore pruning provides excellent computation reduction.However, as discussed in the paper, those theoretical computational reductions cannot be efficiently converted into practical performance gains: Unstructured sparsity resulting from pruning suffers from poor computation efficiency; a 10x theoretical reduction leads to less than 4x improvement in performance while AntMan achieves 30x performance gain with 10x reduction for PTB like models.",
"TAB7 It is possible to achieve structured sparsity such as block sparsity through pruning.",
"However, structured sparsity requires implementing specialized kernels to take advantage of the computation reduction.",
"Its efficiency greatly depends on the implementation, and in general is far from the theoretical computation reduction.On the contrary both ISS and AntMan achieve good computation reduction, and can be efficiently executed using readily available BLAS libraries such as Intel MKL resulting in super linear speedups as shown in the paper.Direct Design: We compared AntMan with smaller RNN models (with smaller hidden dimension) trained using the larger teacher model.",
"Our results show that for the same level of compression AntMan achieves much higher accuracy.",
"TAB6 SVD RNN:We constructed compressed models by replacing matrix-multiplication with SVD of various rank, and trained the SVD based models using knowledge distillation.",
"Once again, we find that for the same level of compression, AntMan achieves much higher accuracy than SVD.",
"TAB6 Block Tensor Decomposition (BTD) : BTD is designed to compress RNNs whose inputs are produced by convolution based models, and contain certain redundancies.",
"AntMan, on the other hand, is generic to all RNN based models.",
"Also, BTD is designed to compress only the input vector and not the hidden vectors.",
"This hinders the performance of BTD over a range of RNNs, where the hidden vectors are also large.",
"Here, we compare the performance of AntMan with ISS, without using any knowledge distillation.",
"Please note that knowledge distillation is part of the training process for AntMan, but it is not for ISS.",
"Nevertheless, it is interesting to see how AntMan performs in the absence of a teacher.When trained without knowledge distillation, our experiments show that AntMan and ISS have complimentary strengths.",
"On the PTB dataset, with a 10x compute reduction, AntMan does not generalize well without a teacher, while ISS incurs less than 1pt loss in perplexity compared to the original model.",
"This is demonstrated by the first row and column in Table 3 , and the third row in TAB2 .",
"On the contrary, for the BiDAF, AntMan incurs less than 1pt reduction in F1 score for nearly 10x compute reduction 2 , while ISS incurs nearly 2pt reduction in F1 score with less than 5x compute reduction on average.",
"This is shown in TAB9 .AntMan",
"can successfully compress BiDAF, while ISS fails because ISS compresses an LSTM by effectively reducing its hidden dimension, while AntMan preserves the hidden dimension size. The LSTMs",
"in the BiDAF model have large input dimensions making them computationally expensive, but they have very small hidden dimensions. Therefore",
", reducing the already small hidden dimension results in significant loss of accuracy. On the contrary",
", the PTB model has large input as well as hidden dimensions, allowing ISS to work effectively."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.31578946113586426,
0.16326530277729034,
0.24242423474788666,
0.2666666507720947,
0.051282044500112534,
0.04878048226237297,
0.12244897335767746,
0.2631579041481018,
0.1818181723356247,
0.2153846174478531,
0,
0.2631579041481018,
0.178571417927742,
0.11538460850715637,
0.11764705181121826,
0.13333332538604736,
0.05405404791235924,
0.11999999731779099,
0.06666666269302368,
0.1818181723356247,
0.052631575614213943,
0.09999999403953552,
0.09302324801683426,
0.1269841194152832,
0.12121211737394333,
0.10256409645080566,
0.1111111044883728,
0.2247190922498703,
0.13793103396892548,
0.22641508281230927,
0.1860465109348297,
0.1764705777168274,
0.17391303181648254,
0.1702127605676651,
0.2978723347187042,
0.1538461446762085,
0.11538460850715637,
0.1230769157409668,
0.08510638028383255,
0.10169491171836853,
0.19512194395065308,
0.08571428060531616,
0.05882352590560913,
0.17142856121063232,
0.15189872682094574,
0.1111111044883728,
0.39024388790130615,
0.051282044500112534,
0.13333332538604736,
0.1818181723356247,
0.11428570747375488,
0.05405404791235924,
0.22857142984867096,
0.15789473056793213,
0.19999998807907104,
0.03999999538064003,
0.1111111044883728,
0,
0,
0.04444443807005882,
0,
0.0555555522441864,
0.05405404791235924
] | BJgsN3R9Km | true | [
"Reducing computation and memory complexity of RNN models by up to 100x using sparse low-rank compression modules, trained via knowledge distillation."
] |
[
"Graph-structured data such as social networks, functional brain networks, gene regulatory networks, communications networks have brought the interest in generalizing deep learning techniques to graph domains.",
"In this paper, we are interested to design neural networks for graphs with variable length in order to solve learning problems such as vertex classification, graph classification, graph regression, and graph generative tasks.",
"Most existing works have focused on recurrent neural networks (RNNs) to learn meaningful representations of graphs, and more recently new convolutional neural networks (ConvNets) have been introduced.",
"In this work, we want to compare rigorously these two fundamental families of architectures to solve graph learning tasks.",
"We review existing graph RNN and ConvNet architectures, and propose natural extension of LSTM and ConvNet to graphs with arbitrary size.",
"Then, we design a set of analytically controlled experiments on two basic graph problems, i.e. subgraph matching and graph clustering, to test the different architectures. ",
"Numerical results show that the proposed graph ConvNets are 3-17% more accurate and 1.5-4x faster than graph RNNs.",
"Graph ConvNets are also 36% more accurate than variational (non-learning) techniques.",
"Finally, the most effective graph ConvNet architecture uses gated edges and residuality.",
"Residuality plays an essential role to learn multi-layer architectures as they provide a 10% gain of performance.",
"Convolutional neural networks of BID20 and recurrent neural networks of BID17 are deep learning architectures that have been applied with great success to computer vision (CV) and natural language processing (NLP) tasks.",
"Such models require the data domain to be regular, such as 2D or 3D Euclidean grids for CV and 1D line for NLP.",
"Beyond CV and NLP, data does not usually lie on regular domains but on heterogeneous graph domains.",
"Users on social networks, functional time series on brain structures, gene DNA on regulatory networks, IP packets on telecommunication networks are a a few examples to motivate the development of new neural network techniques that can be applied to graphs.",
"One possible classification of these techniques is to consider neural network architectures with fixed length graphs and variable length graphs.In the case of graphs with fixed length, a family of convolutional neural networks has been developed on spectral graph theory by BID6 .",
"The early work of BID5 proposed to formulate graph convolutional operations in the spectral domain with the graph Laplacian, as an analogy of the Euclidean Fourier transform as proposed by BID14 .",
"This work was extended by BID16 to smooth spectral filters for spatial localization.",
"BID9 used Chebyshev polynomials to achieve linear complexity for sparse graphs, BID21 applied Cayley polynomials to focus on narrow-band frequencies, and BID27 dealt with multiple (fixed) graphs.",
"Finally, BID19 simplified the spectral convnets architecture using 1-hop filters to solve the semi-supervised clustering task.",
"For related works, see also the works of , and references therein.For graphs with variable length, a generic formulation was proposed by BID12 ; BID29 based on recurrent neural networks.",
"The authors defined a multilayer perceptron of a vanilla RNN.",
"This work was extended by BID22 using a GRU architecture and a hidden state that captures the average information in local neighborhoods of the graph.",
"The work of BID30 introduced a vanilla graph ConvNet and used this new architecture to solve learning communication tasks.",
"BID25 introduced an edge gating mechanism in graph ConvNets for semantic role labeling.",
"Finally, BID4 designed a network to learn nonlinear approximations of the power of graph Laplacian operators, and applied it to the unsupervised graph clustering problem.",
"Other works for drugs design, computer graphics and vision are presented by BID10 ; BID1 ; BID26 .In",
"this work, we study the two fundamental classes of neural networks, RNNs and ConvNets, in the context of graphs with arbitrary length. Section",
"2 reviews the existing techniques. Section",
"3 presents the new graph NN models. Section",
"4 reports the numerical experiments.",
"This work explores the choice of graph neural network architectures for solving learning tasks with graphs of variable length.",
"We developed analytically controlled experiments for two fundamental graph learning problems, that are subgraph matching and graph clustering.",
"Numerical experiments showed that graph ConvNets had a monotonous increase of accuracy when the network gets deeper, unlike graph RNNs for which performance decreases for a large number of layers.",
"This led us to consider the most generic formulation of gated graph ConvNets, Eq. (11).",
"We also explored the benefit of residuality for graphs, Eq. (12).",
"Without residuality, existing graph neural networks are not able to stack more than a few layers.",
"This makes this property essential for graph neural networks, which receive a 10% boost of accuracy when more than 6 layers were stacked.",
"Future work will focus on solving domain-specific problems in chemistry, physics, and neuroscience."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0
] | [
0.09999999403953552,
0.17777776718139648,
0.09999999403953552,
0.23529411852359772,
0.29411762952804565,
0.2380952388048172,
0.29411762952804565,
0.07407406717538834,
0.3571428656578064,
0.060606054961681366,
0.13636362552642822,
0.10526315122842789,
0.12903225421905518,
0.07999999821186066,
0.23529411852359772,
0.19512194395065308,
0,
0.09756097197532654,
0.06451612710952759,
0.21739129722118378,
0.07999999821186066,
0.20512819290161133,
0.17142856121063232,
0.13793103396892548,
0.21621620655059814,
0.060606054961681366,
0.37837836146354675,
0.09090908616781235,
0.1666666567325592,
0.0952380895614624,
0.23529411852359772,
0.1818181723356247,
0.2380952388048172,
0.4516128897666931,
0.29629629850387573,
0.0624999962747097,
0.10256409645080566,
0.06896550953388214
] | HyXBcYg0b | true | [
"We compare graph RNNs and graph ConvNets, and we consider the most generic class of graph ConvNets with residuality."
] |
[
"Complex-value neural networks are not a new concept, however, the use of real-values has often been favoured over complex-values due to difficulties in training and accuracy of results.",
"Existing literature ignores the number of parameters used.",
"We compared complex- and real-valued neural networks using five activation functions.",
"We found that when real and complex neural networks are compared using simple classification tasks, complex neural networks perform equal to or slightly worse than real-value neural networks.",
"However, when specialised architecture is used, complex-valued neural networks outperform real-valued neural networks.",
"Therefore, complex–valued neural networks should be used when the input data is also complex or it can be meaningfully to the complex plane, or when the network architecture uses the structure defined by using complex numbers.",
"In recent years complex numbers in neural networks are increasingly frequently used.",
"ComplexValued neural networks have been sucessfully applied to a variety of tasks specifically in signal processing where the input data has a natural interpretation in the complex domain.In most publications complex-valued neural networks are compared to real-valued architectures.",
"We need to ensure that these architectures are comparable in their ability to approximate functions.",
"A common metric for their capacity are the number of real-valued parameters.",
"The number of parameters of complex-valued neural networks are rarely studied aspects.",
"While complex numbers increase the computational complexity, their introduction also assumes a certain structure between weights and input.",
"Hence, it is not sufficient to increase the number of parameters.Even more important than in real-valued networks is the choice of activation function for each layer.",
"We test 5 functions: identity or no activation function, rectifier linear unit, hyperbolic tangent, magnitude, squared magnitude.",
"This paper explores the performance of complex-valued multi-layer perceptrons (MLP) with varying depth and width in consideration of the number of parameters and choice of activation function on benchmark classification tasks.In section 2 we will give an overview of the past and current developments in the applications of complex-valued neural networks.",
"We shortly present the multi-layer perceptron architecture in section 3 using complex numbers and review the building blocks of complex-valued network.In section 4 we consider the multi-layer perceptron with respect to the number of real-valued parameters in both the complex and real case.",
"We construct complex MLPs with the same number of units in each layer.",
"We propose two methods to define comparable networks: A fixed number of real-valued neurons per layer or a fixed budget of real-valued parameters.In the same section we also consider the structure that is assumed by introducing complex numbers into a neural network.We present the activation function to be used in our experiments in section 5.",
"In section 6 we present our experiments and their settings.",
"Section 7 discuss the results of different multi-layer perceptrons on MNIST digit classification, CIFAR-10 image classification, CIFAR-100 image classification, Reuters topic classification and bAbI question answering.",
"We identify a general direction of why and how to use complex-valued neural networks."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
0.20000000298023224,
0.380952388048172,
0.25,
0.1111111044883728,
0.0833333283662796,
0.09756097197532654,
0,
0.17391304671764374,
0.07407406717538834,
0.4000000059604645,
0.25,
0.12903225421905518,
0.3243243098258972,
0,
0.2641509473323822,
0.47826087474823,
0.307692289352417,
0.20000000298023224,
0.08695651590824127,
0.2222222238779068,
0.2222222238779068
] | HkCy2uqQM | true | [
"Comparison of complex- and real-valued multi-layer perceptron with respect to the number of real-valued parameters."
] |
[
"The rise of graph-structured data such as social networks, regulatory networks, citation graphs, and functional brain networks, in combination with resounding success of deep learning in various applications, has brought the interest in generalizing deep learning models to non-Euclidean domains. \n",
"In this paper, we introduce a new spectral domain convolutional architecture for deep learning on graphs.",
"The core ingredient of our model is a new class of parametric rational complex functions (Cayley polynomials) allowing to efficiently compute spectral filters on graphs that specialize on frequency bands of interest.",
"Our model generates rich spectral filters that are localized in space, scales linearly with the size of the input data for sparsely-connected graphs, and can handle different constructions of Laplacian operators.",
"Extensive experimental results show the superior performance of our approach on spectral image classification, community detection, vertex classification and matrix completion tasks.",
"In many domains, one has to deal with large-scale data with underlying non-Euclidean structure.",
"Prominent examples of such data are social networks, genetic regulatory networks, functional networks of the brain, and 3D shapes represented as discrete manifolds.",
"The recent success of deep neural networks and, in particular, convolutional neural networks (CNNs) BID19 have raised the interest in geometric deep learning techniques trying to extend these models to data residing on graphs and manifolds.",
"Geometric deep learning approaches have been successfully applied to computer graphics and vision ; BID3 a) ; BID24 , brain imaging BID18 , and drug design BID10 problems, to mention a few.",
"For a comprehensive presentation of methods and applications of deep learning on graphs and manifolds, we refer the reader to the review paper BID4 .Related",
"work. The earliest",
"neural network formulation on graphs was proposed by BID11 and BID27 , combining random walks with recurrent neural networks (their paper has recently enjoyed renewed interest in BID20 ; BID30 ). The first CNN-type",
"architecture on graphs was proposed by BID5 . One of the key challenges",
"of extending CNNs to graphs is the lack of vector-space structure and shift-invariance making the classical notion of convolution elusive. Bruna et al. formulated convolution-like",
"operations in the spectral domain, using the graph Laplacian eigenbasis as an analogy of the Fourier transform BID29 ). BID13 used smooth parametric spectral filters",
"in order to achieve localization in the spatial domain and keep the number of filter parameters independent of the input size. BID8 proposed an efficient filtering scheme using",
"recurrent Chebyshev polynomials applied on the Laplacian operator. BID17 simplified this architecture using filters",
"operating on 1-hop neighborhoods of the graph. BID0 proposed a Diffusion CNN architecture based",
"on random walks on graphs. BID24 (and later, Hechtlinger et al. (2017) ) proposed",
"a spatial-domain generalization of CNNs to graphs using local patch operators represented as Gaussian mixture models, showing a significant advantage of such models in generalizing across different graphs. In BID25 , spectral graph CNNs were extended to multiple",
"graphs and applied to matrix completion and recommender system problems.Main contribution. In this paper, we construct graph CNNs employing an efficient",
"spectral filtering scheme based on Cayley polynomials that enjoys similar advantages of the Chebyshev filters BID8 ) such as localization and linear complexity. The main advantage of our filters over BID8 is their ability",
"to detect narrow frequency bands of importance during training, and to specialize on them while being well-localized on the graph. We demonstrate experimentally that this affords our method greater",
"flexibility, making it perform better on a broad range of graph learning problems.Notation. We use a, a, and A to denote scalars, vectors, and matrices, respectively.z",
"denotes the conjugate of a complex number, Re{z} its real part, and i is the imaginary unit. diag(a 1 , . . . , a n ) denotes an n×n diagonal matrix with diagonal elements",
"a 1 , . . . , a n . Diag(A) = diag(a 11 , . . . , a nn ) denotes an n × n diagonal matrix obtained",
"by setting to zero the off-diagonal elements of A. Off(A) = A − Diag(A) denotes the matrix containing only the off-diagonal elements of A. I is the identity matrix and A • B denotes the Hadamard (element-wise) product of matrices A and B. Proofs are given in the appendix.",
"In this paper, we introduced a new efficient spectral graph CNN architecture that scales linearly with the dimension of the input data.",
"Our architecture is based on a new class of complex rational Cayley filters that are localized in space, can represent any smooth spectral transfer function, and are highly regular.",
"The key property of our model is its ability to specialize in narrow frequency bands with a small number of filter parameters, while still preserving locality in the spatial domain.",
"We validated these theoretical properties experimentally, demonstrating the superior performance of our model in a broad range of graph learning problems."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0
] | [
0.04651162400841713,
0.1599999964237213,
0.052631575614213943,
0.10526315122842789,
0.06451612710952759,
0.09090908616781235,
0,
0.09999999403953552,
0,
0,
0,
0.1428571343421936,
0,
0,
0.12903225421905518,
0,
0,
0.08695651590824127,
0,
0.0952380895614624,
0.06666666269302368,
0.04999999701976776,
0.0555555522441864,
0.11428570747375488,
0.054054051637649536,
0,
0.04878048598766327,
0.20000000298023224,
0.054054051637649536,
0.054054051637649536,
0.13793103396892548
] | S1680_1Rb | true | [
"A spectral graph convolutional neural network with spectral zoom properties."
] |
[
"We present FasterSeg, an automatically designed semantic segmentation network with not only state-of-the-art performance but also faster speed than current methods.",
"Utilizing neural architecture search (NAS), FasterSeg is discovered from a novel and broader search space integrating multi-resolution branches, that has been recently found to be vital in manually designed segmentation models.",
"To better calibrate the balance between the goals of high accuracy and low latency, we propose a decoupled and fine-grained latency regularization, that effectively overcomes our observed phenomenons that the searched networks are prone to \"collapsing\" to low-latency yet poor-accuracy models.",
"Moreover, we seamlessly extend FasterSeg to a new collaborative search (co-searching) framework, simultaneously searching for a teacher and a student network in the same single run.",
"The teacher-student distillation further boosts the student model’s accuracy.",
"Experiments on popular segmentation benchmarks demonstrate the competency of FasterSeg.",
"For example, FasterSeg can run over 30% faster than the closest manually designed competitor on Cityscapes, while maintaining comparable accuracy.",
"Semantic segmentation predicts pixel-level annotations of different semantic categories for an image.",
"Despite its performance breakthrough thanks to the prosperity of convolutional neural networks (CNNs) (Long et al., 2015) , as a dense structured prediction task, segmentation models commonly suffer from heavy memory costs and latency, often due to stacking convolutions and aggregating multiple-scale features, as well as the increasing input image resolutions.",
"However, recent years witness the fast-growing demand for real-time usage of semantic segmentation, e.g., autonomous driving.",
"Such has motivated the enthusiasm on designing low-latency, more efficient segmentation networks, without sacrificing accuracy notably (Zhao et al., 2018; Yu et al., 2018a) .",
"The recent success of neural architecture search (NAS) algorithms has shed light on the new horizon in designing better semantic segmentation models, especially under latency of other resource constraints.",
"Auto-DeepLab (Liu et al., 2019a) first introduced network-level search space to optimize resolutions (in addition to cell structure) for segmentation tasks.",
"and Li et al. (2019) adopted pre-defined network-level patterns of spatial resolution, and searched for operators and decoders with latency constraint.",
"Despite a handful of preliminary successes, we observe that the successful human domain expertise in designing segmentation models appears to be not fully integrated into NAS frameworks yet.",
"For example, human-designed architectures for real-time segmentation (Zhao et al., 2018; Yu et al., 2018a) commonly exploit multi-resolution branches with proper depth, width, operators, and downsample rates, and find them contributing vitally to the success: such flexibility has not been unleashed by existing NAS segmentation efforts.",
"Furthermore, the trade-off between two (somewhat conflicting) goals, i.e., high accuracy and low latency, also makes the search process unstable and prone to \"bad local minima\" architecture options.",
"As the well-said quote goes: \"those who do not learn history are doomed to repeat it\".",
"Inheriting and inspired by the successful practice in hand-crafted efficient segmentation, we propose a novel NAS framework dubbed FasterSeg, aiming to achieve extremely fast inference speed and competitive accuracy.",
"We designed a special search space capable of supporting optimization over multiple branches of different resolutions, instead of a single backbone.",
"These searched branches are adaptively aggregated for the final prediction.",
"To further balance between accuracy versus latency and avoiding collapsing towards either metric (e.g., good latency yet poor accuracy), we design a decoupled and fine-grained latency regularization, that facilitates a more flexible and effective calibration between latency and accuracy.",
"Moreover, our NAS framework can be easily extended to a collaborative search (co-searching), i.e., jointly searching for a complex teacher network and a light-weight student network in a single run, whereas the two models are coupled by feature distillation in order to boost the student's accuracy.",
"We summarize our main contributions as follows:",
"• A novel NAS search space tailored for real-time segmentation, where multi-resolution branches can be flexibility searched and aggregated.",
"• A novel decoupled and fine-grained latency regularization, that successfully alleviates the \"architecture collapse\" problem in the latency-constrained search.",
"• A novel extension to teacher-student co-searching for the first time, where we distill the teacher to the student for further accuracy boost of the latter.",
"• Extensive experiments demonstrating that FasterSeg achieves extremely fast speed (over 30% faster than the closest manually designed competitor on CityScapes) and maintains competitive accuracy.",
"We introduced a novel multi-resolution NAS framework, leveraging successful design patterns in handcrafted networks for real-time segmentation.",
"Our NAS framework can automatically discover FasterSeg, which achieved both extremely fast inference speed and competitive accuracy.",
"Our search space is intrinsically of low-latency and is much larger and challenging due to flexible searchable expansion ratios.",
"More importantly, we successfully addressed the \"architecture collapse\" problem, by proposing the novel regularized latency optimization of fine-granularity.",
"We also demonstrate that by seamlessly extending to teacher-student co-searching, our NAS framework can boost the student's accuracy via effective distillation.",
"A STEM AND HEAD MODULE Stem: Our stem module aims to quickly downsample the input image to 1 8 resolution while increasing the number of channels.",
"The stem module consists of five 3 × 3 convolution layers, where the first, second, and fourth layer are of stride two and double the number of channels.",
"Head: As shown in Figure 1 , feature map of shape (C 2s × H × W ) is first reduced in channels by a 1 × 1 convolution layer and bilinearly upsampled to match the shape of the other feature map (C s × 2H × 2W ).",
"Then, two feature maps are concatenated and fused together with a 3 × 3 convolution layer.",
"Note that we not necessarily have C 2s = 2C s because of the searchable expansion ratios."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.3589743673801422,
0.1666666567325592,
0.07407406717538834,
0.0952380895614624,
0,
0.0714285671710968,
0.15789473056793213,
0.06666666269302368,
0.0937499925494194,
0.0555555522441864,
0.04878048226237297,
0.043478257954120636,
0.051282044500112534,
0,
0.17391303181648254,
0.13114753365516663,
0,
0,
0.1304347813129425,
0.1111111044883728,
0,
0.03999999538064003,
0.1355932205915451,
0.07999999821186066,
0.10810810327529907,
0,
0,
0.1395348757505417,
0.34285715222358704,
0.11428570747375488,
0,
0.05714285373687744,
0.1538461446762085,
0,
0,
0.07547169178724289,
0.060606054961681366,
0
] | BJgqQ6NYvB | true | [
"We present a real-time segmentation model automatically discovered by a multi-scale NAS framework, achieving 30% faster than state-of-the-art models."
] |
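Below is a rough PyTorch sketch of the stem and head modules described in the appendix lines of the record above. The channel counts, the BatchNorm+ReLU blocks, and the module names are illustrative assumptions; the text only fixes the layer count, kernel sizes, strides, and the reduce, upsample, concatenate, and fuse structure.

```python
# Minimal sketch of the FasterSeg stem/head layout described above.
# Channel counts and BatchNorm+ReLU blocks are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_bn_relu(c_in, c_out, stride=1):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, stride=stride, padding=1, bias=False),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )

class Stem(nn.Module):
    """Five 3x3 convs; layers 1, 2 and 4 have stride 2 and double the channels,
    so the input is downsampled to 1/8 resolution."""
    def __init__(self, c_in=3, c_base=16):
        super().__init__()
        self.body = nn.Sequential(
            conv_bn_relu(c_in, c_base, stride=2),            # 1/2, double channels
            conv_bn_relu(c_base, 2 * c_base, stride=2),      # 1/4, double channels
            conv_bn_relu(2 * c_base, 2 * c_base, stride=1),
            conv_bn_relu(2 * c_base, 4 * c_base, stride=2),  # 1/8, double channels
            conv_bn_relu(4 * c_base, 4 * c_base, stride=1),
        )
    def forward(self, x):
        return self.body(x)

class Head(nn.Module):
    """Reduce the low-resolution map with a 1x1 conv, bilinearly upsample it to
    the high-resolution map's shape, concatenate, and fuse with a 3x3 conv."""
    def __init__(self, c_low, c_high, c_out):
        super().__init__()
        self.reduce = nn.Conv2d(c_low, c_high, kernel_size=1, bias=False)
        self.fuse = conv_bn_relu(2 * c_high, c_out, stride=1)
    def forward(self, f_low, f_high):
        f_low = self.reduce(f_low)
        f_low = F.interpolate(f_low, size=f_high.shape[-2:], mode="bilinear",
                              align_corners=False)
        return self.fuse(torch.cat([f_low, f_high], dim=1))

if __name__ == "__main__":
    x = torch.randn(1, 3, 256, 512)
    feat = Stem()(x)                      # (1, 64, 32, 64): 1/8 resolution
    low = torch.randn(1, 96, 16, 32)      # e.g. C_2s = 96 at 1/16 resolution
    out = Head(c_low=96, c_high=64, c_out=64)(low, feat)
    print(feat.shape, out.shape)
```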
[
"This paper introduces R2D3, an agent that makes efficient use of demonstrations to solve hard exploration problems in partially observable environments with highly variable initial conditions.",
"We also introduce a suite of eight tasks that combine these three properties, and show that R2D3 can solve several of the tasks where other state of the art methods (both with and without demonstrations) fail to see even a single successful trajectory after tens of billions of steps of exploration.",
"Reinforcement learning from demonstrations has proven to be an effective strategy for attacking problems that require sample efficiency and involve hard exploration.",
"For example, , Pohlen et al. (2018) and Salimans and Chen (2018b) have shown that RL with demonstrations can address the hard exploration problem in Montezuma's Revenge.",
"Večerík et al. (2017) , Merel et al. (2017) and have demonstrated similar results in robotics.",
"Many other works have shown that demonstrations can accelerate learning and address hard-exploration tasks (e.g. see Hester et al., 2018; Kim et al., 2013; Nair et al., 2018) .",
"In this paper, we attack the problem of learning from demonstrations in hard exploration tasks in partially observable environments with highly variable initial conditions.",
"These three aspects together conspire to make learning challenging:",
"1. Sparse rewards induce a difficult exploration problem, which is a challenge for many state of the art RL methods.",
"An environment has sparse reward when a non-zero reward is only seen after taking a long sequence of correct actions.",
"Our approach is able to solve tasks where standard methods run for billions of steps without seeing a single non-zero reward.",
"2. Partial observability forces the use of memory, and also reduces the generality of information provided by a single demonstration, since trajectories cannot be broken into isolated transitions using the Markov property.",
"An environment has partial observability if the agent can only observe a part of the environment at each timestep.",
"3. Highly variable initial conditions (i.e. changes in the starting configuration of the environment in each episode) are a big challenge for learning from demonstrations, because the demonstrations can not account for all possible configurations.",
"When the initial conditions are fixed it is possible to be extremely efficient through tracking Peng et al., 2018) ; however, with a large variety of initial conditions the agent is forced to generalize over environment configurations.",
"Generalizing between different initial conditions is known to be difficult (Ghosh et al., 2017; Langlois et al., 2019) .",
"Our approach to these problems combines demonstrations with off-policy, recurrent Q-learning in a way that allows us to make very efficient use of the available data.",
"In particular, we vastly outperform behavioral cloning using the same set of demonstrations in all of our experiments.",
"Another desirable property of our approach is that our agents are able to learn to outperform the demonstrators, and in some cases even to discover strategies that the demonstrators were not aware of.",
"In one of our tasks the agent is able to discover and exploit a bug in the environment in spite of all the demonstrators completing the task in the intended way.",
"Learning from a small number of demonstrations under highly variable initial conditions is not straight-forward.",
"We identify a key parameter of our algorithm, the demo-ratio, which controls the proportion of expert demonstrations vs agent experience in each training batch.",
"This hyper-parameter has a dramatic effect on the performance of the algorithm.",
"Surprisingly, we find that the optimal demo ratio is very small (but non-zero) across a wide variety of tasks.",
"The mechanism our agents use to efficiently extract information from expert demonstrations is to use them in a way that guides (or biases) the agent's own autonomous exploration of the environment.",
"Although this mechanism is not obvious from the algorithm construction, our behavioral analysis confirms the presence of this guided exploration effect.",
"To demonstrate the effectiveness of our approach we introduce a suite of tasks (which we call the Hard-Eight suite) that exhibit our three targeted properties.",
"The tasks are set in a procedurally-generated 3D world, and require complex behavior (e.g. tool use, long-horizon memory) from the agent to succeed.",
"The tasks are designed to be difficult challenges in our targeted setting, and several state of the art methods (themselves ablations of our approach) fail to solve them.",
"The main contributions of this paper are, firstly we design a new agent that makes efficient use of demonstrations to solve sparse reward tasks in partially observed environments with highly variable initial conditions.",
"Secondly, we provide an analysis of the mechanism our agents use to exploit information from the demonstrations.",
"Lastly, we introduce a suite of eight tasks that support this line of research.",
"Figure 1 : The R2D3 distributed system diagram.",
"The learner samples batches that are a mixture of demonstrations and the experiences the agent generates by interacting with the environment over the course of training.",
"The ratio between demos and agent experiences is a key hyper-parameter which must be carefully tuned to achieve good performance.",
"We propose a new agent, which we refer to as Recurrent Replay Distributed DQN from Demonstrations (R2D3).",
"R2D3 is designed to make efficient use of demonstrations to solve sparse reward tasks in partially observed environments with highly variable initial conditions.",
"This section gives an overview of the agent, and detailed pseudocode can be found in Appendix A.",
"The architecture of the R2D3 agent is shown in Figure 1 .",
"There are several actor processes, each running independent copies of the behavior against an instance of the environment.",
"Each actor streams its experience to a shared agent replay buffer, where experience from all actors is aggregated and globally prioritized ) using a mixture of max and mean of the TD-errors with priority exponent η = 1.0 as in Kapturowski et al. (2018) .",
"The actors periodically request the latest network weights from the learner process in order to update their behavior.",
"In addition to the agent replay, we maintain a second demo replay buffer, which is populated with expert demonstrations of the task to be solved.",
"Expert trajectories are also prioritized using the scheme of Kapturowski et al. (2018) .",
"Maintaining separate replay buffers for agent experience and expert demonstrations allows us to prioritize the sampling of agent and expert data separately.",
"The learner process samples batches of data from both the agent and demo replay buffers simultaneously.",
"A hyperparameter ρ, the demo ratio, controls the proportion of data coming from expert demonstrations versus from the agent's own experience.",
"The demo ratio is implemented at a batch level by randomly choosing whether to sample from the expert replay buffer independently for each element with probability ρ.",
"Using a stochastic demo ratio in this way allows us to target demo ratios that are smaller than the batch size, which we found to be very important for good performance.",
"The objective optimized by the learner uses of n-step, double Q-learning (with n = 5) and a dueling architecture (Wang et al., 2016; .",
"In addition to performing network updates, the learner is also responsible for pushing updated priorities back to the replay buffers.",
"In each replay buffer, we store fixed-length (m = 80) sequences of (s,a,r) tuples where adjacent sequences overlap by 40 time-steps.",
"The sequences never cross episode boundaries.",
"Given a single batch of trajectories we unroll both online and target networks (Mnih et al., 2015) on the same sequence of states to generate value estimates with the recurrent state initialized to zero.",
"Proper initialization of the recurrent state would require always replaying episodes from the beginning, which would add significant complexity to our implementation.",
"As an approximation of this we treat the first 40 steps of each sequence as a burn-in phase, and apply the training objective to the final 40 steps only.",
"An Hard-Eight task suite.",
"In each task an agent ( ) must interact with objects in its environment in order to gain access to a large apple ( ) that provides reward.",
"The 3D environment is also procedurally generated so that every episode the state of the world including object shapes, colors, and positions is different.",
"From the point of view of the agent the environment is partially observed.",
"Because it may take hundreds of low-level actions to collect an apple the reward is sparse which makes exploration difficult.",
"An alternative approximation would be to store stale recurrent states in replay, but we did not find this to improve performance over zero initialization with burn-in.",
"In this paper, we introduced the R2D3 agent, which is designed to make efficient use of demonstrations to learn in partially observable environments with sparse rewards and highly variable initial conditions.",
"We showed through several experiments on eight very difficult tasks that our approach is able to outperform multiple state of the art baselines, two of which are themselves ablations of R2D3.",
"We also identified a key parameter of our algorithm, the demo ratio, and showed that careful tuning of this parameter is critical to good performance.",
"Interestingly we found that the optimal demo ratio is surprisingly small but non-zero, which suggests that there may be a risk of overfitting to the demonstrations at the cost of generalization.",
"For future work, we could investigate how this optimal demo ratio changes with the total number of demonstrations and, more generally, the distribution of expert trajectories relative to the task variability.",
"We introduced the Hard-Eight suite of tasks and used them in all of our experiments.",
"These tasks are specifically designed to be partially observable tasks with sparse rewards and highly variable initial conditions, making them an ideal testbed for showcasing the strengths of R2D3 in contrast to existing methods in the literature.",
"Our behavioral analysis showed that the mechanism R2D3 uses to efficiently extract information from expert demonstrations is to use them in a way that guides (or biases) the agent's own autonomous exploration of the environment.",
"An in-depth analysis of agent behavior on the Hard-Eight task suite is a promising direction for understanding how different RL algorithms make selective use of information.",
"A R2D3",
"Below we include pseudocode for the full R2D3 agent.",
"The agent consists first of a single learner process which samples from both demonstration and agent buffers in order to update its policy parameters."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.9019607901573181,
0.24242423474788666,
0.2978723347187042,
0.23529411852359772,
0.052631575614213943,
0.07843136787414551,
0.5416666865348816,
0.05882352590560913,
0.09090908616781235,
0.04651162400841713,
0.1304347813129425,
0.07407406717538834,
0.0952380895614624,
0.21052631735801697,
0.24137930572032928,
0.1428571343421936,
0.35999998450279236,
0.1428571343421936,
0.1538461446762085,
0.16326530277729034,
0.29999998211860657,
0.21276594698429108,
0.0555555522441864,
0.09090908616781235,
0.2641509473323822,
0.09090908616781235,
0.1304347813129425,
0.12244897335767746,
0.1599999964237213,
0.5964912176132202,
0.24390242993831635,
0.15789473056793213,
0,
0.21276594698429108,
0.08888888359069824,
0.0952380895614624,
0.5957446694374084,
0.1428571343421936,
0.1666666567325592,
0.09756097197532654,
0.1515151411294937,
0.0952380895614624,
0.2083333283662796,
0.052631575614213943,
0.1818181723356247,
0.09756097197532654,
0.09302324801683426,
0.07692307233810425,
0.1111111044883728,
0.04081632196903229,
0.04651162400841713,
0.04444443807005882,
0,
0.10526315122842789,
0.08888888359069824,
0.12244897335767746,
0,
0.2448979616165161,
0.08510638028383255,
0.17142856121063232,
0.2222222238779068,
0.12244897335767746,
0.5090909004211426,
0.14814814925193787,
0.1666666567325592,
0.1538461446762085,
0.15094339847564697,
0.1538461446762085,
0.3448275923728943,
0.25,
0.11999999731779099,
0.05882352590560913,
0.1666666567325592
] | SygKyeHKDH | true | [
"We introduce R2D3, an agent that makes efficient use of demonstrations to solve hard exploration problems in partially observable environments with highly variable initial conditions."
] |
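A minimal Python sketch of the batch-assembly step described in the record above, where a stochastic demo ratio decides per batch element whether to sample from the expert or the agent replay buffer. The uniform (non-prioritized) buffers and the toy sequence placeholders are simplifying assumptions; the agent described above additionally uses prioritized replay over fixed-length sequences with burn-in.

```python
# Sketch of the stochastic demo-ratio sampling used to mix two replay buffers.
import random

class ReplayBuffer:
    def __init__(self, capacity=100_000):
        self.capacity = capacity
        self.storage = []
    def add(self, sequence):
        self.storage.append(sequence)
        if len(self.storage) > self.capacity:
            self.storage.pop(0)
    def sample(self):
        return random.choice(self.storage)

def sample_batch(agent_replay, demo_replay, batch_size, demo_ratio):
    """For each of the batch_size slots, draw from the demo buffer with
    probability demo_ratio, otherwise from the agent buffer. Because the
    choice is made independently per element, effective demo ratios far
    smaller than 1/batch_size can still be targeted on average."""
    batch = []
    for _ in range(batch_size):
        buffer = demo_replay if random.random() < demo_ratio else agent_replay
        batch.append(buffer.sample())
    return batch

if __name__ == "__main__":
    agent_replay, demo_replay = ReplayBuffer(), ReplayBuffer()
    for i in range(1000):
        agent_replay.add(("agent_seq", i))
    for i in range(10):
        demo_replay.add(("demo_seq", i))
    batch = sample_batch(agent_replay, demo_replay, batch_size=256, demo_ratio=1 / 256)
    print(sum(1 for item in batch if item[0] == "demo_seq"), "demo sequences in the batch")
```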
[
"We investigate the learned dynamical landscape of a recurrent neural network solving a simple task requiring the interaction of two memory mechanisms: long- and short-term.",
"Our results show that while long-term memory is implemented by asymptotic attractors, sequential recall is now additionally implemented by oscillatory dynamics in a transverse subspace to the basins of attraction of these stable steady states.",
"Based on our observations, we propose how different types of memory mechanisms can coexist and work together in a single neural network, and discuss possible applications to the fields of artificial intelligence and neuroscience.",
"Recurrent neural networks (RNN) are widely used to carry out tasks that require learning temporal dependencies across several scales.",
"Training RNN's to perform such tasks offers its share of challenges, from well-known exploding and vanishing gradients, to the difficulties of storing, accessing, and forgetting memories BID10 BID1 .",
"Viewed as dynamical system, the activity structure of recurrent network state spaces can reveal how networks learn tasks, and can help guide training and architecture design.",
"In this study, we perform a dynamical system analysis of a trained RNN on a simple tasks that requires two types of memory paradigms interact: short-term memory of past inputs and a delayed output during classification.While gating units found in LSTM BID11 and in a variety of other architectures (e.g., BID2 van der Westhuizen & Lasenby, 2018 ) directly aim at addressing these long-scale temporal learning issues, they are always used in conjunction with so-called \"vanilla\" recurrent units that shoulder the majority of computation.",
"It is not yet well understood how internal network dynamics supported by such circuits combine information from external inputs to solve complex tasks that require remembering information from the past and delaying output changes.",
"On one hand, attractor networks are a known solution to keep finite memories indefinitely BID5 .",
"On the other, orthogonal transformations (e.g., identity and rotations) are used to build explicit RNN solutions to recall tasks BID6 BID13 BID0 .",
"Indeed, for the well-studied copy task, where a sequence of symbols needs to be outputted after a long delay, it is known that the best solution is to use rotations to store the sequences, much like clocks that align at the time of recall BID3 .",
"However, it is unclear how attractor dynamics and orthogonal (rotational) transformations interact when a task requires both long term memory and sequential recall.",
"We explore this situation here.Leveraging slow-point analysis techniques BID12 , we uncover how short-term memory tasks with delayed outputs give rise to attractor dynamics with oscillatory transients in low-dimensional activity subspaces.",
"Our result uncovers how the boundaries of basins of attractions that are linked to memory attractors interact with transverse oscillatory dynamics to support timed, sequential computations of integrated inputs.",
"This provides novel insights into dynamical strategies to solve complex temporal tasks with randomly connected recurrent units.",
"Moreover, such transient oscillatory dynamics are consistent with periodic activity found throughout the brain BID9 , and we discuss the impact of our findings on computations in biological circuits.",
"We have seen in this study that long-term memory and sequential recall can be implemented by a simple RNN fairly easily, and in parallel, by acting on different subspaces of the RNN phase space.",
"Specifically, sequential recall is achieved by rotational dynamics localized around the origin, which occur in a subspace orthogonal to the separatrices of the basins of attraction that solve the classification task.",
"Our findings suggest that this population-level periodic activity may serve as a general \"precision timing\" mechanism that can be combined with distinct, learned computations.",
"Indeed, oscillations enable the introduction of small delays, transverse to low dimensional activity of recurrent neural circuits.",
"An interesting line of future work would be to investigate more thoroughly this mechanism in the presence of distinct computational tasks, such as character-level prediction, or arithmetic operations.",
"We believe that learning a delayed recall in conjunction with any task will lead to generic, emergent oscillations that enable transient dynamics transverse to the subspaces used to perform other computations.",
"It may be possible to leverage this geometric understanding for faster training by initializing networks in a way that promotes transverse rotations.Furthermore, this oscillatory mechanism is consistent with observations of oscillatory dynamics in the brain BID9 .",
"Together with the known phenomena whereby neurons in the brain perform tasks with low-dimensional activity patterns, and that the same neurons engage in oscillatory activity when viewed at the population-level, our findings are consistent with a general principle of delayed recall in neural networks, either biological or artificial."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.4736841917037964,
0.21276594698429108,
0.21276594698429108,
0.05714285373687744,
0.04878048226237297,
0.19999998807907104,
0.08988763391971588,
0.1249999925494194,
0.06451612710952759,
0.10256409645080566,
0.07692307233810425,
0.3684210479259491,
0.12765957415103912,
0.1428571343421936,
0.060606054961681366,
0.045454539358615875,
0.30434781312942505,
0.1860465109348297,
0.05128204822540283,
0.1249999925494194,
0.04651162400841713,
0.1818181723356247,
0.03999999538064003,
0.145454540848732
] | SJevPNShnV | true | [
"We investigate how a recurrent neural network successfully learns a task combining long-term memory and sequential recall."
] |
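A rough NumPy sketch of the slow-point analysis referenced in the record above: search for hidden states where a one-step RNN update barely moves the state by minimizing q(h) = 0.5 * ||F(h, x) - h||^2 from several random initializations. The tanh update rule, the fixed zero input, plain gradient descent, and the tolerance are simplifying assumptions.

```python
# Sketch of a slow/fixed-point search for a vanilla tanh RNN.
import numpy as np

def find_slow_points(W_rec, W_in, b, x, n_restarts=20, steps=2000, lr=0.05, rng=None):
    rng = np.random.default_rng(0) if rng is None else rng
    n = W_rec.shape[0]
    slow_points = []
    for _ in range(n_restarts):
        h = rng.standard_normal(n)
        for _ in range(steps):
            F = np.tanh(W_rec @ h + W_in @ x + b)   # one-step RNN update
            r = F - h                               # residual; q(h) = 0.5*||r||^2
            J = (1 - F ** 2)[:, None] * W_rec       # Jacobian of F w.r.t. h
            grad = (J - np.eye(n)).T @ r            # gradient of q
            h -= lr * grad
        q = 0.5 * np.sum((np.tanh(W_rec @ h + W_in @ x + b) - h) ** 2)
        if q < 1e-4:                                # accept approximate fixed/slow points
            slow_points.append(h)
    return slow_points

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, m = 32, 3
    W_rec = 1.2 * rng.standard_normal((n, n)) / np.sqrt(n)
    W_in, b, x = rng.standard_normal((n, m)), np.zeros(n), np.zeros(m)
    points = find_slow_points(W_rec, W_in, b, x, rng=rng)
    print(f"found {len(points)} approximate fixed/slow points")
```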
[
"The problem of exploration in reinforcement learning is well-understood in the tabular case and many sample-efficient algorithms are known.",
"Nevertheless, it is often unclear how the algorithms in the tabular setting can be extended to tasks with large state-spaces where generalization is required.",
"Recent promising developments generally depend on problem-specific density models or handcrafted features.",
"In this paper we introduce a simple approach for exploration that allows us to develop theoretically justified algorithms in the tabular case but that also give us intuitions for new algorithms applicable to settings where function approximation is required.",
"Our approach and its underlying theory is based on the substochastic successor representation, a concept we develop here.",
"While the traditional successor representation is a representation that defines state generalization by the similarity of successor states, the substochastic successor representation is also able to implicitly count the number of times each state (or feature) has been observed.",
"This extension connects two until now disjoint areas of research.",
"We show in traditional tabular domains (RiverSwim and SixArms) that our algorithm empirically performs as well as other sample-efficient algorithms.",
"We then describe a deep reinforcement learning algorithm inspired by these ideas and show that it matches the performance of recent pseudo-count-based methods in hard exploration Atari 2600 games.",
"Reinforcement learning (RL) tackles sequential decision making problems by formulating them as tasks where an agent must learn how to act optimally through trial and error interactions with the environment.",
"The goal in these problems is to maximize the sum of the numerical reward signal observed at each time step.",
"Because the actions taken by the agent influence not just the immediate reward but also the states and associated rewards in the future, sequential decision making problems require agents to deal with the trade-off between immediate and delayed rewards.",
"Here we focus on the problem of exploration in RL, which aims to reduce the number of samples (i.e., interactions) an agent needs in order to learn to perform well in these tasks when the environment is initially unknown.The sample efficiency of RL algorithms is largely dependent on how agents select exploratory actions.",
"In order to learn the proper balance between immediate and delayed rewards agents need to navigate through the state space to learn about the outcome of different transitions.",
"The number of samples an agent requires is related to how quickly it is able to explore the state-space.",
"Surprisingly, the most common approach is to select exploratory actions uniformly at random, even in high-profile success stories of RL (e.g., BID26 BID17 .",
"Nevertheless, random exploration often fails in environments with sparse rewards, that is, environments where the agent observes a reward signal of value zero for the majority of states.",
"RL algorithms tend to have high sample complexity, which often prevents them from being used in the real-world.",
"Poor exploration strategies is one of the main reasons for this high sample-complexity.",
"Despite all of its shortcomings, uniform random exploration is, to date, the most commonly used approach for exploration.",
"This is mainly due to the fact that most approaches for tackling the exploration problem still rely on domain-specific knowledge (e.g., density models, handcrafted features), or on having an agent learn a perfect model of the environment.",
"In this paper we introduced a general method for exploration in RL that implicitly counts state (or feature) visitation in order to guide the exploration process.",
"It is compatible to representation learning and the idea can also be adapted to be applied to large domains.This result opens up multiple possibilities for future work.",
"Based on the results presented in Section 3, for example, we conjecture that the substochastic successor representation can be actually used to generate algorithms with PAC-MDP bounds.",
"Investigating to what extent different auxiliary tasks impact the algorithm's performance, and whether simpler tasks such as predicting feature activations or parts of the input BID7 are effective is also worth studying.",
"Finally, it might be interesting to further investigate the connection between representation learning and exploration, since it is also known that better representations can lead to faster exploration BID8 ."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.260869562625885,
0.07999999821186066,
0,
0.12903225421905518,
0.08695651590824127,
0.14035087823867798,
0.052631575614213943,
0.1702127605676651,
0.5614035129547119,
0.10344827175140381,
0.12765957415103912,
0.06779660284519196,
0.1621621549129486,
0.11764705181121826,
0.13333332538604736,
0.15094339847564697,
0.15094339847564697,
0.1304347813129425,
0.1463414579629898,
0.13333332538604736,
0.1249999925494194,
0.19230768084526062,
0.15094339847564697,
0.14814814925193787,
0.06896550953388214,
0.145454540848732
] | S1giVsRcYm | true | [
"We propose the idea of using the norm of the successor representation an exploration bonus in reinforcement learning. In hard exploration Atari games, our the deep RL algorithm matches the performance of recent pseudo-count-based methods."
] |
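A minimal tabular sketch of the exploration idea described in the record above: learn a successor representation with TD updates and add a bonus inversely proportional to its norm, so that rarely visited states, whose rows have not yet grown, receive larger bonuses. The TD(0) update, the 1-norm, the bonus scale beta, and the toy chain are simplifying assumptions rather than the exact substochastic construction.

```python
# Sketch: successor-representation norm as an implicit visit counter / bonus.
import numpy as np

class SRExplorationBonus:
    def __init__(self, n_states, gamma=0.95, lr=0.1, beta=0.1):
        self.psi = np.zeros((n_states, n_states))  # successor representation
        self.gamma, self.lr, self.beta = gamma, lr, beta

    def update(self, s, s_next):
        # TD(0) update: psi(s) <- psi(s) + lr * (e_s + gamma * psi(s') - psi(s))
        target = np.eye(self.psi.shape[0])[s] + self.gamma * self.psi[s_next]
        self.psi[s] += self.lr * (target - self.psi[s])

    def bonus(self, s):
        # Rarely visited states have a small SR norm, hence a large bonus.
        return self.beta / (np.linalg.norm(self.psi[s], 1) + 1e-8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sr = SRExplorationBonus(n_states=6)
    s = 0
    for _ in range(2000):
        # states 3-5 are visited rarely, states 0-2 frequently
        s_next = rng.integers(0, 6) if rng.random() < 0.05 else rng.integers(0, 3)
        sr.update(s, s_next)
        s = s_next
    print([round(sr.bonus(i), 3) for i in range(6)])
```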
[
"Deep generative modeling using flows has gained popularity owing to the tractable exact log-likelihood estimation with efficient training and synthesis process.",
"However, flow models suffer from the challenge of having high dimensional latent space, same in dimension as the input space.",
"An effective solution to the above challenge as proposed by Dinh et al. (2016) is a multi-scale architecture, which is based on iterative early factorization of a part of the total dimensions at regular intervals.",
"Prior works on generative flows involving a multi-scale architecture perform the dimension factorization based on a static masking.",
"We propose a novel multi-scale architecture that performs data dependent factorization to decide which dimensions should pass through more flow layers.",
"To facilitate the same, we introduce a heuristic based on the contribution of each dimension to the total log-likelihood which encodes the importance of the dimensions.",
"Our proposed heuristic is readily obtained as part of the flow training process, enabling versatile implementation of our likelihood contribution based multi-scale architecture for generic flow models.",
"We present such an implementation for the original flow introduced in Dinh et al. (2016), and demonstrate improvements in log-likelihood score and sampling quality on standard image benchmarks.",
"We also conduct ablation studies to compare proposed method with other options for dimension factorization.",
"Deep Generative Modeling aims to learn the embedded distributions and representations in input (especially unlabelled) data, requiring no/minimal human labelling effort.",
"Learning without knowledge of labels (unsupervised learning) is of increasing importance because of the abundance of unlabelled data and the rich inherent patterns they posses.",
"The representations learnt can then be utilized in a number of downstream tasks such as semi-supervised learning Odena, 2016) , synthetic data augmentation and adversarial training (Cisse et al., 2017) , text analysis and model based control etc.",
"The repository of deep generative modeling majorly includes Likelihood based models such as autoregressive models (Oord et al., 2016b; Graves, 2013) , latent variable models (Kingma & Welling, 2013) , flow based models (Dinh et al., 2014; 2016; Kingma & Dhariwal, 2018) and implicit models such as generative adversarial networks (GANs) (Goodfellow et al., 2014) .",
"Autoregressive models (Salimans et al., 2017; Oord et al., 2016b; a; achieve exceptional log-likelihood score on many standard datasets, indicative of their power to model the inherent distribution.",
"But, they suffer from slow sampling process, making them unacceptable to adopt in real world applications.",
"Latent variable models such as variational autoencoders (Kingma & Welling, 2013) tend to better capture the global feature representation in data, but do not offer an exact density estimate.",
"Implicit generative models such as GANs which optimize a generator and a discriminator in a min-max fashion have recently become popular for their ability to synthesize realistic data (Karras et al., 2018; Engel et al., 2019) .",
"But, GANs do not offer a latent space suitable for further downstream tasks, nor do they perform density estimation.",
"Flow based generative models (Dinh et al., 2016; Kingma & Dhariwal, 2018) perform exact density estimation with fast inference and sampling, due to their parallelizability.",
"They also provide an information rich latent space suitable for many applications.",
"However, the dimension of latent space for flow based generative models is same as the high-dimensional input space, by virtue of bijectivity nature of flows.",
"This poses a bottleneck for flow models to scale with increasing input dimensions due to computational complexity.",
"An effective solution to the above challenge is a multi-scale architecture, introduced by Dinh et al. (2016) , which performs iterative early gaussianization of a part of the total dimensions at regular intervals of flow layers.",
"This not only makes the model computational and memory efficient but also aids in distributing the loss function throughout the network for better training.",
"Many prior works including Kingma & Dhariwal (2018) ; Atanov et al. (2019) ; Durkan et al. (2019) ; implement multi-scale architecture in their flow models, but use static masking methods for factorization of dimensions.",
"We propose a multi-scale architecture which performs data dependent factorization to decide which dimensions should pass through more flow layers.",
"For the decision making, we introduce a heuristic based on the amount of total log-likelihood contributed by each dimension, which in turn signifies their individual importance.",
"We lay the ground rules for quantitative estimation and qualitative sampling to be satisfied by an ideal factorization method for a multi-scale architecture.",
"Since in the proposed architecture, the heuristic is obtained as part of the flow training process, it can be universally applied to generic flow models.",
"We present such implementations for flow models based on affine/additive coupling and ordinary differential equation (ODE) and achieve quantitative and qualitative improvements.",
"We also perform ablation studies to confirm the novelty of our method.",
"Summing up, the contributions of our research are,",
"We proposed a novel multi-scale architecture for generative flows which employs a data-dependent splitting based the individual contribution of dimensions to the total log-likelihood.",
"Implementations of the proposed method for several state-of-the-art flow models such as RealNVP (Dinh et al., 2016) , Glow(Kingma & Dhariwal, 2018) and i-ResNet (Behrmann et al., 2018) were presented.",
"Empirical studies conducted on benchmark image datasets validate the strength of our proposed method, which improves log-likelihood scores and is able to generate qualitative samples.",
"Ablation study results confirm the power of LCMA over other options for dimension factorization."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0
] | [
0.1666666567325592,
0.1764705777168274,
0.43478259444236755,
0.4516128897666931,
0.3333333432674408,
0.555555522441864,
0.29999998211860657,
0.19512194395065308,
0.13333332538604736,
0.1666666567325592,
0.1111111044883728,
0.15686273574829102,
0.072727270424366,
0.2380952388048172,
0.12903225421905518,
0.13636362552642822,
0.1249999925494194,
0.060606054961681366,
0.09756097197532654,
0,
0.1621621549129486,
0.19354838132858276,
0.2978723347187042,
0.10810810327529907,
0.2666666507720947,
0.3529411852359772,
0.4000000059604645,
0.3243243098258972,
0.21621620655059814,
0.11428570747375488,
0.2222222238779068,
0.17391303181648254,
0.5945945978164673,
0.0952380895614624,
0.25,
0.20689654350280762
] | H1eRI04KPB | true | [
"Data-dependent factorization of dimensions in a multi-scale architecture based on contribution to the total log-likelihood"
] |
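A small NumPy sketch of the dimension-factorization heuristic described in the record above: given per-dimension log-likelihood contributions collected during flow training, split the dimensions into those factored out early and those passed through more flow layers. The split fraction, the random stand-in contributions, and the direction of the split (which half goes deeper) are assumptions for illustration.

```python
# Sketch of likelihood-contribution-based dimension factorization.
import numpy as np

def factor_dimensions(loglik_contrib, keep_fraction=0.5):
    """Return (indices passed to deeper flow layers, indices factored out early)."""
    order = np.argsort(loglik_contrib)          # ascending contribution
    n_keep = int(len(loglik_contrib) * keep_fraction)
    keep = order[:n_keep]                       # low contribution -> more flow layers (assumed direction)
    factor_out = order[n_keep:]                 # high contribution -> gaussianized early
    return keep, factor_out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    contrib = rng.normal(size=16)               # stand-in: one value per dimension
    keep, factor_out = factor_dimensions(contrib)
    print("keep:", keep)
    print("factor out early:", factor_out)
```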
[
"Understanding the flow of information in Deep Neural Networks (DNNs) is a challenging problem that has gain increasing attention over the last few years.",
"While several methods have been proposed to explain network predictions, there have been only a few attempts to compare them from a theoretical perspective.",
"What is more, no exhaustive empirical comparison has been performed in the past.",
"In this work we analyze four gradient-based attribution methods and formally prove conditions of equivalence and approximation between them.",
"By reformulating two of these methods, we construct a unified framework which enables a direct comparison, as well as an easier implementation.",
"Finally, we propose a novel evaluation metric, called Sensitivity-n and test the gradient-based attribution methods alongside with a simple perturbation-based attribution method on several datasets in the domains of image and text classification, using various network architectures.",
"While DNNs have had a large impact on a variety of different tasks BID10 BID8 BID12 BID21 BID28 , explaining their predictions is still challenging.",
"The lack of tools to inspect the behavior of these black-box models makes DNNs less trustable for those domains where interpretability and reliability are crucial, like autonomous driving, medical applications and finance.In this work, we study the problem of assigning an attribution value, sometimes also called \"relevance\" or \"contribution\", to each input feature of a network.",
"More formally, consider a DNN that takes an input x = [x 1 , ..., x N ] ∈ R N and produces an output S(x) = [S 1 (x), ..., S C (x)], where C is the total number of output neurons.",
"Given a specific target neuron c, the goal of an attribution method is to determine the contribution R c = [R c 1 , ..., R c N ] ∈ R N of each input feature x i to the output S c .",
"For a classification task, the target neuron of interest is usually the output neuron associated with the correct class for a given sample.",
"When the attributions of all input features are arranged together to have the same shape of the input sample we talk about attribution maps FIG0 , which are usually displayed as heatmaps where red color indicates features that contribute positively to the activation of the target output, and blue color indicates features that have a suppressing effect on it.The problem of finding attributions for deep networks has been tackled in several previous works BID22 BID30 BID24 BID2 BID20 BID25 BID31 .",
"Unfortunately, due to slightly different problem formulations, lack of compatibility with the variety of existing DNN architectures and no common benchmark, a comprehensive comparison is not available.",
"Various new attribution methods have been published in the last few years but we believe a better theoretical understanding of their properties is fundamental.",
"The contribution of this work is twofold:1.",
"We prove that -LRP BID2 and DeepLIFT (Rescale) BID20 can be reformulated as computing backpropagation for a modified gradient function (Section 3).",
"This allows the construction of a unified framework that comprises several gradient-based attribution methods, which reveals how these methods are strongly related, if not equivalent under certain conditions.",
"We also show how this formulation enables a more convenient implementation with modern graph computational libraries.2.",
"We introduce the definition of Sensitivity-n, which generalizes the properties of Completeness BID25 and Summation to Delta BID20 and we compare several methods against this metric on widely adopted datasets and architectures.",
"We show how empirical results support our theoretical findings and propose directions for the usage of the attribution methods analyzed (Section 4).",
"In this work, we have analyzed Gradient * Input, -LRP, Integrated Gradients and DeepLIFT (Rescale) from theoretical and practical perspectives.",
"We have shown that these four methods, despite their apparently different formulation, are strongly related, proving conditions of equivalence or approximation between them.",
"Secondly, by reformulating -LRP and DeepLIFT (Rescale), we have shown how these can be implemented as easy as other gradient-based methods.",
"Finally, we have proposed a metric called Sensitivity-n which helps to uncover properties of existing attribution methods but also traces research directions for more general ones.Nonlinear operations.",
"For a nonlinear operation with a single input of the form x i = f (z i ) (i.e. any nonlinear activation function), the DeepLIFT multiplier (Sec. 3.5.2 in Shrikumar et al. BID20 ) is: DISPLAYFORM0 Nonlinear operations with multiple inputs (eg. 2D pooling) are not addressed in BID20 .",
"For these, we keep the original operations' gradient unmodified as in the DeepLIFT public implementation.",
"By linear model we refer to a model whose target output can be written as S c (x) = i h i (x i ), where all h i are compositions of linear functions.",
"As such, we can write DISPLAYFORM1 for some some a i and b i .",
"If the model is linear only in the restricted domain of a task inputs, the following considerations hold in the domain.",
"We start the proof by showing that, on a linear model, all methods of TAB0 are equivalent.Proof.",
"In the case of Gradient * Input, on a linear model it holds DISPLAYFORM2 , being all other derivatives in the summation zero.",
"Since we are considering a linear model, all nonlinearities f are replaced with the identity function and therefore ∀z : g DL (z) = g LRP (z) = f (z) = 1 and the modified chain-rules for LRP and DeepLIFT reduce to the gradient chain-rule.",
"This proves that -LRP and DeepLIFT with a zero baseline are equivalent to Gradient * Input in the linear case.",
"For Integrated Gradients the gradient term is constant and can be taken out of the integral: DISPLAYFORM3 , which completes the proof the proof of equivalence for the methods in TAB0 in the linear case.If we now consider any subset of n features x S ⊆ x, we have for Occlusion-1: DISPLAYFORM4 where the last equality holds because of the definition of linear model (Equation 9 ).",
"This shows that Occlusion-1, and therefore all other equivalent methods, satisfy Sensitivity-n for all n if the model is linear.",
"If, on the contrary, the model is not linear, there must exists two features x i and x j such that DISPLAYFORM5 .",
"In this case, either Sensitivity-1 or Sensitivity-2 must be violated since all methods assign a single attribution value to x i and x j ."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0,
0.1249999925494194,
0,
0.13333332538604736,
0,
0.08888888359069824,
0,
0.0952380895614624,
0,
0.09090908616781235,
0,
0.07894736528396606,
0.10526315122842789,
0.1111111044883728,
0,
0,
0.14999999105930328,
0,
0.09999999403953552,
0.12121211737394333,
0,
0.05714285373687744,
0.0624999962747097,
0.20000000298023224,
0.0357142835855484,
0,
0.09999999403953552,
0,
0,
0.13333332538604736,
0,
0.08888888359069824,
0.1249999925494194,
0.0312499962747097,
0,
0,
0.17142856121063232
] | Sy21R9JAW | true | [
"Four existing backpropagation-based attribution methods are fundamentally similar. How to assess it?"
] |
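A compact PyTorch sketch of two of the gradient-based attribution methods analyzed in the record above, Gradient * Input and Integrated Gradients, computed for a chosen target output neuron. The toy network, the zero baseline, and the number of integration steps are illustrative assumptions; on a purely linear model the two attributions would coincide, as argued in the proof above.

```python
# Sketch of Gradient*Input and Integrated Gradients for a target neuron.
import torch
import torch.nn as nn

def gradient_x_input(model, x, target):
    x = x.clone().requires_grad_(True)
    score = model(x)[:, target].sum()
    grad, = torch.autograd.grad(score, x)
    return grad * x

def integrated_gradients(model, x, target, baseline=None, steps=50):
    baseline = torch.zeros_like(x) if baseline is None else baseline
    total = torch.zeros_like(x)
    for alpha in torch.linspace(0.0, 1.0, steps):
        point = (baseline + alpha * (x - baseline)).requires_grad_(True)
        score = model(point)[:, target].sum()
        grad, = torch.autograd.grad(score, point)
        total += grad
    return (x - baseline) * total / steps      # average gradient along the path

if __name__ == "__main__":
    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
    x = torch.randn(2, 4)
    gi = gradient_x_input(model, x, target=1)
    ig = integrated_gradients(model, x, target=1)
    # On a linear model the two would coincide; with the ReLU they differ.
    print(gi, ig, sep="\n")
```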
[
"We study SGD and Adam for estimating a rank one signal planted in matrix or tensor noise.",
"The extreme simplicity of the problem setup allows us to isolate the effects of various factors: signal to noise ratio, density of critical points, stochasticity and initialization.",
"We observe a surprising phenomenon: Adam seems to get stuck in local minima as soon as polynomially many critical points appear (matrix case), while SGD escapes those.",
"However, when the number of critical points degenerates to exponentials (tensor case), then both algorithms get trapped.",
"Theory tells us that at fixed SNR the problem becomes intractable for large $d$ and in our experiments SGD does not escape this.",
"We exhibit the benefits of warm starting in those situations.",
"We conclude that in this class of problems, warm starting cannot be replaced by stochasticity in gradients to find the basin of attraction.",
"Reductionism consists of breaking down the study of complex systems and phenomena into their atomic components.",
"While the use of stochastic gradient based algorithms has shown tremendous success at minimizing complicated loss functions arising in deep learning, our understanding of why, when and how this happens is still limited.",
"Statements such as stochastic gradients escape from isolated critical points along the road to the best basin of attraction, or SGD generalizes better because it does not get stuck in steep local minima still need to be better understood.",
"Can we prove or replicate these phenomena in the simplest instances of the problem?",
"We study the behavior of stochastic gradient descent (SGD) BID11 and an adaptive variant (Adam) BID8 under a class of well studied non-convex problems.",
"The single spiked models were originally designed for studying principal component analysis on matrices BID12 BID3 BID5 and have also been extended to higher order tensors BID10 .",
"Adaptive stochastic optimization methods have been gaining popularity in the deep learning community thanks to fast training on some benchmarks.",
"However, it has been observed that despite reaching a low value of the loss function, the solutions found by Adam do not generalize as well as SGD solutions do.",
"An assumption, widely spread and adopted in the community, has been that SGD's randomness helps escaping local critical points [WRS + 17] .",
"While the problem has been thoroughly studied theoretically [MR14, HSS15, HSSS16, BAGJ18], our contribution is to propose experimenting with this simple model to challenge claims such as those on randomized gradient algorithms in this very simple setup.",
"It is noteworthy that the landscape of non-global critical points of these toy datasets are studied BID0 BID2 BID1 and formally linked to the neural nets empirical loss functions BID2 BID9 .",
"For this problem, the statistical properties of the optimizers are well understood, and in the more challenging tensor situation, also the impact of (spectral) warm start has been discussed BID10 .",
"We will examine the solutions found by SGD and Adam and compare them with spectral and power methods.",
"This allows to empirically elucidate the existence of multiple regimes: (1) the strong signal regime where all first order methods seem to find good solutions (2) when polynomially many critical points appear, in the matrix case, SGD converges while Adam gets trapped, unless if initialized in the basin of attraction (3) in the presence of exponentially many critical points (the tensor case), all algorithms fail, unless if d is moderately small and the SNR large enough to allow for proper initialization.2",
"Single spiked models, and stochastic gradients",
"We propose to study algorithms used for minimizing deep learning loss functions, at optimizing a non-convex objective on simple synthetic datasets.",
"Studying simplified problems has the advantage that the problem's properties, and the behavior of the optimizer and the solution, can be studied rigorously.",
"The use of such datasets can help to perform sanity checks on improvement ideas to the algorithms, or to mathematically prove or disprove intuitions.",
"The properties of the toy data sets align with some properties of deep learning loss functions.",
"From the optimization standpoint, the resulting tensor problems may appear to be even harder than deep learning problems.",
"We observe that finding good solutions is hard unless if proper initialization is performed, while the value of stochasticity in gradient estimates seems too narrow and does not appear to compensate for poor initialization heuristics.",
"Each column represents the values of those quantities along iterations of the algorithm.",
"The prefix sp. refers to spectral initialization and l.",
"refers to a decreasing learning weight scheduled in 1/ √ t.",
"We observe the value of warm starting as soon as λ is large enough.",
"Even at high SNR λ = 6, randomly initialized SGD fails while spectrally initialized SGD succeeds.",
"Adam drifts to a non optimal critical point in that regime, even with spectral warm start."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.37037035822868347,
0.060606054961681366,
0.1111111044883728,
0,
0.1818181723356247,
0,
0,
0.07999999821186066,
0.0476190447807312,
0.043478257954120636,
0,
0.12121211737394333,
0.21621620655059814,
0,
0.11428570747375488,
0.0624999962747097,
0.045454543083906174,
0.052631575614213943,
0.1111111044883728,
0.23076923191547394,
0.13333332538604736,
0.25,
0.06451612710952759,
0.0714285671710968,
0,
0,
0.07692307233810425,
0.09302325546741486,
0,
0.10526315122842789,
0,
0,
0.0833333283662796,
0.07692307233810425
] | rkl8DES3nE | true | [
"SGD and Adam under single spiked model for tensor PCA"
] |
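A small NumPy sketch of the single spiked matrix model studied in the record above and of (noisy) gradient ascent on the sphere to recover the planted signal, with an optional spectral warm start. The noise scaling, step size, retraction, and the Gaussian stand-in for stochastic gradients are illustrative assumptions; the tensor case would replace Y with a rank-one spiked tensor and the quadratic form with the corresponding multilinear form.

```python
# Sketch of the single spiked matrix model and gradient ascent on the sphere.
import numpy as np

def spiked_matrix(d, snr, rng):
    v = rng.standard_normal(d)
    v /= np.linalg.norm(v)
    noise = rng.standard_normal((d, d))
    noise = (noise + noise.T) / np.sqrt(2 * d)      # symmetric Wigner-type noise
    return snr * np.outer(v, v) + noise, v

def recover(Y, steps=2000, lr=0.05, noise_level=0.0, rng=None, x0=None):
    d = Y.shape[0]
    x = rng.standard_normal(d) if x0 is None else x0.copy()
    x /= np.linalg.norm(x)
    for _ in range(steps):
        grad = 2 * Y @ x                            # gradient of x^T Y x
        if noise_level > 0:                         # crude stand-in for stochastic gradients
            grad += noise_level * rng.standard_normal(d)
        x += lr * grad
        x /= np.linalg.norm(x)                      # retract back to the sphere
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, snr = 200, 3.0
    Y, v = spiked_matrix(d, snr, rng)
    x_cold = recover(Y, rng=rng)                    # random initialization
    top = np.linalg.eigh(Y)[1][:, -1]               # spectral warm start
    x_warm = recover(Y, rng=rng, x0=top)
    print(abs(x_cold @ v), abs(x_warm @ v))         # correlation with the planted spike
```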
[
"This paper presents a method to explain the knowledge encoded in a convolutional neural network (CNN) quantitatively and semantically.",
"How to analyze the specific rationale of each prediction made by the CNN presents one of key issues of understanding neural networks, but it is also of significant practical values in certain applications.",
"In this study, we propose to distill knowledge from the CNN into an explainable additive model, so that we can use the explainable model to provide a quantitative explanation for the CNN prediction.",
"We analyze the typical bias-interpreting problem of the explainable model and develop prior losses to guide the learning of the explainable additive model.",
"Experimental results have demonstrated the effectiveness of our method.",
"Convolutional neural networks (CNNs) BID17 BID15 BID10 have achieved superior performance in various tasks, such as object classification and detection.",
"Besides the discrimination power of neural networks, the interpretability of neural networks has received an increasing attention in recent years.In this paper, we focus on a new problem, i.e. explaining the specific rationale of each network prediction semantically and quantitatively.",
"\"Semantic explanations\" and \"quantitative explanations\" are two core issues of understanding neural networks.",
"In this paper, we focus on a new task, i.e. explaining the logic of each CNN prediction semantically and quantitatively, which presents considerable challenges in the scope of understanding neural networks.",
"We propose to distill knowledge from a pre-trained performer into an interpretable additive explainer.",
"We can consider that the performer and the explainer encode similar knowledge.",
"The additive explainer decomposes the prediction score of the performer into value components from semantic visual concepts, in order to compute quantitative contributions of different concepts.",
"The strategy of using an explainer for explanation avoids decreasing the discrimination power of the performer.",
"In preliminary experiments, we have applied our method to different benchmark CNN performers to prove the broad applicability.Note that our objective is not to use pre-trained visual concepts to achieve super accuracy in classification/prediction.",
"Instead, the explainer uses these visual concepts to mimic the logic of the performer and produces similar prediction scores as the performer.In particular, over-interpreting is the biggest challenge of using an additive explainer to interpret another neural network.",
"In this study, we design two losses to overcome the bias-interpreting problems.",
"Besides, in experiments, we also measure the amount of the performer knowledge that could not be represented by visual concepts in the explainer.",
"Table 4 : Classification accuracy of the explainer and the performer.",
"We use the the classification accuracy to measure the information loss when using an explainer to interpret the performer.",
"Note that the additional loss for bias-interpreting successfully overcame the bias-interpreting problem, but did not decrease the classification accuracy of the explainer.",
"Another interesting finding of this research is that sometimes, the explainer even outperformed the performer in classification.",
"A similar phenomenon has been reported in BID9 .",
"A possible explanation for this phenomenon is given as follows.",
"When the student network in knowledge distillation had sufficient representation power, the student network might learn better representations than the teacher network, because the distillation process removed abnormal middle-layer features corresponding to irregular samples and maintained common features, so as to boost the robustness of the student network.",
"Table 5 : Relative deviations of the explainer.",
"The additional loss for bias-interpreting successfully overcame the bias-interpreting problem and just increased a bit (ignorable) relative deviation of the explainer.",
"BID40 ) used a tree structure to summarize the inaccurate rationale of each CNN prediction into generic decision-making models for a number of samples.",
"This method assumed the significance of a feature to be proportional to the Jacobian w.r.t. the feature, which is quite problematic.",
"This assumption is acceptable for BID40 , because the objective of BID40 ) is to learn a generic explanation for a group of samples, and the inaccuracy in the explanation for each specific sample does not significantly affect the accuracy of the generic explanation.",
"In comparisons, our method focuses on the quantitative explanation for each specific sample, so we design an additive model to obtain more convincing explanations.",
"Baseline Our method Figure 4 : We compared the contribution distribution of different visual concepts (filters) that was estimated by our method and the distribution that was estimated by the baseline.",
"The baseline usually used very few visual concepts to make predictions, which was a typical case of bias-interpreting.",
"In comparisons, our method provided a much more reasonable contribution distribution of visual concepts.",
"Legs & feet Tail Figure 9 : Quantitative explanations for object classification.",
"We assigned contributions of filters to their corresponding object parts, so that we obtained contributions of different object parts.",
"According to top figures, we found that different images had similar explanations, i.e. the CNN used similar object parts to classify objects.",
"Therefore, we showed the grad-CAM visualization of feature maps BID24 on the bottom, which proved this finding.",
"We visualized interpretable filters in the top conv-layer of a CNN, which were learned based on BID38 .",
"We projected activation regions on the feature map of the filter onto the image plane for visualization.",
"Each filter represented a specific object part through different images.",
"BID38 ) learned a CNN, where each filter in the top conv-layer represented a specific object part.",
"Thus, we annotated the name of the object part that corresponded to each filter based on visualization results (see FIG4 for examples).",
"We simply annotate each filter of the top conv-layer in a performer once, so the total annotation cost was O(N ), where N is the filter number."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
1,
0.21276594698429108,
0.17777776718139648,
0.17142856121063232,
0.14814814925193787,
0.15789473056793213,
0.290909081697464,
0.13333332538604736,
0.2916666567325592,
0.1875,
0.20689654350280762,
0.1428571343421936,
0.0624999962747097,
0.16326530277729034,
0.20408162474632263,
0.13333332538604736,
0.15789473056793213,
0.1428571343421936,
0.12121211737394333,
0.0555555522441864,
0.11764705181121826,
0.07692307233810425,
0,
0.2181818187236786,
0.07692307233810425,
0.1621621549129486,
0.14999999105930328,
0.2631579041481018,
0.25,
0.1428571343421936,
0.1463414579629898,
0.1111111044883728,
0.1249999925494194,
0,
0.05882352590560913,
0.10256409645080566,
0.05882352590560913,
0.17142856121063232,
0.060606054961681366,
0.0714285671710968,
0.1764705777168274,
0.10256409645080566,
0.1428571343421936
] | SJfWKsC5K7 | true | [
"This paper presents a method to explain the knowledge encoded in a convolutional neural network (CNN) quantitatively and semantically."
] |
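A toy PyTorch sketch of the distillation idea described in the record above: fit an additive (here linear) explainer over concept activations to mimic a performer's prediction score, and read each weight-times-activation term as that concept's quantitative contribution. The linear form, the squared distillation loss, and the entropy-style prior used to discourage concentrating all contribution on a few concepts (the "bias-interpreting" problem) are simplifying assumptions, not the losses proposed above.

```python
# Sketch: distill a performer's score into an additive explainer over concepts.
import torch
import torch.nn as nn

def train_explainer(concepts, performer_scores, prior_weight=0.01, steps=500):
    n_concepts = concepts.shape[1]
    explainer = nn.Linear(n_concepts, 1)
    opt = torch.optim.Adam(explainer.parameters(), lr=0.05)
    for _ in range(steps):
        pred = explainer(concepts).squeeze(-1)
        distill = ((pred - performer_scores) ** 2).mean()
        # prior: discourage putting almost all weight on very few concepts
        p = torch.softmax(explainer.weight.abs().squeeze(0), dim=0)
        entropy = -(p * torch.log(p + 1e-8)).sum()
        loss = distill - prior_weight * entropy
        opt.zero_grad()
        loss.backward()
        opt.step()
    return explainer

if __name__ == "__main__":
    torch.manual_seed(0)
    concepts = torch.rand(512, 16)                 # e.g. part-filter activations
    true_w = torch.randn(16)
    performer_scores = concepts @ true_w + 0.05 * torch.randn(512)
    explainer = train_explainer(concepts, performer_scores)
    contributions = explainer.weight.squeeze(0) * concepts[0]
    print(contributions)                           # per-concept contribution for one sample
```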
[
"We present methodology for using dynamic evaluation to improve neural sequence models.",
"Models are adapted to recent history via a gradient descent based mechanism, causing them to assign higher probabilities to re-occurring sequential patterns.",
"Dynamic evaluation outperforms existing adaptation approaches in our comparisons.",
"Dynamic evaluation improves the state-of-the-art word-level perplexities on the Penn Treebank and WikiText-2 datasets to 51.1 and 44.3 respectively, and the state-of-the-art character-level cross-entropies on the text8 and Hutter Prize datasets to 1.19 bits/char and 1.08 bits/char respectively.",
"Sequence generation and prediction tasks span many modes of data, ranging from audio and language modelling, to more general timeseries prediction tasks.",
"Applications of such models include speech recognition, machine translation, dialogue generation, speech synthesis, forecasting, and music generation, among others.",
"Neural networks can be applied to these tasks by predicting sequence elements one-by-one, conditioning on the history of sequence elements, forming an autoregressive model.",
"Convolutional neural networks (CNNs) and recurrent neural networks (RNNs), including long-short term memory (LSTM) networks BID7 in particular, have achieved many successes at these tasks.",
"However, in their basic form, these models have a limited ability to adapt to recently observed parts of a sequence.Many sequences contain repetition; a pattern that occurs once is more likely to occur again.",
"For instance, a word that occurs once in a document is much more likely to occur again.",
"A sequence of handwriting will generally stay in the same handwriting style.",
"A sequence of speech will generally stay in the same voice.",
"Although RNNs have a hidden state that can summarize the recent past, they are often unable to exploit new patterns that occur repeatedly in a test sequence.",
"This paper concerns dynamic evaluation, which we investigate as a candidate solution to this problem.",
"Our approach adapts models to recent sequences using gradient descent based mechanisms.",
"We show several ways to improve on past dynamic evaluation approaches in Section 5, and use our improved methodology to achieve state-of-the-art results in Section 7.",
"In Section 6 we design a method to dramatically to reduce the number of adaptation parameters in dynamic evaluation, making it practical in a wider range of situations.",
"In Section 7.4 we analyse dynamic evaluation's performance over varying time-scales and distribution shifts, and demonstrate that dynamically evaluated models can generate conditional samples that repeat many patterns from the conditioning data.",
"This work explores and develops methodology for applying dynamic evaluation to sequence modelling tasks.",
"Experiments show that the proposed dynamic evaluation methodology gives large test time improvements across character and word level language modelling.",
"Our improvements to language modelling have applications to speech recognition and machine translation over longer contexts, including broadcast speech recognition and paragraph level machine translation.",
"Overall, dynamic evaluation is shown to be an effective method for exploiting pattern re-occurrence in sequences."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0
] | [
0.4761904776096344,
0,
0.1111111044883728,
0.054054051637649536,
0,
0,
0.0624999962747097,
0,
0.04999999701976776,
0,
0.09999999403953552,
0.09999999403953552,
0.05882352590560913,
0.0833333283662796,
0,
0.1875,
0.060606054961681366,
0.04999999701976776,
0.52173912525177,
0.27586206793785095,
0.0714285671710968,
0.23999999463558197
] | rkdU7tCaZ | true | [
"Paper presents dynamic evaluation methodology for adaptive sequence modelling"
] |
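A bare-bones PyTorch sketch of the dynamic evaluation loop described in the record above: score each test segment with the current parameters, then take a gradient step on that segment's loss so the model adapts to recently observed patterns. The tiny LSTM language model, the plain SGD update, the segment length, and re-running each segment from a fresh hidden state are simplifying assumptions; the methodology above develops more refined update rules.

```python
# Sketch of dynamic evaluation: adapt the model on each segment just evaluated.
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    def __init__(self, vocab=50, emb=32, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.rnn = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)
    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.out(h), None

def dynamic_evaluation(model, tokens, seg_len=32, lr=1e-4):
    criterion = nn.CrossEntropyLoss(reduction="sum")
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    total_loss, total_count = 0.0, 0
    for start in range(0, tokens.numel() - 1, seg_len):
        seq = tokens[start:start + seg_len + 1]
        if seq.numel() < 2:
            break
        inp, tgt = seq[:-1].unsqueeze(0), seq[1:].unsqueeze(0)
        logits, _ = model(inp)                     # score the segment first
        loss = criterion(logits.reshape(-1, logits.shape[-1]), tgt.reshape(-1))
        total_loss += loss.item()
        total_count += tgt.numel()
        opt.zero_grad()
        loss.backward()
        opt.step()                                 # then adapt to the segment just scored
    return total_loss / max(total_count, 1)

if __name__ == "__main__":
    torch.manual_seed(0)
    model = TinyLM()
    tokens = torch.randint(0, 50, (2000,))
    print("adapted loss per token:", dynamic_evaluation(model, tokens))
```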
[
"We propose a novel, projection based way to incorporate the conditional information into the discriminator of GANs that respects the role of the conditional information in the underlining probabilistic model. \n",
"This approach is in contrast with most frameworks of conditional GANs used in application today, which use the conditional information by concatenating the (embedded) conditional vector to the feature vectors. \n",
"With this modification, we were able to significantly improve the quality of the class conditional image generation on ILSVRC2012 (ImageNet) dataset from the current state-of-the-art result, and we achieved this with a single pair of a discriminator and a generator. \n",
"We were also able to extend the application to super-resolution and succeeded in producing highly discriminative super-resolution images. \n",
"This new structure also enabled high quality category transformation based on parametric functional transformation of conditional batch normalization layers in the generator.",
"Generative Adversarial Networks (GANs) BID5 are a framework to construct a generative model that can mimic the target distribution, and in recent years it has given birth to arrays of state-of-the-art algorithms of generative models on image domain BID23 Ledig et al., 2017; BID30 BID20 .",
"The most distinctive feature of GANs is the discriminator D(x) that evaluates the divergence between the current generative distribution p G (x) and the target distribution q(x) BID5 BID16 .",
"The algorithm of GANs trains the generator model by iteratively training the discriminator and generator in turn, with the discriminator acting as an increasingly meticulous critic of the current generator.Conditional GANs (cGANs) are a type of GANs that use conditional information BID13 for the discriminator and generator, and they have been drawing attention as a promising tool for class conditional image generation BID17 , the generation of the images from text BID20 BID30 , and image to image translation BID10 BID31 .",
"Unlike in standard GANs, the discriminator of cGANs discriminates between the generator distribution and the target distribution on the set of the pairs of generated samples x and its intended conditional variable y.",
"To the authors' knowledge, most frameworks of discriminators in cGANs at the time of writing feeds the pair the conditional information y into the discriminator by naively concatenating (embedded) y to the input or to the feature vector at some middle layer BID13 BID2 BID20 BID30 BID18 BID22 BID3 BID24 .",
"We would like to however, take into account the structure of the assumed conditional probabilistic models underlined by the structure of the discriminator, which is a function that measures the information theoretic distance between the generative distribution and the target distribution.By construction, any assumption about the form of the distribution would act as a regularization on the choice of the discriminator.",
"In this paper, we propose a specific form of the discriminator, a form motivated by a probabilistic model in which the distribution of the conditional variable y given x is discrete or uni-modal continuous distributions.",
"This model assumption is in fact common in many real world applications, including class-conditional image generation and super-resolution.As we will explain in the next section, adhering to this assumption will give rise to a structure of the discriminator that requires us to take an inner product between the embedded condition vector y and the feature vector (Figure 1d ).",
"With this modification, we were able to significantly improve the quality of the class conditional image generation on 1000-class ILSVRC2012 dataset BID21 with a single pair of a discriminator and generator (see the generated examples in Figure 2 ).",
"Also, when we applied our model of cGANs to a super-resolution task, we were able to produce high quality super-resolution images that are more discriminative in terms of the accuracy of the label classifier than the cGANs based on concatenation, as well as the bilinear and the bicubic method.",
"Any specification on the form of the discriminator imposes a regularity condition for the choice for the generator distribution and the target distribution.",
"In this research, we proposed a model for the discriminator of cGANs that is motivated by a commonly occurring family of probabilistic models.",
"This simple modification was able to significantly improve the performance of the trained generator on conditional image generation task and super-resolution task.",
"The result presented in this paper is strongly suggestive of the importance of the choice of the form of the discriminator and the design A RESULTS OF CLASS CONDITIONAL IMAGE GENERATION ON CIFAR-10 AND CIFAR-100As a preliminary experiment, we compared the performance of conditional image generation on CIFAR-10 and CIFAR-100 3.",
"For the discriminator and the generator, we reused the same architecture used in BID14 for the task on CIFAR-10.",
"For the adversarial objective functions, we used (9), and trained both machine learners with the same optimizer with same hyper parameters we used in Section 5.",
"For our projection model, we added the projection layer to the discriminator in the same way we did in the ImageNet experiment (before the last linear layer).",
"Our projection model achieved better performance than other methods on both CIFAR-10 and CIFAR-100.",
"Concatenation at hidden layer (hidden concat) was performed on the output of second ResBlock of the discriminator.",
"We tested hidden concat as a comparative method in our main experiments on ImageNet, because the concatenation at hidden layer performed better than the concatenation at the input layer (input concat) when the number of classes was large (CIFAR-100).To",
"explore how the hyper-parameters affect the performance of our proposed architecture, we conducted hyper-parameter search on CIFAR-100 about the Adam hyper-parameters (learning rate α and 1st order momentum β 1 ) for both our proposed architecture and the baselines. Namely",
", we varied each one of these parameters while keeping the other constant, and reported the inception scores for all methods including several versions of concat architectures to compare. We tested",
"with concat module introduced at (a) input",
"layer, (b) hidden",
"layer, and at (c) output",
"layer. As we can",
"see in Figure 11 , our projection architecture excelled over all other architectures for all choice of the parameters, and achieved the inception score of 9.53. Meanwhile",
", concat architectures were able to achieve all 8.82 at most. The best",
"concat model in term of the inception score on CIFAR-100 was the hidden concat with α = 0.0002 and β 1 = 0, which turns out to be the very choice of the parameters we picked for our ImageNet experiment. In this",
"experiment, we followed the footsteps of Plug and Play Generative model (PPGNs) BID15 and augmented the original generator loss with an additional auxiliary classifier loss. In particular",
", we used the losses given by : DISPLAYFORM0 wherep pre (y|x) is the fixed model pretrained for ILSVRC2012 classification task. For the actual",
"experiment, we trained the generator with the original adversarial loss for the first 400K updates, and used the augmented loss for the last 50K updates. For the learning",
"rate hyper parameter, we adopted the same values as other experiments we described above. For the pretrained",
"classifier, we used ResNet50 model used in BID7 . FIG0 compares the",
"results generated by vanilla objective function and the results generated by the augmented objective function. As we can see in",
"TAB2 , we were able to significantly outperform PPGNs in terms of inception score. However, note that",
"the images generated here are images that are easy to classify. The method with auxiliary",
"classifier loss seems effective in improving the visual appearance, but not in training faithful generative model."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.978723406791687,
0.2857142686843872,
0.2142857164144516,
0.19999998807907104,
0.22727271914482117,
0.2153846174478531,
0.2083333283662796,
0.27848100662231445,
0.2083333283662796,
0.2539682388305664,
0.3283582031726837,
0.307692289352417,
0.2222222238779068,
0.24137930572032928,
0.26229506731033325,
0.19999998807907104,
0.3181818127632141,
0.1860465109348297,
0.1904761791229248,
0.1538461446762085,
0.09090908616781235,
0.2790697515010834,
0.10810810327529907,
0.15789473056793213,
0.178571417927742,
0.0714285671710968,
0.1538461446762085,
0,
0,
0,
0,
0.1666666567325592,
0.05405404791235924,
0.1666666567325592,
0.12765957415103912,
0.08888888359069824,
0.045454539358615875,
0.052631575614213943,
0.1818181723356247,
0.10810810327529907,
0.19999998807907104,
0.1666666567325592,
0.15789473056793213
] | ByS1VpgRZ | true | [
"We propose a novel, projection based way to incorporate the conditional information into the discriminator of GANs that respects the role of the conditional information in the underlining probabilistic model."
] |
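The record above argues for conditioning the discriminator through an inner product between an embedded class vector y and the image feature vector, rather than by concatenation. A minimal PyTorch sketch of that output head is given below; the small convolutional feature extractor is a placeholder (the paper uses a ResNet-based discriminator), so treat this only as an illustration of the projection form D(x, y) = psi(phi(x)) + <embed(y), phi(x)>.

```python
import torch
import torch.nn as nn

class ProjectionDiscriminator(nn.Module):
    """D(x, y) = psi(phi(x)) + <embed(y), phi(x)>  -- the projection form.

    phi is a stand-in feature extractor, psi an unconditional linear
    output; the inner product with a learned class embedding injects
    the conditional information y."""
    def __init__(self, num_classes, feat_dim=128):
        super().__init__()
        self.phi = nn.Sequential(                   # placeholder feature net
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.psi = nn.Linear(feat_dim, 1)            # unconditional term
        self.embed = nn.Embedding(num_classes, feat_dim)

    def forward(self, x, y):
        h = self.phi(x)                              # (B, feat_dim)
        out = self.psi(h)                            # (B, 1)
        proj = (self.embed(y) * h).sum(dim=1, keepdim=True)  # <V y, h>
        return out + proj                            # raw critic score

# Usage sketch: scores = ProjectionDiscriminator(num_classes=1000)(images, labels)
```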
[
"Learning theory tells us that more data is better when minimizing the generalization error of identically distributed training and test sets.",
"However, when training and test distribution differ, this distribution shift can have a significant effect.",
"With a novel perspective on function transfer learning, we are able to lower bound the change of performance when transferring from training to test set with the Wasserstein distance between the embedded training and test set distribution.",
"We find that there is a trade-off affecting performance between how invariant a function is to changes in training and test distribution and how large this shift in distribution is.",
"Empirically across several data domains, we substantiate this viewpoint by showing that test performance correlates strongly with the distance in data distributions between training and test set.",
"Complementary to the popular belief that more data is always better, our results highlight the utility of also choosing a training data distribution that is close to the test data distribution when the learned function is not invariant to such changes.",
"Imagine there are two students who are studying for an exam.",
"Student A studies by diligently learning the class material by heart.",
"Student B studies by learning the underlying reasons for why things are the way they are.",
"Come test day, student A is only able to answer test questions that are very similar to the class material while student B has no trouble answering different looking questions that follow the same reasoning.",
"Distilled from this example, we note there is a trade-off between how \"well\" a student studied, i.e., how indifferent the student is to receiving exercise or test questions, and how close the test questions are to the exercise questions.",
"While most machine learning work studies the generalization error, i.e., the error when testing on different samples from the same distribution, we do not take the match of train and test distribution as given.",
"In fact, it appears that the distance between train and test distribution may be critical for successful \"generalization\".",
"Following a similar line of thought, Uguroglu & Carbonell (2011) devised a distribution measurement to select only features that do not vary from one domain to another.",
"In contrast, we are interested in linking performance directly to the distance between train and test distribution.",
"Invariance to distribution shifts: We say that a function is invariant to a given input perturbation when the corresponding output does not change with the perturbation.",
"This is desirable when trying to achieve robustness to irrelevant data variations which are called nuisances (Achille & Soatto, 2018) .",
"As outlined by Achille & Soatto (2018) ; Censi & Murray (2011) , the \"optimal\" learned function from input to output is maximally invariant to all data variations that do not contain information about the output.",
"To the extent to which a learner reacts to such nuisance variations, which carry no information about the output, it will incur a performance change in expectation.",
"The difficulty lies in knowing what can be ignored and what cannot.",
"Similarity between training and test distribution: Another strategy would be to ensure that the training and test distribution match which has been investigated in a number of diverse settings Arjovsky et al., 2017) .",
"Variations of this theme were encountered by Zhang et al. (2016) , where they show that networks are able to fit random labels perfectly, yet understandably fail to generalize to the test set of the correct label distribution.",
"Following popular wisdom, one would be led to believe that more data is all you need.",
"The presented theory and experiments however clearly detail that, while the amount of data is important, ensuring that train and test distribution are close may be similarly significant to perform well on the test set.",
"From the small-scale and real-world experiments we are left with the startling observation that, frequently, neural networks do not find the \"true\" functional relationship between input and output.",
"If this were the case, distribution shifts between training and testing should have a smaller impact.",
"Whether this problem can be remedied by finding richer function classes or whether it may be inherently unsolvable will have to be investigated.",
"An important aspect of this work is how we measure distribution distances.",
"By using a representation network, we obtain low dimensional embeddings and reduce the effect of noisy data.",
"This embedding is however in itself limited by its features, training data, and objective.",
"To apply the insights of this work, it will therefore be paramount to carefully choose an embedding for each dataset and task, whose features are able to meaningfully model the various data shifts a desired learning algorithm would react to.",
"As an example, a word embedding trained only on English texts will not provide meaningful results on other languages and hence is useless for modeling a distribution shift.",
"Through this work, we emphasize the consequences of using models for predictions, which do not share the invariances of the true functional relationship.",
"In this case, data distribution shifts lead to a deterioration of performance.",
"As a remedy to this issue, we propose applying the Fréchet distance to measure the distance of the dataset distributions to infer the degree of mismatch.",
"With this measure, we can deduce important criteria to choose training sets, select data augmentation techniques, and help optimize networks and their invariances.",
"We believe that making the problem explicit and having a way to measure progress through the FD score may allow for a new wave of innovative ideas on how to address generalization under data shifts."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1860465109348297,
0.2222222238779068,
0.37735849618911743,
0.4444444477558136,
0.3829787075519562,
0.26923075318336487,
0.1249999925494194,
0.0624999962747097,
0.1666666567325592,
0.19607841968536377,
0.23076923191547394,
0.2181818187236786,
0.4000000059604645,
0.1702127605676651,
0.5128204822540283,
0.3636363446712494,
0.09756097197532654,
0.18518517911434174,
0.2222222238779068,
0.1818181723356247,
0.30188679695129395,
0.2142857164144516,
0.10526315122842789,
0.3333333432674408,
0.25531914830207825,
0.21052631735801697,
0.04651162400841713,
0.05882352590560913,
0.10256409645080566,
0.1111111044883728,
0.16949151456356049,
0.2083333283662796,
0.1428571343421936,
0.1764705777168274,
0.09756097197532654,
0.09090908616781235,
0.18518517911434174
] | SJgSflHKDr | true | [
"The Frechet Distance between train and test distribution correlates with the change in performance for functions that are not invariant to the shift."
] |
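The record above measures the mismatch between train and test sets with a Fréchet distance (FD) between their embedded distributions. Below is a standard NumPy/SciPy computation of the FD between Gaussians fitted to two sets of embeddings; the representation network that produces `emb_train` and `emb_test` is assumed to exist and is not shown.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(emb_train, emb_test):
    """Fréchet distance between Gaussians fitted to two embedding sets.

    emb_train, emb_test: arrays of shape (n_samples, d) produced by a
    fixed embedding network.
    FD = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^{1/2})
    """
    mu1, mu2 = emb_train.mean(axis=0), emb_test.mean(axis=0)
    s1 = np.cov(emb_train, rowvar=False)
    s2 = np.cov(emb_test, rowvar=False)
    covmean = sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):      # numerical noise can create tiny
        covmean = covmean.real        # imaginary parts; drop them
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(s1 + s2 - 2.0 * covmean))
```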
[
"We introduce a new procedural dynamic system that can generate a variety of shapes that often appear as curves, but technically, the figures are plots of many points.",
"We name them spiroplots and show how this new system relates to other procedures or processes that generate figures.",
"Spiroplots are an extremely simple process but with a surprising visual variety.",
"We prove some fundamental properties and analyze some instances to see how the geometry or topology of the input determines the generated figures.",
"We show that some spiroplots have a finite cycle and return to the initial situation, whereas others will produce new points infinitely often.",
"This paper is accompanied by a JavaScript app that allows anyone to generate spiroplots."
] | [
1,
0,
0,
0,
0,
0
] | [
0.1860465109348297,
0.1621621549129486,
0.13333332538604736,
0.10526315122842789,
0.09756097197532654,
0.1249999925494194
] | 8PAFHtYh17 | false | [
"A new, very simple dynamic system is introduced that generates pretty patterns; properties are proved and possibilities are explored"
] |
[
"Unsupervised image-to-image translation aims to learn a mapping between several visual domains by using unpaired training pairs.",
"Recent studies have shown remarkable success in image-to-image translation for multiple domains but they suffer from two main limitations: they are either built from several two-domain mappings that are required to be learned independently and/or they generate low-diversity results, a phenomenon known as model collapse.",
"To overcome these limitations, we propose a method named GMM-UNIT based on a content-attribute disentangled representation, where the attribute space is fitted with a GMM.",
"Each GMM component represents a domain, and this simple assumption has two prominent advantages.",
"First, the dimension of the attribute space does not grow linearly with the number of domains, as it is the case in the literature.",
"Second, the continuous domain encoding allows for interpolation between domains and for extrapolation to unseen domains.",
"Additionally, we show how GMM-UNIT can be constrained down to different methods in the literature, meaning that GMM-UNIT is a unifying framework for unsupervised image-to-image translation.",
"Translating images from one domain into another is a challenging task that has significant influence on many real-world applications where data are expensive, or impossible to obtain and to annotate.",
"Image-to-Image translation models have indeed been used to increase the resolution of images (Dong et al., 2014) , fill missing parts (Pathak et al., 2016) , transfer styles (Gatys et al., 2016) , synthesize new images from labels (Liu et al., 2017) , and help domain adaptation (Bousmalis et al., 2017; Murez et al., 2018) .",
"In many of these scenarios, it is desirable to have a model mapping one image to multiple domains, while providing visual diversity (i.e. a day scene ↔ night scene in different seasons).",
"However, the existing models can either map an image to multiple stochastic results in a single domain, or map in the same model multiple domains in a deterministic fashion.",
"In other words, most of the methods in the literature are either multi-domain or multi-modal.",
"Several reasons have hampered a stochastic translation of images to multiple domains.",
"On the one hand, most of the Generative Adversarial Network (GAN) models assume a deterministic mapping (Choi et al., 2018; Pumarola et al., 2018; Zhu et al., 2017a) , thus failing at modelling the correct distribution of the data .",
"On the other hand, approaches based on Variational Auto-Encoders (VAEs) usually assume a shared and common zero-mean unit-variance normally distributed space Zhu et al., 2017b) , limiting to two-domain translations.",
"In this paper, we propose a novel image-to-image translation model that disentangles the visual content from the domain attributes.",
"The attribute latent space is assumed to follow a Gaussian mixture model (GMM), thus naming the method: GMM-UNIT (see Figure 1 ).",
"This simple assumption allows four key properties: mode-diversity thanks to the stochastic nature of the probabilistic latent model, multi-domain translation since the domains are represented as clusters in the same attribute spaces, scalability because the domain-attribute duality allows modeling a very large number of domains without increasing the dimensionality of the attribute space, and few/zero-shot generation since the continuity of the attribute representation allows interpolating between domains and extrapolating to unseen domains with very few or almost no observed data from these domains.",
"The code and models will be made publicly available.",
"Figure: GMM-UNIT working principle.",
"The content is extracted from the input image (left, purple box), while the attribute (turquoise box) can be either sampled (top images) or extracted from a reference image (bottom images).",
"Either way, the generator (blue box) is trained to output realistic images belonging to the domain encoded in the attribute vector.",
"This is possible thanks to the disentangled attribute-content latent representation of GMM-UNIT and the generalisation properties associated to Gaussian mixture modeling.",
"In this paper, we present a novel image-to-image translation model that maps images to multiple domains and provides a stochastic translation.",
"GMM-UNIT disentangles the content of an image from its attributes and represents the attribute space with a GMM, which allows us to have a continuous encoding of domains.",
"This has two main advantages: first, it avoids the linear growth of the dimension of the attribute space with the number of domains.",
"Second, GMM-UNIT allows for interpolation across-domains and the translation of images into previously unseen domains.",
"We conduct extensive experiments in three different tasks, namely two-domain translation, multidomain translation and multi-attribute multi-domain translation.",
"We show that GMM-UNIT achieves quality and diversity superior to state of the art, most of the times with fewer parameters.",
"Future work includes the possibility to thoroughly learn the mean vectors of the GMM from the data and extending the experiments to a higher number of GMM components per domain."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0
] | [
0.3030303120613098,
0.31578946113586426,
0.1538461446762085,
0.06666666269302368,
0.11428570747375488,
0.13333332538604736,
0.39024388790130615,
0.17777776718139648,
0.0714285671710968,
0.30434781312942505,
0.5128204822540283,
0.06666666269302368,
0.4285714328289032,
0.043478257954120636,
0.08695651590824127,
0.29411762952804565,
0.2631579041481018,
0.15789473056793213,
0,
0.09999999403953552,
0.1463414579629898,
0.1764705777168274,
0.17142856121063232,
0.5714285373687744,
0.2926829159259796,
0.05882352590560913,
0.19354838132858276,
0.1249999925494194,
0.17142856121063232,
0.10256409645080566
] | HkeFQgrFDr | true | [
"GMM-UNIT is an image-to-image translation model that maps an image to multiple domains in a stochastic fashion."
] |
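The record above represents each domain as one Gaussian component in a shared attribute space, which allows sampling diverse attribute codes per domain and interpolating between domains. The sketch below illustrates only that attribute-side machinery; the component means and variance, the content encoder, and the generator referenced in the comments are assumed components, and their exact parameterisation in GMM-UNIT may differ.

```python
import numpy as np

class GMMAttributeSpace:
    """Attribute space with one Gaussian component per domain."""
    def __init__(self, domain_means, sigma=1.0):
        self.means = np.asarray(domain_means, dtype=np.float64)  # (K, d)
        self.sigma = sigma

    def sample(self, domain, n=1):
        """Draw n stochastic attribute codes for a single domain."""
        d = self.means.shape[1]
        return self.means[domain] + self.sigma * np.random.randn(n, d)

    def interpolate(self, dom_a, dom_b, alpha):
        """Continuous code between two domains (0 -> dom_a, 1 -> dom_b);
        intermediate alphas give 'unseen' mixtures of domains."""
        return (1 - alpha) * self.means[dom_a] + alpha * self.means[dom_b]

# Hypothetical use with an encoder/generator pair (not defined here):
#   content = content_encoder(image)
#   attr    = space.sample(domain=2)[0]
#   new_img = generator(content, attr)
```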
[
"We present Compositional Attention Networks, a novel fully differentiable neural network architecture, designed to facilitate explicit and expressive reasoning.",
"While many types of neural networks are effective at learning and generalizing from massive quantities of data, this model moves away from monolithic black-box architectures towards a design that provides a strong prior for iterative reasoning, enabling it to support explainable and structured learning, as well as generalization from a modest amount of data.",
"The model builds on the great success of existing recurrent cells such as LSTMs: It sequences a single recurrent Memory, Attention, and Control (MAC) cell, and by careful design imposes structural constraints on the operation of each cell and the interactions between them, incorporating explicit control and soft attention mechanisms into their interfaces.",
"We demonstrate the model's strength and robustness on the challenging CLEVR dataset for visual reasoning, achieving a new state-of-the-art 98.9% accuracy, halving the error rate of the previous best model.",
"More importantly, we show that the new model is more computationally efficient, data-efficient, and requires an order of magnitude less time and/or data to achieve good results.",
"This paper considers how best to design neural networks to perform the iterative reasoning necessary for complex problem solving.",
"Putting facts and observations together to arrive at conclusions is a central necessary ability as we work to move neural networks beyond their current great success with sensory perception tasks BID20 BID18 towards displaying Artificial General Intelligence.Figure 1: A sample image from the CLEVR dataset, with a question: \"There is a purple cube behind a metal object left to a large ball; what material is it?\"",
"Concretely, we develop a novel model that we apply to the CLEVR dataset BID15 for visual question answering (VQA).",
"VQA BID3 BID10 ) is a challenging multimodal task that requires responding to natural language questions about images.",
"However, BID0 show how the first generation of successful models on VQA tasks tend to acquire only superficial comprehension of both the image and the question, exploiting dataset biases rather than capturing a sound perception and reasoning process that would lead to the correct answer BID27 .",
"CLEVR was created to address this problem.",
"As illustrated in figure 1, instances in the dataset consist of rendered images featuring 3D objects of several shapes, colors, materials and sizes, coupled with unbiased, compositional questions that require an array of challenging reasoning skills such as following transitive relations, counting objects and comparing their properties, without allowing any shortcuts around such reasoning.",
"Notably, each instance in CLEVR is also accompanied by a tree-structured functional program that was both used to construct the question and reflects its reasoning procedure -a series of predefined operations -that can be composed together to answer it.Most neural networks are essentially very large correlation engines that will hone in on any statistical, potentially spurious pattern that allows them to model the observed data more accurately.",
"In contrast, we seek to create a model structure that requires combining sound inference steps to solve a problem instance.",
"At the other extreme, some approaches adopt symbolic structures that resemble the expression trees of programming languages to perform reasoning BID2 BID13 .",
"In particular, some approaches to CLEVR use the supplied functional programs for supervised or semi-supervised training BID1 BID16 .",
"Not only do we wish to avoid using such supervision in our work, but we in general suspect that the rigidity of these structures and the use of an inventory of operation-specific neural modules undermines robustness and generalization, and at any rate requires more complex reinforcement learning methods.To address these weaknesses, while still seeking to use a sound and transparent underlying reasoning process, we propose Compositional Attention Networks, a novel, fully differentiable, non-modular architecture for reasoning tasks.",
"Our model is a straightforward recurrent neural network with attention; the novelty lies in the use of a new Memory, Attention and Composition (MAC) cell.",
"The constrained and deliberate design of the MAC cell was developed as a kind of strong structural prior that encourages the network to solve problems by stringing together a sequence of transparent reasoning steps.",
"MAC cells are versatile but constrained neural units.",
"They explicitly separate out memory from control, both represented recurrently.",
"The unit contains three sub-units: The control unit updates the control representation based on outside instructions (for VQA, the question), learning to successively attend to different parts of the instructions; the read unit gets information out of a knowledge base (for VQA, the image) based on the control signal and the previous memory; the write unit updates the memory based on soft self-attention to previous memories, controlled by the retrieved information and the control signal.",
"A universal MAC unit with a single set of parameters is used throughout the reasoning process, but its behavior can vary widely based on the context in which it is applied -the input to the control unit and the contents of the knowledge base.",
"With attention, our MAC network has the capacity to represent arbitrarily complex acyclic reasoning graphs in a soft manner, while having physically sequential structure.",
"The result is a continuous counterpart to module networks that can be trained end-to-end simply by backpropagation.We test the behavior of our new network on CLEVR and its associated datasets.",
"On the primary CLEVR reasoning task, we achieve an accuracy of 98.9%, halving the error rate compared to the previous state-of-the-art FiLM model BID24 .",
"In particular, we show that our architecture yields better performance on questions involving counting and aggregation.",
"In supplementary studies, we show that the MAC network learns more quickly (both in terms of number of training epochs and training time) and more effectively from limited amounts of training data.",
"Moreover, it also achieves a new state-of-the-art performance of 82.5% on the more varied and difficult humanauthored questions of the CLEVR-Humans dataset.",
"The careful design of our cell encourages compositionality, versatility and transparency.",
"We achieve these properties by defining attention-based interfaces that constrict the cell's input and output spaces, and so constrain the interactions both between and inside cells in order to guide them towards simple reasoning behaviors.",
"Although each cell's functionality has only a limited range of possible continuous reasoning behaviors, when chained together in a MAC network, the whole system becomes expressive and powerful.",
"In the future, we believe that the architecture will also prove beneficial for other multi-step reasoning and inference tasks, for instance in machine comprehension and textual question answering.",
"Overall, when designing the MAC cell, we have attempted to formulate the inner workings of an elementary, yet generic reasoning skills: the model decomposes the problem into steps, focusing on one at a time.",
"At each such step, it takes into account:• The control c i : Some aspect of the task -pointing to the future work that has left to be done.•",
"The previous memory or memories: The partial solution or evidence the cell has acquired so far -pointing to the past work that has already been achieved.•",
"The newly retrieved information m new : that is retrieved from the knowledge base KB and may or may not be transitively related to that partial solution or evidence -the present, or current work.Considering these three sources of information together, the cell finally adds the new information up into its working memory, m i , progressing one more step towards the final answer.",
"We have given a first demonstration of how a sequence of Memory, Attention and Control (MAC) cells combined into a Compositional Attention Network provides a very effective tool for neural reasoning.",
"In future work, we wish to explore this promising architecture for other tasks and domains, including real-world VQA, machine comprehension and textual question answering.",
"In this section we provide detailed discussion of related work.",
"Several models have been applied to the CLEVR task.",
"These can be partitioned into two groups, module networks that use the strong supervision provided as a tree-structured functional program associated with each instance, and end-to-end, fully differentiable networks that combine a fairly standard stack of CNNs with components that aid them in performing reasoning tasks.",
"We also discuss the relation of MAC to other approaches, such as memory networks and neural computers."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.37837836146354675,
0.1249999925494194,
0.1904761791229248,
0.30434781312942505,
0.13333332538604736,
0.1666666567325592,
0.07894736528396606,
0.2222222238779068,
0.1111111044883728,
0.20689654350280762,
0,
0.1230769157409668,
0.14999999105930328,
0.0555555522441864,
0.1538461446762085,
0.1111111044883728,
0.1463414579629898,
0.19512194395065308,
0.2083333283662796,
0,
0,
0.19354838132858276,
0.2545454502105713,
0.1428571343421936,
0.2448979616165161,
0.1463414579629898,
0.11764705181121826,
0.13636362552642822,
0.25641024112701416,
0.13793103396892548,
0.1599999964237213,
0.2222222238779068,
0.23255813121795654,
0.20408162474632263,
0.1304347813129425,
0.04878048226237297,
0.11428570747375488,
0.27272728085517883,
0.1463414579629898,
0.0714285671710968,
0.14814814925193787,
0.16949151456356049,
0.22857142984867096
] | S1Euwz-Rb | true | [
"We present a novel architecture, based on dynamic memory, attention and composition for the task of machine reasoning."
] |
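The record above decomposes each reasoning step into a control unit (attends over the question), a read unit (attends over a knowledge base, guided by the control and the previous memory), and a write unit (updates the memory). The skeleton below is a heavily simplified, assumed parameterisation intended only to show the data flow between those sub-units; it omits the gating, self-attention over previous memories, and other details of the published cell.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MACCell(nn.Module):
    """Simplified MAC cell: control attends over question words,
    read attends over knowledge-base cells, write updates memory."""
    def __init__(self, d):
        super().__init__()
        self.ctrl_proj = nn.Linear(2 * d, d)
        self.ctrl_attn = nn.Linear(d, 1)
        self.read_proj = nn.Linear(2 * d, d)
        self.read_attn = nn.Linear(d, 1)
        self.write = nn.Linear(2 * d, d)

    def forward(self, words, kb, c_prev, m_prev, q):
        # words: (B, L, d) question word states; kb: (B, N, d) image cells
        # 1) Control: decide which part of the question to attend to now.
        cq = self.ctrl_proj(torch.cat([c_prev, q], dim=-1))            # (B, d)
        a_c = F.softmax(self.ctrl_attn(cq.unsqueeze(1) * words), dim=1)
        c = (a_c * words).sum(dim=1)                                   # (B, d)
        # 2) Read: retrieve information from the knowledge base, guided
        #    by the new control and the previous memory.
        interact = kb * m_prev.unsqueeze(1)                            # (B, N, d)
        r_in = self.read_proj(torch.cat([interact, kb], dim=-1))
        a_r = F.softmax(self.read_attn(c.unsqueeze(1) * r_in), dim=1)
        info = (a_r * kb).sum(dim=1)                                   # (B, d)
        # 3) Write: integrate the retrieved information into the memory.
        m = self.write(torch.cat([info, m_prev], dim=-1))
        return c, m
```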
[
" Variational Auto-Encoders (VAEs) are designed to capture compressible information about a dataset. ",
"As a consequence the information stored in the latent space is seldom sufficient to reconstruct a particular image. ",
"To help understand the type of information stored in the latent space we train a GAN-style decoder constrained to produce images that the VAE encoder will map to the same region of latent space.",
"This allows us to ''imagine'' the information captured in the latent space. ",
"We argue that this is necessary to make a VAE into a truly generative model. ",
"We use our GAN to visualise the latent space of a standard VAE and of a $\\beta$-VAE.",
"Variational auto-encoders (VAEs) have made a significant impact since their introduction by Kingma and Welling (2014) .",
"However, one of their perceived problems is their reconstruction performance.",
"This has spawned a wave of research into trying to improve the reconstruction performance (Zhao et al., 2017; Dai and Wipf, 2019; Larsen et al., 2016; Gao et al., 2017; Brock et al., 2017) .",
"We argue that such attempts are misguided.",
"The whole point of VAEs is to capture only compressible information and discard information specific to any particular image.",
"This is a consequence of the well known evidence lower bound or ELBO objective function consisting of a negative log-probability of generating the original image from the latent representation (this is often implemented as a mean squared error between the image and the reconstruction, although as we argue in Appendix A this term should be proportional to the logarithm of the mean squared error) and a KL-divergence between the probability distribution representing a latent code and a 'prior distribution' (usually taken as a multivariate normal with mean zero and unit variance).",
"These two terms have a nice interpretation in terms of the minimum description length (Rissanen, 1978 )-this has been described elsewhere, for example, Chen et al. (2016) .",
"The KL-term can be viewed as a measure of the amount of information in the latent code while the log-probability of the image measures the amount of information required to change the image produced by the decoder into the input image (see Section 3 for details).",
"That is, the latent space of a VAE can be viewed as a model of the dataset-capturing compressible information while not encoding any image specific information (which is cheaper to communicate using the reconstruction loss).",
"The great strength of a VAE is that it builds a model of the dataset that does not over-fit (i.e. code for in-compressible features found in specific images).",
"However, because of this it typically will not do a good job of reconstructing images as the latent code does not contain enough information to do the reconstruction (for very restrictive dataset such as MNIST and Celeb-A a lot of information can be captured in the latent space, but for more complex datasets like ImageNet or CIFAR the reconstructions are poor).",
"Of course, if you want good reconstructions on the training set then the simplest solution is to remove the KL-divergence term and just use an autoencoder.",
"However, having a model that does not over-fit the dataset can be useful, but in this case the decoder of a standard VAE should not be regarded as a generative model-that is not its purpose.",
"If we wish to generate realistic looking images we need to imagine the information discarded by the encoder.",
"As a rather simplified analogy, consider a verbal description of an image \"a five year old girl in a blue dress standing on a beach\".",
"If we asked different artists to depict such scene there is clearly not enough information to provide pixel-wise or feature-wise similarity between their interpretation although each artist could render a convincing image that satisfies the description.",
"In a similar manner if we want a VAE to act as a generative model we need to build a renderer that will imagine an image consistent with the latent variable representation.",
"A simple way to achieve this is using a modified Generative Adversarial Network (GAN).",
"We call such a model a latent space renderer-GAN (or LSR-GAN).",
"To generate an image we choose a latent vector z from the prior distribution for the VAE.",
"This is passed to a generator network that generates an image,x, with the same dimensions as that of the dataset used to train the VAE.",
"The generated image has both to convince a discriminator network that it is a real image-as is usual for a GAN (Goodfellow et al., 2014) -at the same time the VAE encoder should mapx close to z.",
"To accomplish this we add an additional cost to the normal GAN loss function for the generator (L GEN )",
"where q φ (·|x) is the probability distribution generated by the VAE encoder given an imagex and z is the latent vector that was put into the GAN generator.",
"Note that when training the LSR-GAN we freeze the weights of the VAE encoder.",
"The constant λ is an adjustable hyperparameter providing a trade-off between how realistic the image should look and how closely it captures the information in the latent space.",
"This modification of the objective function can clearly be applied to any GAN or used with any VAE.",
"Although the idea is simple, it provides a powerful method for visualising (imagining) the information stored in a latent space.",
"Interestingly, it also appears to provide a powerful regularisation mechanism to stabilize the training for GANs.",
"Combinations of VAEs and GANs are, of course, not new (Makhzani et al., 2016; Larsen et al., 2016; Brock et al., 2017; Huang et al., 2018; Srivastava et al., 2017) .",
"In all cases we are aware of GANs have been combined with VAEs to \"correct\" for the poor reconstruction performance of the VAE (see Appendix B for a more detailed discussion of the literature on VAE-GAN hybrids).",
"As we have argued (and expound on in more detail in Section 3), we believe that the decoder of a VAE does the job it is designed to do.",
"They cannot reconstruct images accurately, because the latent space of a VAE loses information about the image, by design.",
"All we can do is imagine the type of image that a point in the latent space represents.",
"In the next section, we show examples of images generated by the LSR-GAN for both normal VAEs and β-VAEs (we also spend time describing VAEs, β-VAEs and the LSR-GAN in more detail).",
"In addition, in this section we present a number of systematic experiments showing the performance of a VAE and LSR-GAN.",
"In Section 3, we revisit the minimum description length formalism to explain why we believe a VAE is doomed to fail as a generative model.",
"We conclude in Section 4.",
"We cover more technical aspects in the appendices.",
"In Appendix A we show that the correct loss function for a VAE requires minimising a term proportional to the logarithm of the mean squared error.",
"In Appendix B we draw out the similarities and differences between our approach to hybridising VAEs with GANs and other work in this area.",
"We present some additional experimental results in Appendix C. A detailed description of the architecture of LSR-GAN is given in Appendix D. We end the paper with Appendix E by showing some samples generated by randomly drawing latent variables and feeding them to the LSR-GAN.",
"VAEs are often taken to be a pauper's GAN.",
"That is, a method for generating samples that is easier to train than a GAN, but gives slightly worse results.",
"If this is the only objective then it is clearly legitimate to modify the VAE in anyway that will improve its performance.",
"However, we believe that this risks losing one of their most desirable properties, namely their ability to learn features of the whole dataset while avoiding encoding information specific to particular images.",
"We have argued that because of this property, a VAE is not an ideal generative model.",
"It will not be able to reconstruct data accurately and consequently will struggle even more with generating new samples.",
"One of the weaknesses of the vast literature on VAEs is that it often attempts to improve them without regard to what makes VAEs special.",
"As we have argued in this paper, a consistent way of using the latent space of a VAE is to use a GAN as a data renderer, using the VAE encoder to ensure that the GAN is generating images that represent the information encoded in the VAE's latent space.",
"This involves \"imagining\" the information that the VAE disregards.",
"LSR-GAN can be particularly useful in generating random samples, although, as shown in Appendix E, for very diverse datasets the samples are often not recognisable as real world objects.",
"Although there are already many VAE-GAN hybrids, to the best of our knowledge, they are all designed to \"fix\" the VAE.",
"In our view VAEs are not broken and \"fixing\" them is actually likely to break them (i.e. by encoding image specific information in the latent space).",
"Although, the main idea in this paper is relatively simple, we believe its main contribution is as a corrective to the swath of literature on VAEs that, in our view, often throws the baby out with the bath water in an attempt to fix VAEs despite the fact that perform in exactly the way they were designed to."
] | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1538461446762085,
0.3720930218696594,
0.9433962106704712,
0.31578946113586426,
0.19512194395065308,
0.3414634168148041,
0.0476190410554409,
0.05714285373687744,
0.14814814925193787,
0.060606058686971664,
0.1395348757505417,
0.1573033630847931,
0.1538461446762085,
0.27586206793785095,
0.2857142686843872,
0.23076923191547394,
0.2631579041481018,
0.07999999821186066,
0.2545454502105713,
0.2926829159259796,
0.1249999925494194,
0.19672130048274994,
0.30188679695129395,
0.09999999403953552,
0.1666666567325592,
0.2857142686843872,
0.3404255211353302,
0.23728813230991364,
0.17777776718139648,
0.19607841968536377,
0.31578946113586426,
0.23529411852359772,
0.1860465109348297,
0.3181818127632141,
0.1463414579629898,
0.04255318641662598,
0.20689654350280762,
0.3461538553237915,
0.3636363446712494,
0.3720930218696594,
0.18867923319339752,
0.27272728085517883,
0.2083333283662796,
0.06451612710952759,
0.11764705181121826,
0.2857142686843872,
0.16326530277729034,
0.16393442451953888,
0.11428570747375488,
0.17777776718139648,
0.260869562625885,
0.25925925374031067,
0.1904761791229248,
0.09090908616781235,
0.1702127605676651,
0.4482758641242981,
0.23529411852359772,
0.07547169178724289,
0.1818181723356247,
0.19230768084526062,
0.19718308746814728
] | BJe4PyrFvB | true | [
"To understand the information stored in the latent space, we train a GAN-style decoder constrained to produce images that the VAE encoder will map to the same region of latent space."
] |
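The record above adds a term to the usual GAN generator loss so that the frozen VAE encoder maps each generated image back near the latent code that produced it. The sketch below writes that extra term as the negative Gaussian log-likelihood of z under the encoder's posterior, which is one natural reading of the constraint described; the exact form used in the paper is not reproduced here, and the `discriminator(x)` and `encoder(x)` interfaces are assumptions.

```python
import torch
import torch.nn.functional as F

def lsr_generator_loss(discriminator, encoder, fake_images, z, lam=1.0):
    """GAN generator loss plus a latent-consistency penalty (LSR-GAN style).

    Assumed interfaces: `discriminator(x)` returns raw logits and
    `encoder(x)` returns (mu, logvar) of a diagonal-Gaussian posterior.
    The encoder's weights are frozen elsewhere; gradients still flow
    through `fake_images` back to the generator.
    """
    # Standard non-saturating generator term: make fakes look real.
    logits = discriminator(fake_images)
    gan_term = F.binary_cross_entropy_with_logits(
        logits, torch.ones_like(logits))

    # Latent consistency: negative log N(z; mu, exp(logvar)) up to an
    # additive constant -- a hedged stand-in for -log q_phi(z | x_hat).
    mu, logvar = encoder(fake_images)
    nll = 0.5 * (logvar + (z - mu) ** 2 / logvar.exp()).sum(dim=1).mean()

    return gan_term + lam * nll
```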
[
"Human brain function as measured by functional magnetic resonance imaging\n",
"(fMRI), exhibits a rich diversity.",
"In response, understanding the individual variability\n",
"of brain function and its association with behavior has become one of the\n",
"major concerns in modern cognitive neuroscience.",
"Our work is motivated by the\n",
"view that generative models provide a useful tool for understanding this variability.\n",
"To this end, this manuscript presents two novel generative models trained\n",
"on real neuroimaging data which synthesize task-dependent functional brain images.\n",
"Brain images are high dimensional tensors which exhibit structured spatial\n",
"correlations.",
"Thus, both models are 3D conditional Generative Adversarial networks\n",
"(GANs) which apply Convolutional Neural Networks (CNNs) to learn an\n",
"abstraction of brain image representations.",
"Our results show that the generated\n",
"brain images are diverse, yet task dependent.",
"In addition to qualitative evaluation,\n",
"we utilize the generated synthetic brain volumes as additional training data to improve\n",
"downstream fMRI classifiers (also known as decoding, or brain reading).\n",
"Our approach achieves significant improvements for a variety of datasets, classifi-\n",
"cation tasks and evaluation scores.",
"Our classification results provide a quantitative\n",
"evaluation of the quality of the generated images, and also serve as an additional\n",
"contribution of this manuscript.",
"Functional Magnetic Resonance Imaging (fMRI) is a common tool used by cognitive neuroscientists to investigate the properties of brain function in response to stimuli.",
"Classic analysis approaches BID19 focused on analyzing group-averaged brain function images.",
"However, it was discovered that brain activation patterns vary significantly between individuals.",
"Thus, modern analysis now prioritizes understanding the inter-subject variability of brain function (Dubois & Adolphs, 2016; BID6 . Our work is motivated by the view that generative models provide a useful tool for understanding this variability -as they enable the synthesis of a variety of plausible brain images representing different hypothesized individuals, and high-quality generative models can be analyzed to posit potential mechanisms that explain this variability BID10 . The results presented in this paper provide -to our knowledge for the first time, positive results suggesting that it is indeed possible to generate high quality diverse and task dependent brain images.While we can qualitatively evaluate generative brain images, quantitative evaluation allows us to objectively compare between various results. To this end, we utilize the generated synthetic brain volumes as additional training data to improve downstream fMRI classifiers. The use of classifiers to predict behavior associated with brain images is also known as decoding or brain reading BID21 BID16 ).",
"Classifiers such as support vector machines and deep networks have been applied for decoding brain images.",
"For example, Cox & Savoy (2003) attempted to classify which of 10 categories of object a subject was looking at (including similar categories, such as horses and cows) based on limited number of brain images.",
"Besides visual tasks, BID14 distinguished active regions of brains when subjects listened to linguistic words, where the stimuli included five items from each of 12 semantic categories (animals, body parts etc.).Beyond",
"providing a model for individual variability, high quality brain image synthesis addresses pressing data issues in cognitive neuroscience. Progress",
"in the computational neurosciences is stifled by the difficulty of obtaining brain data either because of a limited culture of data sharing, or due to medical privacy regulations BID18 . For the",
"computational neuroscientist, generated images deliver unlimited quantities of high quality brain imaging data that can be used to develop state of the art tools before application to real subjects and/or patients BID21 . This approach",
"of using modern generative models to synthesize data, which in turn accelerates scientific study, has already proven useful in many scientific fields such as particle physics and astronomy BID1 . Our work represent",
"a first application for this approach to neuroscience.One of the promising generative models are Generative Adversarial Networks (GANs) BID7 , capturing complex distributions using a non-cooperative two-player game formulation: a generator produces synthetic data by transforming samples drawn from a simple distribution; a discriminator focuses on distinguishing synthetic and real data. Despite (or possibly",
"due to) the compelling formulation, GAN training is known to be unstable. To address this difficulty",
"various variants have been proposed. Wasserstein GANs (WGANs) formulate",
"the objective using the Wasserstein distance rather than the classical Jenson-Shannon divergence. Improved training of WGANs BID9 applies",
"an additional gradient penalty, which avoids the critical weight clipping in WGANs which might lead to pathological behavior. Dualing GANs restrict the discriminator",
"and formulate the dual objective (Li et al., 2017) . Beyond improving the stability, conditional",
"GANs (Mirza & Osindero, 2014) make it possible to control the data generation process by conditioning the model on additional information. Auxiliary Classifier GANs (AC-GANs) BID15 )",
"unify a GAN and classifier to a single architecture, employing labels to generate ImageNet samples. 3D GANs BID23 reconstruct 3D objects and BID4",
"propose to use improved WGAN to enhance the stability of 3D GANs.We make the following contributions in this paper:1. We develop Improved Conditional Wasserstein GANs",
"(ICW-GAN) and Auxiliary Classifier and Discriminator GANs (ACD-GAN), two types of 3D conditional GANs to synthesize fMRI brain data, both of which we find to be stable to train. 2. We assess the qualitative quality and diversity",
"of generated brain volumes. Our results suggest that the proposed models are able",
"to generate high quality task-dependent and diverse 3D brain images. 3. We evaluate our models on three datasets using a series",
"of image classification tasks with support vector machines and deep network classifiers at two levels of brain image resolution. Results show that augmenting training data using synthetic",
"data generated by our models can greatly improve test classification accuracy of brain volumes.",
"Generative models provide a useful tool for understanding the individual variability of brain images.",
"The results of this manuscript show -to our knowledge for the first time, that 3-D conditional GANs, in particular our proposed ICW-GAN and ACD-GAN, can generate high quality diverse and task dependent brain images.",
"We hope our results inspire additional research on generative models for brain imaging data.",
"Beyond qualitative evaluation, we evaluate quantitative performance by using the generated images as additional training data in a predictive model -mixing synthetic and real data to train classifiers.",
"The results show that our synthetic data augmentation can significantly improve classification accuracy -a result which may be of independent interest.",
"Future work will focus on additional qualitative evaluation of the generated images by neuroscience experts and exploration of various applications.",
"We also plan to more throughly investigate the trained models to further explore what it may contribute to the science of individual variability in neuroimaging.",
"Finally, we plan to expand our models to combine data across multiple studies -each of which use different labels, by exploring techniques for merging labels based on the underlying cognitive processes BID17 .6 SUPPLEMENTARY MATERIAL"
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.06666666269302368,
0,
0,
0.1249999925494194,
0,
0,
0,
0.06666666269302368,
0.12903225421905518,
0.13333332538604736,
0.13793103396892548,
0.06666666269302368,
0.07999999821186066,
0,
0.2222222238779068,
0.07999999821186066,
0.24242423474788666,
0.19354838132858276,
0,
0.1599999964237213,
0.07692307233810425,
0.0624999962747097,
0,
0.09302324801683426,
0.12903225421905518,
0.0624999962747097,
0.14814814925193787,
0.1666666567325592,
0.15094339847564697,
0.038461532443761826,
0.051282044500112534,
0.08695651590824127,
0.11538460850715637,
0.07999999821186066,
0.11594202369451523,
0.0555555522441864,
0.06896550953388214,
0,
0.09756097197532654,
0.05882352590560913,
0.08888888359069824,
0.25641024112701416,
0.1428571343421936,
0.23529411852359772,
0.12121211737394333,
0.2926829159259796,
0.2222222238779068,
0.23529411852359772,
0.11764705181121826,
0.1538461446762085,
0.05882352590560913,
0.1702127605676651,
0.1463414579629898,
0.10256409645080566,
0.0476190410554409,
0.03703703358769417
] | BJaU__eCZ | true | [
"Two novel GANs are constructed to generate high-quality 3D fMRI brain images and synthetic brain images greatly help to improve downstream classification tasks."
] |
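The record above trains conditional 3D GANs with improved Wasserstein (gradient-penalty) objectives to synthesize brain volumes. The sketch below is a generic conditional WGAN-GP critic loss for 5D volume tensors; the `critic(x, y)` interface, the way labels are injected inside the critic, and the penalty weight are assumptions rather than the exact ICW-GAN/ACD-GAN architectures.

```python
import torch

def critic_loss_wgan_gp(critic, real, fake, labels, gp_weight=10.0):
    """Conditional WGAN-GP critic loss for 5D tensors (B, C, D, H, W).

    `critic(x, y)` is an assumed interface returning one score per
    sample; how labels are used inside the critic is left open.
    """
    # Wasserstein term: real volumes should score higher than fakes.
    w_term = critic(fake, labels).mean() - critic(real, labels).mean()

    # Gradient penalty on random interpolates between real and fake.
    eps = torch.rand(real.size(0), 1, 1, 1, 1, device=real.device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(interp, labels)
    grads, = torch.autograd.grad(scores.sum(), interp, create_graph=True)
    gp = ((grads.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()

    return w_term + gp_weight * gp
```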
[
"Transferring representations from large-scale supervised tasks to downstream tasks have shown outstanding results in Machine Learning in both Computer Vision and natural language processing (NLP).",
"One particular example can be sequence-to-sequence models for Machine Translation (Neural Machine Translation - NMT).",
"It is because, once trained in a multilingual setup, NMT systems can translate between multiple languages and are also capable of performing zero-shot translation between unseen source-target pairs at test time.",
"In this paper, we first investigate if we can extend the zero-shot transfer capability of multilingual NMT systems to cross-lingual NLP tasks (tasks other than MT, e.g. sentiment classification and natural language inference).",
"We demonstrate a simple framework by reusing the encoder from a multilingual NMT system, a multilingual Encoder-Classifier, achieves remarkable zero-shot cross-lingual classification performance, almost out-of-the-box on three downstream benchmark tasks - Amazon Reviews, Stanford sentiment treebank (SST) and Stanford natural language inference (SNLI).",
"In order to understand the underlying factors contributing to this finding, we conducted a series of analyses on the effect of the shared vocabulary, the training data type for NMT models, classifier complexity, encoder representation power, and model generalization on zero-shot performance.",
"Our results provide strong evidence that the representations learned from multilingual NMT systems are widely applicable across languages and tasks, and the high, out-of-the-box classification performance is correlated with the generalization capability of such systems."
] | [
0,
0,
0,
1,
0,
0,
0
] | [
0,
0,
0.10256409645080566,
0.1428571343421936,
0.125,
0,
0.04999999701976776
] | H1gni9-ojX | false | [
"Zero-shot cross-lingual transfer by using multilingual neural machine translation "
] |
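The record above reuses a pretrained multilingual NMT encoder and attaches a small classifier head, so a classifier trained on one language can be applied zero-shot to others. The sketch below shows that wiring with mean-pooled encoder states; the `nmt_encoder(token_ids) -> (B, T, d)` interface, the pooling choice, the freezing option, and the head size are assumptions, not the exact setup in the paper.

```python
import torch
import torch.nn as nn

class EncoderClassifier(nn.Module):
    """Classifier head on top of a pretrained multilingual NMT encoder.

    Freezing the encoder keeps the shared multilingual representation
    intact, which is what enables zero-shot cross-lingual transfer.
    """
    def __init__(self, nmt_encoder, hidden_dim, num_classes, freeze=True):
        super().__init__()
        self.encoder = nmt_encoder
        if freeze:
            for p in self.encoder.parameters():
                p.requires_grad = False
        self.head = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, token_ids):
        states = self.encoder(token_ids)   # (B, T, d) hidden states
        pooled = states.mean(dim=1)        # simple mean pooling over time
        return self.head(pooled)           # class logits
```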
[
"This paper addresses the scalability challenge of architecture search by formulating the task in a differentiable manner.",
"Unlike conventional approaches of applying evolution or reinforcement learning over a discrete and non-differentiable search space, our method is based on the continuous relaxation of the architecture representation, allowing efficient search of the architecture using gradient descent.",
"Extensive experiments on CIFAR-10, ImageNet, Penn Treebank and WikiText-2 show that our algorithm excels in discovering high-performance convolutional architectures for image classification and recurrent architectures for language modeling, while being orders of magnitude faster than state-of-the-art non-differentiable techniques.",
"Discovering state-of-the-art neural network architectures requires substantial effort of human experts.",
"Recently, there has been a growing interest in developing algorithmic solutions to automate the manual process of architecture design.",
"The automatically searched architectures have achieved highly competitive performance in tasks such as image classification BID35 BID36 BID13 a; BID26 and object detection BID36 .The",
"best existing architecture search algorithms are computationally demanding despite their remarkable performance. For",
"example, obtaining a state-of-the-art architecture for CIFAR-10 and ImageNet required 2000 GPU days of reinforcement learning (RL) BID36 or 3150 GPU days of evolution BID26 . Several",
"approaches for speeding up have been proposed, such as imposing a particular structure of the search space BID13 a) , weights or performance prediction for each individual architecture (Brock et al., 2018; Baker et al., 2018) and weight sharing/inheritance across multiple architectures BID0 BID24 Cai et al., 2018; Bender et al., 2018) , but the fundamental challenge of scalability remains. An inherent",
"cause of inefficiency for the dominant approaches, e.g. based on RL, evolution, MCTS BID20 , SMBO BID12 or Bayesian optimization BID9 , is the fact that architecture search is treated as a black-box optimization problem over a discrete domain, which leads to a large number of architecture evaluations required.In this work, we approach the problem from a different angle, and propose a method for efficient architecture search called DARTS (Differentiable ARchiTecture Search). Instead of",
"searching over a discrete set of candidate architectures, we relax the search space to be continuous, so that the architecture can be optimized with respect to its validation set performance by gradient descent. The data efficiency",
"of gradient-based optimization, as opposed to inefficient black-box search, allows DARTS to achieve competitive performance with the state of the art using orders of magnitude less computation resources. It also outperforms",
"another recent efficient architecture search method, ENAS BID24 . Notably, DARTS is simpler",
"than many existing approaches as it does not involve controllers BID35 Baker et al., 2017; BID36 BID24 BID33 , hypernetworks (Brock et al., 2018) or performance predictors BID12 ), yet it is generic enough handle both convolutional and recurrent architectures.The idea of searching architectures within a continuous domain is not new BID27 Ahmed & Torresani, 2017; BID30 BID28 , but there are several major distinctions. While prior works seek to",
"fine-tune a specific aspect of an architecture, such as filter shapes or branching patterns in a convolutional network, DARTS is able to learn high-performance architecture building blocks with complex graph topologies within a rich search space. Moreover, DARTS is not restricted",
"to any specific architecture family, and is applicable to both convolutional and recurrent networks.In our experiments (Sect. 3) we show that DARTS is able to design a convolutional cell that achieves 2.76 ± 0.09% test error on CIFAR-10 for image classification using 3.3M parameters, which is competitive with the state-of-the-art result by regularized evolution BID26 obtained using three orders of magnitude more computation resources. The same convolutional cell also",
"achieves 26.7% top-1 error when transferred to ImageNet (mobile setting), which is comparable to the best RL method BID36 . On the language modeling task, DARTS",
"efficiently discovers a recurrent cell that achieves 55.7 test perplexity on Penn Treebank (PTB), outperforming both extensively tuned LSTM BID17 and all the existing automatically searched cells based on NAS BID35 and ENAS BID24 .Our contributions can be summarized as",
"follows:• We introduce a novel algorithm for differentiable network architecture search based on bilevel optimization, which is applicable to both convolutional and recurrent architectures.• Through extensive experiments on image",
"classification and language modeling tasks we show that gradient-based architecture search achieves highly competitive results on CIFAR-10 and outperforms the state of the art on PTB. This is a very interesting result, considering",
"that so far the best architecture search methods used non-differentiable search techniques, e.g. based on RL BID36 or evolution BID26 BID13 ).• We achieve remarkable efficiency improvement",
"(reducing the cost of architecture discovery to a few GPU days), which we attribute to the use of gradient-based optimization as opposed to non-differentiable search techniques.• We show that the architectures learned by DARTS",
"on CIFAR-10 and PTB are transferable to ImageNet and WikiText-2, respectively.The implementation of DARTS is available at https://github.com/quark0/darts 2 DIFFERENTIABLE ARCHITECTURE SEARCH We describe our search space in general form in Sect. 2.1, where the computation procedure for an architecture",
"(or a cell in it) is represented as a directed acyclic graph. We then introduce a simple continuous relaxation scheme",
"for our search space which leads to a differentiable learning objective for the joint optimization of the architecture and its weights (Sect. 2.2). Finally, we propose an approximation technique to make",
"the algorithm computationally feasible and efficient (Sect. 2.3).",
"We presented DARTS, a simple yet efficient architecture search algorithm for both convolutional and recurrent networks.",
"By searching in a continuous space, DARTS is able to match or outperform the state-of-the-art non-differentiable architecture search methods on image classification and language modeling tasks with remarkable efficiency improvement by several orders of magnitude.There are many interesting directions to improve DARTS further.",
"For example, the current method may suffer from discrepancies between the continuous architecture encoding and the derived discrete architecture.",
"This could be alleviated, e.g., by annealing the softmax temperature (with a suitable schedule) to enforce one-hot selection.",
"It would also be interesting to explore performance-aware architecture derivation schemes based on the one-shot model learned during the search process.A EXPERIMENTAL DETAILS A.1",
"ARCHITECTURE SEARCH A.1.1 CIFAR-10Since the architecture will be varying throughout the search process, we always use batch-specific statistics for batch normalization rather than the global moving average.",
"Learnable affine parameters in all batch normalizations are disabled during the search process to avoid rescaling the outputs of the candidate operations.To carry out architecture search, we hold out half of the CIFAR-10 training data as the validation set.",
"A small network of 8 cells is trained using DARTS for 50 epochs, with batch size 64 (for both the training and validation sets) and the initial number of channels 16.",
"The numbers were chosen to ensure the network can fit into a single GPU.",
"We use momentum SGD to optimize the weights w, with initial learning rate η w = 0.025 (annealed down to zero following a cosine schedule without restart BID14 ), momentum 0.9, and weight decay 3 × 10 −4 .",
"We use zero initialization for architecture variables (the α's in both the normal and reduction cells), which implies equal amount of attention (after taking the softmax) over all possible ops.",
"At the early stage this ensures weights in every candidate op to receive sufficient learning signal (more exploration).",
"We use Adam BID10 as the optimizer for α, with initial learning rate η α = 3 × 10 −4 , momentum β = (0.5, 0.999) and weight decay 10 −3 .",
"The search takes one day on a single GPU 3 .A.1.2",
"PTB For architecture search, both the embedding and the hidden sizes are set to 300. The linear",
"transformation parameters across all incoming operations connected to the same node are shared (their shapes are all 300 × 300), as the algorithm always has the option to focus on one of the predecessors and mask away the others. Tying the",
"weights leads to memory savings and faster computation, allowing us to train the continuous architecture using a single GPU. Learnable",
"affine parameters in batch normalizations are disabled, as we did for convolutional cells. The network",
"is then trained for 50 epochs using SGD without momentum, with learning rate η w = 20, batch size 256, BPTT length 35, and weight decay 5 × 10 −7 . We apply variational",
"dropout BID3 of 0.2 to word embeddings, 0.75 to the cell input, and 0.25 to all the hidden nodes. A dropout of 0.75 is",
"also applied to the output layer. Other training settings",
"are identical to those in BID18 ; BID31 . Similarly to the convolutional",
"architectures, we use Adam for the optimization of α (initialized as zeros), with initial learning rate η α = 3 × 10 −3 , momentum β = (0.9, 0.999) and weight decay 10 −3 . The search takes 6 hours on a",
"single GPU."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.2790697515010834,
0.24137930572032928,
0.25806450843811035,
0.052631575614213943,
0.17391303181648254,
0.11999999731779099,
0.14999999105930328,
0.19999998807907104,
0.20779220759868622,
0.18390804529190063,
0.23728813230991364,
0.4814814627170563,
0.10256409645080566,
0.15555554628372192,
0.1875,
0.35555556416511536,
0.039215680211782455,
0.1515151411294937,
0.3928571343421936,
0.31578946113586426,
0.145454540848732,
0.21052631735801697,
0.23880596458911896,
0.08888888359069824,
0.3214285671710968,
0.1666666567325592,
0.4651162624359131,
0.260869562625885,
0.1395348757505417,
0.08510638028383255,
0.11764705181121826,
0.15094339847564697,
0.13114753365516663,
0.2545454502105713,
0.09756097197532654,
0.15625,
0.25,
0.04444443807005882,
0.17241378128528595,
0.09999999403953552,
0.1860465109348297,
0.13333332538604736,
0.21739129722118378,
0.0952380895614624,
0.1666666567325592,
0.1304347813129425,
0.0555555522441864,
0.10526315122842789,
0.21212120354175568
] | S1eYHoC5FX | true | [
"We propose a differentiable architecture search algorithm for both convolutional and recurrent networks, achieving competitive performance with the state of the art using orders of magnitude less computation resources."
] |
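The DARTS entry above rests on a continuous relaxation in which each edge of a cell mixes candidate operations with softmax-weighted architecture parameters. The sketch below illustrates only that relaxation and the final discretization step; the candidate op set and sizes are assumptions, and the paper's bilevel optimization of weights and architecture parameters on separate data splits is not shown:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """One edge of a cell: a softmax-weighted mixture over candidate operations,
    so the architecture choice itself becomes differentiable."""
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Identity(),                                            # skip connection
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),  # 3x3 convolution
            nn.AvgPool2d(3, stride=1, padding=1),                     # 3x3 average pooling
        ])
        # zero init => uniform attention over candidate ops after the softmax
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

    def derive(self):
        """Discretize the edge: keep only the op with the largest architecture weight."""
        return self.ops[int(self.alpha.argmax())]

edge = MixedOp(16)
x = torch.randn(2, 16, 8, 8)
print(edge(x).shape, type(edge.derive()).__name__)
```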
[
"Despite considerable advances in neural language modeling, it remains an open question what the best decoding strategy is for text generation from a language model (e.g. to generate a story). ",
"The counter-intuitive empirical observation is that even though the use of likelihood as training objective leads to high quality models for a broad range of language understanding tasks, maximization-based decoding methods such as beam search lead to degeneration — output text that is bland, incoherent, or gets stuck in repetitive loops.\n\n",
"To address this we propose Nucleus Sampling, a simple but effective method to draw considerably higher quality text out of neural language models.",
"Our approach avoids text degeneration by truncating the unreliable tail of the probability distribution, sampling from the dynamic nucleus of tokens containing the vast majority of the probability mass.\n\n",
"To properly examine current maximization-based and stochastic decoding methods, we compare generations from each of these methods to the distribution of human text along several axes such as likelihood, diversity, and repetition.",
"Our results show that (1) maximization is an inappropriate decoding objective for open-ended text generation, (2) the probability distributions of the best current language models have an unreliable tail which needs to be truncated during generation and (3) Nucleus Sampling is the best decoding strategy for generating long-form text that is both high-quality — as measured by human evaluation — and as diverse as human-written text.",
"On February 14th 2019, OpenAI surprised the scientific community with an impressively highquality article about Ovid's Unicorn, written by GPT-2.",
"1 Notably, the top-quality generations obtained from the model rely on randomness in the decoding method, in particular through top-k sampling that samples the next word from the top k most probable choices (Fan et al., 2018; Holtzman et al., 2018; Radford et al., 2019) , instead of aiming to decode text that maximizes likelihood.",
"In fact, decoding strategies that optimize for output with high probability, such as beam search, lead to text that is incredibly degenerate, even when using state-of-the-art models such as GPT-2-117M, as shown in Figure 1 .",
"This may seem counter-intuitive, as one would expect that good models would assign higher probability to more human-like, grammatical text.",
"Indeed, language models do generally assign high scores to well-formed text, yet the highest scores for longer texts are often generic, repetitive, and awkward.",
"Perhaps equally surprising is the right side of Figure 1 , which shows that pure sampling -sampling directly from the probabilities predicted by the model -results in text that is incoherent and almost unrelated to the context.",
"Why is text produced by pure sampling so degenerate?",
"In this work we show that the \"unreliable tail\" is to blame.",
"This unreliable tail is composed of tens of thousands of candidate tokens with relatively low probability that are over-represented in the aggregate.",
"To overcome these shortcomings we introduce Nucleus Sampling ( §3.1).",
"The key intuition of Nucleus Sampling is that the vast majority of probability mass at each time step is concentrated in the nucleus, a small subset of the vocabulary that tends to range between one and a thousand candidates.",
"Instead of relying on a fixed top-k, or using a temperature parameter to control the shape of the distribution without sufficiently suppressing the unreliable tail distribution, we propose sampling from the top-p portion of the probability mass, expanding and contracting the candidate pool dynamically.",
"In order to compare current methods to Nucleus Sampling, we compare various distributional properties of generated text to the reference distribution, such as the likelihood of veering into repetition and the perplexity of generated text.",
"The latter reveals that text generated by maximization or top-k sampling is too probable, indicating a lack of diversity and divergence in vocabulary usage from the human distribution.",
"On the other hand, pure sampling produces text that is significantly less likely than the gold, corresponding to lower generation quality.",
"Vocabulary usage and Self-BLEU (Zhu et al., 2018) statistics reveal that high values of k are needed to make top-k sampling match human statistics.",
"Yet, generations based on high values of k are also found to have incredibly high variance in likelihood, hinting at qualitatively observable incoherency issues.",
"Nucleus Sampling can easily match reference perplexity through a proper value of p, without facing the resulting incoherence caused by setting k high enough to match distributional statistics.",
"Finally, we perform Human Unified with Statistical Evaluation (HUSE; Hashimoto et al., 2019) to jointly assess the overall quality and diversity of the decoding strategies, which cannot be captured using either human or automatics evaluation alone.",
"The HUSE evaluation demonstrates that Nucleus sampling is the best overall decoding strategy.",
"We include generated examples for qualitative analysis -see Figure 9 for a representative example, and further examples in the appendix.",
"This paper provided a deep analysis into the properties of the most common decoding methods for open-ended language generation.",
"We have shown that likelihood maximizing decoding causes repetition and overly generic language usage, while sampling methods without truncation risk sampling from the low-confidence tail of a model's predicted distribution.",
"Further, we proposed Nucleus Samplingas a solution that captures the region of confidence of language models effectively.",
"In future work, we wish to dynamically characterize this region of confidence and include a more semantic utility function to guide the decoding process."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0
] | [
0.14035087823867798,
0.18918918073177338,
0.19999998807907104,
0.03999999538064003,
0.10526315122842789,
0.20512820780277252,
0,
0.05714285373687744,
0.06896550953388214,
0,
0.1599999964237213,
0.06779660284519196,
0,
0,
0.04255318641662598,
0.10526315122842789,
0.16949151456356049,
0.158730149269104,
0.22641508281230927,
0.145454540848732,
0.04255318641662598,
0.11764705181121826,
0.07999999821186066,
0.18518517911434174,
0.1269841194152832,
0.04999999701976776,
0.13333332538604736,
0.2666666507720947,
0.25,
0.1860465109348297,
0.11999999731779099
] | rygGQyrFvH | true | [
"Current language generation systems either aim for high likelihood and devolve into generic repetition or miscalibrate their stochasticity—we provide evidence of both and propose a solution: Nucleus Sampling."
] |
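The entry above defines Nucleus (top-p) Sampling as sampling from the smallest set of tokens whose cumulative probability reaches p, discarding the unreliable tail. A minimal NumPy sketch of that sampling step follows; the toy vocabulary, probabilities, and p value are invented for the example, and this is not the authors' implementation:

```python
import numpy as np

def nucleus_sample(probs, p=0.9, rng=None):
    """Sample a token index from the smallest prefix of the sorted distribution
    whose cumulative probability reaches p (the 'nucleus'); the tail is discarded."""
    rng = rng or np.random.default_rng()
    order = np.argsort(probs)[::-1]                 # token ids, most probable first
    sorted_probs = probs[order]
    cum = np.cumsum(sorted_probs)
    cutoff = int(np.searchsorted(cum, p)) + 1       # always keep at least one token
    nucleus, nucleus_probs = order[:cutoff], sorted_probs[:cutoff]
    nucleus_probs = nucleus_probs / nucleus_probs.sum()   # renormalize inside the nucleus
    return int(rng.choice(nucleus, p=nucleus_probs))

# toy next-token distribution over a 5-token vocabulary
probs = np.array([0.55, 0.25, 0.10, 0.06, 0.04])
print(nucleus_sample(probs, p=0.9))                 # samples only among the top tokens
```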
[
"Up until very recently, inspired by a mass of researches on adversarial examples for computer vision, there has been a growing interest in designing adversarial attacks for Natural Language Processing (NLP) tasks, followed by very few works of adversarial defenses for NLP.",
"To our knowledge, there exists no defense method against the successful synonym substitution based attacks that aim to satisfy all the lexical, grammatical, semantic constraints and thus are hard to perceived by humans.",
"We contribute to fill this gap and propose a novel adversarial defense method called Synonym Encoding Method (SEM), which inserts an encoder before the input layer of the model and then trains the model to eliminate adversarial perturbations.",
"Extensive experiments demonstrate that SEM can efficiently defend current best synonym substitution based adversarial attacks with little decay on the accuracy for benign examples.",
"To better evaluate SEM, we also design a strong attack method called Improved Genetic Algorithm (IGA) that adopts the genetic metaheuristic for synonym substitution based attacks.",
"Compared with existing genetic based adversarial attack, IGA can achieve higher attack success rate while maintaining the transferability of the adversarial examples.",
"Deep Neural Networks (DNNs) have made great success in various machine learning tasks, such as computer vision (Krizhevsky et al., 2012; He et al., 2016) , and Natural Language Processing (NLP) (Kim, 2014; Lai et al., 2015; Devlin et al., 2018) .",
"However, recent studies have discovered that DNNs are vulnerable to adversarial examples not only for computer vision tasks (Szegedy et al., 2014) but also for NLP tasks (Papernot et al., 2016) , causing a serious threat to their safe applications.",
"For instance, spammers can evade spam filtering system with adversarial examples of spam emails while preserving the intended meaning.",
"In contrast to numerous methods proposed for adversarial attacks (Goodfellow et al., 2015; Nicholas & David, 2017; Anish et al., 2018) and defenses (Goodfellow et al., 2015; Guo et al., 2018; Song et al., 2019) in computer vision, there are only a few list of works in the area of NLP, inspired by the works for images and emerging very recently in the past two years (Zhang et al., 2019) .",
"This is mainly because existing perturbation-based methods for images cannot be directly applied to texts due to their discrete property in nature.",
"Furthermore, if we want the perturbation to be barely perceptible by humans, it should satisfy the lexical, grammatical and semantic constraints in texts, making it even harder to generate the text adversarial examples.",
"Current attacks in NLP can fall into four categories, namely modifying the characters of a word (Liang et al., 2017; Ebrahimi et al., 2017) , adding or removing words (Liang et al., 2017) , replacing words arbitrarily (Papernot et al., 2016) , and substituting words with synonyms (Alzantot et al., 2018; Ren et al., 2019) .",
"The first three categories are easy to be detected and defended by spell or syntax check (Rodriguez & Rojas-Galeano, 2018; Pruthi et al., 2019) .",
"As synonym substitution aims to satisfy all the lexical, grammatical and semantic constraints, it is hard to be detected by automatic spell or syntax check as well as human investigation.",
"To our knowledge, currently there is no defense method specifically designed against the synonym substitution based attacks.",
"In this work, we postulate that the model generalization leads to the existence of adversarial examples: a generalization that is not strong enough causes the problem that there usually exists some neighbors x of a benign example x in the manifold with a different classification.",
"Based on this hypothesis, we propose a novel defense mechanism called Synonym Encoding Method (SEM) that encodes all the synonyms to a unique code so as to force all the neighbors of x to have the same label of x.",
"Specifically, we first cluster the synonyms according to the Euclidean Distance in the embedding space to construct the encoder.",
"Then we insert the encoder before the input layer of the deep model without modifying its architecture, and train the model again to defend adversarial attacks.",
"In this way, we can defend the synonym substitution based adversarial attacks efficiently in the context of text classification.",
"Extensive experiments on three popular datasets demonstrate that the proposed SEM can effectively defend adversarial attacks, while maintaining the efficiency and achieving roughly the same accuracy on benign data as the original model does.",
"To our knowledge, SEM is the first proposed method that can effectively defend the synonym substitution based adversarial attacks.",
"Besides, to demonstrate the efficacy of SEM, we also propose a genetic based attack method, called Improved Genetic Algorithm (IGA), which is well-designed and more efficient as compared with the first proposed genetic based attack algorithm, GA (Alzantot et al., 2018) .",
"Experiments show that IGA can degrade the classification accuracy more significantly with lower word substitution rate than GA.",
"At the same time IGA keeps the transferability of adversarial examples as GA does.",
"Synonym substitution based adversarial attacks are currently the best text attack methods, as they are hard to be checked by automatic spell or syntax check as well as human investigation.",
"In this work, we propose a novel defense method called Synonym Encoding Method (SEM), which encodes the synonyms of each word to defend adversarial attacks for text classification task.",
"Extensive experiments show that SEM can defend adversarial attacks efficiently and degrade the transferability of adversarial examples, at the same time SEM maintains the classification accuracy on benign data.",
"To our knowledge, this is the first and efficient text defense method in word level for state-of-the-art synonym substitution based attacks.",
"In addition, we propose a new text attack method called Improved Genetic Attack (IGA), which in most cases can achieve much higher attack success rate as compared with existing attacks, at the same time IGA could maintain the transferability of adversarial examples."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0
] | [
0.11320754140615463,
0.3199999928474426,
0.19607841968536377,
0.23255813121795654,
0.2666666507720947,
0.20512819290161133,
0.07407406717538834,
0.03703703358769417,
0.10810810327529907,
0.14492753148078918,
0.04999999329447746,
0.2083333283662796,
0.20689654350280762,
0.13636362552642822,
0.12765957415103912,
0.3888888955116272,
0.1111111044883728,
0.11764705181121826,
0.23529411852359772,
0.19512194395065308,
0.37837836146354675,
0.12244897335767746,
0.37837836146354675,
0.17543859779834747,
0.1621621549129486,
0.1249999925494194,
0.30434781312942505,
0.3333333432674408,
0.1818181723356247,
0.550000011920929,
0.20338982343673706
] | BJl_a2VYPH | true | [
"The first text adversarial defense method in word level, and the improved generic based attack method against synonyms substitution based attacks."
] |
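The SEM entry above encodes every word to a fixed representative of its synonym cluster before the classifier sees the text, so synonym-substituted adversarial inputs collapse to the same encoding. A toy sketch under the assumption of a hand-made synonym dictionary (the paper instead builds clusters from word embeddings by Euclidean distance):

```python
# Invented synonym clusters for illustration; the paper builds them by clustering
# word embeddings by Euclidean distance.
SYNONYM_CLUSTERS = [
    {"good", "great", "fine", "nice"},
    {"movie", "film", "picture"},
    {"terrible", "awful", "horrible"},
]

# The encoder maps every word in a cluster to one fixed representative.
ENCODER = {w: sorted(cluster)[0] for cluster in SYNONYM_CLUSTERS for w in cluster}

def synonym_encode(tokens):
    """Replace each token by its cluster representative (identity for unknown words),
    so a synonym-substituted adversarial input collapses to the same encoded text."""
    return [ENCODER.get(t, t) for t in tokens]

print(synonym_encode("this film is great".split()))
print(synonym_encode("this movie is good".split()))   # yields the same encoding as above
```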
[
"A major drawback of backpropagation through time (BPTT) is the difficulty of learning long-term dependencies, coming from having to propagate credit information backwards through every single step of the forward computation.",
"This makes BPTT both computationally impractical and biologically implausible.",
"For this reason, full backpropagation through time is rarely used on long sequences, and truncated backpropagation through time is used as a heuristic. ",
"However, this usually leads to biased estimates of the gradient in which longer term dependencies are ignored. ",
"Addressing this issue, we propose an alternative algorithm, Sparse Attentive Backtracking, which might also be related to principles used by brains to learn long-term dependencies.",
"Sparse Attentive Backtracking learns an attention mechanism over the hidden states of the past and selectively backpropagates through paths with high attention weights. ",
"This allows the model to learn long term dependencies while only backtracking for a small number of time steps, not just from the recent past but also from attended relevant past states. ",
"Recurrent Neural Networks (RNNs) are state-of-the-art for many machine learning sequence processing tasks.",
"Examples where models based on RNNs shine include speech recognition BID21 BID3 , image captioning BID32 BID29 BID17 , machine translation BID1 BID26 BID18 , and speech synthesis BID20 .",
"It is common practice to train these models using backpropagation through time (BPTT), wherein the network states are unrolled in time and gradients are backpropagated through the unrolled graph.",
"Since the parameters of an RNN are shared across the different time steps, BPTT is more prone to vanishing and exploding gradients (Hochreiter, 1991; BID2 BID11 than equivalent deep feedforward networks with as many stages.",
"This makes credit assignment particularly difficult for events that have occurred many time steps in the past, and thus makes it challenging in practice to capture long-term dependencies in the data (Hochreiter, 1991; BID2 .",
"Having to wait for the end of the sequence in order to compute gradients is neither practical for machines nor animals when the dependencies extend over very long timescales.",
"Training is slowed down considerably by long waiting times, as the rate of convergence crucially depends on how often parameters can be updated.In practice, proper long-term credit assignment in RNNs is very inconvenient, and it is common practice to employ truncated versions of BPTT for long sequences BID23 BID24 .",
"In truncated BPTT (TBPTT), gradients are backpropagated only for a fixed and limited number of time steps and parameters are updated after each such subsequence.",
"Truncation is often motivated by computational concerns: memory, computation time and the advantage of faster learning obtained when making more frequent updates of the parameters rather than having to wait for the end of the sequence.",
"However, it makes capturing correlations across distant states even harder.Regular RNNs are parametric: their hidden state vector has a fixed size.",
"We believe that this is a critical element in the classical analysis of the difficulty of learning long-term dependencies BID2 .",
"Indeed, the fixed state dimension becomes a bottleneck through which information has to flow, both forward and backward.We thus propose a semi-parametric RNN, where the next state is potentially conditioned on all the previous states of the RNN, making it possible-thanks to attention-to jump through any distance through time.",
"We distinguish three types of states in our proposed semi-parametric RNN:• The fixed-size hidden state h (t) , the conventional state of an RNN model at time t;• The monotonically-growing macrostate M = {m (1) , . . . , m",
"(s) }, the array of all past microstates, which plays the role of a random-access memory;• And the fixed-size microstate m (i) , which is the ith hidden state (one of the h (t) ) that was chosen for inclusion within the macrostate M. There are as many hidden states as there are timesteps in the sequence being analyzed by the RNN.",
"A subset of them will become microstates, and this subset is called the macrostate.The computation of the next hidden state h (t+1) is based on the whole macrostate M, in addition to the external input x (t) .",
"The macrostate being variable-length, we must devise a special mechanism to read from this ever-growing array.",
"As a key component of our model, we propose to use an attention mechanism over the microstate elements of the macrostate.The attention mechanism in the above setting may be regarded as providing adaptive, dynamic skip connections: any past microstate can be linked, via a dynamic decision, to the current hidden state.",
"Skip connections allow information to propagate over very long sequences.",
"Such architectures should naturally make it easier to learn long-term dependencies.",
"We name our algorithm sparse attentive backtracking (SAB).",
"SAB is especially well-suited to sequences in which two parts of a task are closely related yet occur very far apart in time.Inference in SAB involves examining the macrostate and selecting some of its microstates.",
"Ideally, SAB will not select all microstates, instead attending only to the most salient or relevant ones (e.g., emotionally loaded, in animals).",
"The attention mechanism will select a number of relevant microstates to be incorporated into the hidden state.",
"During training, local backpropagation of gradients happens in a short window of time around the selected microstates only.",
"This allows for the updates to be asynchronous with respect to the time steps we attend to, and credit assignment takes place more globally in the proposed algorithm.With the proposed framework for SAB, we present the following contributions:• A principled way of doing sparse credit assignment, based on a semi-parametric RNN.•",
"A novel way of mitigating exploding and vanishing gradients, based on reducing the number of steps that need to be backtracked through temporal skip connections.•",
"Competitive results compared to full backpropagation through time (BPTT), and much better results as compared to Truncated Backpropagation through time, with significantly shorter truncation windows in our model. Mechanisms",
"such as SAB may also be biologically plausible. Imagine having",
"taken a wrong turn on a roadtrip and finding out about it several miles later. Our mental focus",
"would most likely shift directly to the location in time and space where we had made the wrong decision, without replaying in reverse the detailed sequence of experienced traffic and landscape impressions. Neurophysiological",
"findings support the existence of such attention mechanisms and their involvement in credit assignment and learning in biological systems. In particular, hippocampal",
"recordings in rats indicate that brief sequences of prior experience are replayed both in the awake resting state and during sleep, both of which conditions are linked to memory consolidation and learning BID7 BID6 BID8 . Moreover, it has been observed",
"that these replay events are modulated by the reward an animal does or does not receive at the end of a task in the sense that they are more pronounced in the presence of a reward signal and less pronounced or absent in the absence of a reward signal BID0 . Thus, the mental look back into",
"the past seems to occur exactly when credit assignment is to be performed.2 RELATED WORK 2.1 TRUNCATED BACKPROPAGATION",
"THROUGH TIME When training on very long sequences, full backpropagation through time becomes computationally expensive and considerably slows down training by forcing the learner to wait for the end of each (possibly very long sequence) before making a parameter update. A common heuristic is to backpropagate the",
"loss of a particular time step through only a limited number of time steps, and hence truncate the backpropagation computation graph BID30 . While truncated backpropagation through time",
"is heavily used in practice, its inability to perform credit assignment over longer sequences is a limiting factor for this algorithm, resulting in failure cases even in simple tasks, such as the Copying Memory and Adding task in BID12 .",
"Improving the modeling of long-term dependencies is a central challenge in sequence modeling, and the exact gradient computation by BPTT is not biologically plausible as well as inconvenient computationally for realistic applications.",
"Because of this, the most widely used algorithm for training recurrent neural networks on long sequences is truncated backpropagation through time, which is known to produced biased estimates of the gradient , focusing on short-term dependencies.",
"We have proposed Sparse Attentive Backtracking, a new biologically motivated algorithm which aims to combine the strengths of full backpropagation through time and truncated backpropagation through time.",
"It does so by only backpropagating gradients through paths selected by its attention mechanism.",
"This allows the RNN to learn long-term dependencies, as with full backpropagation through time, while still allowing it to only backtrack for a few steps, as with truncated backpropagation through time, thus making it possible to update weights as frequently as needed rather than having to wait for the end of very long sequences."
] | [
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0,
0,
0,
0.06896550953388214,
0,
0,
0,
0.1666666567325592,
0,
0.05714285373687744,
0,
0.04878048226237297,
0.0555555522441864,
0.035087715834379196,
0,
0,
0,
0.06896550953388214,
0,
0.04444444179534912,
0.03333333134651184,
0.0476190447807312,
0,
0.038461536169052124,
0,
0,
0,
0.04651162400841713,
0.05714285373687744,
0,
0.0714285671710968,
0.0363636314868927,
0,
0.1111111044883728,
0,
0,
0.09999999403953552,
0.06451612710952759,
0.043478257954120636,
0.0416666641831398,
0,
0,
0,
0.043478257954120636,
0.04999999701976776,
0,
0,
0,
0
] | SJCq_fZ0Z | true | [
"Towards Efficient Credit Assignment in Recurrent Networks without Backpropagation Through Time"
] |
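The entry above describes attending over stored past hidden states and backpropagating only along the sparsely attended paths. The toy cell below sketches one way to approximate that in PyTorch, detaching the non-selected memories so gradients skip to a few salient past steps; the GRU cell, the additive update, and the top-k rule are assumptions rather than the authors' architecture:

```python
import torch
import torch.nn as nn

class SABCell(nn.Module):
    """Toy sparse-attentive recurrent cell: the new hidden state attends over a memory
    of past states, and only the top-k attended states keep their gradient path (the
    rest are detached), so backprop can skip directly to a few salient past steps."""
    def __init__(self, input_size, hidden_size, k=2):
        super().__init__()
        self.rnn = nn.GRUCell(input_size, hidden_size)
        self.query = nn.Linear(hidden_size, hidden_size)
        self.k = k

    def forward(self, x, h, memory):
        h = self.rnn(x, h)
        if memory:
            mem = torch.stack(memory, dim=1)                       # (batch, T, hidden)
            scores = torch.einsum('bh,bth->bt', self.query(h), mem)
            topk = scores.topk(min(self.k, mem.size(1)), dim=1).indices
            mask = torch.zeros_like(scores).scatter_(1, topk, 1.0)
            # gradients flow only through the sparsely selected past states
            sparse_mem = mask.unsqueeze(-1) * mem + (1 - mask).unsqueeze(-1) * mem.detach()
            attn = torch.softmax(scores, dim=1).unsqueeze(-1)
            h = h + (attn * sparse_mem).sum(dim=1)
        memory.append(h)
        return h, memory

cell, h, mem = SABCell(8, 16), torch.zeros(4, 16), []
for x in torch.randn(5, 4, 8):           # 5 time steps, batch of 4
    h, mem = cell(x, h, mem)
print(h.shape)                           # torch.Size([4, 16])
```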
[
"We propose a novel adversarial learning framework in this work.",
"Existing adversarial learning methods involve two separate networks, i.e., the structured prediction models and the discriminative models, in the training.",
"The information captured by discriminative models complements that in the structured prediction models, but few existing researches have studied on utilizing such information to improve structured prediction models at the inference stage.",
"In this work, we propose to refine the predictions of structured prediction models by effectively integrating discriminative models into the prediction.",
"Discriminative models are treated as energy-based models.",
"Similar to the adversarial learning, discriminative models are trained to estimate scores which measure the quality of predicted outputs, while structured prediction models are trained to predict contrastive outputs with maximal energy scores.",
"In this way, the gradient vanishing problem is ameliorated, and thus we are able to perform inference by following the ascent gradient directions of discriminative models to refine structured prediction models.",
"The proposed method is able to handle a range of tasks, \\emph{e.g.}, multi-label classification and image segmentation. ",
"Empirical results on these two tasks validate the effectiveness of our learning method.",
"This work focuses on applying adversarial learning BID9 to solve structured prediction tasks, e.g., multi-label classification and image segmentation.",
"Adversarial learning can be formalized as a minimax two-player game between structured prediction models and discriminative models.",
"Discriminative models are learned to distinguish between the outputs predicted by the structured prediction models and the training data, while structured prediction models are learned to predict outputs to fool discriminative models.",
"Though structured prediction models are trained by the gradients of discriminative models, existing methods rarely use discriminative models to improve structured prediction models at the inference stage.A straightforward way of utilizing discriminative models for inference is to follow the ascent gradient directions of discriminative models to refine the predicted outputs.",
"However, due to the wellknown gradient vanishing problems, the gradients of discriminative models are almost zero for all the predicted outputs.",
"It is difficult to resolve the gradient vanishing problems, because they are caused by the training instability of the existing adversarial learning framework.",
"Consequently, most existing methods do not use the information from discriminative models to refine structured prediction models.Most existing adversarial learning methods take discriminative models as classifiers.",
"If discriminative models well separate real and predicted samples, they tend to assign the same scores to all predicted samples.",
"Differently, energy-based models BID21 BID12 usually predict different energy scores for different samples.",
"Therefore, we propose to train discriminative models as energy-based models to ameliorate the gradient vanishing problems.",
"In our framework, discriminative models are learned to assign scores to evaluate the quality of predicted outputs.",
"Structured prediction models are learned to predict outputs that are judged to have maximum scores by discriminative models.",
"In this way, discriminative models are trained to approximate continues value functions which evaluate the quality of predicted output.",
"The gradients of discriminative models are not zero for predicted outputs.",
"Thus, we can refine structured prediction models by following the ascent gradient directions of discriminative models at the inference stage.",
"In this paper, we refer our method as learning discriminative models to refine structured prediction models (LDRSP).The",
"proposed method learns discriminative models utilizing the data generated by the structured prediction models. BID12",
"found that the key to learning deep value networks is generating proper training data. We propose",
"to augment the training set of discriminative models by following the data generation methods proposed in BID12 . At the training",
"stage, we simultaneously run the inference algorithm to generate extra training samples utilizing models in previous iterations. These samples are",
"useful since they are generated along the gradient-based inference trajectory utilized at the inference stage. We also augment the",
"training set with adversarial samples BID10 . These samples are used",
"as negative samples to train the discriminative models.To validate our method, experiments are conducted on multi-label classification, binary image segmentation, and 3-class face segmentation tasks, and experimental results indicate that our method can learn discriminative models to effectively refine structured prediction models.This work has two contributions: (1) We propose a novel adversarial learning framework for structured prediction, in which the information captured by discriminative models can be used to improve structured prediction models at the inference stage. (2) We propose to learn",
"discriminative models to approximate continues value functions that evaluate the quality of the predicted outputs, and thus ameliorate the gradient vanishing problems.",
"This paper proposes a novel learning framework, in which discriminative models are learned to refine structured prediction models.",
"Discriminative models are trained as energy-based models to estimate scores that measure the quality of generated samples.",
"Structured prediction models are trained to predict contrastive samples with maximum energy scores.",
"Once models are learned, we perform inference by following the ascent gradient directions of discriminative models to refine structured prediction models.",
"We apply the proposed method to multi-label classification and image segmentation tasks.",
"The experimental results indicate that discriminative models learned by the proposed methods can effectively refine generative models.",
"As the future work, we will explore different ways to generate extra training samples and apply our method to more challenging tasks."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.4571428596973419,
0.35555556416511536,
0.38461539149284363,
0.3720930218696594,
0.06451612710952759,
0.31372547149658203,
0.307692289352417,
0.13333332538604736,
0.10526315122842789,
0.21739129722118378,
0.39024388790130615,
0.2666666507720947,
0.37288135290145874,
0.22727271914482117,
0.21739129722118378,
0.3829787075519562,
0.1860465109348297,
0.10810810327529907,
0.25641024112701416,
0.19512194395065308,
0.19999998807907104,
0.22727271914482117,
0.1666666567325592,
0.4651162624359131,
0.3333333432674408,
0.2631579041481018,
0.24390242993831635,
0.2380952388048172,
0.22727271914482117,
0.24390242993831635,
0.11764705181121826,
0.5393258333206177,
0.17777776718139648,
0.5238094925880432,
0.1463414579629898,
0.15789473056793213,
0.3636363446712494,
0.1621621549129486,
0.24390242993831635,
0.08695651590824127
] | By40DoAqtX | true | [
"We propose a novel adversarial learning framework for structured prediction, in which discriminative models can be used to refine structured prediction models at the inference stage. "
] |
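The entry above treats the discriminator as an energy model and refines the structured prediction at inference time by gradient ascent on its score. A hypothetical sketch of that refinement loop for multi-label outputs (the discriminator network, step size, step count, and clamping are all assumptions, not the authors' implementation):

```python
import torch
import torch.nn as nn

def refine_prediction(y_init, x, discriminator, steps=20, lr=0.1):
    """Inference-time refinement: starting from the structured model's prediction
    y_init, follow ascent directions of the discriminator's score D(x, y) to
    improve y (here y is a vector of per-label probabilities)."""
    y = y_init.clone().detach().requires_grad_(True)
    for _ in range(steps):
        score = discriminator(torch.cat([x, y], dim=1)).sum()
        grad, = torch.autograd.grad(score, y)
        y = (y + lr * grad).clamp(0.0, 1.0).detach().requires_grad_(True)
    return y.detach()

# toy energy-style discriminator scoring (input, labels) pairs; sizes are illustrative
disc = nn.Sequential(nn.Linear(10 + 3, 16), nn.ReLU(), nn.Linear(16, 1))
x = torch.randn(4, 10)
y0 = torch.sigmoid(torch.randn(4, 3))         # initial multi-label prediction
print(refine_prediction(y0, x, disc).shape)   # torch.Size([4, 3])
```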
[
"We propose RaPP, a new methodology for novelty detection by utilizing hidden space activation values obtained from a deep autoencoder.\n",
"Precisely, RaPP compares input and its autoencoder reconstruction not only in the input space but also in the hidden spaces.\n",
"We show that if we feed a reconstructed input to the same autoencoder again, its activated values in a hidden space are equivalent to the corresponding reconstruction in that hidden space given the original input.\n",
"In order to aggregate the hidden space activation values, we propose two metrics, which enhance the novelty detection performance.\n",
"Through extensive experiments using diverse datasets, we validate that RaPP improves novelty detection performances of autoencoder-based approaches.\n",
"Besides, we show that RaPP outperforms recent novelty detection methods evaluated on popular benchmarks.\n",
"How can we characterize novelty when only normality information is given?",
"Novelty detection is the mechanism to decide whether a data sample is an outlier with respect to the training data.",
"This mechanism is especially useful in situations where a proportion of detection targets is inherently small.",
"Examples are fraudulent transaction detection (Pawar et al., 2014; Porwal & Mukund, 2018) , intrusion detection (Lee, 2017; Aoudi et al., 2018) , video surveillance (Ravanbakhsh et al., 2017; Xu et al., 2015b) , medical diagnosis (Schlegl et al., 2017; Baur et al., 2018) and equipment failure detection (Kuzin & Borovicka, 2016; Zhao et al., 2017; Beghi et al., 2014) .",
"Recently, deep autoencoders and their variants have shown outstanding performances in finding compact representations from complex data, and the reconstruction error has been chosen as a popular metric for detecting novelty (An & Cho, 2015; Vasilev et al., 2018) .",
"However, this approach has a limitation of measuring reconstruction quality only in an input space, which does not fully utilize hierarchical representations in hidden spaces identified by the deep autoencoder.",
"In this paper, we propose RAPP, a new method of detecting novelty samples exploiting hidden activation values in addition to the input values and their autoencoder reconstruction values.",
"While ordinary reconstruction-based methods carry out novelty detection by comparing differences between input data before the input layer and reconstructed data at the output layer, RAPP extends these comparisons to hidden spaces.",
"We first collect a set of hidden activation values by feeding the original input to the autoencoder.",
"Subsequently, we feed the autoencoder reconstructed input to the autoencoder to calculate another set of activation values in the hidden layers.",
"This procedure does not need additional training of the autoencoder.",
"In turn, we quantify the novelty of the input by aggregating these two sets of hidden activation values.",
"To this end, we devise two metrics.",
"The first metric measures the total amount of reconstruction errors in input and hidden spaces.",
"The second metric normalizes the reconstruction errors before summing up.",
"Note that RAPP falls back to the ordinary reconstruction-based method if we only aggregate input values before the input layer and the reconstructed values at the output layer.",
"Also, we explain the motivations that facilitated the development of RAPP.",
"We show that activation values in a hidden space obtained by feeding a reconstructed input to the autoencoder are equivalent to the corresponding reconstruction in that hidden space for the original input.",
"We refer the latter quantity as a hidden reconstruction of the input.",
"Note that this is a natural extension of the reconstruction to the hidden space.",
"Unfortunately, we cannot directly compute the hidden reconstruction as in the computation of the ordinary reconstruction because the autoencoder does not impose any correspondence between encoding-decoding pairs of hidden layers during the training.",
"Nevertheless, we show that it can be computed by feeding a reconstructed input to the autoencoder again.",
"Consequently, RAPP incorporates hidden reconstruction errors as well as the ordinary reconstruction error in detecting novelty.",
"With extensive experiments, we demonstrate using diverse datasets that our method effectively improves autoencoder-based novelty detection methods.",
"In addition, we show by evaluating on popular benchmark datasets that RAPP outperforms competing methods recently developed.",
"Our contributions are summarized as follows.",
"• We propose a new novelty detection method by utilizing hidden activation values of an input and its autoencoder reconstruction, and provide aggregation functions for them to quantify novelty of the input.",
"• We provide motivation that RAPP extends the reconstruction concept in the input space into the hidden spaces.",
"Precisely, we show that hidden activation values of a reconstructed input are equivalent to the corresponding hidden reconstruction of the original input.",
"• We demonstrate that RAPP improves autoencoder-based novelty detection methods in diverse datasets.",
"Moreover, we validate that RAPP outperforms recent novelty detection methods on popular benchmark datasets.",
"In this paper, we propose a novelty detection method which utilizes hidden reconstructions along a projection pathway of deep autoencoders.",
"To this end, we extend the concept of reconstruction in the input space to hidden spaces found by an autoencoder and present a tractable way to compute the hidden reconstructions, which requires neither modifying nor retraining the autoencoder.",
"Our experimental results show that the proposed method outperforms other competing methods in terms of AUROC for diverse datasets including popular benchmarks.",
"A SVD COMPUTATION TIME",
"We compare running times of training an autoencoder and computing SVD for NAP.",
"We choose two packages for the SVD computation: Pytorch SVD and fbpca provided in https://fbpca.",
"readthedocs.io/en/latest/.",
"Since the time complexity of SVD is linear in the number of data samples 1 , we mainly focus on the performance of SVD with varying the number of columns of the input matrix that SVD is applied.",
"To obtain variable sizes of the columns, we vary the depth and bottleneck size of autoencoders.",
"The result is shown below.",
"Notably, Pytorch SVD utilizing GPU is at least 47x faster than training neural networks.",
"Even, fbpca running only on CPU achieves at least 2.4x speedup.",
"The detailed setups to obtain the matrices for the experiment are given in the 1 20 100 20 40 20 90 2 18 100 18 40 18 90 3 16 100 16 40 16 90 4 14 100 14 40 14 90 5 12 100 12 40 12 90 6 10 100 10 40 10 90 7 8 100 8 40 8 90 8 6 100 6 40 6 90 9 4 100 4 40 4 90 10 2 100 2 40 2 90 11 2 80 2 30 2 70 12 2 60 2 20 2 50 13 2 40 2 10 2 30 14 2 20 2 10 B STANDARD DEVIATIONS OF EXPERIMENTAL RESULTS",
"We provide the standard deviations of the result in We investigate the performance of NAP while increasing the number of hidden layers involved in the NAP computation.",
"Specifically, we consider two ways for the increment:",
"1) adding hidden layers one by one from the input layer (forward addition), and",
"2) adding hidden layers one by one from the bottleneck layer (backward addition).",
"Experimental results on two datasets are shown below.",
"For most cases, more hidden layers tend to result in higher performance.",
"The values are obtained from one trial, not averaged over 5 trials as done in Section 5."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.8648648858070374,
0.17142856121063232,
0.22727271914482117,
0.277777761220932,
0.11428570747375488,
0.1249999925494194,
0.0714285671710968,
0.12121211737394333,
0.1249999925494194,
0.037735845893621445,
0.178571417927742,
0.21739129722118378,
0.3255814015865326,
0.17391303181648254,
0.3636363446712494,
0.23529411852359772,
0.07407406717538834,
0.3030303120613098,
0,
0.0624999962747097,
0,
0.05128204822540283,
0,
0.44999998807907104,
0.1428571343421936,
0.19999998807907104,
0.09302324801683426,
0.1764705777168274,
0.12903225421905518,
0.11764705181121826,
0.05882352590560913,
0,
0.4888888895511627,
0.12121211737394333,
0.22857142984867096,
0.13333332538604736,
0.12903225421905518,
0.277777761220932,
0.20408162474632263,
0.05128204822540283,
0.0952380895614624,
0.13333332538604736,
0.06451612710952759,
0,
0,
0,
0.06451612710952759,
0,
0.03333332762122154,
0.05714285373687744,
0.07999999821186066,
0.19999998807907104,
0.20689654350280762,
0,
0.06896550953388214,
0.1818181723356247
] | HkgeGeBYDB | true | [
"A new methodology for novelty detection by utilizing hidden space activation values obtained from a deep autoencoder."
] |
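The RaPP entry above scores novelty by feeding the reconstruction back through the same encoder and comparing activations layer by layer. The sketch below implements only the simpler aggregation (summed squared differences over the input and hidden spaces); the layer sizes are arbitrary, and the normalized variant described in the entry would additionally whiten these differences before aggregating:

```python
import torch
import torch.nn as nn

# A small fully-connected autoencoder; the layer sizes are arbitrary for illustration.
enc = nn.ModuleList([nn.Sequential(nn.Linear(20, 10), nn.ReLU()),
                     nn.Sequential(nn.Linear(10, 5), nn.ReLU())])
dec = nn.Sequential(nn.Linear(5, 10), nn.ReLU(), nn.Linear(10, 20))

def hidden_activations(x):
    """Collect the input itself plus the activation after each encoder layer."""
    acts, h = [x], x
    for layer in enc:
        h = layer(h)
        acts.append(h)
    return acts

def rapp_scores(x):
    h = hidden_activations(x)            # activations along the projection pathway of x
    x_hat = dec(h[-1])                   # ordinary autoencoder reconstruction
    h_hat = hidden_activations(x_hat)    # feed the reconstruction through the encoder again
    # simple aggregation: total squared difference in the input and all hidden spaces
    return sum(((a - b) ** 2).sum(dim=1) for a, b in zip(h, h_hat))

x = torch.randn(4, 20)
print(rapp_scores(x))   # one novelty score per sample; higher means more novel
```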
[
"Learning preferences of users over plan traces can be a challenging task given a large number of features and limited queries that we can ask a single user.",
"Additionally, the preference function itself can be quite convoluted and non-linear.",
"Our approach uses feature-directed active learning to gather the necessary information about plan trace preferences.",
"This data is used to train a simple feedforward neural network to learn preferences over the sequential data.",
"We evaluate the impact of active learning on the number of traces that are needed to train a model that is accurate and interpretable.",
"This evaluation is done by comparing the aforementioned feedforward network to a more complex neural network model that uses LSTMs and is trained with a larger dataset without active learning.",
"When we have a human-in-the-loop during planning, learning that person's preferences over plan traces becomes an important problem.",
"These preferences can be used to choose a plan from amongst a set of plans that are comparable by the planner's cost metrics.",
"Such a plan would naturally be more desired by the human.",
"The user may not like to constantly dictate their preferences, and may not always be in the loop during execution.",
"Thus, it is important for the user's preference function to be learned well, and for the user to be able to verify them.",
"For verification, there ought to be a way to interpret how the model's decisions were made, and verify how faithful the learned model is to the user's preferences.A user's preferences function may be quite complex with dependencies over different subsets of features.",
"The utility of some features maybe non-linear as well.",
"Such a preference function may require a fair amount of information to approximate.",
"We cannot expect a single user to give feedback over a large set of traces to get the relevant information.",
"So Active learning, with a sufficiently expressive user interface for feedback, is essential to minimize queries and redundant information.In this work, our objective was to model the user's preferences over plan traces.",
"There do exist techniques that efficiently represent and reason about preference relationships.",
"CP-nets BID1 and Generalized additive independence BID2 ) models are typically used to represent preferences over sets of variables without consideration to the order in which they appear.",
"While these models can be adapted to handle sequential data, they are not intended for it.",
"LTL rules, however, can capture trajectory preferences very well and are used in PDDL 3.0 BID3 , and LPP BID0 .",
"However, it can be very hard for a user to express their preferences in this form.",
"We discuss existing approaches in more detail and the differences with respect to our work under the related work section.In our approach to learning preferences, we want to efficiently identify the relevant features and the degree to which they affect the preference score of a plan.",
"We thus employ a feature-directed active learning approach that specifically picks plan traces that are most informative about the feature's effects on preference.",
"After active learning, we encode a plan trace in terms of the relevant features it contains.",
"We gather a set of training data from active learning, along with the user's preference score to help train a simple Neural Network (NN) that we call the FeatureNN model.",
"We use a Neural Network as they can approximate complex functions to a good degree.",
"Our approach is in one way, related to Generalized Additive Independence in that we try to learn a utility function over pertinent features, but we do not explicitly define or restrict the form of any utility functions.",
"Rather a simple one hidden-layer feed-forward neural network learns the functions, dependencies, and relative weights over the relevant features.",
"The FeatureNN then predicts a preference score for each plan reflecting the user's preferences.",
"We also compare the performance of the FeatureNN to another SequenceNN model that processes sequential data using an LSTM BID6 module.",
"The SequenceNN is not trained with data from active learning, but with a larger dataset of traces with ratings.",
"This is to evaluate how efficient our active learning approach is with respect to the number of traces.",
"Specifically, we compare the number of traces required by SequenceNN and FeatureNN for the same accuracy and interpretability.Neural networks, unlike linear functions, are not as easy to interpret.",
"Even simple NN with a single hidden layer can be a challenge.",
"We help the user interpret the decisions of the neural network by showing how the preference score is affected by removing different features of a plan trace.",
"This is similar to using Saliency Maps BID7 in images to explain what parts of the image contributed to the classification.",
"In this way, we can explain to the user what plan features contributed to the preference value and by how much.",
"The difference in preference score should correspond to the user's expectations as per their preference model.",
"The more similar the effect of changes are to the user's preferences, the more interpretable the NN model is to the user as it approximates well their own preference function.",
"Such a method of explaining a decision(score) is also related to explaining using counterfactuals BID5 .",
"Here the counterfactual is the plan trace without a specific feature.",
"Additionally, when the specific features used to compute preferences comes from the user's feedback (during active learning), this interpretability is obviously improved.We present our work by first defining the problem before delving into the methodology of our approach.",
"In the Methodology section, we discuss the domain used, the user preference model, and the feature-directed active learning process.",
"We also discuss the two neural network models used to learn the preference model, viz. the FeatureNN and the SequenceNN models.",
"Then we present our experimental results in which we compare the two models with respect to their accuracy in predicting the preference score, as well as interpretability.",
"Lastly, we discuss the results and possible extensions to the work.",
"Even with as little as 13 features and a relatively uncomplicated preference function, a sufficiently powerful SequenceNN model did not find the underlying preference function.",
"Instead, it found correlations that predicted the preference score to a very high level of accuracy.",
"This, unfortunately, makes the model suffer in interpretability.As the number of features increases, the hypothesis space of a NN will increase significantly.",
"This makes it much more likely for any NN to find spurious correlations, and suffer in interpretability.",
"So active learning and using a simpler NN becomes very important for learning preferences in plan traces.As for prior feature knowledge, we assumed knowledge about what features were categorical (binary in our experiments) and what features were cardinal.",
"Rather than assume this knowledge, we can get this from the user as well, and reduce the assumptions about the domain features.",
"Alternatively, we could have just encoded all features as cardinal features, and let the neural network determine what features were categorical.",
"While this is certainly possible, we think it better to get this knowledge from the user and encode the plan trace based on this knowledge.",
"This makes the job of the neural network easier, and less likely to learn spurious correlations.In our current encoding of features in FeatureNN model and our experiments, we have not included a preference dependency that considers the number of steps between features.",
"For example, I would like to have a donut within 3 plan steps after having a coffee.",
"This omission was not intentional.",
"One can easily encode such a sequential feature as a variable as well.",
"The number of steps between the two (state) features becomes a cardinal variable to represents this sequential feature.",
"In our approach, we use feature-directed Active Learning complemented with an intuitive and expressive user interface to learn the user's preference function efficiently.",
"The traces obtained during active learning are rated and annotated by the user.",
"These traces are encoded as a vector over the features that the user indicated as relevant to their preferences.",
"The feature vectors are used to train a simple feedforward Neural Network to learn the preference function.",
"We show that the SimpleNN neural network is more accurate and interpretable with fewer, more informative plan traces as compared to the LSTM based SequenceNN model.",
"The latter was trained with a larger dataset of rated plan traces without active learning.Our current experiments use a user preference function over only a few variables.",
"It is important to see how efficiently our framework learns a more complex preference function.",
"Moreover, the current preference function is completely deterministic as it provides consistent annotation and rating to the plan trace.",
"A human, however, might not behave in a consistent manner.",
"We will test with a noisy or probabilistic preference model in future work.The user interface itself can be extended to include more complex annotations.",
"For example, the user can also provide annotations for some features to be added/dropped from the plan.",
"This is especially useful for cardinal feature as the modified feature count represents what is ideal to the user.",
"For example, if the user's preference doesn't increase after visiting more than 2 lakes.",
"Then this can be communicated by removing extra lake features from a plan trace.We have mentioned categorical and cardinal features, but our framework is also intended to support real-valued features.",
"We would need to adapt our active learning process to elicit feedback as to what the minimum, optimum and maximum values of such features are.",
"These would be the minimum essential points to sample for approximating the underlying utility function.Lastly, we would like to simplify the function by which we choose plan traces in successive rounds of active learning.",
"We think that the similarity with traces from previous rounds is unnecessary, and might not appreciably reduce the cognitive load on the user.",
"We think that just diversity and selecting traces that are much more preferred(closer to 1.0) or much less preferred(closer to 0.0) would be sufficient."
] | [
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.3125,
0,
0.3478260934352875,
0.1666666567325592,
0.20689654350280762,
0.11428570747375488,
0.38461539149284363,
0.13333332538604736,
0.10526315122842789,
0,
0,
0.09302325546741486,
0,
0,
0.1538461446762085,
0.20000000298023224,
0,
0.11428570747375488,
0,
0.0714285671710968,
0.0833333283662796,
0.08888888359069824,
0.2666666507720947,
0.1666666567325592,
0.0555555522441864,
0,
0.04878048598766327,
0.07692307233810425,
0.1818181723356247,
0.0714285671710968,
0.1599999964237213,
0.25,
0.05714285373687744,
0,
0.06666666269302368,
0.07692307233810425,
0.07407406717538834,
0,
0,
0.0952380895614624,
0.1111111044883728,
0.09302325546741486,
0.1666666567325592,
0,
0,
0,
0,
0,
0,
0,
0.29999998211860657,
0,
0,
0.06896550953388214,
0,
0.0833333283662796,
0,
0,
0,
0.06451612710952759,
0.2857142686843872,
0.23999999463558197,
0,
0.1249999925494194,
0.29411762952804565,
0,
0.07692307233810425,
0,
0,
0.0833333283662796,
0,
0,
0.052631575614213943,
0.12903225421905518,
0.21621620655059814,
0.06896550953388214,
0.06896550953388214
] | BkGeETnQcE | true | [
"Learning preferences over plan traces using active learning."
] |
[
"Reinforcement learning is a promising framework for solving control problems, but its use in practical situations is hampered by the fact that reward functions are often difficult to engineer.",
"Specifying goals and tasks for autonomous machines, such as robots, is a significant challenge: conventionally, reward functions and goal states have been used to communicate objectives.",
"But people can communicate objectives to each other simply by describing or demonstrating them.",
"How can we build learning algorithms that will allow us to tell machines what we want them to do?",
"In this work, we investigate the problem of grounding language commands as reward functions using inverse reinforcement learning, and argue that language-conditioned rewards are more transferable than language-conditioned policies to new environments.",
"We propose language-conditioned reward learning (LC-RL), which grounds language commands as a reward function represented by a deep neural network.",
"We demonstrate that our model learns rewards that transfer to novel tasks and environments on realistic, high-dimensional visual environments with natural language commands, whereas directly learning a language-conditioned policy leads to poor performance.",
" Figure 1 : A task where an agent (green triangle) must execute the command \"go to the fruit bowl.\"",
"This is a simple example where the reward function is easier to specify than the policy.While reinforcement learning provides a powerful and flexible framework for describing and solving control tasks, it requires the practitioner to specify objectives in terms of reward functions.",
"Engineering reward functions is often done by experienced practitioners and researchers, and even then can pose a significant challenge, such as when working with complex image-based observations.",
"While researchers have investigated alternative means of specifying objectives, such as learning from demonstration BID1 , or through binary preferences BID5 , language is often a more natural and desirable way for humans to communicate goals.A common approach to building natural language interfaces for reinforcement learning agents is to build language-conditioned policies that directly map observations and language commands to a sequence of actions that perform the desired task.",
"However, this requires the policy to solve two challenging problems together: understanding how to plan and solve tasks in the physical world, and understanding the language command itself.",
"The trained policy must simultaneously interpret a command and plan through possibly complicated environment dynamics.",
"The performance of the system then hinges entirely on its ability to generalize to new environments -if either the language interpretation or the physical control fail to generalize, the entire system will fail.",
"We can recognize instead that the role of language in such a system is to communicate the goal, and rather than mapping language directly to policies, we propose to learn how to convert language-defined goals into reward functions.",
"In this manner, the agent can learn how to plan and perform the task on its own via reinforcement learning, directly interacting with the environment, without relying on zero-shot transfer of policies.",
"A simple example is shown in Figure 1 , where an agent is tasked with navigating through a house.",
"If an agent is commanded \"go to the fruit bowl\", a valid reward function could simply be a fruit bowl detector from first-person views of the agent.",
"However, if we were to learn a mapping from language to actions, given the same goal description, the model would need to generate a different plan for each house.In this work, we investigate the feasibility of grounding free-form natural language commands as reward functions using inverse reinforcement learning (IRL).",
"Learning language-conditioned rewards poses unique computational problems.",
"IRL methods generally require solving a reinforcement learning problem as an inner-loop BID26 , or rely on potentially unstable adversarial optimization procedures BID8 BID10 .",
"This is compounded by the fact that we wish to train our model across multiple tasks, meaning the IRL problem itself is an inner-loop.",
"In order to isolate the language-learning problem from the difficulties in solving reinforcement learning and adversarial learning problems, we base our method on an exact MaxEnt IRL BID26 procedure, which requires full knowledge of environment dynamics to train a language-conditioned reward function represented by a deep neural network.",
"While using exact IRL procedures may seem limiting, in many cases (such as indoor robotic navigation) full environment dynamics are available, and this formulation allows us to remove the difficulty of using RL from the training procedure.",
"The crucial insight is that we can use dynamic programming methods during training to learn a reward function that maps from observations, but we do not need knowledge of dynamics to use the reward function, meaning during test time we can evaluate using a reinforcement learning agent without knowledge of the underlying environment dynamics.",
"We evaluate our method on a dataset of realistic indoor house navigation and pick-and-place tasks using the SUNCG dataset, with natural language commands.",
"We demonstrate that our approach generalizes not only to novel tasks, but also to entirely new scenes, while directly learning a language-conditioned policy leads to poor performance and fails to generalize.",
"In this paper, we introduced LC-RL, an algorithm for scalable training of language-conditioned reward functions represented by neural networks.",
"Our method restricts training to tractable domains with known dynamics, but learns a reward function which can be used with standard RL methods in environments with unknown dynamics.",
"We demonstrate that the reward-learning approach to instruction following outperforms the policy-learning when evaluated in test environments, because the reward-learning enables an agent to learn and interact within the test environment rather than relying on zero-shot policy transfer."
] | [
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1818181723356247,
0.04878048226237297,
0.06666666269302368,
0.060606054961681366,
0.2978723347187042,
0.4117647111415863,
0.3478260934352875,
0,
0.15686273574829102,
0.0952380895614624,
0.1666666567325592,
0.10526315122842789,
0.12903225421905518,
0.0476190410554409,
0.16326530277729034,
0.04444443807005882,
0.11764705181121826,
0.05128204822540283,
0.23728813230991364,
0.17391303181648254,
0.14999999105930328,
0.052631575614213943,
0.23333333432674408,
0.11764705181121826,
0.17543859779834747,
0.25641024112701416,
0.1818181723356247,
0.11428570747375488,
0.0952380895614624,
0.1249999925494194
] | r1lq1hRqYQ | true | [
"We ground language commands in a high-dimensional visual environment by learning language-conditioned rewards using inverse reinforcement learning."
] |
[
"Many biological learning systems such as the mushroom body, hippocampus, and cerebellum are built from sparsely connected networks of neurons.",
"For a new understanding of such networks, we study the function spaces induced by sparse random features and characterize what functions may and may not be learned.",
"A network with d inputs per neuron is found to be equivalent to an additive model of order d, whereas with a degree distribution the network combines additive terms of different orders.",
"We identify three specific advantages of sparsity: additive function approximation is a powerful inductive bias that limits the curse of dimensionality, sparse networks are stable to outlier noise in the inputs, and sparse random features are scalable.",
"Thus, even simple brain architectures can be powerful function approximators.",
"Finally, we hope that this work helps popularize kernel theories of networks among computational neuroscientists.",
"Kernel function spaces are popular among machine learning researchers as a potentially tractable framework for understanding artificial neural networks trained via gradient descent [e.g. 1, 2, 3, 4, 5, 6].",
"Artificial neural networks are an area of intense interest due to their often surprising empirical performance on a number of challenging problems and our still incomplete theoretical understanding.",
"Yet computational neuroscientists have not widely applied these new theoretical tools to describe the ability of biological networks to perform function approximation.",
"The idea of using fixed random weights in a neural network is primordial, and was a part of Rosenblatt's perceptron model of the retina [7] .",
"Random features have then resurfaced under many guises: random centers in radial basis function networks [8] , functional link networks [9] , Gaussian processes (GPs) [10, 11] , and so-called extreme learning machines [12] ; see [13] for a review.",
"Random feature networks, where the neurons are initialized with random weights and only the readout layer is trained, were proposed by Rahimi and Recht in order to improve the performance of kernel methods [14, 15] and can perform well for many problems [13] .",
"In parallel to these developments in machine learning, computational neuroscientists have also studied the properties of random networks with a goal towards understanding neurons in real brains.",
"To a first approximation, many neuronal circuits seem to be randomly organized [16, 17, 18, 19, 20] .",
"However, the recent theory of random features appears to be mostly unknown to the greater computational neuroscience community.",
"Here, we study random feature networks with sparse connectivity: the hidden neurons each receive input from a random, sparse subset of input neurons.",
"This is inspired by the observation that the connectivity in a variety of predominantly feedforward brain networks is approximately random and sparse.",
"These brain areas include the cerebellar cortex, invertebrate mushroom body, and dentate gyrus of the hippocampus [21] .",
"All of these areas perform pattern separation and associative learning.",
"The cerebellum is important for motor control, while the mushroom body and dentate gyrus are The function shown is the sparse random feature approximation to an additive sum of sines, learned from poorly distributed samples (red crosses).",
"Additivity offers structure which may be leveraged for fast and efficient learning.",
"general learning and memory areas for invertebrates and vertebrates, respectively, and may have evolved from a similar structure in the ancient bilaterian ancestor [22] .",
"Recent work has argued that the sparsity observed in these areas may be optimized to balance the dimensionality of representation with wiring cost [20] .",
"Sparse connectivity has been used to compress artificial networks and speed up computation [23, 24, 25] , whereas convolutions are a kind of structured sparsity [26, 27] .",
"We show that sparse random features approximate additive kernels [28, 29, 30, 31] with arbitrary orders of interaction.",
"The in-degree of the hidden neurons d sets the order of interaction.",
"When the degrees of the neurons are drawn from a distribution, the resulting kernel contains a weighted mixture of interactions.",
"These sparse features offer advantages of generalization in high-dimensions, stability under perturbations of their input, and computational and biological efficiency.",
"Inspired by their ubiquity in biology, we have studied sparse random networks of neurons using the theory of random features, finding the advantages of additivity, stability, and scalability.",
"This theory shows that sparse networks such as those found in the mushroom body, cerebellum, and hippocampus can be powerful function approximators.",
"Kernel theories of neural circuits may be more broadly applicable in the field of computational neuroscience.",
"Expanding the theory of dimensionality in neuroscience Learning is easier in additive function spaces because they are low-dimensional, a possible explanation for few-shot learning in biological systems.",
"Our theory is complementary to existing theories of dimensionality in neural systems [16, 44, 45, 46, 47, 20, 48, 49, 50] , which defined dimensionality using a skewness measure of covariance eigenvalues.",
"Kernel theory extends this concept, measuring dimensionality similarly [51] in the space of nonlinear functions spanned by the kernel.",
"Limitations We model biological neurons as simple scalar functions, completely ignoring time and neuromodulatory context.",
"It seems possible that a kernel theory could be developed for timeand context-dependent features.",
"Our networks suppose i.i.d. weights, but weights that follow Dale's law should also be considered.",
"We have not studied the sparsity of activity, postulated to be relevant in cerebellum.",
"It remains to be demonstrated how the theory can make concrete, testable predictions, e.g. whether this theory may explain identity versus concentration encoding of odors or the discrimination/generalization tradeoff under experimental conditions.",
"Appendices: Additive function approximation in the brain As said in the main text, Kandasamy and Yu [1] created a theory of the generalization properties of higher-order additive models.",
"They supplemented this with an empirical study of a number of datasets using their Shrunk Additive Least Squares Approximation (SALSA) implementation of the additive kernel ridge regression (KRR).",
"Their data and code were obtained from https: //github.com/kirthevasank/salsa.",
"We compared the performance of SALSA to the sparse random feature approximation of the same kernel.",
"We employ random sparse Fourier features with Gaussian weights N (0, σ 2 I) with σ = 0.05 · √ dn 1/5 in order to match the Gaussian radial basis function used by Kandasamy and Yu [1] .",
"We use m = 300l features for every problem, with regular degree d selected equal to the one chosen by SALSA.",
"The regressor on the features is cross-validated ridge regression (RidgeCV from scikit-learn) with ridge penalty selected from 5 logarithmically spaced points between 10 −4 · n and 10 2 · n.",
"In Figure 2 , we compare the performance of sparse random features to SALSA.",
"Generally, the training and testing errors of the sparse model are slightly higher than for the kernel method, except for the forestfires dataset."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.3243243098258972,
0.2380952388048172,
0.09090908616781235,
0.23999999463558197,
0,
0.1249999925494194,
0.2083333283662796,
0.22727271914482117,
0.15789473056793213,
0.20512819290161133,
0.18518517911434174,
0.1428571343421936,
0.1860465109348297,
0.05882352590560913,
0.24242423474788666,
0.21621620655059814,
0.21621620655059814,
0.060606054961681366,
0.07407406717538834,
0.11764705181121826,
0.06896550953388214,
0.10256409645080566,
0.04999999329447746,
0.13636362552642822,
0.22857142984867096,
0.07407406717538834,
0.12121211737394333,
0.17142856121063232,
0.19512194395065308,
0.1538461446762085,
0.1249999925494194,
0.2380952388048172,
0.1702127605676651,
0.11428570747375488,
0.1875,
0.25806450843811035,
0.060606054961681366,
0.12903225421905518,
0.0833333283662796,
0.1463414579629898,
0.09302324801683426,
0,
0.19999998807907104,
0.11538460850715637,
0.15789473056793213,
0.09302324801683426,
0.19354838132858276,
0.1111111044883728
] | rylt7mFU8S | true | [
"We advocate for random features as a theory of biological neural networks, focusing on sparsely connected networks"
] |
[
"We propose a new application of embedding techniques to problem retrieval in adaptive tutoring.",
"The objective is to retrieve problems similar in mathematical concepts.",
"There are two challenges: First, like sentences, problems helpful to tutoring are never exactly the same in terms of the underlying concepts.",
"Instead, good problems mix concepts in innovative ways, while still displaying continuity in their relationships.",
"Second, it is difficult for humans to determine a similarity score consistent across a large enough training set.",
"We propose a hierarchical problem embedding algorithm, called Prob2Vec, that consists of an abstraction and an embedding step.",
"Prob2Vec achieves 96.88\\% accuracy on a problem similarity test, in contrast to 75\\% from directly applying state-of-the-art sentence embedding methods.",
"It is surprising that Prob2Vec is able to distinguish very fine-grained differences among problems, an ability humans need time and effort to acquire.",
"In addition, the sub-problem of concept labeling with imbalanced training data set is interesting in its own right.",
"It is a multi-label problem suffering from dimensionality explosion, which we propose ways to ameliorate.",
"We propose the novel negative pre-training algorithm that dramatically reduces false negative and positive ratios for classification, using an imbalanced training data set.",
"The traditional teaching methods that are widely used at universities for science, technology, engineering, and mathematic (STEM) courses do not take different abilities of learners into account.",
"Instead, they provide learners with a fixed set of textbooks and homework problems.",
"This ignorance of learners' prior background knowledge, pace of learning, various preferences, and learning goals in current education system can cause tremendous pain and discouragement for those who do not keep pace with this inefficient system BID6 ; BID5 ; BID7 ; BID18 ; BID43 .",
"Hence, e-learning methods are given considerable attention in an effort to personalize the learning process by providing learners with optimal and adaptive curriculum sequences.",
"Over the years, many web-based tools have emerged to adaptively recommend problems to learners based on courseware difficulty.",
"These tools tune the difficulty level of the recommended problems for learners and push them to learn by gradually increasing the difficulty level of recommended problems on a specific concept.",
"The downside of such methods is that they do not take the concept continuity and mixture of concepts into account, but focus on the difficulty level of single concepts.",
"Note that a learner who knows every individual concept does not necessarily have the ability to bring all of them together for solving a realistic problem on a mixture of concepts.",
"As a result, the recommender system needs to know similarity/dissimilarity of problems with mixture of concepts to respond to learners' performance more effectively as described in the next paragraph, which is something that is missing in the literature and needs more attention.Since it is difficult for humans to determine a similarity score consistent across a large enough training set, it is not feasible to simply apply supervised methods to learn a similarity score for problems.",
"In order to take difficulty, continuity, and mixture of concepts into account for similarity score used in a personalized problem recommender system in an adaptive practice, we propose to use a proper numerical representation of problems on mixture of concepts equipped with a similarity measure.",
"By virtue of vector representations for a set of problems on both single and mixture of concepts (problem embedding) that capture similarity of problems, learners' performance on a problem can be projected onto other problems.",
"As we see in this paper, creating a proper problem representation that captures mathematical similarity of problems is a challenging task, where baseline text representation methods and their refined versions fail to work.",
"Although the state-of-the-art methods for phrase/sentence/paragraph representation are doing a great job for general purposes, their shortcoming in our application is that they take lexical and semantic similarity of words into account, which is totally invalid when dealing with text related to math or any other special topic.",
"The words or even subject-related keywords of problems are not completely informative and cannot contribute to embedding of math problems on their own; as a result, the similarity of two problems is not highly correlated with the wording of the problems.",
"Hence, baseline methods perform poorly on the problem similarity detection test in problem recommender application.We find that instead of words or even subject-related keywords, conceptual ideas behind the problems determine their identity.",
"The conceptual particles (concepts) of problems are mostly not directly mentioned in problem wording, but there can be footprints of them in problems.",
"Since problem wording does not capture the similarity of problems, we propose an alternative hierarchical approach called Prob2Vec consisting of an abstraction and an embedding step.",
"The abstraction step projects a problem to a set of concepts.",
"The idea is that there exists a concept space with a reasonable dimension N , with N ranging from tens to a hundred, that can represent a much larger variety of problems of order O(2 N ).",
"Each variety can be sparsely inhabited, with some concept combination having only one problem.",
"This is because making problems is itself a creative process: The more innovative a problem is, the less likely it has exactly the same concept combination as another problem.",
"The explicit representation of problems using concepts also enables state-dependent similarity computation, which we will explore in future work.",
"The embedding step constructs a vector representation of the problems based on concept cooccurrence.",
"Like sentence embedding, not only does it capture the common concepts between problems, but also the continuity among concepts.",
"The proposed Prob2Vec algorithm achieves 96.88% accuracy on a problem similarity test, where human experts are asked to label the relative similarity among each triplet of problems.",
"In contrast, the best of the existing methods, which directly applies sentence embedding, achieves 75% accuracy.",
"It is surprising that Prob2Vec is able to distinguish very fine-grained differences among problems, as the problems in some triplets are highly similar to each other, and only humans with extensive training in the subject are able to identify their relative order of similarity.",
"The problem embedding obtained from Prob2Vec is being used in the recommender system of an e-learning tool for an undergraduate probability course for four semesters with successful results on hundreds of students, specially benefiting minorities who tend to be more isolated in the current education system.In addition, the sub-problem of concept labeling in the abstraction step is interesting in its own right.",
"It is a multi-label problem suffering from dimensionality explosion, as there can be as many as 2 N problem types.",
"This results in two challenges: First, there are very few problems for some types, hence a direct classification on 2 N classes suffers from a severe lack of data.",
"Second, per-concept classification suffers from imbalance of training samples and needs a very small per-concept false positive in order to achieve a reasonable per-problem false positive.",
"We propose pre-training of the neural network with negative samples (negative pre-training) that beats a similar idea to oneshot learning BID15 , where the neural network is pre-trained on classification of other concepts to have a warm start on classification of the concept of interest (transfer learning).",
"A hierarchical embedding method called Prob2Vec for subject specific text is proposed in this paper.",
"Prob2Vec is empirically proved to outperform baselines by more than 20% in a properly validated similarity detection test on triplets of problems.",
"The Prob2Vec embedding vectors for problems are being used in the recommender system of an e-learning tool for an undergraduate probability course for four semesters.",
"We also propose negative pre-training for training with imbalanced data sets to decrease false negatives and positives.",
"As future work, we plan on using graphical models along with problem embedding vectors to more precisely evaluate the strengths and weaknesses of students on single and mixture of concepts to do problem recommendation in a more effective way.one of popular methods, E w : w ∈ W , where we use Glove, problem embedding for P i that is denoted by E i is computed as follows: DISPLAYFORM0 where u is the first principle component of E i : 1 ≤ i ≤ M and a is a hyper-parameter which is claimed to result in best performance when a = 10 −3 to a = 10 −4 .",
"We tried different values for a inside this interval and out of it and found a = 10 −5 and a = 10 −3 to best work for our data set when using all words and a = 2 × 10 −2 to best work for when using selected words.",
"(iii) 3-SVD: using the same hierarchical approach as Prob2Vec, concept embedding in the second step can be done with an SVD-based method instead of the Skip-gram method as follows.Recall that the concept dictionary is denoted by {C 1 , C 2 , · · · , C N }, where each problem is labeled with a subset of these concepts.",
"Let N c (C i , C j ) for i = j denote number of cooccurrences of concepts C i and C j in problems of data set; i.e. there are N c (C i , C j ) number of problems that are labeled with both C i and C j .",
"The co-occurrence matrix is formed as follows: The SVD decomposition of the P P M I matrix is as P P M I = U SV , where U, S, V ∈ R N ×N , and S is a diagonal matrix.",
"Denote embedding size of concepts by d ≤ N , and let U d be the first d columns of matrix U , S d be a diagonal matrix with the first d diagonal elements of diagonal matrix S, and V d be the first d rows of matrix V .",
"The followings are different variants of SVD-based concept embedding BID21 : DISPLAYFORM1 • eig: embedding of N concepts are given by N rows of matrix U d that are of embedding length d.",
"• sub: N rows of U d S d are embedding of N concepts.•",
"shifted: the P P M I matrix is defined in a slightly different way in this variant as follows: Note that the P P M I matrix is not necessarily symmetric in this case. By",
"deriving U d and S d matrices as before, embedding of N concepts are given by N rows of U d S d ."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.3255814015865326,
0.1538461446762085,
0.12244897335767746,
0.04651162400841713,
0.260869562625885,
0.2666666507720947,
0.23999999463558197,
0.11999999731779099,
0.2978723347187042,
0.22727271914482117,
0.3529411852359772,
0.0714285671710968,
0.0952380895614624,
0.05970148742198944,
0.15094339847564697,
0.08695651590824127,
0.19230768084526062,
0.1111111044883728,
0.17543859779834747,
0.16867469251155853,
0.2461538463830948,
0.13793103396892548,
0.19999998807907104,
0.1599999964237213,
0.16393442451953888,
0.13333332538604736,
0.08163265138864517,
0.23076923191547394,
0.20512820780277252,
0.10344827175140381,
0.04651162400841713,
0.14814814925193787,
0.0416666604578495,
0.1395348757505417,
0.043478257954120636,
0.2142857164144516,
0.045454539358615875,
0.1818181723356247,
0.2750000059604645,
0.1304347813129425,
0.17543859779834747,
0.19607841968536377,
0.2769230604171753,
0.3181818127632141,
0.19607841968536377,
0.31372547149658203,
0.3478260934352875,
0.17475727200508118,
0.19672130048274994,
0.21052631735801697,
0.10526315122842789,
0.1355932205915451,
0.1090909019112587,
0.037735845893621445,
0.04999999701976776,
0.15094339847564697,
0.04444443807005882
] | SJl8gnAqtX | true | [
"We propose the Prob2Vec method for problem embedding used in a personalized e-learning tool in addition to a data level classification method, called negative pre-training, for cases where the training data set is imbalanced."
] |
[
"We introduce a new deep convolutional neural network, CrescendoNet, by stacking simple building blocks without residual connections.",
"Each Crescendo block contains independent convolution paths with increased depths.",
"The numbers of convolution layers and parameters are only increased linearly in Crescendo blocks.",
"In experiments, CrescendoNet with only 15 layers outperforms almost all networks without residual connections on benchmark datasets, CIFAR10, CIFAR100, and SVHN.",
"Given sufficient amount of data as in SVHN dataset, CrescendoNet with 15 layers and 4.1M parameters can match the performance of DenseNet-BC with 250 layers and 15.3M parameters.",
"CrescendoNet provides a new way to construct high performance deep convolutional neural networks without residual connections.",
"Moreover, through investigating the behavior and performance of subnetworks in CrescendoNet, we note that the high performance of CrescendoNet may come from its implicit ensemble behavior, which differs from the FractalNet that is also a deep convolutional neural network without residual connections.",
"Furthermore, the independence between paths in CrescendoNet allows us to introduce a new path-wise training procedure, which can reduce the memory needed for training.",
"Deep convolutional neural networks (CNNs) have significantly improved the performance of image classification BID3 BID25 .",
"However, training a CNN also becomes increasingly difficult with the network deepening.",
"One of important research efforts to overcome this difficulty is to develop new neural network architectures BID6 BID14 .Recently",
", residual network BID3 and its variants BID8 have used residual connections among layers to train very deep CNN. The residual",
"connections promote the feature reuse, help the gradient flow, and reduce the need for massive parameters. The ResNet BID3",
"and DenseNet BID8 achieved state-of-the-art accuracy on benchmark datasets. Alternatively,",
"FractalNet BID14 expanded the convolutional layers in a fractal form to generate deep CNNs. Without residual",
"connections BID3 and manual deep supervision BID15 , FractalNet achieved high performance on image classification based on network structural design only.Many studies tried to understand reasons behind the representation view of deep CNNs. BID27 showed that",
"residual network can be seen as an ensemble of relatively shallow effective paths. However, BID2 argued",
"that ensembles of shallow networks cannot explain the experimental results of lesioning, layer dropout, and layer reshuffling on ResNet. They proposed that residual",
"connections have led to unrolled iterative estimation in ResNet. Meanwhile, BID14 speculated",
"that the high performance of FractalNet was due to the unrolled iterative estimation of features of the longest path using features of shorter paths. Although unrolled iterative",
"estimation model can explain many experimental results, it is unclear how it helps improve the classification performance of ResNet and FractalNet. On the other hand, the ensemble",
"model can explain the performance improvement easily.In this work, we propose CrescendoNet, a new deep convolutional neural network with ensemble behavior. Same as other deep CNNs, CrescendoNet",
"is created by stacking simple building blocks, called Crescendo blocks FIG0 ). Each Crescendo block comprises a set",
"of independent feed-forward paths with increased number of convolution and batch-norm layers (Ioffe & Szegedy, 2015a). We only use the identical size, 3 ×",
"3, for all convolutional filters in the entire network. Despite its simplicity, CrescendoNet",
"shows competitive performance on benchmark CIFAR10, CI-FAR100, and SVHN datasets.Similar to FractalNet, CrescendoNet does not include residual connections. The high performance of CrescendoNet",
"also comes completely from its network structural design. Unlike the FractalNet, in which the",
"numbers of convolutional layers and associated parameters are increased exponentially, the numbers of convolutional layers and parameters in Crescendo blocks are increased linearly.CrescendoNet shows clear ensemble behavior (Section 3.4). In CrescendoNet, although the longer",
"paths have better performances than those of shorter paths, the combination of different length paths have even better performance. A set of paths generally outperform",
"its subsets. This is different from FractalNet,",
"in which the longest path alone achieves the similar performance as the entire network does, far better than other paths do.Furthermore, the independence between paths in CrescendoNet allows us to introduce a new pathwise training procedure, in which paths in each building block are trained independently and sequentially. The path-wise procedure can reduce",
"the memory needed for training. Especially, we can reduce the amortized",
"memory used for training CrescendoNet to about one fourth.We summarize our contribution as follows:• We propose the Crescendo block with linearly increased convolutional and batch-norm layers. The CrescendoNet generated by stacking",
"Crescendo blocks further demonstrates that the high performance of deep CNNs can be achieved without explicit residual learning.• Through our analysis and experiments,",
"we discovered an emergent behavior which is significantly different from which of FractalNet. The entire CrescendoNet outperforms any",
"subset of it can provide an insight of improving the model performance by increasing the number of paths by a pattern.• We introduce a path-wise training approach",
"for CrescendoNet, which can lower the memory requirements without significant loss of accuracy given sufficient data.",
"CNN has shown excellent performance on image recognition tasks.",
"However, it is still challenging to tune, modify, and design an CNN.",
"We propose CrescendoNet, which has a simple convolutional neural network architecture without residual connections BID3 .",
"Crescendo block uses convolutional layers with same size 3 × 3 and joins feature maps from each branch by the averaging operation.",
"The number of convolutional layers grows linearly in CrescendoNet while exponentially in FractalNet BID14 .",
"This leads to a significant reduction of computational complexity.Even with much fewer layers and a simpler structure, CrescendoNet matches the performance of the original and most of the variants of ResNet on CIFAR10 and CIFAR100 classification tasks.",
"Like FractalNet BID14 , we use dropout and drop-path as regularization mechanisms, which can train CrescendoNet to be an anytime classifier, namely, CrescendoNet can perform inference with any combination of the branches according to the latency requirements.",
"Our experiments also demonstrated that CrescendoNet synergized well with Adam optimization, especially when the training data is sufficient.",
"In other words, we can avoid scheduling the learning rate which is usually performed empirically for training existing CNN architectures.CrescendoNet shows a different behavior from FractalNet in experiments on CIFAR10/100 and SVHN.",
"In FractalNet BID14 , the longest path alone achieves the similar performance as the entire network, far better than other paths, which shows the student-teacher effect.",
"The whole FractalNet except the longest path acts as a scaffold for the training and becomes dispensable later.",
"On the other hand, CrescendoNet shows that the whole network significantly outperforms any set of it.",
"This fact sheds the light on exploring the mechanism which can improve the performance of deep CNNs by increasing the number of paths."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.8125,
0,
0.06896550953388214,
0.1666666567325592,
0,
0.32258063554763794,
0.23529411852359772,
0.10810810327529907,
0,
0.14814814925193787,
0,
0.23529411852359772,
0.0624999962747097,
0,
0.19354838132858276,
0.07999999821186066,
0.06451612710952759,
0.05714285373687744,
0.07407406717538834,
0,
0,
0.1428571343421936,
0.375,
0.052631575614213943,
0,
0.10810810327529907,
0,
0.0952380895614624,
0,
0,
0.09999999403953552,
0,
0.13333332538604736,
0.20512820780277252,
0,
0.21052631735801697,
0.12903225421905518,
0.0833333283662796,
0.07407406717538834,
0.5333333015441895,
0.0555555522441864,
0,
0.04444444179534912,
0,
0,
0.0833333283662796,
0,
0.0624999962747097,
0,
0.11764705181121826
] | HJdXGy1RW | true | [
"We introduce CrescendoNet, a deep CNN architecture by stacking simple building blocks without residual connections."
] |
[
"Gaussian processes are the leading class of distributions on random functions, but they suffer from well known issues including difficulty scaling and inflexibility with respect to certain shape constraints (such as nonnegativity).",
"Here we propose Deep Random Splines, a flexible class of random functions obtained by transforming Gaussian noise through a deep neural network whose output are the parameters of a spline.",
"Unlike Gaussian processes, Deep Random Splines allow us to readily enforce shape constraints while inheriting the richness and tractability of deep generative models.",
"We also present an observational model for point process data which uses Deep Random Splines to model the intensity function of each point process and apply it to neuroscience data to obtain a low-dimensional representation of spiking activity.",
"Inference is performed via a variational autoencoder that uses a novel recurrent encoder architecture that can handle multiple point processes as input.",
"Gaussian Processes (GPs) are one of the main tools for modeling random functions BID24 .",
"They allow control of the smoothness of the function by choosing an appropriate kernel but have the disadvantage that, except in special cases (for example BID11 ; BID9 ), inference in GP models scales poorly in both memory and runtime.",
"Furthermore, GPs cannot easily handle shape constraints.",
"It can often be of interest to model a function under some shape constraint, for example nonnegativity, monotonicity or convexity/concavity BID22 BID26 BID23 BID20 .",
"While some shape constraints can be enforced by transforming the GP or by enforcing them at a finite number of points, doing so cannot always be done and usually makes inference harder, see for example BID18 .Splines",
"are another popular tool for modeling unknown functions BID29 . When there",
"are no shape constraints, frequentist inference is straightforward and can be performed using linear regression, by writing the spline as a linear combination of basis functions. Under shape",
"constraints, the basis function expansion usually no longer applies, since the space of shape constrained splines is not typically a vector space. However, the",
"problem can usually still be written down as a tractable constrained optimization problem BID26 . Furthermore,",
"when using splines to model a random function, a distribution must be placed on the spline's parameters, so the inference problem becomes Bayesian. BID7 proposed",
"a method to perform Bayesian inference in a setting without shape constraints, but the method relies on the basis function expansion and cannot be used in a shape constrained setting. Furthermore,",
"fairly simple distributions have to be placed on the spline parameters for their approximate posterior sampling algorithm to work adequately, which results in the splines having a restrictive and oversimplified distribution.On the other hand, deep probabilistic models take advantage of the major progress in neural networks to fit rich, complex distributions to data in a tractable way BID25 BID21 BID15 BID10 BID14 . However, their",
"goal is not usually to model random functions.In this paper, we introduce Deep Random Splines (DRS), an alternative to GPs for modeling random functions. DRS are a deep",
"probabilistic model in which standard Gaussian noise is transformed through a neural network to obtain the parameters of a spline, and the random function is then the corresponding spline. This combines",
"the complexity of deep generative models and the ability to enforce shape constraints of splines.We use DRS to model the nonnegative intensity functions of Poisson processes BID16 . In order to ensure",
"that the splines are nonnegative, we use a parametrization of nonnegative splines that can be written as an intersection of convex sets, and then use the method of alternating projections BID28 to obtain a point in that intersection (and differentiate through that during learning). To perform scalable",
"inference, we use a variational autoencoder BID15 with a novel encoder architecture that takes multiple, truly continuous point processes as input (not discretized in bins, as is common).Our contributions are",
": (i) Introducing DRS,",
"(ii) using the method",
"of alternating projections to constrain splines, (iii) proposing a variational",
"autoencoder model whith a novel encoder architecture for point process data which uses DRS, and (iv) showing that our model outperforms",
"commonly used alternatives in both simulated and real data.The rest of the paper is organized as follows: we first explain DRS, how to parametrize them and how constraints can be enforced in section 2. We then present our model and how to",
"do inference in section 3. We then compare our model against competing",
"alternatives in simulated data and in two real spiking activity datasets in section 4, and observe that our method outperforms the alternatives. Finally, we summarize our work in section 5.",
"In this paper we introduced Deep Random Splines, an alternative to Gaussian processes to model random functions.",
"Owing to our key modeling choices and use of results from the spline and optimization literatures, fitting DRS is tractable and allows one to enforce shape constraints on the random functions.",
"While we only enforced nonnegativity and smoothness in this paper, it is straightforward to enforce constraints such as monotonicity (or convexity/concavity).",
"We also proposed a variational autoencoder that takes advantage of DRS to accurately model and produce meaningful low-dimensional representations of neural activity.Future work includes using DRS-VAE for multi-dimensional point processes, for example spatial point processes.",
"While splines would become harder to use in such a setting, they could be replaced by any family of easily-integrable nonnegative functions, such as, for example, conic combinations of Gaussian kernels.",
"Another line of future work involves using a more complicated point process than the Poisson, for example a Hawkes process, by allowing the parameters of the spline in a certain interval to depend on the previous spiking history of previous intervals.",
"Finally, DRS can be applied in more general settings than the one explored in this paper since they can be used in any setting where a random function is involved, having many potential applications beyond what we analyzed here."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.18867923319339752,
0.1666666567325592,
0.13636362552642822,
0.38461539149284363,
0.19512194395065308,
0.11428570747375488,
0.0714285671710968,
0,
0.17777776718139648,
0.1071428507566452,
0.0624999962747097,
0.1702127605676651,
0.1428571343421936,
0.05714285373687744,
0.22727271914482117,
0.13333332538604736,
0.21333332359790802,
0.1702127605676651,
0.2916666567325592,
0.42553192377090454,
0.27586206793785095,
0.23999999463558197,
0,
0,
0.19354838132858276,
0.24390242993831635,
0.17241378128528595,
0.12121211737394333,
0.045454539358615875,
0.21621620655059814,
0.2083333283662796,
0.1428571343421936,
0.3333333432674408,
0.19999998807907104,
0.14814814925193787,
0.0357142798602581
] | rJl97IIt_E | true | [
"We combine splines with neural networks to obtain a novel distribution over functions and use it to model intensity functions of point processes."
] |
[
"The recent development of Natural Language Processing (NLP) has achieved great success using large pre-trained models with hundreds of millions of parameters.",
"However, these models suffer from the heavy model size and high latency such that we cannot directly deploy them to resource-limited mobile devices.",
"In this paper, we propose MobileBERT for compressing and accelerating the popular BERT model.",
"Like BERT, MobileBERT is task-agnostic; that is, it can be universally applied to various downstream NLP tasks via fine-tuning.",
"MobileBERT is a slimmed version of BERT-LARGE augmented with bottleneck structures and a carefully designed balance between self-attentions and feed-forward networks.",
"To train MobileBERT, we use a bottom-to-top progressive scheme to transfer the intrinsic knowledge of a specially designed Inverted Bottleneck BERT-LARGE teacher to it.",
"Empirical studies show that MobileBERT is 4.3x smaller and 4.0x faster than original BERT-BASE while achieving competitive results on well-known NLP benchmarks.",
"On the natural language inference tasks of GLUE, MobileBERT achieves 0.6 GLUE score performance degradation, and 367 ms latency on a Pixel 3 phone.",
"On the SQuAD v1.1/v2.0 question answering task, MobileBERT achieves a 90.0/79.2 dev F1 score, which is 1.5/2.1 higher than BERT-BASE.",
"The NLP community has witnessed a revolution of pre-training self-supervised models.",
"These models usually have hundreds of millions of parameters.",
"They are trained on huge unannotated corpus and then fine-tuned for different small-data tasks (Peters et al., 2018; Radford et al., 2018; Devlin et al., 2018; Radford et al., 2019; Yang et al., 2019) .",
"Among these models, BERT (Devlin et al., 2018) , which stands for Bidirectional Encoder Representations from Transformers (Vaswani et al., 2017) , shows substantial accuracy improvements compared to training from scratch using annotated data only.",
"However, as one of the largest models ever in NLP, BERT suffers from the heavy model size and high latency, making it impractical for resource-limited mobile devices to deploy the power of BERT in mobile-based machine translation, dialogue modeling, and the like.",
"There have been some works that task-specifically distill BERT into compact models (Turc et al., 2019; Tang et al., 2019; Sun et al., 2019; Tsai et al., 2019) .",
"To the best of our knowledge, there is not yet any work for building a task-agnostic lightweight pre-trained model, that is, a model that can be fine-tuned on downstream NLP tasks just like what the original BERT does.",
"In this paper, we propose MobileBERT to fill this gap.",
"In practice, task-agnostic compression of BERT is desirable.",
"Task-specific compression needs to first fine-tune the original large BERT model into task-specific teachers and then distill.",
"Such a process is way more complicated and costly than directly fine-tuning a task-agnostic compact model.",
"At first glance, it may seem straightforward to obtain a task-agnostic compact version of BERT.",
"For example, one may just take a narrower or shallower architecture of BERT, and then train it with a prediction loss together with a distillation loss (Turc et al., 2019; Sun et al., 2019) .",
"Unfortunately, empirical results show that such a straightforward approach results in significant accuracy loss (Turc et al., 2019) .",
"This may not be that surprising.",
"It aligns with a well-known observation that shallow networks usually do not have enough representation power while narrow and deep networks are difficult to train.",
"Our MobileBERT is designed to be as deep as BERT LARGE while each layer is made much narrower via adopting bottleneck structures and balancing between self-attentions and feed- MobileBERT is trained by progressively transferring knowledge from IB-BERT.",
"forward networks (Figure 1 ).",
"To train MobileBERT, we use a bottom-to-top progressive scheme to transfer the intrinsic knowledge of a specially designed Inverted Bottleneck BERT LARGE (IB-BERT) teacher to it.",
"As a pre-trained NLP model, MobileBERT is both storage efficient (w.r.t model size) and computationally efficient (w.r.t latency) for mobile and resource-constrained environments.",
"Experimental results on several NLP tasks show that while being 4.3× smaller and 4.0× faster, MobileBERT can still achieve competitive results compared to BERT BASE .",
"On the natural language inference tasks of GLUE, MobileBERT can have only 0.6 GLUE score performance degradation with 367 ms latency on a Pixel 3 phone.",
"On the SQuAD v1.1/v2.0 question answering task, MobileBERT obtains 90.3/80.2 dev F1 score which is 1.5/2.1 higher than BERT BASE .",
"2 RELATED WORK 2.1 BERT BERT takes the embedding of source tokens as input.",
"Each building block of BERT contains one Multi-Head self-Attention (MHA) module (Vaswani et al., 2017) and one Feed-Forward Network (FFN) module, which are connected by skip connections.",
"The MHA module allows the model to jointly attend to information from different subspaces, while the position-wise FFN consists of a two-layer linear transformation with gelu activation (Hendrycks & Gimpel, 2016) , which increase the representational power of the model.",
"Figure 1 (a) illustrates the original BERT architecture.",
"In the pre-training stage, BERT is required to predict the masked tokens in sentences (mask language modeling task), as well as whether one sentence is the next sentence of the other (next sentence prediction task).",
"In the fine-tuning stage, BERT is further trained on task-specific annotated data.",
"We perform an ablation study to investigate how each component of MobileBERT contributes to its performance on the dev data of a few GLUE tasks with diverse characteristics.",
"To accelerate the experiment process, we halve the original pre-training schedule in the ablation study.",
"We conduct a set of ablation experiments with regard to Attention Transfer (AT), Feature Map Transfer (FMT) and Pre-training Distillation (PD).",
"The operational OPTimizations (OPT) are removed in these experiments.",
"Moreover, to investigate the effectiveness of the proposed novel architecture of MobileBERT, we compare MobileBERT with two compact BERT models from Turc et al. (2019) .",
"For a fair comparison, we also design our own BERT baseline BERT SMALL* , which is the best model setting we can find with roughly 25M parameters under the original BERT architecture.",
"The detailed model setting of BERT SMALL* can be found in Table 2 .",
"Besides these experiments, to verify the performance of MobileBERT on real-world mobile devices, we export the models with Tensorflow Lite 5 APIs and measure the inference latencies on a single large core of a Pixel 3 phone with a fixed sequence length of 128.",
"The results are listed in Table 5 .",
"We first can see that the propose Feature Map Transfer contributes most to the performance improvement of MobileBERT, while Attention Transfer and Pre-training Distillation also play positive roles.",
"As expected, the proposed operational OPTimizations hurt the model performance a bit, but it brings a crucial speedup of 1.68×.",
"In architecture comparison, we find that although specifically designed for progressive knowledge transfer, our MobileBERT architecture alone is still quite competitive.",
"It outperforms BERT SMALL * and BERT SMALL on all compared tasks, while outperforming the 1.7× sized BERT MEDIUM on the SST-2 task.",
"Finally, we can L1 H1 L1 H2 L1 H3 L1 H4 L12 H1 L12 H2 L12 H3 L12 H4 MobileBERT ( find that although augmented with the powerful progressive knowledge transfer, our MobileBERT still degrades greatly when compared to the IB-BERT LARGE teacher.",
"We visualize the attention distributions of the 1 st and the 12 th layers of a few models in Figure 3 for further investigation.",
"The proposed attention transfer can help the student mimic the attention distributions of the teacher very well.",
"Surprisingly, we find that the attention distributions in the attention heads of \"MobileBERT(bare)+PD+FMT\" are exactly a re-order of those of \"Mobile-BERT(bare)+PD+FMT+AT\" (also the teacher model), even if it has not been trained by the attention transfer objective.",
"This phenomenon indicates that multi-head attention is a crucial and unique part of the non-linearity of BERT.",
"Moreover, it can explain the minor improvements of Attention Transfer in ablation table 5, since the alignment of feature maps lead to the alignment of attention distributions.",
"We have presented MobileBERT which is a task-agnostic compact variant of BERT.",
"It is built upon a progressive knowledge transfer method and a conjugate architecture design.",
"Standard model compression techniques including quantization (Shen et al., 2019) and pruning (Zhu & Gupta, 2017) can be applied to MobileBERT to further reduce the model size as well as the inference latency.",
"In addition, although we have utilized low-rank decomposition for the embedding layer, it still accounts for a large part in the final model.",
"We believe there is a big room for extremely compressing the embedding table (Khrulkov et al., 2019; May et al., 2019) ."
] | [
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0,
0.04347825422883034,
0.05405404791235924,
0.0952380895614624,
0.1428571343421936,
0.04444443807005882,
0.5652173757553101,
0.2083333283662796,
0.25,
0.05882352590560913,
0,
0.08510638028383255,
0.03703703358769417,
0.03448275476694107,
0,
0.10344827175140381,
0,
0.06451612710952759,
0.04999999329447746,
0.21052631735801697,
0.052631575614213943,
0.11538460850715637,
0.04878048226237297,
0,
0.12765957415103912,
0.1090909019112587,
0,
0.04255318641662598,
0.13333332538604736,
0.25,
0.1599999964237213,
0.1666666567325592,
0,
0.07999999821186066,
0.10526315122842789,
0,
0.039215680211782455,
0.11428570747375488,
0.20408162474632263,
0,
0.1395348757505417,
0,
0,
0.11764705181121826,
0,
0.1355932205915451,
0,
0.16326530277729034,
0.0952380895614624,
0.09302324801683426,
0.1428571343421936,
0,
0.13636362552642822,
0,
0.037735845893621445,
0.1538461446762085,
0,
0.22857142984867096,
0.1666666567325592,
0.037735845893621445,
0.045454539358615875,
0.1395348757505417
] | SJxjVaNKwB | true | [
"We develop a task-agnosticlly compressed BERT, which is 4.3x smaller and 4.0x faster than BERT-BASE while achieving competitive performance on GLUE and SQuAD."
] |
[
"The importance weighted autoencoder (IWAE) (Burda et al., 2016) is a popular variational-inference method which achieves a tighter evidence bound (and hence a lower bias) than standard variational autoencoders by optimising a multi-sample objective, i.e. an objective that is expressible as an integral over $K > 1$ Monte Carlo samples.",
"Unfortunately, IWAE crucially relies on the availability of reparametrisations and even if these exist, the multi-sample objective leads to inference-network gradients which break down as $K$ is increased (Rainforth et al., 2018).",
"This breakdown can only be circumvented by removing high-variance score-function terms, either by heuristically ignoring them (which yields the 'sticking-the-landing' IWAE (IWAE-STL) gradient from Roeder et al. (2017)) or through an identity from Tucker et al. (2019) (which yields the 'doubly-reparametrised' IWAE (IWAE-DREG) gradient).",
"In this work, we argue that directly optimising the proposal distribution in importance sampling as in the reweighted wake-sleep (RWS) algorithm from Bornschein & Bengio (2015) is preferable to optimising IWAE-type multi-sample objectives.",
"To formalise this argument, we introduce an adaptive-importance sampling framework termed adaptive importance sampling for learning (AISLE) which slightly generalises the RWS algorithm.",
"We then show that AISLE admits IWAE-STL and IWAE-DREG (i.e. the IWAE-gradients which avoid breakdown) as special cases.",
"Let x be some observation and let z be some latent variable taking values in some space Z. These are modeled via the generative model p θ (z, x) = p θ (z)p θ (x|z) which gives rise to the marginal likelihood p θ (x) = Z p θ (z, x) dz of the model parameters θ.",
"In this work, we analyse algorithms for variational inference, i.e. algorithms which aim to",
"1. learn the generative model, i.e. find a value θ which is approximately equal to the maximum-likelihood estimate (MLE) θ ml := arg max θ p θ (x);",
"2. construct a tractable variational approximation q φ,x (z) of p θ (z|x) = p θ (z, x)/p θ (x), i.e. find the value φ such that q φ ,x (z) is as close as possible to p θ (z|x) in some suitable sense.",
"A few comments about this setting are in order.",
"Firstly, as is common in the literature, we restrict our presentation to a single latent representation-observation pair (z, x) to avoid notational clutter -the extension to multiple independent observations is straightforward.",
"Secondly, we assume that no parameters are shared between the generative model p θ (z, x) and the variational approximation q φ,x (z).",
"This is common in neural-network applications but could be relaxed.",
"Thirdly, our setting is general enough to cover amortised inference.",
"For this reason, we often refer to φ as the parameters of an inference network.",
"Two main classes of stochastic gradient-ascent algorithms for optimising ψ := (θ, φ) which employ K ≥ 1 Monte Carlo samples ('particles') to reduce errors have been proposed.",
"(Roeder et al., 2017) heuristically drops the problematic score-function terms from the IWAE φ-gradient.",
"This induces bias for the IWAE objective.",
"-IWAE-DREG.",
"The 'doubly-reparametrised' IWAE (IWAE-DREG) φ-gradient (Tucker et al., 2019) unbiasedly removes the problematic score-function terms from the IWAE φ-gradient using a formal identity.",
"We have shown that the adaptive-importance sampling paradigm of the reweighted wake-sleep (RWS) (Bornschein & Bengio, 2015) is preferable to the multi-sample objective paradigm of importance weighted autoencoders (IWAEs) (Burda et al., 2016) because the former achieves all the goals of the latter whilst avoiding its drawbacks.",
"A On the rôle of the self-normalisation bias within RWS/AISLE"
] | [
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.10810810327529907,
0.10169491171836853,
0.0952380895614624,
0.17543859779834747,
0.12244897335767746,
0.30434781312942505,
0.12121211737394333,
0,
0.07692307233810425,
0.19354838132858276,
0.0555555522441864,
0.145454540848732,
0.08163265138864517,
0.10810810327529907,
0,
0.1428571343421936,
0.0363636314868927,
0.04878048226237297,
0.05882352590560913,
0.0833333283662796,
0.1492537260055542,
0.1111111044883728
] | ryg7jhEtPB | true | [
"We show that most variants of importance-weighted autoencoders can be derived in a more principled manner as special cases of adaptive importance-sampling approaches like the reweighted-wake sleep algorithm."
] |
[
"As the size and complexity of models and datasets grow, so does the need for communication-efficient variants of stochastic gradient descent that can be deployed on clusters to perform model fitting in parallel.",
"Alistarh et al. (2017) describe two variants of data-parallel SGD that quantize and encode gradients to lessen communication costs.",
"For the first variant, QSGD, they provide strong theoretical guarantees.",
"For the second variant, which we call QSGDinf, they demonstrate impressive empirical gains for distributed training of large neural networks.",
"Building on their work, we propose an alternative scheme for quantizing gradients and show that it yields stronger theoretical guarantees than exist for QSGD while matching the empirical performance of QSGDinf.",
"Deep learning is booming thanks to enormous datasets and very large models, leading to the fact that the largest datasets and models can no longer be trained on a single machine.",
"One common solution to this problem is to use distributed systems for training.",
"The most common algorithms underlying deep learning are stochastic gradient descent (SGD) and its variants, which led to a significant amount of research on building and understanding distributed versions of SGD.",
"Implementations of SGD on distributed systems and data-parallel versions of SGD are scalable and take advantage of multi-GPU systems.",
"Data-parallel SGD, in particular, has received significant attention due to its excellent scalability properties (Zinkevich et al., 2010; Bekkerman et al., 2011; Recht et al., 2011; Dean et al., 2012; Coates et al., 2013; Chilimbi et al., 2014; Duchi et al., 2015; Xing et al., 2015; .",
"In data-parallel SGD, a large dataset is partitioned among K processors.",
"These processors work together to minimize an objective function.",
"Each processor has access to the current parameter vector of the model.",
"At each SGD iteration, each processor computes an updated stochastic gradient using its own local data.",
"It then shares the gradient update with its peers.",
"The processors collect and aggregate stochastic gradients to compute the updated parameter vector.",
"Increasing the number of processing machines reduces the computational costs significantly.",
"However, the communication costs to share and synchronize huge gradient vectors and parameters increases dramatically as the size of the distributed systems grows.",
"Communication costs may thwart the anticipated benefits of reducing computational costs.",
"Indeed, in practical scenarios, the communication time required to share stochastic gradients and parameters is the main performance bottleneck (Recht et al., 2011; Seide et al., 2014; Strom, 2015; .",
"Reducing communication costs in data-parallel SGD is an important problem.",
"One promising solution to the problem of reducing communication costs of data-parallel SGD is gradient compression, e.g., through gradient quantization (Dean et al., 2012; Seide et al., 2014; Sa et al., 2015; Gupta et al., 2015; Abadi et al., 2016; Zhou et al., 2016; Bernstein et al., 2018) .",
"(This should not be confused with weight quantization/sparsification, as studied by ; Hubara et al. (2016) ; Park et al. (2017) ; , which we do not discuss here.",
") Unlike full-precision data-parallel SGD, where each processor is required to broadcast its local gradient in full-precision, i.e., transmit and receive huge full-precision vectors at each iteration, quantization requires each processor to transmit only a few communication bits per iteration for each component of the stochastic gradient.",
"One popular such proposal for communication-compression is quantized SGD (QSGD), due to .",
"In QSGD, stochastic gradient vectors are normalized to have unit L 2 norm, and then compressed by quantizing each element to a uniform grid of quantization levels using a randomized method.",
"While most lossy compression schemes do not provide convergence guarantees, QSGD's quantization scheme, is designed to be unbiased, which implies that the quantized stochastic gradient is itself a stochastic gradient, only with higher variance determined by the dimension and number of quantization levels.",
"As a result, are able to establish a number of theoretical guarantees for QSGD, including that it converges under standard assumptions.",
"By changing the number of quantization levels, QSGD allows the user to trade-off communication bandwidth and convergence time.",
"Despite their theoretical guarantees based on quantizing after L 2 normalization, Alistarh et al. opt to present empirical results using L ∞ normalization.",
"We call this variation QSGDinf.",
"While the empirical performance of QSGDinf is strong, their theoretical guarantees on the number of bits transmitted no longer apply.",
"Indeed, in our own empirical evaluation of QSGD, we find the variance induced by quantization is substantial, and the performance is far from that of SGD and QSGDinf.",
"Given the popularity of this scheme, it is natural to ask one can obtain guarantees as strong as those of QSGD while matching the practical performance of the QSGDinf heuristic.",
"In this work, we answer this question in the affirmative by providing a new quantization scheme which fits into QSGD in a way that allows us to establish stronger theoretical guarantees on the variance, bandwidth, and cost to achieve a prescribed gap.",
"Instead of QSGD's uniform quantization scheme, we use an unbiased nonuniform logarithmic scheme, similar to those introduced in telephony systems for audio compression (Cattermole, 1969) .",
"We call the resulting algorithm nonuniformly quantized stochastic gradient descent (NUQSGD).",
"Like QSGD, NUQSGD is a quantized data-parallel SGD algorithm with strong theoretical guarantees that allows the user to trade off communication costs with convergence speed.",
"Unlike QSGD, NUQSGD has strong empirical performance on deep models and large datasets, matching that of QSGDinf.",
"In particular, we provide a new efficient implementation for these schemes using a modern computational framework (Pytorch), and benchmark it on classic large-scale image classification tasks.",
"The intuition behind the nonuniform quantization scheme underlying NUQSGD is that, after L 2 normalization, many elements of the normalized stochastic gradient will be near-zero.",
"By concentrating quantization levels near zero, we are able to establish stronger bounds on the excess variance.",
"In the overparametrized regime of interest, these bounds decrease rapidly as the number of quantization levels increases.",
"Combined with a bound on the expected code-length, we obtain a bound on the total communication costs of achieving an expected suboptimality gap.",
"The resulting bound is slightly stronger than the one provided by QSGD.",
"To study how quantization affects convergence on state-of-the-art deep models, we compare NUQSGD, QSGD, and QSGDinf, focusing on training loss, variance, and test accuracy on standard deep models and large datasets.",
"Using the same number of bits per iteration, experimental results show that NUQSGD has smaller variance than QSGD, as expected by our theoretical results.",
"This smaller variance also translates to improved optimization performance, in terms of both training loss and test accuracy.",
"We also observe that NUQSGD matches the performance of QSGDinf in terms of variance and loss/accuracy.",
"Further, our distributed implementation shows that the resulting algorithm considerably reduces communication cost of distributed training, without adversely impacting accuracy.",
"Our empirical results show that NUQSGD can provide faster end-to-end parallel training relative to data-parallel SGD, QSGD, and Error-Feedback SignSGD (Karimireddy et al., 2019) on the ImageNet dataset.",
"We study data-parallel and communication-efficient version of stochastic gradient descent.",
"Building on QSGD , we study a nonuniform quantization scheme.",
"We establish upper bounds on the variance of nonuniform quantization and the expected code-length.",
"In the overparametrized regime of interest, the former decreases as the number of quantization levels increases, while the latter increases with the number of quantization levels.",
"Thus, this scheme provides a trade-off between the communication efficiency and the convergence speed.",
"We compare NUQSGD and QSGD in terms of their variance bounds and the expected number of communication bits required to meet a certain convergence error, and show that NUQSGD provides stronger guarantees.",
"Experimental results are consistent with our theoretical results and confirm that NUQSGD matches the performance of QSGDinf when applied to practical deep models and datasets including ImageNet.",
"Thus, NUQSGD closes the gap between the theoretical guarantees of QSGD and empirical performance of QSGDinf.",
"One limitation of our study which we aim to address in future work is that we focus on all-to-all reduction patterns, which interact easily with communication compression.",
"In particular, we aim to examine the interaction between more complex reduction patterns, such as ring-based reductions (Hannun et al., 2014) , which may yield superior performance in bandwidthbottlenecked settings, but which interact with communication-compression in non-trivial ways, since they may lead a gradient to be quantized at each reduction step.",
"Read that bit plus N following bits; The encoding, ENCODE(v), of a stochastic gradient is as follows: We first encode the norm v using b bits where, in practice, we use standard 32-bit floating point encoding.",
"We then proceed in rounds, r = 0, 1, · · · .",
"On round r, having transmitted all nonzero coordinates up to and including t r , we transmit ERC(i r ) where t r+1 = t r + i r is either",
"(i) the index of the first nonzero coordinate of h after t r (with t 0 = 0) or",
"(ii) the index of the last nonzero coordinate.",
"In the former case, we then transmit one bit encoding the sign ρ t r+1 , transmit ERC(log(2 s+1 h t r+1 )), and proceed to the next round.",
"In the latter case, the encoding is complete after transmitting ρ t r+1 and ERC(log(2 s+1 h t r+1 )).",
"The DECODE function (for Algorithm 1) simply reads b bits to reconstruct v .",
"Using ERC −1 , it decodes the index of the first nonzero coordinate, reads the bit indicating the sign, and then uses ERC −1 again to determines the quantization level of this first nonzero coordinate.",
"The process proceeds in rounds, mimicking the encoding process, finishing when all coordinates have been decoded.",
"Like , we use Elias recursive coding (Elias, 1975, ERC) to encode positive integers.",
"ERC is simple and has several desirable properties, including the property that the coding scheme assigns shorter codes to smaller values, which makes sense in our scheme as they are more likely to occur.",
"Elias coding is a universal lossless integer coding scheme with a recursive encoding and decoding structure.",
"The Elias recursive coding scheme is summarized in Algorithm 2.",
"For any positive integer N, the following results are known for ERC"
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1395348757505417,
0.1249999925494194,
0.260869562625885,
0.1818181723356247,
0.41860464215278625,
0.09999999403953552,
0,
0.0952380895614624,
0.14814814925193787,
0,
0,
0,
0.1666666567325592,
0,
0.09090908616781235,
0.1538461446762085,
0.17391303181648254,
0.1818181723356247,
0.17391303181648254,
0.14999999105930328,
0,
0.08510638028383255,
0,
0.1111111044883728,
0,
0.0952380895614624,
0.11538460850715637,
0.1818181723356247,
0.2666666507720947,
0.17142856121063232,
0.1111111044883728,
0.4516128897666931,
0.3243243098258972,
0.31578946113586426,
0.2448979616165161,
0.05405404791235924,
0.0833333283662796,
0.21621620655059814,
0.4000000059604645,
0.052631575614213943,
0.1621621549129486,
0.06666666269302368,
0.1428571343421936,
0.19354838132858276,
0.1599999964237213,
0.05128204822540283,
0.2222222238779068,
0.12903225421905518,
0.4285714328289032,
0.1249999925494194,
0.1904761791229248,
0.17391303181648254,
0.08695651590824127,
0.23076923191547394,
0.13333332538604736,
0.23076923191547394,
0.2926829159259796,
0.3684210479259491,
0.9629629850387573,
0.052631575614213943,
0.10169491171836853,
0.08163265138864517,
0,
0.05128204822540283,
0.13793103396892548,
0.19999998807907104,
0.10810810327529907,
0.13333332538604736,
0,
0.1538461446762085,
0.06896550953388214,
0,
0.09090908616781235,
0.07407406717538834,
0,
0.07999999821186066
] | HyeJmlrFvH | true | [
"NUQSGD closes the gap between the theoretical guarantees of QSGD and the empirical performance of QSGDinf."
] |
[
"The impressive lifelong learning in animal brains is primarily enabled by plastic changes in synaptic connectivity.",
"Importantly, these changes are not passive, but are actively controlled by neuromodulation, which is itself under the control of the brain.",
"The resulting self-modifying abilities of the brain play an important role in learning and adaptation, and are a major basis for biological reinforcement learning.",
"Here we show for the first time that artificial neural networks with such neuromodulated plasticity can be trained with gradient descent.",
"Extending previous work on differentiable Hebbian plasticity, we propose a differentiable formulation for the neuromodulation of plasticity.",
"We show that neuromodulated plasticity improves the performance of neural networks on both reinforcement learning and supervised learning tasks.",
"In one task, neuromodulated plastic LSTMs with millions of parameters outperform standard LSTMs on a benchmark language modeling task (controlling for the number of parameters).",
"We conclude that differentiable neuromodulation of plasticity offers a powerful new framework for training neural networks.",
"Neural networks that deal with temporally extended tasks must be able to store traces of past events.",
"Often this memory of past events is maintained by neural activity reverberating through recurrent connections; other methods for handling temporal information exist, including memory networks BID36 or temporal convolutions BID21 .",
"However, in nature, the primary basis for long-term learning and memory in the brain is synaptic plasticity -the automatic modification of synaptic weights as a function of ongoing activity BID16 BID14 .",
"Plasticity is what enables the brain to store information over the long-term about its environment that would be impossible or impractical for evolution to imprint directly into innate connectivity (e.g. things that are different within each life, such as the language one speaks).Importantly",
", these modifications are not a passive process, but are actively modulated on a momentto-moment basis by dedicated systems and mechanisms: the brain can \"decide\" where and when to modify its own connectivity, as a function of its inputs and computations. This neuromodulation",
"of plasticity, which involves several chemicals (particularly dopamine; BID1 He et al. 2015; BID13 BID43 , plays an important role in learning and adaptation BID22 BID32 Kreitzer & Malenka, 2008) . By allowing the brain",
"to control its own modification as a function of ongoing states and events, the neuromodulation of plasticity can filter out irrelevant events while selectively incorporating important information, combat catastrophic forgetting of previously acquired knowledge, and implement a self-contained reinforcement learning algorithm by altering its own connectivity in a reward-dependent manner BID31 BID24 BID7 Hoerzer et al., 2014; BID19 BID4 BID38 .The complex organization",
"of neuromodulated plasticity is not accidental: it results from a long process of evolutionary optimization. Evolution has not only designed",
"the general connection pattern of the brain, but has also sculpted the machinery that controls neuromodulation, endowing the brain with carefully tuned self-modifying abilities and enabling efficient lifelong learning. In effect, this coupling of evolution",
"and plasticity is a meta-learning process (the original and by far most powerful example of meta-learning), whereby a simple but powerful optimization process (evolution guided by natural selection) discovered how to arrange elementary building blocks to produce remarkably efficient learning agents.Taking inspiration from nature, several authors have shown that evolutionary algorithms can design small neural networks (on the order of hundreds of connections) with neuromodulated plasticity (see the \"Related Work\" section below). However, many of the spectacular recent",
"advances in machine learning make use of gradient-based methods (which can directly translate error signals into weight gradients) rather than evolution (which has to discover the gradients through random weight-space exploration). If we could make plastic, neuromodulated",
"networks amenable to gradient descent, we could leverage gradient-based methods for optimizing and studying neuromodulated plastic networks, expanding the abilities of current deep learning architectures to include these important biologically inspired self-modifying abilities.Here we build on the differentiable plasticity framework BID19 BID20 to implement differentiable neuromodulated plasticity. As a result, for the first time to our knowledge",
", we are able to train neuromodulated plastic networks with gradient descent. We call our framework backpropamine in reference",
"to its ability to emulate the effects of natural neuromodulators (like dopamine) in artificial neural networks trained by backpropagation. Our experimental results establish that neuromodulated",
"plastic networks outperform both non-plastic and non-modulated plastic networks, both on simple reinforcement learning tasks and on a complex language modeling task involving a multi-million parameter network. By showing that neuromodulated plasticity can be optimized",
"through gradient descent, the backpropamine framework potentially provides more powerful types of neural networks, both recurrent and feedforward, for use in all the myriad domains in which neural networks have had tremendous impact.",
"This paper introduces a biologically-inspired method for training networks to self-modify their weights.",
"Building upon the differentiable plasticity framework, which already improved performance (sometimes dramatically) over non-plastic architectures on various supervised and RL tasks BID18 BID20 , here we introduce neuromodulated plasticity to let the network control its own weight changes.",
"As a result, for the first time, neuromodulated plastic networks can be trained with gradient descent, opening up a new research direction into optimizing large-scale self-modifying neural networks.As a complement to the benefits in the simple RL domains investigated, our finding that plastic and neuromodulated LSTMs outperform standard LSTMs on a benchmark language modeling task (importantly, a central domain of application of LSTMs) is potentially of great importance.",
"LSTMs are used in real-world applications with massive academic and economic impact.",
"Therefore, if plasticity and neuromodulation consistently improve LSTM performance (for a fixed search space size), the potential benefits could be considerable.",
"We intend to pursue this line of investigation and test plastic LSTMs (both neuromodulated and non) on other problems for which LSTMs are commonly used, such as forecasting.Conceptually, an important comparison point is the \"Learning to Reinforcement Learn\" (L2RL) framework introduced by BID39 .",
"In this meta-learning framework, the weights do not change during episodes: all within-episode learning occurs through updates to the activity state of the network.",
"This framework is explicitly described BID40 as a model of the slow sculpting of prefrontal cortex by the reward-based dopamine system, an analogy facilitated by the features of the A2C algorithm used for meta-training (such as the use of a value signal and modulation of weight changes by a reward prediction error).",
"As described in the RL experiments above, our approach adds more flexibility to this model by allowing the system to store state information with weight changes, in addition to hidden state changes.",
"However, because our framework allows the network to update its own connectivity, we might potentially extend the L2RL model one level higher: rather than using A2C as a hand-designed reward-based weight-modification scheme, the system could now determine its own arbitrary weight-modification scheme, which might make use of any signal it can compute (reward predictions, surprise, saliency, etc.) This emergent weight-modifying algorithm (designed over many episodes/lifetimes by the \"outer loop\" meta-training algorithm) might in turn sculpt network connectivity to implement the meta-learning process described by BID40 .",
"Importantly, this additional level of learning (or \"meta-meta-learning\") is not just a pure flight of fancy: it has undoubtedly taken place in evolution.",
"Because humans (and other animals) can perform meta-learning (\"learning-to-learn\") during their lifetime (Harlow, 1949; BID40 , and because humans are themselves the result of an optimization process (evolution), then meta-meta-learning has not only occurred, but may be the key to some of the most advanced human mental functions.",
"Our framework opens the tantalizing possibility of studying this process, while allowing us to replace evolution with any gradient-based method in the outermost optimization loop.To investigate the full potential of our approach, the framework described above requires several improvements.",
"These include: implementing multiple neuromodulatory signals (each with their own inputs and outputs), as seems to be the case in the brain BID12 Howe & Dombeck, 2016; BID27 ; introducing more complex tasks that could make full use of the flexibility of the framework, including the eligibility traces afforded by retroactive modulation and the several levels of learning mentioned above; and addressing the pitfalls in the implementation of reinforcement learning with reward-modulated Hebbian plasticity (e.g. the inherent interference between the unsupervised component of Hebbian learning and reward-based modifications; BID9 BID8 , so as to facilitate the automatic design of efficient, selfcontained reinforcement learning systems.",
"Finally, it might be necessary to allow the meta-training algorithm to design the overall architecture of the system, rather than simply the parameters of a fixed, hand-designed architecture.",
"With such a rich potential for extension, our framework for neuromodulated plastic networks opens many avenues of exciting research.",
"DISPLAYFORM0 j t , f t and o t are used for controlling the data-flow through the LSTM and i t is the actual data.",
"Therefore, plasticity is introduced in the path that goes through i t (adding plasticity to the control paths of LSTM is for future-work) .",
"The corresponding pre-synaptic and post-synaptic activations (denoted by x i (t − 1) and x j (t) respectively in equations 1 and",
"2) are h t−1 and i t .",
"A layer of size 200 has 40k (200×200) plastic connections.",
"Each plastic connection has its own individual η (used in equation",
"2) that is learned through backpropagation.",
"The plasticity coefficients (α i,j ) are used as shown in equation 1."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.0624999962747097,
0,
0.05128204822540283,
0.21621620655059814,
0.060606054961681366,
0.2857142686843872,
0.04999999329447746,
0.060606054961681366,
0.29411762952804565,
0.04444443807005882,
0.045454539358615875,
0.06896550953388214,
0.2222222238779068,
0.039215680211782455,
0.10810810327529907,
0,
0.0416666604578495,
0.0952380895614624,
0.11538460850715637,
0.1249999925494194,
0.1111111044883728,
0.1463414579629898,
0.25531914830207825,
0.04255318641662598,
0.19999998807907104,
0.18867924809455872,
0.16438356041908264,
0,
0.10526315122842789,
0.06896550953388214,
0.10256409645080566,
0,
0.045454539358615875,
0.08888888359069824,
0.05128204822540283,
0.13114753365516663,
0.038461532443761826,
0.12631578743457794,
0.10256409645080566,
0.05714285373687744,
0,
0.05405404791235924,
0,
0,
0,
0.0714285671710968,
0,
0
] | r1lrAiA5Ym | true | [
"Neural networks can be trained to modify their own connectivity, improving their online learning performance on challenging tasks."
] |
[
"Deep learning has made remarkable achievement in many fields.",
"However, learning\n",
"the parameters of neural networks usually demands a large amount of labeled\n",
"data.",
"The algorithms of deep learning, therefore, encounter difficulties when applied\n",
"to supervised learning where only little data are available.",
"This specific task\n",
"is called few-shot learning.",
"To address it, we propose a novel algorithm for fewshot\n",
"learning using discrete geometry, in the sense that the samples in a class are\n",
"modeled as a reduced simplex.",
"The volume of the simplex is used for the measurement\n",
"of class scatter.",
"During testing, combined with the test sample and the\n",
"points in the class, a new simplex is formed.",
"Then the similarity between the test\n",
"sample and the class can be quantized with the ratio of volumes of the new simplex\n",
"to the original class simplex.",
"Moreover, we present an approach to constructing\n",
"simplices using local regions of feature maps yielded by convolutional neural networks.\n",
"Experiments on Omniglot and miniImageNet verify the effectiveness of\n",
"our simplex algorithm on few-shot learning.",
"Deep learning has exhibited outstanding ability in various disciplines including computer vision, natural language processing and speech recognition BID10 .",
"For instance, AlexNet has made a breakthrough on recognizing millions of imagery objects by means of deep Convolutional Neural Network (CNN) BID8 .",
"In the past five years, the algorithmic capability of comprehending visual concepts has been significantly improved by elaborately well-designed deep learning architectures BID4 BID22 .",
"However, training deep neural networks such as the widely employed CNNs of AlexNet BID8 , Inception BID23 , VGG BID20 , and ResNet BID5 , needs the supervision of many class labels which are handcrafted.",
"For example, the number of samples of each class in the ImageNet of object recognition benchmark BID17 is more than one thousand.",
"In fact, the number of labelled samples used for learning parameters of CNNs is far more than that because data augmentation is usually applied.",
"This kind of learning obviously deviates from the manner of human cognition.",
"A child can recognize a new object that she/he has never seen only by several examples, from simple shapes like rectangles to highly semantic animals like tigers.",
"However, deep learning algorithms encounter difficulty in such scenarios where only very sparse data are available for learning to recognize a new category, thus raising the research topic of one-shot learning or few-shot learning BID1 BID25 .The",
"seminal work BID2 models few-shot learning with the Bayesian framework. Empirical",
"knowledge of available categories is learned and parameterized as a probability density function. The unseen",
"class with a handful of examples is modeled as the posterior by updating the prior. Bayesian theory",
"provides a simple and elegant idea for solving learning problems with little data. If decomposed into",
"parts or programs, an object can be described by the joint distribution of Bayesian criterion. In this manner, human-level",
"performance on one-shot learning has been derived for discovering simple visual concepts such as ancient handwritten characters BID9 .With the prevalence of deep",
"learning, the recent work for few-shot learning focuses on the application of deep neural networks that have more capacity to accommodate the complexity of object representations. Siamese neural network facilitates",
"the performance of few-shot recognition by means of twin networks of sharing parameters, optimizing the distances of representative features in intraclasses BID7 . The counterpart of learning data structures",
"by distance is also formulated by triplet loss in BID11 . Researchers in BID11 assert that the distance",
"metrics can learn the intrinsic manifold structures of training data such that the network is more general and robust when employed for untrained objects. A very recent work pertaining to distance-based",
"optimization, named Prototypical Networks BID21 , significantly improves the capability of few-shot recognition. Prototypical Networks attempt to minimize the distance",
"of the test sample to the center of each class and are learned in the end-to-end manner.Memory-augmented architectures are also proposed to help assimilate new classes with more accurate inference BID18 . Matching network embeds metric learning in neural network",
"in the light of attention mechanism which is embodied by softmax BID26 . In a very recent work, the large-scale memory without the",
"need of resetting during training is formulated as an embedded module for arbitrary neural networks to remember the information of rare events BID6 . In order to obtain rapid learning with limited samples, meta",
"learning is exploited both in memory network and matching network. This \"learning to learn\" technique is extended to deal with",
"few-shot learning from the point of view of optimization BID15 . To be specific, a LSTM-based meta learner learns to mimic the",
"exact optimization algorithm and then harnesses the acquired capability to train the learner applied for the few-shot cases. The latest meta learning algorithms also deal with few-shot learning",
"from different angles, e.g. the fast adaptation of neural networks BID3 , and temporal convolution BID13 .In addition to the application of memory module or attention model in",
"LSTM, there is another type of algorithms digging the effective way of transferring the discriminative power of pre-trained models to few-shot circumstances. Resorting to the correlation between the activations in the last feature",
"layers and the associated parameters for softmax, a transformation is learned to derive the parameters for predicting new classes from corresponding activations BID14 .The algorithms based on deep learning can learn more expressive representations",
"for objects, essentially boosting the quality of feature extraction. However, the softmax classifier discriminates all categories by class boundaries",
", bypassing the steps that carefully characterize the structure of each class. Thus the algorithmic performance will deteriorate grossly if the distribution of",
"new class cannot be accurately modeled by trained networks. Besides softmax, another commonly applied method, k nearest neighbors (KNN), is",
"a point-to-point measurement and is incapable of conveying global structural information.To address this issue, we propose a geometric method for few-shot learning. Our perspective is that accurate geometric characterization for each class is essential",
"when only a handful of samples are available, because such sparse data are usually insufficient to fit well-converged parameterized classifier. To this end, we harness convex polytope to fit a class, in the sense that we construct",
"a convex polytope by selecting the samples in the class as the vertices of the polytope. The volume of the polytope is taken as the measurement of class scatter. Thus the polytopal",
"volume may be improved after including the query sample in the test set",
"during the testing trial. The normalized volume with respect to the original counterpart is applied to compute the distance",
"from the test sample to the test set. To highlight the structural details of object parts, we present the construction of polytope based",
"on convolutional feature maps as well.To the best of our understanding, however, there is no exact formula to calculating the volume of general convex polytope. To make our algorithm feasible, therefore, we use the simplest convex polytope -simplex instead. The",
"volume of a simplex can be expressed by the Cayley-Menger determinant BID0 , thus casting the problem",
"of few-shot recognition as a simple calculation of linear algebra. Experiments on Omniglot and miniImageNet datasets verify the effectiveness of our simple algorithm.",
"In this paper, we designed a novel method to deal with few-shot learning problems.",
"Our idea was from the point of view of high dimensional convex geometry and transformed the learning problem to the study of volumes of simplices.",
"The relation between a test sample and a class was investigated via the volumes of different polytopes.",
"By harnessing the power of simplex, we gave a rigorous mathematical formulation for our approach.",
"We also conduced extensive simulations to validate our method.",
"The results on various datasets showed the accuracy and robustness of the geometry-based method, compared to the state-of-the-art results in the literature."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0
] | [
0.0952380895614624,
0,
0,
0.1904761791229248,
0,
0.375,
0,
0.0833333283662796,
0,
0.0952380895614624,
0,
0.09999999403953552,
0.0952380895614624,
0,
0.07999999821186066,
0.11764705181121826,
0.10526315122842789,
0,
0,
0.2222222238779068,
0.06451612710952759,
0,
0.05714285373687744,
0,
0.06451612710952759,
0.11764705181121826,
0.08695651590824127,
0.10526315122842789,
0.1304347813129425,
0.260869562625885,
0.07407406717538834,
0.1428571343421936,
0.2142857164144516,
0,
0.05714285373687744,
0.15789473056793213,
0.11428570747375488,
0.07999999821186066,
0.1428571343421936,
0.13793103396892548,
0.17777776718139648,
0.0624999962747097,
0.1860465109348297,
0.27586206793785095,
0.19354838132858276,
0.2222222238779068,
0.05128204822540283,
0.15789473056793213,
0.1395348757505417,
0,
0,
0.0624999962747097,
0.2380952388048172,
0.045454543083906174,
0.0624999962747097,
0,
0.2142857164144516,
0.06666666269302368,
0.08695651590824127,
0,
0.0624999962747097,
0.4615384638309479,
0.1249999925494194,
0,
0,
0.1904761791229248,
0.06666666269302368
] | H1x5K0mSnQ | true | [
"A simplex-based geometric method is proposed to cope with few-shot learning problems."
] |
[
"Reservoir computing is a powerful tool to explain how the brain learns temporal sequences, such as movements, but existing learning schemes are either biologically implausible or too inefficient to explain animal performance.",
"We show that a network can learn complicated sequences with a reward-modulated Hebbian learning rule if the network of reservoir neurons is combined with a second network that serves as a dynamic working memory and provides a spatio-temporal backbone signal to the reservoir.",
"In combination with the working memory, reward-modulated Hebbian learning of the readout neurons performs as well as FORCE learning, but with the advantage of a biologically plausible interpretation of both the learning rule and the learning paradigm.",
"Learning complex temporal sequences that extend over a few seconds -such as a movement to grab a bottle or to write a number on the blackboard -looks easy to us but is challenging for computational brain models.",
"A common framework for learning temporal sequences is reservoir computing (alternatively called liquid computing or echo-state networks) [1, 2, 3] .",
"It combines a reservoir, a recurrent network of rate units with strong, but random connections [4] , with a linear readout that feeds back to the reservoir.",
"Training of the readout weights with FORCE, a recursive least-squares estimator [1] , leads to excellent performance on many tasks such as motor movements.",
"The FORCE rule is, however, biologically implausible: update steps of synapses are rapid and large, and require an immediate and precisely timed feedback signal.",
"A more realistic alternative to FORCE is the family of reward-modulated Hebbian learning rules [5, 6, 7] , but plausibility comes at a price: when the feedback (reward minus expected reward) is given only after a long delay, reward-modulated Hebbian plasticity is not powerful enough to learn complex tasks.",
"Here we combine the reservoir network with a second, more structured network that stores and updates a two-dimension continuous variable as a \"bump\" in an attractor [8, 9] .",
"The activity of the attractor network acts as a dynamic working memory and serves as input to the reservoir network ( fig. 1 ).",
"Our approach is related to that of feeding an abstract oscillatory input [10] or a \"temporal backbone signal\" [11] into the reservoir in order to overcome structural weaknesses of reservoir computing that arise if large time spans need to be covered.",
"In computational experiments, we show that a dynamic working memory that serves as an input to a reservoir network facilitates reward-modulated Hebbian learning in multiple ways: it makes a biologically plausible three-factor rule as efficient as FORCE; it admits a delay in the feedback signal; and it allows a single reservoir network to learn and perform multiple tasks.",
"We showed that a dynamic working memory can facilitate learning of complex tasks with biologically plausible three-factor learning rules.",
"Our results indicate that, when combined with a bump attractor, reservoir computing with reward-modulated learning can be as efficient as FORCE [1] , a widely used but biologically unrealistic rule.",
"The proposed network relies on a limited number of trajectories in the attractor network.",
"To increase its capacity, a possible future direction would be to combine input from the attractor network with another, also input-specific, but transient input that would bring the reservoir into a different initial state.",
"In this case the attractor network would work as a time variable (as in [9] ), and the other input as the control signal (as in [1] ).",
"Apart from the biological relevance, the proposed method might be used for real-world applications of reservoir computing (e.g. wind forecasting [13] ) as it is computationally less expensive than FORCE.",
"It might also be an interesting alternative for learning in neuromorphic devices."
] | [
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.11538460850715637,
0.4727272689342499,
0.2857142686843872,
0.14814814925193787,
0.04878048226237297,
0.21739129722118378,
0.21739129722118378,
0.045454539358615875,
0.1249999925494194,
0.21276594698429108,
0.3720930218696594,
0.17241378128528595,
0.4615384638309479,
0.25,
0.20408162474632263,
0.11428570747375488,
0.23076923191547394,
0.17777776718139648,
0.07692307233810425,
0
] | B1g0QmtIIS | true | [
"We show that a working memory input to a reservoir network makes a local reward-modulated Hebbian rule perform as well as recursive least-squares (aka FORCE)"
] |
[
"Convolutional architectures have recently been shown to be competitive on many\n",
"sequence modelling tasks when compared to the de-facto standard of recurrent neural networks (RNNs) while providing computational and modelling advantages due to inherent parallelism.",
"However, currently, there remains a performance\n",
"gap to more expressive stochastic RNN variants, especially those with several layers of dependent random variables.",
"In this work, we propose stochastic temporal convolutional networks (STCNs), a novel architecture that combines the computational advantages of temporal convolutional networks (TCN) with the representational power and robustness of stochastic latent spaces.",
"In particular, we propose a hierarchy of stochastic latent variables that captures temporal dependencies at different time-scales.",
"The architecture is modular and flexible due to the decoupling of the deterministic and stochastic layers.",
"We show that the proposed architecture achieves state of the art log-likelihoods across several tasks.",
"Finally, the model is capable of predicting high-quality synthetic samples over a long-range temporal horizon in modelling of handwritten text.",
"Generative modeling of sequence data requires capturing long-term dependencies and learning of correlations between output variables at the same time-step.",
"Recurrent neural networks (RNNs) and its variants have been very successful in a vast number of problem domains which rely on sequential data.",
"Recent work in audio synthesis, language modeling and machine translation tasks BID8 BID9 BID13 has demonstrated that temporal convolutional networks (TCNs) can also achieve at least competitive performance without relying on recurrence, and hence reducing the computational cost for training.Both RNNs and TCNs model the joint probability distribution over sequences by decomposing the distribution over discrete time-steps.",
"In other words, such models are trained to predict the next step, given all previous time-steps.",
"RNNs are able to model long-term dependencies by propagating information through their deterministic hidden state, acting as an internal memory.",
"In contrast, TCNs leverage large receptive fields by stacking many dilated convolutions, allowing them to model even longer time scales up to the entire sequence length.",
"It is noteworthy that there is no explicit temporal dependency between the model outputs and hence the computations can be performed in parallel.",
"The TCN architecture also introduces a temporal hierarchy: the upper layers have access to longer input sub-sequences and learn representations at a larger time scale.",
"The local information from the lower layers is propagated through the hierarchy by means of residual and skip connections BID2 .However",
", while TCN architectures have been shown to perform similar or better than standard recurrent architectures on particular tasks BID2 , there currently remains a performance gap to more recent stochastic RNN variants BID3 BID7 BID11 BID12 BID14 BID25 . Following",
"a similar approach to stochastic RNNs, BID21 present a significant improvement in the log-likelihood when a TCN model is coupled with latent variables, albeit at the cost of limited receptive field size. The computational",
"graph of generative (left) and inference (right) models of STCN. The approximate posterior",
"q is conditioned on dt and is updated by the prior p which is conditioned on the TCN representations of the previous time-step dt−1. The random latent variables",
"at the upper layers have access to a long history while lower layers receive inputs from more recent time steps.In this work we propose a new approach for augmenting TCNs with random latent variables, that decouples deterministic and stochastic structures yet leverages the increased modeling capacity efficiently. Motivated by the simplicity",
"and computational advantages of TCNs and the robustness and performance of stochastic RNNs, we introduce stochastic temporal convolutional networks (STCN) by incorporating a hierarchy of stochastic latent variables into TCNs which enables learning of representations at many timescales. However, due to the absence",
"of an internal state in TCNs, introducing latent random variables analogously to stochastic RNNs is not feasible. Furthermore, defining conditional",
"random variables across time-steps would result in breaking the parallelism of TCNs and is hence undesirable.In STCN the latent random variables are arranged in correspondence to the temporal hierarchy of the TCN blocks, effectively distributing them over the various timescales (see FIG0 . Crucially, our hierarchical latent",
"structure is designed to be a modular add-on for any temporal convolutional network architecture. Separating the deterministic and stochastic",
"layers allows us to build STCNs without requiring modifications to the base TCN architecture, and hence retains the scalability of TCNs with respect to the receptive field. This conditioning of the latent random variables",
"via different timescales is especially effective in the case of TCNs. We show this experimentally by replacing the TCN",
"layers with stacked LSTM cells, leading to reduced performance compared to STCN.We propose two different inference networks. In the canonical configuration, samples from each",
"latent variable are passed down from layer to layer and only one sample from the lowest layer is used to condition the prediction of the output. In the second configuration, called STCN-dense, we",
"take inspiration from recent CNN architectures BID18 and utilize samples from all latent random variables via concatenation before computing the final prediction.Our contributions can thus be summarized as: 1) We present a modular and scalable approach to augment",
"temporal convolutional network models with effective stochastic latent variables. 2) We empirically show that the STCN-dense design prevents",
"the model from ignoring latent variables in the upper layers BID32 . 3) We achieve state-of-the-art log-likelihood performance,",
"measured by ELBO, on the IAM-OnDB, Deepwriting, TIMIT and the Blizzard datasets. 4) Finally we show that the quality of the synthetic samples",
"matches the significant quantitative improvements.",
"In this paper we proposed STCNs, a novel auto-regressive model, combining the computational benefits of convolutional architectures and expressiveness of hierarchical stochastic latent spaces.",
"We have shown the effectivness of the approach across several sequence modelling tasks and datasets.",
"The proposed models are trained via optimization of the ELBO objective.",
"Tighter lower bounds such as IWAE BID5 or FIVO (Maddison et al., 2017) may further improve modeling performance.",
"We leave this for future work.",
"The network architecture of the proposed model is illustrated in FIG6 .",
"We make only a small modification to the vanilla Wavenet architecture.",
"Instead of using skip connections from Wavenet blocks, we only use the latent sample zt in order to make a prediction of xt.",
"In STCN-dense configuration, zt is the concatenation of all latent variables in the hierarchy, i.e., zt = [z Output layer f (o) : For the IAM-OnDB and Deepwriting datasets we use 1D convolutions with ReLU nonlinearity.",
"We stack 5 of these layers with 256 filters and filter size",
"1. DISPLAYFORM0 For TIMIT and Blizzard datasets Wavenet blocks in the output layer perform significantly better.",
"We stack 5 Wavenet blocks with dilation size",
"1. For each convolution operation in the block we use 256 filters.",
"The filter size of the dilated convolution is set to",
"2. The STCN-dense-large model is constructed by using 512 filters instead of 256."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.07999999821186066,
0.2222222238779068,
0,
0.2666666507720947,
0.4390243887901306,
0.32258063554763794,
0.2142857164144516,
0.2142857164144516,
0.1818181723356247,
0.1818181723356247,
0.05405404791235924,
0.12121211737394333,
0.06666666269302368,
0,
0.05128204822540283,
0.11428570747375488,
0.10526315122842789,
0.11764705181121826,
0.07843136787414551,
0.2666666507720947,
0.07692307233810425,
0.2222222238779068,
0.12903225421905518,
0.375,
0.23529411852359772,
0.19230769574642181,
0.24242423474788666,
0.2380952388048172,
0.1875,
0.15789473056793213,
0.14999999105930328,
0.19607843458652496,
0.5,
0.2666666507720947,
0.11764705181121826,
0.10526315122842789,
0.4324324131011963,
0.2142857164144516,
0.1599999964237213,
0,
0.09999999403953552,
0.1599999964237213,
0.1599999964237213,
0.1666666567325592,
0.20408162474632263,
0.23076923191547394,
0.06666666269302368,
0.1818181723356247,
0.07692307233810425,
0.1666666567325592,
0.07407406717538834
] | HkzSQhCcK7 | true | [
"We combine the computational advantages of temporal convolutional architectures with the expressiveness of stochastic latent variables."
] |
[
"The weak contraction mapping is a self mapping that the range is always a subset of the domain, which admits a unique fixed-point.",
"The iteration of weak contraction mapping is a Cauchy sequence that yields the unique fixed-point.",
"A gradient-free optimization method as an application of weak contraction mapping is proposed to achieve global minimum convergence.",
"The optimization method is robust to local minima and initial point position.",
"Many gradient-based optimization methods, such as gradient descent method, Newton's method and so on, face great challenges in finding the global minimum point of a function.",
"As is known, searching for the global minimum of a function with many local minima is difficult.",
"In principle, the information from the derivative of a single point is not sufficient for us to know the global geometry property of the function.",
"For a successful minimum point convergence, the initial point is required to be sufficiently good and the derivative calculation need to be accurate enough.",
"In the gradientbased methods, the domain of searching area will be divided into several subsets with regards to local minima.",
"And eventually it will converge to one local minimum depends on where the initial point locates at.Let (X,d) be a metric space and let T:X → X be a mapping.",
"For the inequality that, d(T",
"(x), T",
"(y)) ≤ qd(x,",
"y), ∀x, y ∈ X.(1)if",
"q ∈ [0, 1), T is called contractive; if q ∈ [0, 1], T is called nonexpansive; if q < ∞, T is called Lipschitz continuous(1; 2). The gradient-based",
"methods are usually nonexpansive mapping the solution exists but is not unique for general situation. For instance, if the",
"gradient descent method is written as a mapping T and the objective function has many local minima, then there are many fixed points accordingly. From the perspective",
"of spectra of bounded operator, for a nonexpansive mapping any minima of the objective function is an eigenvector of eigenvalue equation T (x) = λx ,in which λ",
"= 1. In the optimization",
"problem, nonexpansive mapping sometimes works but their disadvantages are obvious. Because both the existence",
"and uniqueness of solution are important so that the contractive mapping is more favored than the nonexpansive mapping(3; 4).Banach fixed-point theorem",
"is a very powerful method to solve linear or nonlinear system. But for optimization problems",
", the condition of contraction mapping T : X → X that d(T (x), T (y)) ≤ qd(x, y) is usually",
"too strict",
"and luxury.",
"In the paper, we are trying to extend",
"the Banach fixedpoint theorem to an applicable method for optimization problem, which is called weak contraction mapping.In short, weak contraction mapping is a self mapping that always map to the subset of its domain. It is proven that weak contraction mapping",
"admits a fixed-point in the following section. How to apply the weak contraction mapping",
"to solve an optimization problem? Geometrically, given a point, we calculate",
"the height of this point and utilize a hyperplane at the same height to cut the objective function, where the intersection between the hyperplane and the objective function will form a contour or contours. And then map to a point insider a contour,",
"which the range of this mapping is always the subset of its domain. The iteration of the weak contraction mapping",
"yields a fixed-point, which coincides with the global minimum of the objective function.",
"The weak contraction mapping is a self mapping that always map to a subset of domain.",
"Intriguingly, as an extension of Banach fixed-point theorem, the iteration of weak contraction mapping is a Cauchy and yields a unique fixed-point, which fit perfectly with the task of optimization.",
"The global minimum convergence regardless of initial point position and local minima is very significant strength for optimization algorithm.",
"We hope that the advanced optimization with the development of the weak contraction mapping can contribute to empower the modern calculation."
] | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.07407406717538834,
0.0833333283662796,
0.4444444477558136,
0.2857142686843872,
0.11428570747375488,
0.1599999964237213,
0.13333332538604736,
0.06896550953388214,
0,
0,
0,
0,
0,
0.07407406717538834,
0.14814814925193787,
0.11764705181121826,
0.11764705181121826,
0.1428571343421936,
0,
0.06666666269302368,
0.3333333432674408,
0.0714285671710968,
0,
0.20512820780277252,
0,
0.09999999403953552,
0,
0.0833333283662796,
0,
0.08695651590824127,
0.11428570747375488,
0.2142857164144516,
0.07407406717538834
] | SygJSiA5YQ | true | [
"A gradient-free method is proposed for non-convex optimization problem "
] |
[
"Over the last decade, two competing control strategies have emerged for solving complex control tasks with high efficacy.",
"Model-based control algorithms, such as model-predictive control (MPC) and trajectory optimization, peer into the gradients of underlying system dynamics in order to solve control tasks with high sample efficiency. ",
"However, like all gradient-based numerical optimization methods,model-based control methods are sensitive to intializations and are prone to becoming trapped in local minima.",
"Deep reinforcement learning (DRL), on the other hand, can somewhat alleviate these issues by exploring the solution space through sampling — at the expense of computational cost.",
"In this paper, we present a hybrid method that combines the best aspects of gradient-based methods and DRL.",
"We base our algorithm on the deep deterministic policy gradients (DDPG) algorithm and propose a simple modification that uses true gradients from a differentiable physical simulator to increase the convergence rate of both the actor and the critic. ",
"We demonstrate our algorithm on seven 2D robot control tasks, with the most complex one being a differentiable half cheetah with hard contact constraints.",
"Empirical results show that our method boosts the performance of DDPGwithout sacrificing its robustness to local minima.",
"In recent years, deep reinforcement learning (DRL) has emerged as a flexible and robust means of teaching simulated robots to complete complex tasks, from manipulation (Kumar et al., 2016) and locomotion (Haarnoja et al., 2018b) , to navigating complex terrain (Peng et al., 2016) .",
"Compared with more direct optimization methods such as gradient descent or second-order optimization, DRL naturally incorporates exploration into its planning, allowing it to learn generalizable policies and robust state value estimations across simulated environments.",
"Perhaps the most salient reason for DRL's surge in popularity is its ability to operate on black-box simulators where the underlying dynamics model is not available.",
"DRL's model-free, Monte-Carlo-style methods have made it applicable to a wide range of physical (and non-physical) simulation environments, including those where a smooth, well-behaved dynamical model does not exist.",
"This comes at two striking costs.",
"First, such sampling procedures may be inefficient, requiring a large number of samples for adequate learning.",
"Second, in order to be generally applicable to any model-free environment, underlying dynamical gradients are not used, even if they are available.",
"In other words, valuable information that could greatly aid control tasks is not taken advantage of in these schemes.",
"When an accurate model of robot dynamics is given, model-based methods such as model-predictive control (MPC) or trajectory optimization have historically been employed.",
"These methods can solve tasks with higher sample efficiency than model-free DRL algorithms.",
"Models provide access to ground-truth, analytical gradients of robot physics without the need for sample-based estimation.",
"However, such methods don't incorporate exploration or learning into their procedures, and are especially prone to becoming trapped in poor local minima.",
"While there has been a recent surge in fast and accurate differentiable simulators not previously available, most applications for control have relied on established local methods such as MPC (de Avila Belbute-Peres et al., 2018) , gradient descent (Degrave et al., 2019) , or trajectory optimization (Hu et al., 2019) to solve control tasks.",
"An ideal algorithm would exploit the efficiency of model-based methods while maintaining DRL's relative robustness to poor local minima.",
"In this paper, we propose an actor-critic algorithm that leverages differentiable simulation and combines the benefits of model-based methods and DRL.",
"We build our method upon standard actor-critic DRL algorithms and use true model gradients in order to improve the efficacy of learned critic models.",
"Our main insights are twofold: First, gradients of critics play an important role in certain DRL algorithms, but optimization of these critics' gradients has not been explored by previous work.",
"Second, the emergence of differentiable simulators enables computation of advantage estimation (AE) gradients with little additional computational overhead.",
"Based on these observations, we present an algorithm that uses AE gradients in order to co-learn critic value and gradient estimation, demonstrably improving convergence of both actor and critic.",
"In this paper, we contribute the following: 1) An efficient hybrid actor-critic method which builds upon deep deterministic policy gradients (DDPG, (Lillicrap et al., 2015) ), using gradient information in order to improve convergence in a simple way.",
"2) A principled mathematical framework for fitting critic gradients, providing a roadmap for applying our method to any deterministic policy gradient method, and",
"3) Demonstrations of our algorithm on seven control tasks, ranging from contact-free classic control problems to complex tasks with accurate, hard contact, such as the HalfCheetah, along with comparisons to both model-based control and DRL baselines.",
"Immediately obvious from our results is the fact that DDPG and our algorithm are both competitive on all problems presented, regardless of problem difficulty.",
"While MPC dominates on the simplest control tasks, it struggles on the more complicated tasks with hard contacts, and DRL approaches dominate.",
"This underscores our thesis -that DRL's exploration properties make it better suited than model-based approaches for problems with a myriad of poor local minima.",
"More naïve model-based approaches, such as GD, can succeed when they begin very close to a local minimum -as is the case with CartPole -but show slow or no improvement in dynamical environments with nontrivial control schemes.",
"This is especially apparent in problems where the optimal strategy requires robots to make locally suboptimal motions in order to build up momentum to be used to escape local minima later.",
"Examples include Pendulum, CartPoleSwingUp, and MountainCar, where the robot must learn to build up momentum through local oscillations before attempting to reach a goal.",
"GD further fails on complex physical control tasks like HalfCheetah, where certain configurations, such as toppling, can be unrecoverable.",
"Finally, we note that although MPC is able to tractably find a good solution for the RollingDie problem, the complex nonlinearities in the contact-heavy dynamics require long planning horizons (100 steps, chosen by running hyperparameter search) in order to find a good trajectory.",
"Thus, although MPC eventually converges to a control sequence with very high reward, it requires abundant computation to converge.",
"DRL-based control approaches are able to find success on all problems, and are especially competitive on those with contact.",
"Compared with DDPG, our hybrid algorithm universally converges faster or to higher returns.",
"The rolling die example presents a particularly interesting contrast.",
"As the die is randomly initialized, it is more valuable to aim for higher return history rather than return mean due to the large variance in the initial state distribution.",
"It can be seen from Figure 4 that our method managed to reach a higher average return history over 16 runs.",
"Manually visualizing the controller from the best run in our method revealed that it discovered a novel two-bounce strategy for challenging initial poses (Figure 3) , while most of the strategies in DDPG typically leveraged one bounce only.",
"There are a few other reasons why our algorithm may be considered superior to MPC.",
"First, our algorithm is applicable to a wider range of reward structures.",
"While we had planned to demonstrate MPC on another classic control problem, namely the Acrobot, MPC is inapplicable to this robot's reward structure.",
"The Acrobot's rewards penalize it with −1 point for every second it has not reached its target pose.",
"MPC requires a differentiable reward, and this reward structure is not.",
"Thus, our Hybrid DDPG algorithm applies to a wider range of problems than MPC.",
"Second, closedloop network controllers are naturally more robust than MPC.",
"Even as noise is added or initial conditions and tasks change, learned controllers can generalize.",
"While MPC can recover from these scenarios, it requires expensive replanning.",
"In these scenarios, MPC becomes especially unattractive to deploy on physical hardware, where power and computational resource constraints can render MPC inapplicable to realtime applications.",
"Figure 3 : Visualization of the twobounce strategy discovered by our algorithm.",
"Solid red box: initial die.",
"Dash cyan curve: trajectory of the die.",
"Blue box: the target zone.",
"Light red boxes: states of the die at collisions and about to enter the target.",
"In this paper, we have presented an actor-critic algorithm that uses AE gradients to co-learn critic value and gradient estimation and improve convergence of both actor and critic.",
"Our algorithm leverages differentiable simulation and combines the benefits of model-based methods and DRL.",
"We designed seven 2D control tasks with three different contact scenarios and compared our method with several state-of-the-art baseline algorithms.",
"We demonstrated our method boosts the performance of DDPG and is much less sensitive to local minima than model-based approaches.",
"In the future, it would be interesting to see if our mathematical framework can be applied to improve the effectiveness of value functions used in other DRL algorithms.",
"A APPENDIX"
] | [
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1621621549129486,
0.2083333283662796,
0.09999999403953552,
0.08888888359069824,
0.2631579041481018,
0.38461539149284363,
0.23255813121795654,
0.3243243098258972,
0.1428571343421936,
0.03703703358769417,
0.1818181723356247,
0.1249999925494194,
0,
0.1666666567325592,
0.09999999403953552,
0.1538461446762085,
0.09302324801683426,
0,
0.277777761220932,
0.0476190410554409,
0.17910447716712952,
0.1538461446762085,
0.29999998211860657,
0.3181818127632141,
0.0833333283662796,
0.2702702581882477,
0.1702127605676651,
0.20689654350280762,
0.1904761791229248,
0.19230768084526062,
0.1860465109348297,
0.09999999403953552,
0.13636362552642822,
0.1428571343421936,
0.08510638028383255,
0.1395348757505417,
0.051282044500112534,
0.178571417927742,
0.15789473056793213,
0.10810810327529907,
0.060606054961681366,
0.06896550953388214,
0.13333332538604736,
0.24390242993831635,
0.290909081697464,
0.11428570747375488,
0.1875,
0.1463414579629898,
0.05405404791235924,
0.12903225421905518,
0.1764705777168274,
0,
0,
0.06451612710952759,
0.04651162400841713,
0.1249999925494194,
0,
0.14814814925193787,
0.07999999821186066,
0.1764705777168274,
0.2222222238779068,
0.24242423474788666,
0.1538461446762085,
0.29999998211860657,
0.17777776718139648
] | rkxZCJrtwS | true | [
"We propose a novel method that leverages the gradients from differentiable simulators to improve the performance of RL for robotics control"
] |
[
"We propose Bayesian hypernetworks: a framework for approximate Bayesian inference in neural networks.",
"A Bayesian hypernetwork, h, is a neural network which learns to transform a simple noise distribution, p(e) = N(0,I), to a distribution q(t) := q(h(e)) over the parameters t of another neural network (the ``primary network).",
"We train q with variational inference, using an invertible h to enable efficient estimation of the variational lower bound on the posterior p(t | D) via sampling.",
"In contrast to most methods for Bayesian deep learning, Bayesian hypernets can represent a complex multimodal approximate posterior with correlations between parameters, while enabling cheap iid sampling of q(t). ",
"In practice, Bayesian hypernets provide a better defense against adversarial examples than dropout, and also exhibit competitive performance on a suite of tasks which evaluate model uncertainty, including regularization, active learning, and anomaly detection.\n",
"Simple and powerful techniques for Bayesian inference of deep neural networks' (DNNs) parameters have the potential to dramatically increase the scope of applications for deep learning techniques.",
"In real-world applications, unanticipated mistakes may be costly and dangerous, whereas anticipating mistakes allows an agent to seek human guidance (as in active learning), engage safe default behavior (such as shutting down), or use a \"reject option\" in a classification context.",
"DNNs are typically trained to find the single most likely value of the parameters (the \"MAP estimate\"), but this approach neglects uncertainty about which parameters are the best (\"parameter uncertainty\"), which may translate into higher predictive uncertainty when likely parameter values yield highly confident but contradictory predictions.",
"Conversely, Bayesian DNNs model the full posterior distribution of a model's parameters given the data, and thus provides better calibrated confidence estimates, with corresponding safety benefits BID9 BID0 .",
"1 Maintaining a distribution over parameters is also one of the most effective defenses against adversarial attacks BID4 .Techniques",
"for Bayesian DNNs are an active research topic. The most popular",
"approach is variational inference BID2 BID8 , which typically restricts the variational posterior to a simple family of distributions, for instance a factorial Gaussian BID2 BID16 . Unfortunately, from",
"a safety perspective, variational approximations tend to underestimate uncertainty, by heavily penalizing approximate distributions which place mass in regions where the true posterior has low density. This problem can be",
"exacerbated by using a restricted family of posterior distribution; for instance a unimodal approximate posterior will generally only capture a single mode of the true posterior. With this in mind,",
"we propose learning an extremely flexible and powerful posterior, parametrized by a DNN h, which we refer to as a Bayesian hypernetwork in reference to BID17 .A Bayesian hypernetwork",
"(BHN) takes random noise ∼ N (0, I) as input and outputs a sample from the approximate posterior q(θ) for another DNN of interest (the \"primary network\"). The key insight for building",
"such a model is the use of an invertible hypernet, which enables Monte Carlo estimation of the entropy term − logq(θ) in the variational inference training objective.We begin the paper by reviewing previous work on Bayesian DNNs, and explaining the necessary components of our approach (Section 2). Then we explain how to compose",
"these techniques to yield Bayesian hypernets, as well as design choices which make training BHNs efficient, stable and robust (Section 3). Finally, we present experiments",
"which validate the expressivity of BHNs, and demonstrate their competitive performance across several tasks (Section 4).",
"We introduce Bayesian hypernets (BHNs), a new method for variational Bayesian deep learning which uses an invertible hypernetwork as a generative model of parameters.",
"BHNs feature efficient we found the BALD values our implementation computes provide a better-than-random acquisition function (compare the blue line in the top and bottom plots).",
"10 Li & Gal (2017) and BID28 used 10 and 1 model samples, respectively, to estimate gradient.",
"We report the result with 1 sample; results with more samples are given in the appendix.",
"when more perturbation is added to the data (left), uncertainty measures also increase (first row).",
"In particular, the BALD and Mean STD scores, which measure epistemic uncertainty, are strongly increasing for BHNs, but not for dropout.",
"The second row and third row plots show results for adversary detection and error detection (respectively) in terms of the AUC of ROC (y-axis) with increasing perturbation along the x-axis.",
"Gradient direction is estimated with one Monte Carlo sample of the weights/dropout mask.training and sampling, and can express complicated multimodal distributions, thereby addressing issues of overconfidence present in simpler variational approximations.",
"We present a method of parametrizing BHNs which allows them to scale successfully to real world tasks, and show that BHNs can offer significant benefits over simpler methods for Bayesian deep learning.",
"Future work could explore other methods of parametrizing BHNs, for instance using the same hypernet to output different subsets of the primary net parameters.A"
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
1,
0.1395348757505417,
0.05405404791235924,
0.19512194395065308,
0.08888888359069824,
0.23529411852359772,
0.07999999821186066,
0,
0.10256409645080566,
0.06451612710952759,
0.17391303181648254,
0.1621621549129486,
0.1428571343421936,
0.21621620655059814,
0.2222222238779068,
0.1395348757505417,
0.1666666567325592,
0.0555555522441864,
0,
0.23529411852359772,
0.1111111044883728,
0,
0.1538461446762085,
0,
0.0624999962747097,
0.10810810327529907,
0.0476190447807312,
0.1904761791229248,
0.05714285373687744
] | S1fcY-Z0- | true | [
"We propose Bayesian hypernetworks: a framework for approximate Bayesian inference in neural networks."
] |
[
"Evolutionary-based optimization approaches have recently shown promising results in domains such as Atari and robot locomotion but less so in solving 3D tasks directly from pixels.",
"This paper presents a method called Deep Innovation Protection (DIP) that allows training complex world models end-to-end for such 3D environments.",
"The main idea behind the approach is to employ multiobjective optimization to temporally reduce the selection pressure on specific components in a world model, allowing other components to adapt.",
"We investigate the emergent representations of these evolved networks, which learn a model of the world without the need for a specific forward-prediction loss.",
"The ability of the brain to model the world arose from the process of evolution.",
"It evolved because it helped organisms to survive and strive in their particular environments and not because such forward prediction was explicitly optimized for.",
"In contrast to the emergent neural representations in nature, current world model approaches are often directly rewarded for their ability to predict future states of the environment (Schmidhuber, 1990; Ha & Schmidhuber, 2018; Hafner et al., 2018; Wayne et al., 2018) .",
"While it is undoubtedly useful to be able to explicitly encourage a model to predict what will happen next, in this paper we are interested in what type of representations can emerge from the less directed process of artificial evolution and what ingredients might be necessary to encourage the emergence of such predictive abilities.",
"In particular, we are building on the recently introduced world model architecture introduced by Ha & Schmidhuber (2018) .",
"This agent model contains three different components: (1) a visual module, mapping high-dimensional inputs to a lower-dimensional representative code, (2) an LSTM-based memory component, and (3) a controller component that takes input from the visual and memory module to determine the agent's next action.",
"In the original approach, each component of the world model was trained separately and to perform a different and specialised function, such as predicting the future.",
"While Risi & Stanley (2019) demonstrated that these models can also be trained end-to-end through a population-based genetic algorithm (GA) that exclusively optimizes for final performance, the approach was only applied to the simpler 2D car racing domain and it is an open question how such an approach will scale to the more complex 3D VizDoom task that first validated the effectiveness of the world model approach.",
"Here we show that a simple genetic algorithm fails to find a solution to solving the VizDoom task and ask the question what are the missing ingredients necessary to encourage the evolution of more powerful world models?",
"The main insight in this paper is that we can view the optimization of a heterogeneous neural network (such as world models) as a co-evolving system of multiple different sub-systems.",
"The other important insight is that representational innovations discovered in one subsystem (e.g. the visual system learns to track moving objects) require the other sub-systems to adapt.",
"In fact, if the other systems are not given time to adapt, such innovation will likely initially have an adversarial effect on overall performance!",
"In order to optimize such co-evolving heterogeneous neural systems, we propose to reduce the selection pressure on individuals whose visual or memory system was recently changed, given the controller component time to readapt.",
"This Deep Innovation Protection (DIP) approach is inspired by the recently introduced morphological innovation protection method of Cheney et al. (2018) , which allows for the scalable co-optimization of controllers and robot body plans.",
"Our approach is able to find a solution to the VizDoom: Take Cover task, which was first solved by the original world model approach (Ha & Schmidhuber, 2018) .",
"More interestingly, the emergent world models learned to predict events important for the survival of the agent, even though they were not explicitly trained to predict the future.",
"Additionally, our investigates into the training process show that DIP allows evolution to carefully orchestrate the training of the components in these heterogeneous architectures.",
"We hope this work inspires more research that focuses on investigating representations emerging from approaches that do not necessarily only rely on gradient-based optimization.",
"The paper demonstrated that a world model representation for a 3D task can emerge under the right circumstances without being explicitly rewarded for it.",
"To encourage this emergence, we introduced deep innovation protection, an approach that can dynamically reduce the selection pressure for different components in a heterogeneous neural architecture.",
"The main insight is that when components upstream in the neural network change, such as the visual or memory system in a world model, components downstream need time to adapt to changes in those learned representations.",
"The neural model learned to represent situations that require similar actions with similar latent and hidden codes ( Fig. 5 and 7) .",
"Additionally, without a specific forward-prediction loss, the agent learned to predict \"useful\" events that are necessary for its survival (e.g. predicting when the agent is in the line-of-fire of a fireball).",
"In the future it will be interesting to compare the differences and similarities of emergent representations and learning dynamics resulting from evolutionary and gradient descent-based optimization approaches (Raghu et al., 2017) .",
"Interestingly, without the need for a variety of specialized learning methods employed in the original world model paper, a simple genetic algorithm augmented with DIP can not only solve the simpler 2D car racing domain (Risi & Stanley, 2019) , but also more complex 3D domains such as VizDoom.",
"That the average score across 100 random rollouts is lower when compared to the one reported in the original world model paper (824 compared to 1092) is maybe not surprising; if random rollouts are available, training each component separately can results in a higher performance.",
"However, in more complicated domains, in which random rollouts might not be able to provide all relevant experiences (e.g. a random policy might never reach a certain level), the proposed DIP approach could become increasingly relevant.",
"An exciting future direction is to combine the end-to-end training regimen of DIP with the ability of training inside the world model itself (Ha & Schmidhuber, 2018) .",
"However, because the evolved representation is not directly optimized to predict the next time step and only learns to predict future events that are useful for the agent's survival, it is an interesting open question how such a different version of a hallucinate environment could be used for training.",
"A natural extension to this work is to evolve the neural architectures in addition to the weights of the network.",
"Searching for neural architectures in RL has previously only been applied to smaller networks (Risi & Stanley, 2012; Stanley & Miikkulainen, 2002; Gaier & Ha, 2019; Risi & Togelius, 2017; Floreano et al., 2008) but could potentially now be scaled to more complex tasks.",
"While our innovation protection approach is based on evolution, ideas presented here could also be incorporated in gradient descent-based approaches that optimize neural systems with multiple interacting components end-to-end."
] | [
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.10810810327529907,
0.6060606241226196,
0.05405404791235924,
0.1249999925494194,
0.0833333283662796,
0.05882352590560913,
0.08163265138864517,
0,
0.06896550953388214,
0,
0.05714285373687744,
0.17391304671764374,
0.04651162400841713,
0.05128204822540283,
0,
0,
0,
0.22727271914482117,
0.05405404791235924,
0.17142856121063232,
0.060606054961681366,
0,
0.1764705777168274,
0.052631575614213943,
0.04651162400841713,
0,
0.04999999701976776,
0,
0.13793103396892548,
0.04081632196903229,
0,
0.11428570747375488,
0.03703703358769417,
0,
0.11320754140615463,
0.04878048226237297
] | SygLu0VtPH | true | [
"Deep Innovation Protection allows evolving complex world models end-to-end for 3D tasks."
] |
[
"Adversarial training is one of the most popular ways to learn robust models but is usually attack-dependent and time costly.",
"In this paper, we propose the MACER algorithm, which learns robust models without using adversarial training but performs better than all existing provable l2-defenses.",
"Recent work shows that randomized smoothing can be used to provide certified l2 radius to smoothed classifiers, and our algorithm trains provably robust smoothed classifiers via MAximizing the CErtified Radius (MACER).",
"The attack-free characteristic makes MACER faster to train and easier to optimize.",
"In our experiments, we show that our method can be applied to modern deep neural networks on a wide range of datasets, including Cifar-10, ImageNet, MNIST, and SVHN.",
"For all tasks, MACER spends less training time than state-of-the-art adversarial training algorithms, and the learned models achieve larger average certified radius.",
"Modern neural network classifiers are able to achieve very high accuracy on image classification tasks but are sensitive to small, adversarially chosen perturbations to the inputs (Szegedy et al., 2013; Biggio et al., 2013) .",
"Given an image x that is correctly classified by a neural network, a malicious attacker may find a small adversarial perturbation δ such that the perturbed image x + δ, though visually indistinguishable from the original image, is assigned to a wrong class with high confidence by the network.",
"Such vulnerability creates security concerns in many real-world applications.",
"Researchers have proposed a variety of defense methods to improve the robustness of neural networks.",
"Most of the existing defenses are based on adversarial training (Szegedy et al., 2013; Madry et al., 2017; Goodfellow et al., 2015; Huang et al., 2015; Athalye et al., 2018) .",
"During training, these methods first learn on-the-fly adversarial examples of the inputs with multiple attack iterations and then update model parameters using these perturbed samples together with the original labels.",
"However, such approaches depend on a particular (class of) attack method.",
"It cannot be formally guaranteed whether the resulting model is also robust against other attacks.",
"Moreover, attack iterations are usually quite expensive.",
"As a result, adversarial training runs very slowly.",
"Another line of algorithms trains robust models by maximizing the certified radius provided by robust certification methods (Weng et al., 2018; Gowal et al., 2018; Zhang et al., 2019c) .",
"Using linear or convex relaxations of fully connected ReLU networks, a robust certification method computes a \"safe radius\" r for a classifier at a given input such that at any point within the neighboring radius-r ball of the input, the classifier is guaranteed to have unchanged predictions.",
"However, the certification methods are usually computationally expensive and can only handle shallow neural networks with ReLU activations, so these training algorithms have troubles in scaling to modern networks.",
"In this work, we propose an attack-free and scalable method to train robust deep neural networks.",
"We mainly leverage the recent randomized smoothing technique (Cohen et al., 2019) .",
"A randomized smoothed classifier g for an arbitrary classifier f is defined as g(x) = E η f (x + η), in which η ∼ N (0, σ 2 I).",
"While Cohen et al. (2019) derived how to analytically compute the certified radius of the randomly smoothed classifier g, they did not show how to maximize that radius to make the classifier g robust.",
"Salman et al. (2019) proposed SmoothAdv to improve the robustness of g, but it still relies on the expensive attack iterations.",
"Instead of adversarial training, we propose to learn robust models by directly taking the certified radius into the objective.",
"We outline a few challenging desiderata any practical instantiation of this idea would however have to satisfy, and provide approaches to address each of these in turn.",
"A discussion of these desiderata, as well as a detailed implementation of our approach is provided in Section 4.",
"And as we show both theoretically and empirically, our method is numerically stable and accounts for both classification accuracy and robustness.",
"Our contributions are summarized as follows:",
"• We propose an attack-free and scalable robust training algorithm by MAximizing the CErtified Radius (MACER).",
"MACER has the following advantages compared to previous works: -Different from adversarial training, we train robust models by directly maximizing the certified radius without specifying any attack strategies, and the learned model can achieve provable robustness against any possible attack in the certified region.",
"Additionally, by avoiding time-consuming attack iterations, our proposed algorithm runs much faster than adversarial training.",
"-Different from other methods that maximize the certified radius but are not scalable to deep neural networks, our method can be applied to architectures of any size.",
"This makes our algorithm more practical in real scenarios.",
"• We empirically evaluate our proposed method through extensive experiments on Cifar-10, ImageNet, MNIST, and SVHN.",
"On all tasks, MACER achieves better performance than state-of-the-art algorithms.",
"MACER is also exceptionally fast.",
"For example, on ImageNet, MACER uses 39% less training time than adversarial training but still performs better.",
"In this work we propose MACER, an attack-free and scalable robust training method via directly maximizing the certified radius of a smoothed classifier.",
"We discuss the desiderata such an algorithm would have to satisfy, and provide an approach to each of them.",
"According to our extensive experiments, MACER performs better than previous provable l 2 -defenses and trains faster.",
"Our strong empirical results suggest that adversarial training is not a must for robust training, and defense based on certification is a promising direction for future research.",
"Moreover, several recent papers (Carmon et al., 2019; Zhai et al., 2019; suggest that using unlabeled data helps improve adversarially robust generalization.",
"We will also extend MACER to the semisupervised setting."
] | [
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.2083333283662796,
0.5283018946647644,
0.24137930572032928,
0,
0.0714285671710968,
0.3199999928474426,
0.06779660284519196,
0.14705881476402283,
0,
0.1395348757505417,
0.15686273574829102,
0.0714285671710968,
0.04999999701976776,
0.13636362552642822,
0,
0.1621621549129486,
0.307692289352417,
0.11764705181121826,
0.07017543166875839,
0.08888888359069824,
0.0952380895614624,
0,
0.2142857164144516,
0.08163265138864517,
0.3404255211353302,
0.07407406717538834,
0.043478257954120636,
0,
0,
0.31111109256744385,
0.2686567008495331,
0.22727271914482117,
0.2181818187236786,
0.052631575614213943,
0.04444443807005882,
0.1538461446762085,
0,
0.2666666507720947,
0.307692289352417,
0.1304347813129425,
0.21739129722118378,
0.2641509473323822,
0.08163265138864517,
0.10526315122842789
] | rJx1Na4Fwr | true | [
"We propose MACER: a provable defense algorithm that trains robust models by maximizing the certified radius. It does not use adversarial training but performs better than all existing provable l2-defenses."
] |
[
"Actor-critic methods solve reinforcement learning problems by updating a parameterized policy known as an actor in a direction that increases an estimate of the expected return known as a critic.",
"However, existing actor-critic methods only use values or gradients of the critic to update the policy parameter.",
"In this paper, we propose a novel actor-critic method called the guide actor-critic (GAC).",
"GAC firstly learns a guide actor that locally maximizes the critic and then it updates the policy parameter based on the guide actor by supervised learning.",
"Our main theoretical contributions are two folds.",
"First, we show that GAC updates the guide actor by performing second-order optimization in the action space where the curvature matrix is based on the Hessians of the critic.",
"Second, we show that the deterministic policy gradient method is a special case of GAC when the Hessians are ignored.",
"Through experiments, we show that our method is a promising reinforcement learning method for continuous controls.\n",
"The goal of reinforcement learning (RL) is to learn an optimal policy that lets an agent achieve the maximum cumulative rewards known as the return BID31 .",
"Reinforcement learning has been shown to be effective in solving challenging artificial intelligence tasks such as playing games BID20 and controlling robots BID6 .Reinforcement",
"learning methods can be classified into three categories: value-based, policy-based, and actor-critic methods. Value-based methods",
"learn an optimal policy by firstly learning a value function that estimates the expected return. Then, they infer an",
"optimal policy by choosing an action that maximizes the learned value function. Choosing an action",
"in this way requires solving a maximization problem which is not trivial for continuous controls. While extensions to",
"continuous controls were considered recently, they are restrictive since specific structures of the value function are assumed BID10 BID3 .On the other hand, policy-based",
"methods, also called policy search methods BID6 , learn a parameterized policy maximizing a sample approximation of the expected return without learning the value function. For instance, policy gradient methods",
"such as REIN-FORCE BID34 use gradient ascent to update the policy parameter so that the probability of observing high sample returns increases. Compared with value-based methods, policy",
"search methods are simpler and naturally applicable to continuous problems. Moreover, the sample return is an unbiased",
"estimator of the expected return and methods such as policy gradients are guaranteed to converge to a locally optimal policy under standard regularity conditions BID32 . However, sample returns usually have high",
"variance and this makes such policy search methods converge too slowly.Actor-critic methods combine the advantages of value-based and policy search methods. In these methods, the parameterized policy",
"is called an actor and the learned value-function is called a critic.The goal of these methods is to learn an actor that maximizes the critic. Since the critic is a low variance estimator",
"of the expected return, these methods often converge much faster than policy search methods. Prominent examples of these methods are actor-critic",
"BID32 BID15 , natural actor-critic BID25 , trust-region policy optimization BID27 , and asynchronous advantage actor-critic BID21 . While their approaches to learn the actor are different",
", they share a common property that they only use the value of the critic, i.e., the zero-th order information, and ignore higher-order ones such as gradients and Hessians w.r.t. actions of the critic 1 . To the best of our knowledge, the only actor-critic methods",
"that use gradients of the critic to update the actor are deterministic policy gradients (DPG) BID29 and stochastic value gradients . However, these two methods do not utilize the second-order",
"information of the critic.In this paper, we argue that the second-order information of the critic is useful and should not be ignored. A motivating example can be seen by comparing gradient ascent",
"to the Newton method: the Newton method which also uses the Hessian converges to a local optimum in a fewer iterations when compared to gradient ascent which only uses the gradient BID24 . This suggests that the Hessian of the critic can accelerate actor",
"learning which leads to higher data efficiency. However, the computational complexity of second-order methods is",
"at least quadratic in terms of the number of optimization variables. For this reason, applying second-order methods to optimize the parameterized",
"actor directly is prohibitively expensive and impractical for deep reinforcement learning which represents the actor by deep neural networks.Our contribution in this paper is a novel actor-critic method for continuous controls which we call guide actor-critic (GAC). Unlike existing methods, the actor update of GAC utilizes the secondorder information",
"of the critic in a computationally efficient manner. This is achieved by separating actor learning into two steps. In the first step, we learn",
"a non-parameterized Gaussian actor that locally maximizes the",
"critic under a Kullback-Leibler (KL) divergence constraint. Then, the Gaussian actor is used as a guide for learning a parameterized actor by supervised",
"learning. Our analysis shows that learning the mean of the Gaussian actor is equivalent to performing",
"a second-order update in the action space where the curvature matrix is given by Hessians of the critic and the step-size is controlled by the KL constraint. Furthermore, we establish a connection between GAC and DPG where we show that DPG is a special",
"case of GAC when the Hessians and KL constraint are ignored.",
"Actor-critic methods are appealing for real-world problems due to their good data efficiency and learning speed.",
"However, existing actor-critic methods do not use second-order information of the critic.",
"In this paper, we established a novel framework that distinguishes itself from existing work by utilizing Hessians of the critic for actor learning.",
"Within this framework, we proposed a practical method that uses Gauss-Newton approximation instead of the Hessians.",
"We showed through experiments that our method is promising and thus the framework should be further investigated.Our analysis showed that the proposed method is closely related to deterministic policy gradients (DPG).",
"However, DPG was also shown to be a limiting case of the stochastic policy gradients when the policy variance approaches zero BID29 .",
"It is currently unknown whether our framework has a connection to the stochastic policy gradients as well, and finding such a connection is our future work.Our main goal in this paper was to provide a new actor-critic framework and we do not claim that our method achieves the state-of-the-art performance.",
"However, its performance can still be improved in many directions.",
"For instance, we may impose a KL constraint for a parameterized actor to improve its stability, similarly to TRPO BID27 .",
"We can also apply more efficient policy evaluation methods such as Retrace BID22 ) to achieve better critic learning."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.2926829159259796,
0.3125,
0.27586206793785095,
0.21052631735801697,
0,
0.24390242993831635,
0.2857142686843872,
0.1875,
0.19999998807907104,
0.05128204822540283,
0.06896550953388214,
0.1764705777168274,
0.13793103396892548,
0.11764705181121826,
0.052631575614213943,
0.09756097197532654,
0.19512194395065308,
0.12121211737394333,
0.13333332538604736,
0.05405404791235924,
0.3589743673801422,
0.12121211737394333,
0.1538461446762085,
0.2222222238779068,
0.2926829159259796,
0.1395348757505417,
0.3913043439388275,
0.12903225421905518,
0.11428570747375488,
0.28070175647735596,
0.25641024112701416,
0.25,
0.1666666567325592,
0.2666666507720947,
0.25531914830207825,
0.14814814925193787,
0.0624999962747097,
0.2142857164144516,
0.3589743673801422,
0.375,
0.1395348757505417,
0.1666666567325592,
0.21052631735801697,
0,
0.1764705777168274,
0.11428570747375488
] | BJk59JZ0b | true | [
"This paper proposes a novel actor-critic method that uses Hessians of a critic to update an actor."
] |
[
"Deep Infomax~(DIM) is an unsupervised representation learning framework by maximizing the mutual information between the inputs and the outputs of an encoder, while probabilistic constraints are imposed on the outputs.",
"In this paper, we propose Supervised Deep InfoMax~(SDIM), which introduces supervised probabilistic constraints to the encoder outputs.",
"The supervised probabilistic constraints are equivalent to a generative classifier on high-level data representations, where class conditional log-likelihoods of samples can be evaluated.",
"Unlike other works building generative classifiers with conditional generative models, SDIMs scale on complex datasets, and can achieve comparable performance with discriminative counterparts. ",
"With SDIM, we could perform \\emph{classification with rejection}.\nInstead of always reporting a class label, SDIM only makes predictions when test samples' largest logits surpass some pre-chosen thresholds, otherwise they will be deemed as out of the data distributions, and be rejected. ",
"Our experiments show that SDIM with rejection policy can effectively reject illegal inputs including out-of-distribution samples and adversarial examples.",
"Non-robustness of neural network models emerges as a pressing concern since they are observed to be vulnerable to adversarial examples (Szegedy et al., 2013; Goodfellow et al., 2014) .",
"Many attack methods have been developed to find imperceptible perturbations to fool the target classifiers (Moosavi-Dezfooli et al., 2016; Carlini & Wagner, 2017; Brendel et al., 2017) .",
"Meanwhile, many defense schemes have also been proposed to improve the robustnesses of the target models (Goodfellow et al., 2014; Tramèr et al., 2017; Madry et al., 2017; Samangouei et al., 2018 ).",
"An important fact about these works is that they focus on discriminative classifiers, which directly model the conditional probabilities of labels given samples.",
"Another promising direction, which is almost neglected so far, is to explore robustness of generative classifiers (Ng & Jordan, 2002) .",
"A generative classifier explicitly model conditional distributions of inputs given the class labels.",
"During inference, it evaluates all the class conditional likelihoods of the test input, and outputs the class label corresponding to the maximum.",
"Conditional generative models are powerful and natural choices to model the class conditional distributions, but they suffer from two big problems: (1) it is hard to scale generative classifiers on high-dimensional tasks, like natural images classification, with comparable performance to the discriminative counterparts.",
"Though generative classifiers have shown promising results of adversarial robustness, they hardly achieve acceptable classification performance even on CIFAR10 Schott et al., 2018; Fetaya et al., 2019) .",
"(2) The behaviors of likelihood-based generative models can be counter-intuitive and brittle.",
"They may assign surprisingly higher likelihoods to out-of-distribution (OoD) samples (Nalisnick et al., 2018; Choi & Jang, 2018) .",
"Fetaya et al. (2019) discuss the issues of likelihood as a metric for density modeling, which may be the reason of non-robust classification, e.g. OoD samples detection.",
"In this paper, we propose supervised deep infomax (SDIM) by introducing supervised statistical constraints into deep infomax (DIM, Hjelm et al. (2018) ), an unsupervised learning framework by maximizing the mutual information between representations and data.",
"SDIM is trained by optimizing two objectives: (1) maximizing the mutual information (MI) between the inputs and the high-level data representations from encoder; (2) ensuring that the representations satisfy the supervised statistical constraints.",
"The supervised statistical constraints can be interpreted as a generative classifier on high-level data representations giving up the full generative process.",
"Unlike full generative models making implicit manifold assumptions, the supervised statistical constraints of SDIM serve as explicit enforcement of manifold assumption: data representations (low-dimensional) are trained to form clusters corresponding to their class labels.",
"With SDIM, we could perform classification with rejection (Nalisnick et al., 2019; Geifman & El-Yaniv, 2017) .",
"SDIMs reject illegal inputs based on off-manifold conjecture (Samangouei et al., 2018; Gu & Rigazio, 2014) , where illegal inputs, e.g. adversarial examples, lie far away from the data manifold.",
"Samples whose class conditionals are smaller than the pre-chosen thresholds will be deemed as off-manifold, and prediction requests on them will be rejected.",
"The contributions of this paper are :",
"• We propose Supervised Deep Infomax (SDIM), an end-to-end framework whose probabilistic constraints are equivalent to a generative classifier.",
"SDIMs can achieve comparable classification performance with similar discrinimative counterparts at the cost of small over-parameterization.",
"• We propose a simple but novel rejection policy based on off-manifold conjecture: SDIM outputs a class label only if the test sample's largest class conditional surpasses the prechosen class threshold, otherwise outputs rejection.",
"The choice of thresholds relies only on training set, and takes no additional computations.",
"• Experiments show that SDIM with rejection policy can effectively reject illegal inputs, including OoD samples and adversarial examples generated by a comprehensive group of adversarial attacks.",
"We introduce supervised probabilistic constraints to DIM.",
"Giving up the full generative process, SDIMs are equivalent to generative classifiers on high-level data representations.",
"Unlike full conditional generative models which achieve poor classification performance even on CIFAR10, SDIMs attain comparable performance as the discriminative counterparts on complex datasets.",
"The training of SDIM is also computationally similar to discriminative classifiers, and does not require prohibitive computational resources.",
"Our proposed rejection policy based on off-manifold conjecture, a built-in property of SDIM, can effectively reject illegal inputs including OoD samples and adversarial examples.",
"We demonstrate that likelihoods modeled on high-level data representations, rather than raw pixel intensities, are more robust on downstream tasks without the requirement of generating real samples."
] | [
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.13636362552642822,
0.0555555522441864,
0.1904761791229248,
0.3414634168148041,
0.03333332762122154,
0.4736841917037964,
0.13333332538604736,
0.09090908616781235,
0.04444443807005882,
0.0952380895614624,
0.15789473056793213,
0.1249999925494194,
0.10810810327529907,
0.21052631735801697,
0.17777776718139648,
0.12903225421905518,
0.15789473056793213,
0.04444443807005882,
0.039215680211782455,
0.08510638028383255,
0.10256409645080566,
0.11999999731779099,
0,
0.20408162474632263,
0.09999999403953552,
0,
0.10526315122842789,
0,
0.04255318641662598,
0.12121211737394333,
0.31111109256744385,
0.07692307233810425,
0.23529411852359772,
0.1463414579629898,
0.10810810327529907,
0.41860464215278625,
0.08888888359069824
] | rkg98yBFDr | true | [
"scale generative classifiers on complex datasets, and evaluate their effectiveness to reject illegal inputs including out-of-distribution samples and adversarial examples."
] |
[
"Abstract In this work, we describe a set of rules for the design and initialization of well-conditioned neural networks, guided by the goal of naturally balancing the diagonal blocks of the Hessian at the start of training.",
"We show how our measure of conditioning of a block relates to another natural measure of conditioning, the ratio of weight gradients to the weights.",
"We prove that for a ReLU-based deep multilayer perceptron, a simple initialization scheme using the geometric mean of the fan-in and fan-out satisfies our scaling rule.",
"For more sophisticated architectures, we show how our scaling principle can be used to guide design choices to produce well-conditioned neural networks, reducing guess-work.",
"The design of neural networks is often considered a black-art, driven by trial and error rather than foundational principles.",
"This is exemplified by the success of recent architecture random-search techniques (Zoph and Le, 2016; Li and Talwalkar, 2019) , which take the extreme of applying no human guidance at all.",
"Although as a field we are far from fully understanding the nature of learning and generalization in neural networks, this does not mean that we should proceed blindly.",
"In this work we define a scaling quantity γ l for each layer l that approximates the average squared singular value of the corresponding diagonal block of the Hessian for layer l.",
"This quantity is easy to compute from the (non-central) second moments of the forward-propagated values and the (non-central) second moments of the backward-propagated gradients.",
"We argue that networks that have constant γ l are better conditioned than those that do not, and we analyze how common layer types affect this quantity.",
"We call networks that obey this rule preconditioned neural networks, in analogy to preconditioning of linear systems.",
"As an example of some of the possible applications of our theory, we:",
"• Propose a principled weight initialization scheme that can often provide an improvement over existing schemes; • Show which common layer types automatically result in well-conditioned networks;",
"• Show how to improve the conditioning of common structures such as bottlenecked residual blocks by the addition of fixed scaling constants to the network (Detailed in Appendix E).",
"Although not a panacea, by using the scaling principle we have introduced, neural networks can be designed with a reasonable expectation that they will be optimizable by stochastic gradient methods, minimizing the amount of guess-and-check neural network design.",
"As a consequence of our scaling principle, we have derived an initialization scheme that automatically preconditions common network architectures.",
"Most developments in neural network theory attempt to explain the success of existing techniques post-hoc.",
"Instead, we show the power of the scaling law approach by deriving a new initialization technique from theory directly."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0
] | [
0.25,
0.06666666269302368,
0.2857142686843872,
0.11764705181121826,
0.19999998807907104,
0.10256409645080566,
0.15789473056793213,
0.1666666567325592,
0.1428571343421936,
0.0555555522441864,
0.1428571343421936,
0.09090908616781235,
0.054054051637649536,
0.1666666567325592,
0.1818181723356247,
0.2666666507720947,
0.307692289352417,
0.27586206793785095
] | BJedt6VKPS | true | [
"A theory for initialization and scaling of ReLU neural network layers"
] |
[
"We study the problem of model extraction in natural language processing, in which an adversary with only query access to a victim model attempts to reconstruct a local copy of that model.",
"Assuming that both the adversary and victim model fine-tune a large pretrained language model such as BERT (Devlin et al., 2019), we show that the adversary does not need any real training data to successfully mount the attack.",
"In fact, the attacker need not even use grammatical or semantically meaningful queries: we show that random sequences of words coupled with task-specific heuristics form effective queries for model extraction on a diverse set of NLP tasks including natural language inference and question answering.",
"Our work thus highlights an exploit only made feasible by the shift towards transfer learning methods within the NLP community: for a query budget of a few hundred dollars, an attacker can extract a model that performs only slightly worse than the victim model.",
"Finally, we study two defense strategies against model extraction—membership classification and API watermarking—which while successful against some adversaries can also be circumvented by more clever ones.",
"Machine learning models represent valuable intellectual property: the process of gathering training data, iterating over model design, and tuning hyperparameters costs considerable money and effort.",
"As such, these models are often only indirectly accessible through web APIs that allow users to query a model but not inspect its parameters.",
"Malicious users might try to sidestep the expensive model development cycle by instead locally reproducing an existing model served by such an API.",
"In these attacks, known as \"model stealing\" or \"model extraction\" (Lowd & Meek, 2005; Tramèr et al., 2016) , the adversary issues a large number of queries and uses the collected (input, output) pairs to train a local copy of the model.",
"Besides theft of intellectual property, extracted models may leak sensitive information about the training data (Tramèr et al., 2016) or be used to generate adversarial examples that evade the model served by the API (Papernot et al., 2017) .",
"With the recent success of contextualized pretrained representations for transfer learning, NLP APIs based on ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019) have become increasingly popular (Gardner et al., 2018) .",
"Contextualized pretrained representations boost performance and reduce sample complexity (Yogatama et al., 2019) , and they typically require only a shallow task-specific network-sometimes just a single layer as in BERT.",
"While these properties are advantageous for representation learning, we hypothesize that they also make model extraction easier.",
"In this paper, we demonstrate that NLP models obtained by fine-tuning a pretrained BERT model can be extracted even if the adversary does not have access to any training data used by the API provider.",
"In fact, the adversary does not even need to issue well-formed queries: our experiments show that extraction attacks are possible even with queries consisting of randomly sampled sequences of words coupled with simple task-specific heuristics (Section 3).",
"This result contrasts with prior work, which for large-scale attacks requires at minimum that the adversary can access a small amount of semantically-coherent data relevant to the task (Papernot et al., 2017; Correia-Silva et al., 2018; Orekondy et al., 2019a; Pal et al., 2019; Jagielski et al., 2019) .",
"Extraction performance improves further by using randomly-sampled sentences and paragraphs from Wikipedia (instead of random word sequences) as queries (Section 4).",
"These attacks are cheap; our most expensive attack cost around $500, estimated using rates of current API providers.",
"Step 1: Attacker randomly samples words to form queries and sends them to victim BERT model",
"Step 2: Attacker fine-tunes their own BERT on these queries using the victim outputs as labels Figure 1 : Overview of our model extraction setup for question answering.",
"1 An attacker first queries a victim BERT model, and then uses its predicted answers to fine-tune their own BERT model.",
"This process works even when passages and questions are random sequences of words as shown here.",
"We perform a fine-grained analysis of the randomly-generated queries to shed light on why they work so well for model extraction.",
"Human studies on the random queries show that despite their effectiveness in extracting good models, they are mostly nonsensical and uninterpretable, although queries closer to the original data distribution seem to work better for extraction (Section 5.1).",
"Furthermore, we discover that pretraining on the attacker's side makes model extraction easier (Section 5.2).",
"Finally, we study the efficacy of two simple defenses against extraction -membership classification (Section 6.1) and API watermarking (Section 6.2) -and find that while they work well against naïve adversaries, they fail against more clever ones.",
"We hope that our work spurs future research into stronger defenses against model extraction and, more generally, on developing a better understanding of why these models and datasets are particularly vulnerable to such attacks.",
"We study model extraction attacks against NLP APIs that serve BERT-based models.",
"These attacks are surprisingly effective at extracting good models with low query budgets, even when an attacker uses nonsensical input queries.",
"Our results show that fine-tuning large pretrained language models simplifies the process of extraction for an attacker.",
"Unfortunately, existing defenses against extraction, while effective in some scenarios, are generally inadequate, and further research is necessary to develop defenses robust in the face of adaptive adversaries who develop counter-attacks anticipating simple defenses.",
"Other interesting future directions that follow from the results in this paper include",
"1) leveraging nonsensical inputs to improve model distillation on tasks for which it is difficult to procure input data;",
"2) diagnosing dataset complexity by using query efficiency as a proxy;",
"3) further investigation of the agreement between victim models as a method to identify proximity in input distribution and its incorporation into an active learning setup for model extraction.",
"We provide a distribution of agreement between victim SQuAD models on RANDOM and WIKI queries in Figure 3 ."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.17777776718139648,
0.11320754140615463,
0.16129031777381897,
0.1428571343421936,
0.09090908616781235,
0.1395348757505417,
0.1395348757505417,
0.1538461446762085,
0.1428571343421936,
0.18518517911434174,
0.20408162474632263,
0,
0.0555555522441864,
0.1538461446762085,
0.11320754140615463,
0.10169491171836853,
0.04999999329447746,
0.05405404791235924,
0.11764705181121826,
0.1702127605676651,
0.10256409645080566,
0.05714285373687744,
0.25,
0.14814814925193787,
0.17142856121063232,
0.07692307233810425,
0.15094339847564697,
0.19354838132858276,
0.04999999329447746,
0.1111111044883728,
0.16326530277729034,
0.0624999962747097,
0.21621620655059814,
0,
0.1666666567325592,
0.1621621549129486
] | Byl5NREFDr | true | [
"Outputs of modern NLP APIs on nonsensical text provide strong signals about model internals, allowing adversaries to steal the APIs."
] |
[
"We propose SEARNN, a novel training algorithm for recurrent neural networks (RNNs) inspired by the \"learning to search\" (L2S) approach to structured prediction.",
"RNNs have been widely successful in structured prediction applications such as machine translation or parsing, and are commonly trained using maximum likelihood estimation (MLE).",
"Unfortunately, this training loss is not always an appropriate surrogate for the test error: by only maximizing the ground truth probability, it fails to exploit the wealth of information offered by structured losses.",
"Further, it introduces discrepancies between training and predicting (such as exposure bias) that may hurt test performance.",
"Instead, SEARNN leverages test-alike search space exploration to introduce global-local losses that are closer to the test error.",
"We first demonstrate improved performance over MLE on two different tasks: OCR and spelling correction.",
"Then, we propose a subsampling strategy to enable SEARNN to scale to large vocabulary sizes.",
"This allows us to validate the benefits of our approach on a machine translation task.",
"Recurrent neural networks (RNNs) have been quite successful in structured prediction applications such as machine translation BID27 , parsing BID1 or caption generation .",
"These models use the same repeated cell (or unit) to output a sequence of tokens one by one.",
"As each prediction takes into account all previous predictions, this cell learns to output the next token conditioned on the previous ones.",
"The standard training loss for RNNs is derived from maximum likelihood estimation (MLE): we consider that the cell outputs a probability distribution at each step in the sequence, and we seek to maximize the probability of the ground truth.Unfortunately, this training loss is not a particularly close surrogate to the various test errors we want to minimize.",
"A striking example of discrepancy is that the MLE loss is close to 0/1: it makes no distinction between candidates that are close or far away from the ground truth (with respect to the structured test error), thus failing to exploit valuable information.",
"Another example of train/test discrepancy is called exposure or exploration bias BID22 : in traditional MLE training the cell learns the conditional probability of the next token, based on the previous ground truth tokens -this is often referred to as teacher forcing.",
"However, at test time the model does not have access to the ground truth, and thus feeds its own previous predictions to its next cell for prediction instead.Improving RNN training thus appears as a relevant endeavor, which has received much attention recently.",
"In particular, ideas coming from reinforcement learning (RL), such as the REINFORCE and ACTOR-CRITIC algorithms BID22 BID0 , have been adapted to derive training losses that are more closely related to the test error that we actually want to minimize.In order to address the issues of MLE training, we propose instead to use ideas from the structured prediction field, in particular from the \"learning to search\" (L2S) approach introduced by BID8 and later refined by BID24 and BID5 among others.Contributions.",
"In Section 2, we review the limitations of MLE training for RNNs in details.",
"We also clarify some related claims made in the recent literature.",
"In Section 3, we make explicit the strong links between RNNs and the L2S approach.",
"In Section 4, we present SEARNN, a novel training algorithm for RNNs, using ideas from L2S to derive a global-local loss that is much closer to the test error than MLE.",
"We demonstrate that this novel approach leads to significant improvements on two difficult structured prediction tasks, including a spelling correction problem recently introduced in BID0 .",
"As this algorithm is quite costly, we investigate scaling solutions in Section",
"5. We explore a subsampling strategy that allows us to considerably reduce training times, while maintaining improved performance compared to MLE.",
"We apply this new algorithm to machine translation and report significant improvements in Section",
"6. Finally, we contrast our novel approach to the related L2S and RL-inspired methods in Section 7.",
"We now contrast SEARNN to several related algorithms, including traditional L2S approaches (which are not adapted to RNN training), and RNN training methods inspired by L2S and RL.Traditional L2S approaches.",
"Although SEARNN is heavily inspired by SEARN, it is actually closer to LOLS BID5 , another L2S algorithm.",
"As LOLS, SEARNN is a meta-algorithm where roll-in/roll-out strategies are customizable (we explored most combinations in our experiments).",
"Our findings are in agreement with those of BID5 : we advocate using the same combination, that is, a learned roll-in and a mixed roll-out.",
"The one exception to this rule of thumb is when the associated reduced problem is too hard (as seems to be the case for machine translation), in which case we recommend switching to a reference roll-in.Moreover, as noted in Section 4, SEARNN adapts the optimization process of LOLS (the one difference being that our method is stochastic rather than online): each intermediate dataset is only used for a single gradient step.",
"This means the policy interpolation is of a different nature than in SEARN where intermediate datasets are optimized for fully and the resulting policy is mixed with the previous one.However, despite the similarities we have just underlined, SEARNN presents significant differences from these traditional L2S algorithms.",
"First off, and most importantly, SEARNN is a full integration of the L2S ideas to RNN training, whereas previous methods cannot be used for this purpose directly.",
"Second, in order to achieve this adaptation we had to modify several design choices, including:• the intermediate dataset construction, which significantly differs from traditional L2S; 3• the careful choice of a classifier (those used in the L2S literature do not fit RNNs well);• the design of tailored surrogate loss functions that leverage cost information while being easy to optimize in RNNs.L2S-inspired approaches.",
"Several other papers have tried using L2S-like ideas for better RNN training, starting with which introduces \"scheduled sampling\" to avoid the exposure bias problem.",
"The idea is to start with teacher forcing and to gradually use more and more model predictions instead of ground truth tokens during training.",
"This is akin to a mixed roll-in -an idea which also appears in BID8 ).Wiseman",
"& Rush (2016, BSO) adapt one of the early variants of the L2S framework: the \"Learning A Search Optimization\" approach of Daumé & Marcu (2005, LASO) to train RNNs. However",
"LASO is quite different from the more modern SEARN family of algorithms that we focus on: it does not include either local classifiers or roll-outs, and has much weaker theoretical guarantees. Additionally",
", BSO's training loss is defined by violations in the beam-search procedure, yielding a very different algorithm from SEARNN. Furthermore",
", BSO requires being able to compute a meaningful loss on partial sequences, and thus does not handle general structured losses unlike SEARNN. Finally, its",
"ad hoc surrogate objective provides very sparse sequence-level training signal, as mentioned by their authors, thus requiring warm-start. BID1 use a loss",
"that is similar to LL for parsing, a specific task where cost-to-go are essentially free. This property is",
"also a requirement for BID26 , in which new gradient procedures are introduced to incorporate neural classifiers in the AGGREVATE BID24 variant of L2S. 4 In contrast, SEARNN",
"can be used on tasks without a free cost-to-go oracle.RL-inspired approaches. In structured prediction",
"tasks, we have access to ground truth trajectories, i.e. a lot more information than in traditional RL. One major direction of research",
"has been to adapt RL techniques to leverage this additional information. The main idea is to try to optimize",
"the expectation of the test error directly (under the stochastic policy parameterized by the RNN): DISPLAYFORM0 Since we are taking an expectation over all possible structured outputs, the only term that depends on the parameters is the probability term (the tokens in the error term are fixed). This allows this loss function to support",
"non-differentiable test errors, which is a key advantage. Of course, actually computing the expectation",
"over an exponential number of possibilities is computationally intractable.To circumvent this issue, BID25 subsample trajectories according to the learned policy, while BID22 ; BID23 use the REINFORCE algorithm, which essentially approximates the expectation with a single trajectory sample. BID0 adapt the ACTOR-CRITIC algorithm, where",
"a second critic network is trained to approximate the expectation.While all these approaches report significant improvement on various tasks, one trait they share is that they only work when initialized from a good pre-trained model. This phenomenon is often explained by the sparsity",
"of the information contained in \"sequence-level\" losses. Indeed, in the case of REINFORCE, no distinction is",
"made between the tokens that form a sequence: depending on whether the sampled trajectory is above a global baseline, all tokens are pushed up or down by the gradient update. This means good tokens are sometimes penalized and",
"bad tokens rewarded.In contrast, SEARNN uses \"global-local\" losses, with a local loss attached to each step, which contains global information since the costs are computed on full sequences. To do so, we have to \"sample\" more trajectories through",
"our roll-in/roll-outs. As a result, SEARNN does not require warm-starting to achieve",
"good experimental performance. This distinction is quite relevant, because warm-starting means",
"initializing in a specific region of parameter space which may be hard to escape. Exploration is less constrained when starting from scratch, leading",
"to potentially larger gains over MLE.RL-based methods often involve optimizing additional models (baselines for REINFORCE and the critic for ACTOR-CRITIC), introducing more complexity (e.g. target networks). SEARNN does not.Finally, while maximizing the expected reward allows",
"the RL approaches to use gradient descent even when the test error is not differentiable, it introduces another discrepancy between training and testing. Indeed, at test time, one does not decode by sampling from the stochastic",
"policy. Instead, one selects the \"best\" sequence (according to a search algorithm",
", e.g. greedy or beam search). SEARNN avoids this averse effect by computing costs using deterministic",
"roll-outs -the same decoding technique as the one used at test time -so that its loss is even closer to the test loss. The associated price is that we approximate the gradient by fixing the",
"costs, although they do depend on the parameters.RAML BID19 ) is another RL-inspired approach. Though quite different from the previous papers we have cited, it is also",
"related to SEARNN. Here, in order to mitigate the 0/1 aspect of MLE training, the authors introduce",
"noise in the target outputs at each iteration. The amount of random noise is determined according to the associated reward (target",
"outputs with a lot of noise obtain lower rewards and are thus less sampled). This idea is linked to the label smoothing technique BID28 , where the target distribution",
"at each step is the addition of a Dirac (the usual MLE target) and a uniform distribution. In this sense, when using the KL loss SEARNN can be viewed as doing learned label smoothing",
", where we compute the target distribution from the intermediate costs rather than arbitrarily adding the uniform distribution.Conclusion and future work. We have described SEARNN, a novel algorithm that uses core ideas from the learning to search",
"framework in order to alleviate the known limitations of MLE training for RNNs. By leveraging structured cost information obtained through strategic exploration, we define",
"global-local losses. These losses provide a global feedback related to the structured task at hand, distributed",
"locally within the cells of the RNN. This alternative procedure allows us to train RNNs from scratch and to outperform MLE on three",
"challenging structured prediction tasks. Finally we have proposed efficient scaling techniques that allow us to apply SEARNN on structured",
"tasks for which the output vocabulary is very large, such as neural machine translation.The L2S literature provides several promising directions for further research. Adapting \"bandit\" L2S alternatives BID5 would allow us to apply SEARNN to tasks where only a single",
"trajectory may be observed at any given point (so trying every possible token is not possible). Focused costing BID12 ) -a mixed roll-out policy where a fixed number of learned steps are taken before",
"resorting to the reference policy -could help us lift the quadratic dependency of SEARNN on the sequence length. Finally, targeted sampling BID12 ) -a smart sampling strategy that prioritizes cells where the model is",
"uncertain of what to do -could enable more efficient exploration for large-scale tasks.Let us consider the case where we perform the roll-in up until the t th cell. In order to be able to perform roll-outs from that t th cell, a hidden state is needed. If we used a reference",
"policy roll-in, this state is obtained by running the RNN until the t th cell by using",
"the teacher forcing strategy, i.e. by conditioning the outputs on the ground truth. Finally, SEARNN also needs to know what the predictions for the full sequence were in order to compute the costs",
". When the reference roll-in is used, we obtain the predictions up until the t th cell by simply copying the ground",
"truth. Hence, we discard the outputs of the RNN that are before the t th cell."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.5106382966041565,
0.08163265138864517,
0.2545454502105713,
0.0476190410554409,
0.1904761791229248,
0.09999999403953552,
0.10526315122842789,
0.25,
0.0833333283662796,
0.2380952388048172,
0.08888888359069824,
0.19999998807907104,
0.16393442451953888,
0.19354838132858276,
0.1875,
0.26966291666030884,
0.3589743673801422,
0.1666666567325592,
0.10256409645080566,
0.29629629850387573,
0.2800000011920929,
0.10810810327529907,
0.2222222238779068,
0.20512820780277252,
0.2380952388048172,
0.23999999463558197,
0.1904761791229248,
0.09302324801683426,
0.16326530277729034,
0.1428571343421936,
0.1492537260055542,
0.26923075318336487,
0.1538461446762085,
0.2448979616165161,
0.1304347813129425,
0.1463414579629898,
0.1599999964237213,
0.07017543166875839,
0.2666666507720947,
0.11999999731779099,
0.12765957415103912,
0.1428571343421936,
0.23076923191547394,
0.09999999403953552,
0.1666666567325592,
0.04878048226237297,
0.17910447716712952,
0.10256409645080566,
0.12121211737394333,
0.1249999925494194,
0.15789473056793213,
0.10526315122842789,
0.0952380895614624,
0.10810810327529907,
0,
0.1666666567325592,
0.13114753365516663,
0.14035087823867798,
0.2702702581882477,
0.0476190410554409,
0.11320754140615463,
0.07843136787414551,
0.4000000059604645,
0.1818181723356247,
0.15094339847564697,
0.14035087823867798,
0.27586206793785095,
0.40816324949264526,
0.19999998807907104,
0.21739129722118378,
0.09302324801683426,
0.12903225421905518,
0.06779660284519196,
0.1090909019112587,
0.1764705777168274,
0.14999999105930328,
0.22641508281230927,
0.09302324801683426,
0.1538461446762085
] | HkUR_y-RZ | true | [
"We introduce SeaRNN, a novel algorithm for RNN training, inspired by the learning to search approach to structured prediction, in order to avoid the limitations of MLE training."
] |
[
"Deep reinforcement learning has demonstrated increasing capabilities for continuous control problems,\n",
"including agents that can move with skill and agility through their environment. \n",
"An open problem in this setting is that of developing good strategies for integrating or merging policies\n",
"for multiple skills, where each individual skill is a specialist in a specific skill and its associated state distribution. \n",
"We extend policy distillation methods to the continuous action setting and leverage this technique to combine \\expert policies,\n",
"as evaluated in the domain of simulated bipedal locomotion across different classes of terrain.\n",
"We also introduce an input injection method for augmenting an existing policy network to exploit new input features.\n",
"Lastly, our method uses transfer learning to assist in the efficient acquisition of new skills.\n",
"The combination of these methods allows a policy to be incrementally augmented with new skills.\n",
"We compare our progressive learning and integration via distillation (PLAID) method\n",
"against three alternative baselines.",
"As they gain experience, humans develop rich repertoires of motion skills that are useful in different contexts and environments.",
"Recent advances in reinforcement learning provide an opportunity to understand how motion repertoires can best be learned, recalled, and augmented.",
"Inspired by studies on the development and recall of movement patterns useful for different locomotion contexts BID17 , we develop and evaluate an approach for learning multi-skilled movement repertoires.",
"In what follows, we refer to the proposed method as PLAID: Progressive Learning and Integration via Distillation.For long lived applications of complex control tasks a learning system may need to acquire and integrate additional skills.",
"Accordingly, our problem is defined by the sequential acquisition and integration of new skills.",
"Given an existing controller that is capable of one-or-more skills, we wish to:",
"(a) efficiently learn a new skill or movement pattern in a way that is informed by the existing control policy, and",
"(b) to reintegrate that into a single controller that is capable of the full motion repertoire.",
"This process can then be repeated as necessary.",
"We view PLAID as a continual learning method, in that we consider a context where all tasks are not known in advance and we wish to learn any new task in an efficient manner.",
"However, it is also proves surprisingly effective as a multitask solution, given the three specific benchmarks that we compare against.",
"In the process of acquiring a new skill, we also allow for a control policy to be augmented with additional inputs, without adversely impacting its performance.",
"This is a process we refer to as input injection.Understanding the time course of sensorimotor learning in human motor control is an open research problem BID31 ) that exists concurrently with recent advances in deep reinforcement learning.",
"Issues of generalization, context-dependent recall, transfer or \"savings\" in fast learning, forgetting, and scalability are all in play for both human motor control models and the learning curricula proposed in reinforcement learning.",
"While the development of hierarchical models for skills offers one particular solution that supports scalability and that avoids problems related to forgetting, we eschew this approach in this work and instead investigate a progressive approach to integration into a control policy defined by a single deep network.Distillation refers to the problem of combining the policies of one or more experts in order to create one single controller that can perform the tasks of a set of experts.",
"It can be cast as a supervised regression problem where the objective is to learn a model that matches the output distributions of all expert policies BID13 BID28 BID19 .",
"However, given a new task for which an expert is not given, it is less clear how to learn the new task while successfully integrating this new skill in the pre-existing repertoire of the control policy for an agent.",
"One wellknown technique in machine learning to significantly improve sample efficiency across similar tasks is to use Transfer Learning (TL) BID12 , which seeks to reuse knowledge learned from solving a previous task to efficiently learn a new task.",
"However, transferring knowledge from previous tasks to new tasks may not be straightforward; there can be negative transfer wherein a previously-trained model can take longer to learn a new task via fine-tuning than would a randomlyinitialized model BID16 .",
"Additionally, while learning a new skill, the control policy should not forget how to perform old skills.The core contribution of this paper is a method Progressive Learning and Integration via Distillation (PLAiD) to repeatedly expand and integrate a motion control repertoire.",
"The main building blocks consist of policy transfer and multi-task policy distillation, and the method is evaluated in the context of a continuous motor control problem, that of robust locomotion over distinct classes of terrain.",
"We evaluate the method against three alternative baselines.",
"We also introduce input injection, a convenient mechanism for adding inputs to control policies in support of new skills, while preserving existing capabilities.",
"MultiTasker vs PLAiD: The MultiTasker may be able to produce a policy that has higher overall average reward, but in practise constraints can keep the method from combining skills gracefully.",
"If the reward functions are different between tasks, the MultiTasker can favour a task with higher rewards, as these tasks may receive higher advantage.",
"It is also a non-trivial task to normalize the reward functions for each task in order to combine them.",
"The MultiTasker may also favour tasks that are easier than other tasks in general.",
"We have shown that the PLAiD scales better with respect to the number of tasks than the MultiTasker.",
"We expect PLAiD would further outperform the MultiTasker if the tasks were more difficult and the reward functions dissimilar.In our evaluation we compare the number of iterations PLAiD uses to the number the MultiTasker uses on only the new task, which is not necessarily fair.",
"The MultiTasker gains its benefits from training on the other tasks together.",
"If the idea is to reduce the number of simulation samples that are needed to learn new tasks then the MultiTasker would fall far behind.",
"Distillation is also very efficient with respect to the number of simulation steps needed.",
"Data could be collected from the simulator in groups and learned from in many batches before more data is needed as is common for behavioural cloning.",
"We expect another reason distillation benefits learning multiple tasks is that the integration process assists in pulling policies out of the local minima RL is prone to.Transfer Learning: Because we are using an actor-critic learning method, we also studied the possibility of using the value functions for TL.",
"We did not discover any empirical evidence that this assisted the learning process.",
"When transferring to a new task, the state distribution has changed and the reward function may be completely different.",
"This makes it unlikely that the value function will be accurate on this new task.",
"In addition, value functions are in general easier and faster to learn than policies, implying that value function reuse is less important to transfer.",
"We also find that helpfulness of TL depends on not only the task difficulty but the reward function as well.",
"Two tasks may overlap in state space but the area they overlap could be easily reachable.",
"In this case TL may not give significant benefit because the overall RL problem is easy.",
"The greatest benefit is gained from TL when the state space that overlaps for two tasks is difficult to reach and in that difficult to reach area is where the highest rewards are achieved.",
"We have proposed and evaluated a method for the progressive learning and integration (via distillation) of motion skills.",
"The method exploits transfer learning to speed learning of new skills, along with input injection where needed, as well as continuous-action distillation, using DAGGER-style learning.",
"This compares favorably to baselines consisting of learning all skills together, or learning all the skills individually before integration.",
"We believe that there remains much to learned about the best training and integration methods for movement skill repertoires, as is also reflected in the human motor learning literature.We augment the blind network design by adding features for terrain to create an agent with sight.",
"This network with terrain features has a single convolution layer with 8 filters of width 3.",
"This constitutional layer is followed by a dense layer of 32 units.",
"The dense layer is then concatenated twice, once along each of the original two hidden layers in the blind version of the policy."
] | [
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.07407406717538834,
0.13793103396892548,
0.12121211737394333,
0.05882352590560913,
0.24242423474788666,
0,
0.1818181723356247,
0.4375,
0.1875,
0.29629629850387573,
0,
0.17142856121063232,
0.1666666567325592,
0.0952380895614624,
0.20000000298023224,
0.19999998807907104,
0.06896550953388214,
0.1666666567325592,
0.12903225421905518,
0,
0.260869562625885,
0.0555555522441864,
0.09756097197532654,
0.11764705181121826,
0.13636362552642822,
0.14084506034851074,
0.1860465109348297,
0.12765957415103912,
0.11999999731779099,
0.1304347813129425,
0.22641508281230927,
0.17777776718139648,
0.0833333283662796,
0.1538461446762085,
0.17777776718139648,
0,
0.12121211737394333,
0.06896550953388214,
0.1249999925494194,
0.1538461446762085,
0,
0.15789473056793213,
0.06666666269302368,
0.05128204822540283,
0.17543859779834747,
0.13793103396892548,
0.1764705777168274,
0.12903225421905518,
0.21052631735801697,
0.05714285373687744,
0,
0,
0.1395348757505417,
0.24242423474788666,
0.2631579041481018,
0.1875,
0.14035087823867798,
0,
0,
0
] | B13njo1R- | true | [
"A continual learning method that uses distillation to combine expert policies and transfer learning to accelerate learning new skills."
] |
[
"Deep Reinforcement Learning (DRL) has led to many recent breakthroughs on complex control tasks, such as defeating the best human player in the game of Go.",
"However, decisions made by the DRL agent are not explainable, hindering its applicability in safety-critical settings.",
"Viper, a recently proposed technique, constructs a decision tree policy by mimicking the DRL agent.",
"Decision trees are interpretable as each action made can be traced back to the decision rule path that lead to it.",
"However, one global decision tree approximating the DRL policy has significant limitations with respect to the geometry of decision boundaries.",
"We propose MoET, a more expressive, yet still interpretable model based on Mixture of Experts, consisting of a gating function that partitions the state space, and multiple decision tree experts that specialize on different partitions.",
"We propose a training procedure to support non-differentiable decision tree experts and integrate it into imitation learning procedure of Viper.",
"We evaluate our algorithm on four OpenAI gym environments, and show that the policy constructed in such a way is more performant and better mimics the DRL agent by lowering mispredictions and increasing the reward.",
"We also show that MoET policies are amenable for verification using off-the-shelf automated theorem provers such as Z3.",
"Deep Reinforcement Learning (DRL) has achieved many recent breakthroughs in challenging domains such as Go (Silver et al., 2016) .",
"While using neural networks for encoding state representations allow DRL agents to learn policies for tasks with large state spaces, the learned policies are not interpretable, which hinders their use in safety-critical applications.",
"Some recent works leverage programs and decision trees as representations for interpreting the learned agent policies.",
"PIRL (Verma et al., 2018) uses program synthesis to generate a program in a Domain-Specific Language (DSL) that is close to the DRL agent policy.",
"The design of the DSL with desired operators is a tedious manual effort and the enumerative search for synthesis is difficult to scale for larger programs.",
"In contrast, Viper (Bastani et al., 2018 ) learns a Decision Tree (DT) policy by mimicking the DRL agent, which not only allows for a general representation for different policies, but also allows for verification of these policies using integer linear programming solvers.",
"Viper uses the DAGGER (Ross et al., 2011) imitation learning approach to collect state action pairs for training the student DT policy given the teacher DRL policy.",
"It modifies the DAGGER algorithm to use the Q-function of teacher policy to prioritize states of critical importance during learning.",
"However, learning a single DT for the complete policy leads to some key shortcomings such as",
"i) less faithful representation of original agent policy measured by the number of mispredictions,",
"ii) lower overall performance (reward), and",
"iii) larger DT sizes that make them harder to interpret.",
"In this paper, we present MOËT (Mixture of Expert Trees), a technique based on Mixture of Experts (MOE) (Jacobs et al., 1991; Jordan and Xu, 1995; Yuksel et al., 2012) , and reformulate its learning procedure to support DT experts.",
"MOE models can typically use any expert as long as it is a differentiable function of model parameters, which unfortunately does not hold for DTs.",
"Similar to MOE training with Expectation-Maximization (EM) algorithm, we first observe that MOËT can be trained by interchangeably optimizing the weighted log likelihood for experts (independently from one another) and optimizing the gating function with respect to the obtained experts.",
"Then, we propose a procedure for DT learning in the specific context of MOE.",
"To the best of our knowledge we are first to combine standard non-differentiable DT experts, which are interpretable, with MOE model.",
"Existing combinations which rely on differentiable tree or treelike models, such as soft decision trees (Irsoy et al., 2012) and hierarchical mixture of experts (Zhao et al., 2019) are not interpretable.",
"We adapt the imitation learning technique of Viper to use MOËT policies instead of DTs.",
"MOËT creates multiple local DTs that specialize on different regions of the input space, allowing for simpler (shallower) DTs that more accurately mimic the DRL agent policy within their regions, and combines the local trees into a global policy using a gating function.",
"We use a simple and interpretable linear model with softmax function as the gating function, which returns a distribution over DT experts for each point in the input space.",
"While standard MOE uses this distribution to average predictions of DTs, we also consider selecting just one most likely expert tree to improve interpretability.",
"While decision boundaries of Viper DT policies must be axis-perpendicular, the softmax gating function supports boundaries with hyperplanes of arbitrary orientations, allowing MOËT to more faithfully represent the original policy.",
"We evaluate our technique on four different environments: CartPole, Pong, Acrobot, and Mountaincar.",
"We show that MOËT achieves significantly better rewards and lower misprediction rates with shallower trees.",
"We also visualize the Viper and MOËT policies for Mountaincar, demonstrating the differences in their learning capabilities.",
"Finally, we demonstrate how a MOËT policy can be translated into an SMT formula for verifying properties for CartPole game using the Z3 theorem prover (De Moura and Bjørner, 2008) under similar assumptions made in Viper.",
"In summary, this paper makes the following key contributions:",
"1) We propose MOËT, a technique based on MOE to learn mixture of expert decision trees and present a learning algorithm to train MOËT models.",
"2) We use MOËT models with a softmax gating function for interpreting DRL policies and adapt the imitation learning approach used in Viper to learn MOËT models.",
"3) We evaluate MOËT on different environments and show that it leads to smaller, more faithful, and performant representations of DRL agent policies compared to Viper while preserving verifiability.",
"We introduced MOËT, a technique based on MOE with expert decision trees and presented a learning algorithm to train MOËT models.",
"We then used MOËT models for interpreting DRL agent policies, where different local DTs specialize on different regions of input space and are combined into a global policy using a gating function.",
"We showed that MOËT models lead to smaller, more faithful and performant representation of DRL agents compared to previous state-of-the-art approaches like Viper while still maintaining interpretability and verifiability.",
"Algorithm 2 Viper training (Bastani et al., 2018) 1: procedure VIPER (MDP e, TEACHER π t , Q-FUNCTION Q πt , ITERATIONS N ) 2:",
"Initialize dataset and student: D ← ∅, π s0 ← π t 3:",
"Sample trajectories and aggregate:",
"Sample dataset using Q values:",
"Train decision tree:",
"return Best policy π s ∈ {π s1 , ..., π s N }."
] | [
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.05128204822540283,
0,
0.1428571343421936,
0.05882352590560913,
0.25,
0.22727271914482117,
0.3636363446712494,
0,
0.0624999962747097,
0,
0.09090908616781235,
0.06666666269302368,
0,
0.10810810327529907,
0.07407406717538834,
0.05128204822540283,
0.12903225421905518,
0.06666666269302368,
0.07407406717538834,
0,
0,
0.11999999731779099,
0.10526315122842789,
0.0833333283662796,
0.1428571343421936,
0.23529411852359772,
0.22727271914482117,
0.1428571343421936,
0.07999999821186066,
0.1463414579629898,
0.10810810327529907,
0.1463414579629898,
0,
0.06896550953388214,
0.06666666269302368,
0.04081632196903229,
0,
0.21621620655059814,
0.10256409645080566,
0.04878048226237297,
0.1764705777168274,
0.09090908616781235,
0.04878048226237297,
0,
0,
0,
0.10526315122842789,
0.11764705926179886,
0
] | BJlxdCVKDB | true | [
"Explainable reinforcement learning model using novel combination of mixture of experts with non-differentiable decision tree experts."
] |
[
"We consider the problem of unconstrained minimization of a smooth objective\n",
"function in $\\mathbb{R}^d$ in setting where only function evaluations are possible.",
"We propose and analyze stochastic zeroth-order method with heavy ball momentum.",
"In particular, we propose, SMTP, a momentum version of the stochastic three-point method (STP) Bergou et al. (2019).",
"We show new complexity results for non-convex, convex and strongly convex functions.",
"We test our method on a collection of learning to continuous control tasks on several MuJoCo Todorov et al. (2012) environments with varying difficulty and compare against STP, other state-of-the-art derivative-free optimization algorithms and against policy gradient methods.",
"SMTP significantly outperforms STP and all other methods that we considered in our numerical experiments.",
"Our second contribution is SMTP with importance sampling which we call SMTP_IS.",
"We provide convergence analysis of this method for non-convex, convex and strongly convex objectives.",
"In this paper, we consider the following minimization problem",
"where f : R d → R is \"smooth\" but not necessarily a convex function in a Derivative-Free Optimization (DFO) setting where only function evaluations are possible.",
"The function f is bounded from below by f (x * ) where x * is a minimizer.",
"Lastly and throughout the paper, we assume that f is L-smooth.",
"DFO.",
"In DFO setting Conn et al. (2009); Kolda et al. (2003) , the derivatives of the objective function f are not accessible.",
"That is they are either impractical to evaluate, noisy (function f is noisy) (Chen, 2015) or they are simply not available at all.",
"In standard applications of DFO, evaluations of f are only accessible through simulations of black-box engine or software as in reinforcement learning and continuous control environments Todorov et al. (2012) .",
"This setting of optimization problems appears also in applications from computational medicine Marsden et al. (2008) and fluid dynamics Allaire (2001) ; Haslinger & Mäckinen (2003) ; Mohammadi & Pironneau (2001) to localization Marsden et al. (2004; 2007) and continuous control Mania et al. (2018) ; Salimans et al. (2017) to name a few.",
"The literature on DFO for solving (1) is long and rich.",
"The first approaches were based on deterministic direct search (DDS) and they span half a century of work Hooke & Jeeves (1961) ; Su (1979); Torczon (1997) .",
"However, for DDS methods complexity bounds have only been established recently by the work of Vicente and coauthors Vicente (2013); Dodangeh & Vicente (2016) .",
"In particular, the work of Vicente Vicente (2013) showed the first complexity results on non-convex f and the results were extended to better complexities when f is convex Dodangeh & Vicente (2016) .",
"However, there have been several variants of DDS, including randomized approaches Matyas (1965) ; Karmanov (1974a; b) ; Baba (1981) ; Dorea (1983) ; Sarma (1990) .",
"Only very recently, complexity bounds have also been derived for randomized methods Diniz-Ehrhardt et al. (2008) ; Stich et al. (2011); Ghadimi & Lan (2013) ; Ghadimi et al. (2016) ; Gratton et al. (2015) .",
"For instance, the work of Diniz-Ehrhardt et al. (2008) ; Gratton et al. (2015) imposes a decrease condition on whether to accept or reject a step of a set of random directions.",
"Moreover, Nesterov & Spokoiny (2017) derived new complexity bounds when the random directions are normally distributed vectors for both smooth and non-smooth f .",
"They proposed both accelerated and non-accelerated zero-order (ZO) methods.",
"Accelerated derivative-free methods in the case of inexact oracle information was proposed in Dvurechensky et al. (2017) .",
"An extension of Nesterov & Spokoiny (2017) for non-Euclidean proximal setup was proposed by Gorbunov et al. (2018) for the smooth stochastic convex optimization with inexact oracle.",
"More recently and closely related to our work, Bergou et al. (2019) proposed a new randomized direct search method called Stochastic Three Points (STP).",
"At each iteration k STP generates a random search direction s k according to a certain probability law and compares the objective function at three points: current iterate x k , a point in the direction of s k and a point in the direction of −s k with a certain step size α k .",
"The method then chooses the best of these three points as the new iterate:",
"The key properties of STP are its simplicity, generality and practicality.",
"Indeed, the update rule for STP makes it extremely simple to implement, the proofs of convergence results for STP are short and clear and assumptions on random search directions cover a lot of strategies of choosing decent direction and even some of first-order methods fit the STP scheme which makes it a very flexible in comparison with other zeroth-order methods (e.g. two-point evaluations methods like in Nesterov & Spokoiny (2017) , Ghadimi & Lan (2013) , Ghadimi et al. (2016) , Gorbunov et al. (2018) that try to approximate directional derivatives along random direction at each iteration).",
"Motivated by these properties of STP we focus on further developing of this method.",
"We have proposed, SMTP, the first heavy ball momentum DFO based algorithm with convergence rates for non-convex, convex and strongly convex functions under generic sampling direction.",
"We specialize the sampling to the set of coordinate bases and further improve rates by proposing a momentum and importance sampling version SMPT_IS with new convergence rates for non-convex, convex and strongly convex functions too.",
"We conduct large number of experiments on the task of controlling dynamical systems.",
"We outperform two different policy gradient methods and achieve comparable or better performance to the best DFO algorithm (ARS) on the respective environments.",
"Assumption A.2.",
"The probability distribution D on R d satisfies the following properties:",
"2 is positive and finite.",
"2. There is a constant µ D > 0 and norm",
"We establish the key lemma which will be used to prove the theorems stated in the paper.",
"Lemma A.1.",
"Assume that f is L-smooth and D satisfies Assumption A.2.",
"Then for the iterates of SMTP the following inequalities hold:",
"and",
"Proof.",
"By induction one can show that",
"That is, for k = 0 this recurrence holds and update rules for z k , x k and v k−1 do not brake it.",
"From this we get",
"Similarly,"
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1428571343421936,
0,
0.3448275923728943,
0.1111111044883728,
0.20689654350280762,
0.30188679695129395,
0.060606054961681366,
0.19999998807907104,
0.12903225421905518,
0,
0.04878048226237297,
0.060606054961681366,
0.06896550953388214,
0,
0.052631575614213943,
0.17391303181648254,
0.23728813230991364,
0.06896550953388214,
0.08888888359069824,
0.04999999329447746,
0.09090908616781235,
0,
0,
0.09090908616781235,
0.09756097197532654,
0.07407406717538834,
0,
0.09090908616781235,
0.1904761791229248,
0.14814814925193787,
0.06451612710952759,
0.06896550953388214,
0.08791208267211914,
0,
0.2790697515010834,
0.3829787075519562,
0.06666666269302368,
0.19999998807907104,
0,
0.08695651590824127,
0.13793103396892548,
0.12121211737394333,
0.06896550953388214,
0,
0,
0.051282044500112534,
0
] | HylAoJSKvH | true | [
"We develop and analyze a new derivative free optimization algorithm with momentum and importance sampling with applications to continuous control."
] |
[
"Using class labels to represent class similarity is a typical approach to training deep hashing systems for retrieval; samples from the same or different classes take binary 1 or 0 similarity values.",
"This similarity does not model the full rich knowledge of semantic relations that may be present between data points.",
"In this work we build upon the idea of using semantic hierarchies to form distance metrics between all available sample labels; for example cat to dog has a smaller distance than cat to guitar.",
"We combine this type of semantic distance into a loss function to promote similar distances between the deep neural network embeddings.",
"We also introduce an empirical Kullback-Leibler divergence loss term to promote binarization and uniformity of the embeddings.",
"We test the resulting SHREWD method and demonstrate improvements in hierarchical retrieval scores using compact, binary hash codes instead of real valued ones, and show that in a weakly supervised hashing setting we are able to learn competitively without explicitly relying on class labels, but instead on similarities between labels.",
"Content-Based Image Retrieval (CBIR) on very large datasets typically relies on hashing for efficient approximate nearest neighbor search; see e.g. BID12 for a review.",
"Early methods such as (LSH) BID5 were data-independent, but Data-dependent methods (either supervised or unsupervised) have shown better performance.",
"Recently, Deep hashing methods using CNNs have had much success over traditional methods, see e.g. Hashnet BID1 , DADH .",
"Most supervised hashing techniques rely on a pairwise binary similarity matrix S = {s ij }, whereby s ij = 1 for images i and j taken from the same class, and 0 otherwise.A richer set of affinity is possible using semantic relations, for example in the form of class hierarchies.",
"BID13 consider the semantic hierarchy for non-deep hashing, minimizing inner product distance of hash codes from the distance in the semantic hierarchy.",
"In the SHDH method , the pairwise similarity matrix is defined from such a hierarchy according to a weighted sum of weighted Hamming distances.In Unsupervised Semantic Deep Hashing (USDH, Jin (2018)), semantic relations are obtained by looking at embeddings on a pre-trained VGG model on Imagenet.",
"The goal of the semantic loss here is simply to minimize the distance between binarized hash codes and their pre-trained embeddings, i.e. neighbors in hashing space are neighbors in pre-trained feature space.",
"This is somewhat similar to our notion of semantic similarity except for using a pre-trained embedding instead of a pre-labeled semantic hierarchy of relations.",
"BID14 consider class-wise Deep hashing, in which a clustering-like operation is used to form a loss between samples both from the same class and different levels from the hierarchy.Recently BID0 explored image retrieval using semantic hierarchies to design an embedding space, in a two step process.",
"Firstly they directly find embedding vectors of the class labels on a unit hypersphere, using a linear algebra based approach, such that the distances of these embeddings are similar to the supplied hierarchical similarity.",
"In the second stage, they train a standard CNN encoder model to regress images towards these embedding vectors.",
"They do not consider hashing in their work.",
"We approached Deep Hashing for retrieval, introducing novel combined loss functions that balance code binarization with equivalent distance matching from hierarchical semantic relations.",
"We have demonstrated new state of the art results for semantic hierarchy based image retrieval (mAHP scores) on CIFAR and ImageNet with both our fully supervised (SHRED) and weakly-supervised (SHREWD) methods."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.260869562625885,
0.05405404791235924,
0.2083333283662796,
0.25641024112701416,
0.05714285373687744,
0.21875,
0.1463414579629898,
0,
0.10526315122842789,
0.1249999925494194,
0.11428570747375488,
0.06779660284519196,
0.1304347813129425,
0.15789473056793213,
0.20338982343673706,
0.0833333283662796,
0.0555555522441864,
0.07692307233810425,
0.1463414579629898,
0.2083333283662796
] | rJgqFi5rOV | true | [
"We propose a new method for training deep hashing for image retrieval using only a relational distance metric between samples"
] |
[
"In this paper, we propose a novel approach to improve a given surface mapping through local refinement.",
"The approach\n",
"receives an established mapping between two surfaces and follows four phases:",
"(i) inspection of the mapping and creation of a sparse\nset of landmarks in mismatching regions;",
"(ii) segmentation with a low-distortion region-growing process based on flattening the\nsegmented parts;",
"(iii) optimization of the deformation of segmented parts to align the landmarks in the planar parameterization domain;\nand",
"(iv) aggregation of the mappings from segments to update the surface mapping.",
"In addition, we propose a new method to deform the\n",
"mesh in order to meet constraints (in our case, the landmark alignment of phase",
"(iii)).",
"We incrementally adjust the cotangent weights for\n",
"the constraints and apply the deformation in a fashion that guarantees that the deformed mesh will be free of flipped faces and will have\n",
"low conformal distortion.",
"Our new deformation approach, Iterative Least Squares Conformal Mapping (ILSCM), outperforms other\n",
"low-distortion deformation methods.",
"The approach is general, and we tested it by improving the mappings from different existing surface\n",
"mapping methods.",
"We also tested its effectiveness by editing the mappings for a variety of 3D objects.",
"C OMPUTING a cross-surface mapping between two surfaces (cross-parameterization) is a fundamental problem in digital geometric processing.",
"A wide range of methods have been developed to find such mappings [2] , [3] , [4] , but no single method results in a perfect mapping in every case.",
"Quite often, the mapping results may be good overall, but some specific, sometimes subtle, semantic features, such as articulations and facial features, may remain misaligned, as illustrated in Figure 1 .",
"These imperfections of the final result are often unacceptable in a production setting where the artist needs a high degree of control over the final result, and will often sacrifice automation of a method for higher control.",
"Typically, improving results using surface mapping methods requires the user to iteratively insert some landmarks and solve for the mapping globally.",
"However, since the imperfections are typically localized to a specific region, a local solution that does not change the mapping globally would be preferred in order to ensure that the method does not introduce artifacts elsewhere on the map.",
"This paper proposes a surface mapping editing approach providing local and precise control over the map adjustments.",
"The process begins with the inspection of an existing vertex-topoint surface mapping between two meshes.",
"In regions where the mapping exhibits some discrepancy, the user sets landmarks positioned at corresponding locations on both meshes.",
"For each such region, we extract a patch on both meshes in order to localize the changes in the mapping, and we flatten them on a common planar domain.",
"The mapping is improved based on a 2D deformation optimization that steers the landmarks toward correspondence while limiting distortion and having theoretical guarantees to maintain the local injectivity of the map.",
"We developed a new 2D deformation approach denoted Iterative Least Squares Conformal Maps (ILSCM), which iteratively minimizes a conformal energy, each iteration ensuring that flips do not occur, and in practice, ensuring progress toward satisfying the constraints.",
"We chose to work with a conformal energy as we want to be able to improve mappings where the deformation between the pair of meshes is not isometric.",
"Our editing approach can successfully align the mapping around landmarks without any degradation of the overall mapping.",
"The local surface maps are extracted from their respective deformed segments and parameterization domains, and are then combined to form an improved global surface mapping.",
"Our approach solves an important practical problem and offers three novel scientific contributions.",
"The first is a practical approach for local surface map editing, which we show, using both qualitative and quantitative metrics, provides better results than other stateof-the-art methods.",
"The second involves a compact segmentation which results in a compromise between a low-distortion flattening and a low-distortion deformation when aligning the landmarks.",
"The third is a new deformation approach, ILSCM, which preserves conformal energy better than other state-of-the-art methods, and that has theoretical guarantees preventing the introduction of foldovers.",
"Our approach carves a new path in between the more classical shape-preserving methods, which often lose local injectivity, and the more current methods, which formulate the injectivity constraint as part of the optimization.",
"These latter approaches typically do not have a bound on the shape-preserving error.",
"In our approach, we are minimizing only the shape-preserving term (i.e., LSCM energy) and iteratively improving the user constraints while maintaining a locally injective map in each iteration.",
"We achieve this by carefully controlling the λ parameter in Eq. 1.",
"At one extreme, if λ is very large (i.e., infinity), the formulation is equivalent to the LSCM formulation.",
"If λ is very small, it takes many iterations for the user constraints to be satisfied, or in some cases, the user constraints may ultimately not be satisfied.",
"Our iterative scheme relies on two important observations.",
"If λ is 0, the solution is the same as the initial configuration.",
"Therefore, if we start in a locally injective configuration, the final result will be a locally injective configuration.",
"If the initial configuration is locally injective, there always exists a λ (however small) that will result in a locally injective configuration, where the user constraints are closer to the target.",
"This scheme will converge to a locally injective configuration.",
"Consequently, we iteratively repeat the optimization to fight against flipped faces, but convergence cannot be guaranteed.",
"It is always possible to design a landmark configuration in which the constraints cannot be met without flipped faces.",
"This is true for the other deformation methods as well.",
"Appendix B demonstrates different failure cases using different deformation methods.",
"In our experiments, the constraints are satisfied (up to numerical precision), even for extreme deformations.",
"In our results, we improved mappings which were initially computed from a variety of methods [1] , [3] , [4] , [10] , [21] , [26] .",
"Even if these initial mappings minimize different deformation energies, the fact that we rely on the LSCM conformal energy to edit them did not prevent our approach to improve the mappings.",
"One must keep in mind that the goal of the editing is not to strictly minimize a deformation energy, but to align important semantic features of the objects and maintain injectivity.",
"We analyzed our results to verify the degree to which the deformation deteriorates the shape of the triangles.",
"We checked 13 of the results found in this paper, and we considered that a detrimental deformation is one in which the angle becomes more than 20 times narrower after deformation.",
"Eleven cases had no such triangles, while the two other cases had two and three, respectively.",
"The worst triangle in our 13 test cases was 24 times narrower than before deformation.",
"Any deformation method is prone to result in thin triangles, so we compared our approach to LIM, SLIM, and KP-Newton for six examples.",
"When looking at the worst triangle found in the deformed meshes, ILSCM performed best for four of the test cases, while KP-Newton performed best for two of the test cases.",
"SLIM and LIM were systematically in third and fourth place behind ILSCM and KP-Newton.",
"Furthermore, our results were better than LIM, SLIM, and KP-Newton in terms of shape preservation and final triangulation, as can be seen in Fig. 12 and in the video.",
"We ran our experiments on a 3.40 GHz Intel Core-i7-4770 CPU with 12 GB of memory.",
"The presented approach was implemented with MATLAB, taking advantage of its sparse matrices and linear solvers.",
"Table 1 shows computation times for the segmentation and the deformation (including mapping extraction) phases.",
"Since our deformation phase is an iterative method, the time to edit a mapping depends on the size of the mismatching regions and iterations.",
"We have presented a novel approach for improving surface mappings locally.",
"Our approach is based on a low-distortion region-growing segmentation followed by an independent planar parameterization of each segment.",
"The mapping is then optimized based on an alignment of the user-prescribed landmarks in the parameterization space of each segment.",
"Our joint planar parameterization deformation for the segments is robust, and results in low distortion.",
"Our new iterative LSCM approach can be reused in several contexts where a deformation with low distortion is required.",
"From a practical perspective, our approach has several",
"(a) Mesh A",
"(b) Mesh B, initial mapping [10]",
"(c) Mesh B, WA [10]",
"(d) Mesh B, our edited mapping",
"(e) Mesh A",
"(f) Mesh B, initial mapping [10]",
"(g) Mesh B, WA [10]",
"(h) Mesh B, our edited mapping advantages.",
"It can be used to improve the mapping resulting from",
"(a) Mesh A skeleton",
"(b) Mesh B, initial skeleton [26]",
"(c) Mesh B, edited skeleton Fig. 24 .",
"When using the mapping to retarget attributes, in this case the skeleton, an incorrect mapping will lead to problems, here putting the thumb joint outside of the mesh.",
"By locally editing the mapping, it is easy to fix such issues.",
"any surface mapping method.",
"It also provides a great deal of control, allowing the user to restrict editing to a specific region and to add as few or as many landmarks as necessary to achieve a desired result.",
"Our local editing leads to interesting questions which open many avenues for future work.",
"One such prospective area is higherlevel landmarks such as lines.",
"This will lead to challenges in terms of easing the interactive placement of these lines on both meshes, but will provide a better set of constraints for the deformation.",
"Another avenue would be to extend the scope to editing deformation transfer.",
"This will combine deformation with editing and enable the user to control animation retargeting."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.5365853905677795,
0.0555555522441864,
0.20512820780277252,
0.15789473056793213,
0.14999999105930328,
0.1666666567325592,
0.4000000059604645,
0.3589743673801422,
0.125,
0.22727271914482117,
0,
0.05405404791235924,
0,
0.09756097197532654,
0.14999999105930328,
0.19512194395065308,
0.19230768084526062,
0.11538460850715637,
0.15094339847564697,
0.1818181723356247,
0.2857142686843872,
0.2380952388048172,
0.14999999105930328,
0.1395348757505417,
0.20408162474632263,
0.18518517911434174,
0.23333333432674408,
0.23999999463558197,
0.14999999105930328,
0.12765957415103912,
0.10526315122842789,
0.11538460850715637,
0.13636362552642822,
0.11538460850715637,
0.23076923191547394,
0.10526315122842789,
0.18518517911434174,
0.1621621549129486,
0.0952380895614624,
0.20408162474632263,
0.060606058686971664,
0.05714285373687744,
0.14999999105930328,
0.23076923191547394,
0.11764705181121826,
0.09756097197532654,
0.22727271914482117,
0.05714285373687744,
0,
0.14999999105930328,
0.04255318641662598,
0.1538461446762085,
0.1538461446762085,
0.1538461446762085,
0.15094339847564697,
0.052631575614213943,
0.04999999701976776,
0.1702127605676651,
0.08510638028383255,
0.05405404791235924,
0.07999999821186066,
0.1428571343421936,
0.09756097197532654,
0.10256409645080566,
0.21276594698429108,
0.2222222238779068,
0.09302324801683426,
0.1395348757505417,
0.09999999403953552,
0.27272728085517883,
0.12121211737394333,
0,
0.06451612710952759,
0,
0.06451612710952759,
0,
0.06451612710952759,
0,
0.0624999962747097,
0.22857142984867096,
0,
0,
0,
0.2083333283662796,
0.10810810327529907,
0.13793103396892548,
0.1538461446762085,
0.10256409645080566,
0,
0.19999998807907104,
0.1111111044883728,
0.20512820780277252
] | DNj0cjDNsP | true | [
"We propose a novel approach to improve a given cross-surface mapping through local refinement with a new iterative method to deform the mesh in order to meet user constraints."
] |
[
"Understanding the groundbreaking performance of Deep Neural Networks is one\n",
"of the greatest challenges to the scientific community today.",
"In this work, we\n",
"introduce an information theoretic viewpoint on the behavior of deep networks\n",
"optimization processes and their generalization abilities.",
"By studying the Information\n",
"Plane, the plane of the mutual information between the input variable and\n",
"the desired label, for each hidden layer.",
"Specifically, we show that the training of\n",
"the network is characterized by a rapid increase in the mutual information (MI)\n",
"between the layers and the target label, followed by a longer decrease in the MI\n",
"between the layers and the input variable.",
"Further, we explicitly show that these\n",
"two fundamental information-theoretic quantities correspond to the generalization\n",
"error of the network, as a result of introducing a new generalization bound that is\n",
"exponential in the representation compression.",
"The analysis focuses on typical\n",
"patterns of large-scale problems.",
"For this purpose, we introduce a novel analytic\n",
"bound on the mutual information between consecutive layers in the network.\n",
"An important consequence of our analysis is a super-linear boost in training time\n",
"with the number of non-degenerate hidden layers, demonstrating the computational\n",
"benefit of the hidden layers.",
"Deep Neural Networks (DNNs) heralded a new era in predictive modeling and machine learning.",
"Their ability to learn and generalize has set a new bar on performance, compared to state-of-the-art methods.",
"This improvement is evident across almost every application domain, and especially in areas that involve complicated dependencies between the input variable and the target label BID19 .",
"However, despite their great empirical success, there is still no comprehensive understanding of their optimization process and its relationship to their (remarkable) generalization abilities.This work examines DNNs from an information-theoretic viewpoint.",
"For this purpose we utilize the Information Bottleneck principle BID37 .",
"The Information Bottleneck (IB) is a computational framework for extracting the most compact, yet informative, representation of the input variable (X), with respect to a target label variable (Y ).",
"The IB bound defines the optimal trade-off between representation complexity and its predictive power.",
"Specifically, it is achieved by minimizing the mutual information (MI) between the representation and the input, subject to the level of MI between the representation and the target label.Recent results BID35 , demonstrated that the layers of DNNs tend to converge to the IB optimal bound.",
"The results pointed to a distinction between two phases of the training process.",
"The first phase is characterized by an increase in the MI with the label (i.e. fitting the training data), whereas in the second and most important phase, the training error was slowly reduced with a decrease in mutual information between the layers and the input (i.e. representation compression).",
"These two phases appear to correspond to fast convergence to a flat minimum (drift) following a random walk, or diffusion, in the vicinity of the training error's flat minimum, as reported in other studies (e.g. BID39 ).These",
"observations raised several interesting questions: (a) which",
"properties of the SGD optimization cause these two training phases? (b) how can",
"the diffusion phase improve generalization perfor-mance? (c) can the",
"representation compression explain the convergence of the layers to the optimal IB bound? (d) can this",
"diffusion phase explain the benefit of many hidden layers?In this work",
"we attempt to answer these questions. Specifically",
", we draw important connections between recent results inspired by statistical mechanics and information-theoretic principles. We show that",
"the layers of a DNN indeed follow the behavior described by BID35 . We claim that",
"the reason may be found in the Stochastic Gradient Decent (SGD) optimization mechanism. We show that",
"the first phase of the SGD is characterized by a rapid decrease in the training error, which corresponds to an increase in the MI with the labels. Then, the SGD",
"behaves like non-homogeneous Brownian motion in the weights space, in the proximity of a flat error minimum. This non-homogeneous",
"diffusion corresponds to a decrease in MI between the layers and the input variable, in \"directions\" that are irrelevant to the target label.One of the main challenges in applying information theoretic measures to real-world data is a reasonable estimation of high dimensional joint distributions. This problem has been",
"extensively studied over the years (e.g. BID28 ), and has led the conclusion that there is no \"efficient\" solution when the dimension of the problem is large. Recently, a number of",
"studies have focused on calculating the MI in DNNs using Statistical Mechanics. These methods have generated",
"promising results in a variety of special cases BID8 , which support many of the observations made by BID35 .In this work we provide an analytic",
"bound on the MI between consecutive layers, which is valid for any non-linearity of the units, and directly demonstrates the compression of the representation during the diffusion phase. Specifically, we derive a Gaussian",
"bound that only depends on the linear part of the layers. This bound gives a super linear dependence",
"of the convergence time of the layers, which in turn enables us to prove the super-linear computational benefit of the hidden layers. Further, the Gaussian bound allows us to study",
"mutual information values in DNNs in real-world data without estimating them directly.",
"In this work we study DNNs using information-theoretic principles.",
"We describe the training process of the network as two separate phases, as has been previously done by others.",
"In the first phase (drift) we show that I(T k ; Y ) increases, corresponding to improved generalization with ERM.",
"In the second phase (diffusion), the representation information, I(X; T k ) slowly decreases, while I(T K ; Y ) continues to increase.",
"We rigorously prove that the representation compression is a direct consequence of the diffusion phase, independent of the non-linearity of the activation function.",
"We provide a new Gaussian bound on the representation compression and then relate the diffusion exponent to the compression time.",
"One key outcome of this analysis is a novel proof of the computational benefit of the hidden layers, where we show that they boost the overall convergence time of the network by at least a factor of K 2 , where K is the number of non-degenerate hidden layers.",
"This boost can be exponential in the number of hidden layers if the diffusion is \"ultra slow\", as recently reported.1 m m i=1 h (x i , y i ) be the empirical error.",
"Hoeffding's inequality BID12 shows that for every h ∈ H, DISPLAYFORM0 Then, we can apply the union bound and conclude that DISPLAYFORM1 We want to control the above probability with a confidence level of δ.",
"Therefore, we ask that 2 H exp −2 2 m ≤ δ.",
"This leads to a PAC bound, which states that for a fixed m and for every h ∈ H, we have with probability 1 − δ that DISPLAYFORM2 Note that under the definitions stated in Section 1.1, we have that |H| ≤ 2 X .",
"However, the PAC bound above also holds for a infinite hypotheses class, where log |H| is replaced with the VC dimension of the problem, with several additional constants BID38 BID34 BID32 .Let",
"us now assume that X is a d-dimensional random vector which follows a Markov random field structure. As",
"stated above, this means that p(x i ) = i p(x i |P a(x i )) where P a(X i ) is a set of components in the vector X that are adjacent to X i . Assuming",
"that the Markov random field is ergodic, we can define a typical set of realizations from X as a set that satisfies the Asymptotic Equipartition Property (AEP) BID6 . Therefore",
", for every > 0, the probability of a sequence drawn from X to be in the typical set A is greater than 1 − and |A | ≤ 2 H(X)+ . Hence, if",
"we only consider a typical realization of X (as opposed to every possible realization), we have that asymptotically H ≤ 2 H(X) . Finally,",
"let T be a mapping of X. Then, 2 H(X|T ) is the number of typical realizations of X that are mapped to T . This means",
"that the size of the typical set of T is bounded from above by 2 H(X) 2 H(X|T ) = 2 I(X;T ) . Plugging this",
"into the PAC bound above yields that with probability 1 − δ, the typical squared generalization error of T ,"
] | [
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.14814814925193787,
0.1599999964237213,
0,
0.7142857313156128,
0.52173912525177,
0.0952380895614624,
0.29629629850387573,
0.0833333283662796,
0.1666666567325592,
0.13793103396892548,
0.13333332538604736,
0.17391304671764374,
0,
0.1599999964237213,
0.19999998807907104,
0.09090908616781235,
0.09090908616781235,
0.0952380895614624,
0,
0.2142857164144516,
0.06666666269302368,
0.1538461446762085,
0.1818181723356247,
0.06451612710952759,
0.12121211737394333,
0.09756097197532654,
0.3404255211353302,
0.07407406717538834,
0.09090908616781235,
0.12903225421905518,
0.15686273574829102,
0.13333332538604736,
0.14814814925193787,
0.08163265138864517,
0,
0.19999998807907104,
0.1599999964237213,
0.12903225421905518,
0.1428571343421936,
0,
0.05714285373687744,
0.19354838132858276,
0.1249999925494194,
0.14999999105930328,
0.12121211737394333,
0.17543859779834747,
0.13636362552642822,
0.1249999925494194,
0.1428571343421936,
0.17777776718139648,
0.1875,
0.10256409645080566,
0.0714285671710968,
0,
0.11764705181121826,
0.10810810327529907,
0.052631575614213943,
0.11428570747375488,
0.1764705777168274,
0.07692307233810425,
0.08510638028383255,
0.11999999731779099,
0,
0.07407406717538834,
0.08695651590824127,
0,
0.08888888359069824,
0.0952380895614624,
0.12244897335767746,
0.05128204822540283,
0.10256409645080566,
0.10810810327529907,
0.1666666567325592
] | SkeL6sCqK7 | true | [
"Introduce an information theoretic viewpoint on the behavior of deep networks optimization processes and their generalization abilities"
] |
[
"We propose and study a method for learning interpretable representations for the task of regression.",
"Features are represented as networks of multi-type expression trees comprised of activation functions common in neural networks in addition to other elementary functions.",
"Differentiable features are trained via gradient descent, and the performance of features in a linear model is used to weight the rate of change among subcomponents of each representation.",
"The search process maintains an archive of representations with accuracy-complexity trade-offs to assist in generalization and interpretation.",
"We compare several stochastic optimization approaches within this framework.",
"We benchmark these variants on 100 open-source regression problems in comparison to state-of-the-art machine learning approaches.",
"Our main finding is that this approach produces the highest average test scores across problems while producing representations that are orders of magnitude smaller than the next best performing method (gradient boosting).",
"We also report a negative result in which attempts to directly optimize the disentanglement of the representation result in more highly correlated features.",
"The performance of a machine learning (ML) model depends primarily on the data representation used in training BID3 , and for this reason the representational capacity of neural networks (NN) is considered a central factor in their success in many applications BID19 .",
"To date, there does not seem to be a consensus on how the architecture should be designed.",
"As problems grow in complexity, the networks proposed to tackle them grow as well, leading to an intractable design space.",
"One design approach is to tune network architectures through network hyperparameters using grid search or randomized search BID4 with cross validation.",
"Often some combination of hyperparameter tuning and manual design by expertise/intuition is done BID19 .",
"Many approaches to network architecture search exist, including weight sharing BID53 and reinforcement learning BID70 .",
"Another potential solution explored in this work (and others) is to use population-based stochastic optimization (SO) methods, also known as metaheuristics BID44 .",
"In SO, several candidate solutions are evaluated and varied over several iterations, and heuristics are used to select and update the candidate networks until the population produces a desirable architecture.",
"The general approach has been studied at least since the late 80s in various forms BID45 BID69 BID60 for NN design, with several recent applications BID55 BID28 BID9 BID54 .In",
"practice, the adequacy of the architecture is often dependent on conflicting objectives. For",
"example, interpretability may be a central concern, because many researchers in the scientific community rely on ML models not only to provide predictions that match data from various processes, but to provide insight into the nature of the processes themselves. Approaches",
"to interpretability can be roughly grouped into semantic and syntactic approaches. Semantic approaches",
"encompass methods that attempt to elucidate the behavior of a model under various input conditions as a way of explanation (e.g. BID56 ). Syntactic methods instead",
"focus on the development of concise models that offer insight by virtue of their simplicity, in a similar vein to models built from first-principles (e.g. BID63 BID57 ). Akin to the latter group,",
"our goal is to discover the simplest description of a process whose predictions generalize as well as possible.Good representations should also disentangle the factors of variation BID3 in the data, in order to ease model interpretation. Disentanglement implies functional",
"modularity; i.e., sub-components of the network should encapsulate behaviors that model a sub-process of the task. In this sense, stochastic methods",
"such as evolutionary computation (EC) appear well-motivated, as they are premised on the identification and propagation of building blocks of solutions BID23 . Experiments with EC applied to networks",
"suggest it pressures networks to be modular BID24 BID29 . Although the identification functional",
"building blocks of solutions sounds ideal, we have no way of knowing a priori whether a given problem will admit the identification of building blocks of solutions via heuristic search BID49 . Our goal in this paper is thus to empirically",
"assess the performance of several SO approaches in a system designed to produce intelligible representations from NN building blocks for regression.In Section 2, we introduce a new method for optimizing representations that we call the feature engineering automation tool (FEAT) 1 . The purpose of this method is to optimize an",
"archive of representations that characterize the trade-off between conciseness and accuracy among representations. Algorithmically, two aspects of the method distinguish",
"FEAT from previous work. First, it represents the internal structure of each NN",
"as a set of syntax trees, with the goal of improving the transparency of the resultant representations. Second, it uses weights learned via gradient descent to",
"provide feedback to the variation process at a more granular level. We compare several multi-objective variants of this approach",
"using EC and non-EC methods with different sets of objectives.We discuss related work in more detail in Section 3. In section 4 and 5, we describe and conduct an experiment that",
"benchmarks FEAT against state-of-the-art ML methods on 100 open-source regression problems. Future work based on this analysis is discussed in Section 6,",
"and additional detailed results are provided in the Appendix.",
"This paper proposes a feature engineering archive tool that optimizes neural network architectures by representing them as syntax trees.",
"FEAT uses model weights as feedback to guide network variation in an EC optimization algorithm.",
"We conduct a thorough analysis of this method applied to the task of regression in comparison to state-of-the-art methods.",
"The results suggest that FEAT achieves state-of-the-art performance on regression tasks while producing representations that are significantly less complex than those resulting from similarly performing methods.",
"This improvement comes at an additional computational cost, limited in this study to 60 minutes per training instance.",
"We expect this limitation to be reasonable for many applications where intelligibility is the prime motivation.Future work should consider the issue of representation disentanglement in more depth.",
"Our attempts to include additional search objectives that explicitly minimize multicollinearity were not successful.",
"Although more analysis is needed to confirm this, we suspect that the model selection procedure (Section 2.1, step 3) permits highly collinear representations to be chosen.",
"This is because multicollinearity primarily affects the standard errors ofβ BID2 , and is not necessarily detrimental to validation error.",
"Therefore it could be incorrect to expect the model selection procedure to effectively choose more disentangled representations.",
"Besides improving the model selection procedure, it may be fruitful to pressure disentanglement at other stages of the search process.",
"For example, the variation process could prune highly correlated features, or the disentanglement metric could be combined with error into a single loss function with a tunable parameter.",
"We hope to pursue these ideas in future studies."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.2857142686843872,
0.19999998807907104,
0.21739129722118378,
0.15789473056793213,
0,
0.10810810327529907,
0.07843136787414551,
0.19512194395065308,
0.17241378128528595,
0.21621620655059814,
0.1538461446762085,
0.09999999403953552,
0.11428570747375488,
0.2222222238779068,
0.09302324801683426,
0.2222222238779068,
0.039215680211782455,
0.1818181723356247,
0.17241378128528595,
0.12121211737394333,
0.2222222238779068,
0.2745097875595093,
0.1818181723356247,
0.1904761791229248,
0.21276594698429108,
0.11764705181121826,
0.14814814925193787,
0.1875,
0.15789473056793213,
0.1764705777168274,
0.3255814015865326,
0.19999998807907104,
0.07999999821186066,
0.0476190410554409,
0.13333332538604736,
0.25,
0.1666666567325592,
0.2631579041481018,
0.04347825422883034,
0.051282044500112534,
0.1249999925494194,
0.05714285373687744,
0.08510638028383255,
0.14999999105930328,
0.10810810327529907,
0.14999999105930328,
0.08888888359069824,
0.06666666269302368
] | Hke-JhA9Y7 | true | [
"Representing the network architecture as a set of syntax trees and optimizing their structure leads to accurate and concise regression models. "
] |
[
"Most distributed machine learning (ML) systems store a copy of the model parameters locally on each machine to minimize network communication.",
"In practice, in order to reduce synchronization waiting time, these copies of the model are not necessarily updated in lock-step, and can become stale.",
"Despite much development in large-scale ML, the effect of staleness on the learning efficiency is inconclusive, mainly because it is challenging to control or monitor the staleness in complex distributed environments.",
"In this work, we study the convergence behaviors of a wide array of ML models and algorithms under delayed updates.",
"Our extensive experiments reveal the rich diversity of the effects of staleness on the convergence of ML algorithms and offer insights into seemingly contradictory reports in the literature.",
"The empirical findings also inspire a new convergence analysis of SGD in non-convex optimization under staleness, matching the best-known convergence rate of O(1/\\sqrt{T}).",
"With the advent of big data and complex models, there is a growing body of works on scaling machine learning under synchronous and non-synchronous 1 distributed execution BID8 BID11 BID29 .",
"These works, however, point to seemingly contradictory conclusions on whether non-synchronous execution outperforms synchronous counterparts in terms of absolute convergence, which is measured by the wall clock time to reach the desired model quality.",
"For deep neural networks, BID2 ; BID8 show that fully asynchronous systems achieve high scalability and model quality, but others argue that synchronous training converges faster BID1 BID5 .",
"The disagreement goes beyond deep learning models: ; BID49 ; BID26 ; BID31 ; BID41 empirically and theoretically show that many algorithms scale effectively under non-synchronous settings, but BID36 ; ; demonstrate significant penalties from asynchrony.The crux of the disagreement lies in the trade-off between two factors contributing to the absolute convergence: statistical efficiency and system throughput.",
"Statistical efficiency measures convergence per algorithmic step (e.g., a mini-batch), while system throughput captures the performance of the underlying implementation and hardware.",
"Non-synchronous execution can improve system throughput due to lower synchronization overheads, which is well understood BID1 BID4 BID2 .",
"However, by allowing various workers to use stale versions of the model that do not always reflect the latest updates, non-synchronous systems can exhibit lower statistical efficiency BID1 BID5 .",
"How statistical efficiency and system throughput trade off in distributed systems, however, is far from clear.The difficulties in understanding the trade-off arise because statistical efficiency and system throughput are coupled during execution in distributed environments.",
"Non-synchronous executions are in general non-deterministic, which can be difficult to profile.",
"Furthermore, large-scale experiments 2 RELATED WORK Staleness is reported to help absolute convergence for distributed deep learning in BID2 ; BID8 ; and has minimal impact on convergence BID31 BID6 BID51 BID32 .",
"But BID1 ; BID5 show significant negative effects of staleness.",
"LDA training is generally insensitive to staleness BID44 BID47 BID7 , and so is MF training BID48 BID33 BID4 BID49 .",
"However, none of their evaluations quantifies the level of staleness in the systems.",
"By explicitly controlling the staleness, we decouple the distributed execution, which is hard to control, from ML convergence outcomes.We focus on algorithms that are commonly used in large-scale optimization BID11 BID1 BID8 , instead of methods specifically designed to minimize synchronization BID39 BID43 BID20 .",
"Non-synchronous execution has theoretical underpinning BID30 BID49 BID31 BID41 .",
"Here we study algorithms that do not necessarily satisfy assumptions in their analyses.",
"In this work, we study the convergence behaviors under delayed updates for a wide array of models and algorithms.",
"Our extensive experiments reveal that staleness appears to be a key governing parameter in learning.",
"Overall staleness slows down the convergence, and under high staleness levels the convergence can progress very slowly or fail.",
"The effects of staleness are highly problem 10 Cosine similarity is closely related to the coherence measure in Definition 1.",
"11 Low gradient coherence during the early part of optimization is consistent with the common heuristics to use fewer workers at the beginning in asynchronous training.",
"BID31 also requires the number of workers to follow DISPLAYFORM0 where K is the iteration number.dependent, influenced by model complexity, choice of the algorithms, the number of workers, and the model itself, among others.",
"Our empirical findings inspire new analyses of non-convex optimization under asynchrony based on gradient coherence, matching the existing rate of O(1/ √ T ).Our",
"findings have clear implications for distributed ML. To",
"achieve actual speed-up in absolute convergence, any distributed ML system needs to overcome the slowdown from staleness, and carefully trade off between system throughput gains and statistical penalties. Many",
"ML methods indeed demonstrate certain robustness against low staleness, which should offer opportunities for system optimization. Our",
"results support the broader observation that existing successful nonsynchronous systems generally keep staleness low and use algorithms efficient under staleness."
] | [
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.2857142686843872,
0.21052631735801697,
0.2926829159259796,
0.29411762952804565,
0.42105263471603394,
0.1666666567325592,
0.3720930218696594,
0.25531914830207825,
0.0476190447807312,
0.2222222238779068,
0.15789473056793213,
0.060606054961681366,
0.1395348757505417,
0.1860465109348297,
0.07407406717538834,
0.17777776718139648,
0.23999999463558197,
0.12121211737394333,
0.307692289352417,
0.17241379618644714,
0.1666666567325592,
0.2142857164144516,
0.29411762952804565,
0.19999998807907104,
0.1875,
0.2857142686843872,
0.1538461446762085,
0.1463414579629898,
0.15789473056793213,
0,
0.1428571343421936,
0,
0.23529411852359772
] | BylQV305YQ | true | [
"Empirical and theoretical study of the effects of staleness in non-synchronous execution on machine learning algorithms."
] |
[
"System identification is the process of building a mathematical model of an unknown system from measurements of its inputs and outputs.",
"It is a key step for model-based control, estimator design, and output prediction.",
"This work presents an algorithm for non-linear offline system identification from partial observations, i.e. situations in which the system's full-state is not directly observable.",
"The algorithm presented, called SISL, iteratively infers the system's full state through non-linear optimization and then updates the model parameters.",
"We test our algorithm on a simulated system of coupled Lorenz attractors, showing our algorithm's ability to identify high-dimensional systems that prove intractable for particle-based approaches.",
"We also use SISL to identify the dynamics of an aerobatic helicopter.",
"By augmenting the state with unobserved fluid states, we learn a model that predicts the acceleration of the helicopter better than state-of-the-art approaches.",
"The performance of controllers and state-estimators for non-linear systems depends heavily on the quality of the model of system dynamics (Hou & Wang, 2013) .",
"System-identification tackles the problem of learning or calibrating dynamics models from data (Ljung, 1999) , which is often a timehistory of observations of the system and control inputs.",
"In this work, we address the problem of learning dynamics models of partially observed, high-dimensional non-linear systems.",
"That is, we consider situations in which the system's state cannot be inferred from a single observation, but instead requires inference over a time-series of observations.",
"The problem of identifying systems from partial observations arises in many robotics domains (Punjani & Abbeel, 2015; Cory & Tedrake, 2008; Ordonez et al., 2017) .",
"Though we often have direct measurements of a robot's pose and velocity, in many cases we cannot directly observe relevant quantities such as the temperature of actuators or the state the environment around the robot.",
"Consider learning a dynamics model for an aerobatic helicopter.",
"Abbeel et al. (2010) attempted to map only the helicopter's pose and velocity to its acceleration and they found their model to be inaccurate when predicting aggressive maneuvers.",
"They posited that the substantial airflow generated by the helicopter affected the dynamics.",
"Since it is often impossible to directly measure the state of the airflow around a vehicle, identification must be performed in a partially observed setting.",
"System-identification is a mature field with a rich history (Ljung, 1999; 2010) .",
"Various techniques can be classified by whether they apply to linear or non-linear systems, with partially or fully observed states.",
"Additionally, techniques are applied in an online or batch-offline setting.",
"This work presents an approach to offline identification of non-linear and partially observed systems.",
"When a system is fully observed, i.e. its full state is observed but corrupted by noise, a set of techniques called equation-error methods are typically employed (Åström & Eykhoff, 1971) .",
"In such cases, we can consider observations as independent, and minimize the error between the observed statederivatives and those predicted by the model given the control input and observed states.",
"In partially observed settings, merely knowing the current input is insufficient to accurately predict the observation.",
"Several black-box approaches exist to predict observations from time-series of inputs.",
"Autoregressive approaches directly map a time-history of past inputs to observations (Billings, 2013) .",
"Recurrent neural networks (Bailer-Jones et al., 1998; Zimmermann & Neuneier, 2000) and subspace-identification methods (Van Overschee & De Moor, 1994) can also be used to learn blackbox dynamical systems from this data.",
"However, in many cases prior knowledge can be used to specify structured, parameterized models of the system (Gupta et al., 2019) .",
"Such models can be trained with less data and used with a wider array of control and state-estimation techniques than non-linear black-box models (Gupta et al., 2019; Lutter et al., 2019b; a) .",
"Techniques used to identify partially observed structured models are often based on Expectation-Maximization (EM) (Dempster et al., 1977; Schön et al., 2011; Kantas et al., 2015; Ghahramani & Roweis, 1999 ).",
"An alternating procedure is performed in which a smoothing step uses the current system dynamics estimate to infer the distribution over state-trajectories, and is followed by a learning step that uses this distribution to update the system dynamics estimate.",
"In the non-linear or non-Gaussian case, it is typically not possible to analytically characterize the distribution over trajectories, and thus methods based on Sequential Monte-Carlo such as Particle Smoothing (PS) (Schön et al., 2011; Kantas et al., 2015) , or Extended Kalman Smoothing (EKS) (Ghahramani & Roweis, 1999) are employed in the E-step.",
"Though considered state-of-theart for this problem, both methods become intractable in high-dimensional state spaces.",
"PS suffers from the curse of dimensionality, requiring an intractably large number of particles if the state space is high-dimensional (Snyder et al., 2008; Kantas et al., 2015) , and an M-step that can be quadratic in complexity with respect to the number of particles (Schön et al., 2011) .",
"EKS-based methods are fast during the E-step, but the M-step requires approximations to integrate out state uncertainty, such as fitting non-linearities with Radial Basis Function approximators, and scales poorly with the dimension of the state-space (Ghahramani & Roweis, 1999) .",
"In this work, we present a system-identification algorithm that is suited for high-dimensional, nonlinear, and partially observed systems.",
"By assuming that the systems are close to deterministic, as is often the case in robotics, we approximate the distribution over unobserved states using only their maximum-likelihood (ML) point-estimate.",
"Our algorithm, called SISL (System-identification via Iterative Smoothing and Learning) performs the following two steps until convergence:",
"• In the smoothing or E-step, we use non-linear programming to tractably find the ML pointestimate of the unobserved states.",
"• In the learning or M-step, we use the estimate of unobserved states to improve the estimate of system parameters.",
"The idea to use an ML point-estimate in lieu of the distribution over unobserved states in the EM procedure's E-step is not new, and, in general, does not guarantee monotonic convergence to a local optimum (Celeux & Govaert, 1992) .",
"However, such an approximation is equivalent to regular EM if the ML point-estimate is the only instance of unobserved variables with non-negligible probability (Celeux & Govaert, 1992; Neal & Hinton, 1998) .",
"We apply this idea to the problem of system-identification for nearly deterministic systems, in which ML point-estimates can serve as surrogates for the true distribution over unobserved state-trajectories.",
"The primary contribution of this work is an algorithm for identifying non-linear, partially observed systems that is able to scale to high-dimensional problems.",
"In Section 2, we specify the assumptions underpinning our algorithm and discuss the computational methodology for using it.",
"In Section 3, we empirically demonstrate that it is able to identify the parameters of a high-dimensional system of coupled Lorenz attractors, a problem that proves intractable for particle-based methods.",
"We also demonstrate our algorithm on the problem of identifying the dynamics of an aerobatic helicopter, and compare against various approaches including the state-of-the-art approach (Punjani & Abbeel, 2015) .",
"This paper presented an algorithm for system identification of non-linear systems given partial state observations.",
"The algorithm optimizes system parameters given a time history of observations by iteratively finding the most likely state-history, and then using it to optimize the system parameters.",
"The approach is particularly well suited for high-dimensional and nearly deterministic problems.",
"In simulated experiments on a partially observed system of coupled Lorenz attractors, we showed that our algorithm can perform identification on a problem that particle-based EM methods are fundamentally ill-suited for.",
"We also validated that our algorithm is an effective replacement for identification methods based on EM if the system is close to deterministic, but can yield biased parameter estimates if it is not.",
"We then used our algorithm to model the time-varying hiddenstates that affect the dynamics of an aerobatic helicopter.",
"Our approach outperforms state-of-the-art methods because it is able to fit large non-linear models to unobserved states.",
"We aim to apply our algorithm to system identification problems in a number of domains.",
"There has recently been interest in characterizing the dynamics of aircraft with high aspect ratios, for which the difficult-to-observe bending modes substantially impact dynamics.",
"Additionally, the inability to measure friction forces in dynamic interactions involving contact typically stands in the way of system identification, and thus requires algorithms that are capable of identification under partial observation.",
"A APPENDIX"
] | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.24242423474788666,
0.14814814925193787,
0.5641025304794312,
0.12121211737394333,
0.20512820780277252,
0,
0.05714285373687744,
0.17142856121063232,
0.20512820780277252,
0.06666666269302368,
0.1538461446762085,
0.1538461446762085,
0.045454539358615875,
0.17391303181648254,
0,
0,
0.10810810327529907,
0.07999999821186066,
0.060606054961681366,
0,
0.4285714328289032,
0.09302324801683426,
0.052631575614213943,
0,
0.1599999964237213,
0.14814814925193787,
0.043478257954120636,
0.0555555522441864,
0.0952380895614624,
0,
0.0952380895614624,
0.032786883413791656,
0.0714285671710968,
0.038461536169052124,
0,
0.1875,
0,
0,
0.0624999962747097,
0.06666666269302368,
0.0416666641831398,
0,
0.04999999701976776,
0.17142856121063232,
0.12903225421905518,
0.1463414579629898,
0.04999999701976776,
0.5517241358757019,
0.21052631735801697,
0.07692307233810425,
0.2380952388048172,
0.1818181723356247,
0.06451612710952759,
0.06666666269302368,
0.2857142686843872,
0.0555555522441864,
0.1395348757505417
] | B1gR3ANFPS | true | [
"This work presents a scalable algorithm for non-linear offline system identification from partial observations."
] |
[
"Various gradient compression schemes have been proposed to mitigate the communication cost in distributed training of large scale machine learning models.",
"Sign-based methods, such as signSGD (Bernstein et al., 2018), have recently been gaining popularity because of their simple compression rule and connection to adaptive gradient methods, like ADAM.",
"In this paper, we perform a general analysis of sign-based methods for non-convex optimization.",
"Our analysis is built on intuitive bounds on success probabilities and does not rely on special noise distributions nor on the boundedness of the variance of stochastic gradients.",
"Extending the theory to distributed setting within a parameter server framework, we assure exponentially fast variance reduction with respect to number of nodes, maintaining 1-bit compression in both directions and using small mini-batch sizes.",
"We validate our theoretical findings experimentally.",
"One of the key factors behind the success of modern machine learning models is the availability of large amounts of training data (Bottou & Le Cun, 2003; Krizhevsky et al., 2012; Schmidhuber, 2015) .",
"However, the state-of-the-art deep learning models deployed in industry typically rely on datasets too large to fit the memory of a single computer, and hence the training data is typically split and stored across a number of compute nodes capable of working in parallel.",
"Training such models then amounts to solving optimization problems of the form",
"where f m : R d → R represents the non-convex loss of a deep learning model parameterized by x ∈ R d associated with data stored on node m.",
"Arguably, stochastic gradient descent (SGD) (Robbins & Monro, 1951; Vaswani et al., 2019; Qian et al., 2019) in of its many variants (Kingma & Ba, 2015; Duchi et al., 2011; Schmidt et al., 2017; Zeiler, 2012; Ghadimi & Lan, 2013 ) is the most popular algorithm for solving (1).",
"In its basic implementation, all workers m ∈ {1, 2, . . . , M } in parallel compute a random approximation g m (x k ) of ∇f m (x k ), known as the stochastic gradient.",
"These approximations are then sent to a master node which performs the aggregation",
"The aggregated vector is subsequently broadcast back to the nodes, each of which performs an update of the form x k+1 = x k − γ kĝ (x k ), thus updating their local copies of the parameters of the model."
] | [
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.052631575614213943,
0.04444443807005882,
0.3870967626571655,
0.4000000059604645,
0.03999999538064003,
0,
0.08695651590824127,
0.07547169178724289,
0.06896550953388214,
0.1395348757505417,
0.06779660284519196,
0.08163265138864517,
0,
0.03999999538064003
] | rkxNelrKPB | true | [
"General analysis of sign-based methods (e.g. signSGD) for non-convex optimization, built on intuitive bounds on success probabilities."
] |
[
"Off-policy learning, the task of evaluating and improving policies using historic data collected from a logging policy, is important because on-policy evaluation is usually expensive and has adverse impacts.",
"One of the major challenge of off-policy learning is to derive counterfactual estimators that also has low variance and thus low generalization error. \n",
"In this work, inspired by learning bounds for importance sampling problems, we present a new counterfactual learning principle for off-policy learning with bandit feedbacks.Our method regularizes the generalization error by minimizing the distribution divergence between the logging policy and the new policy, and removes the need for iterating through all training samples to compute sample variance regularization in prior work.",
"With neural network policies, our end-to-end training algorithms using variational divergence minimization showed significant improvement over conventional baseline algorithms and is also consistent with our theoretical results.",
"Off-policy learning refers to evaluating and improving a deterministic policy using historic data collected from a stationary policy, which is important because in real-world scenarios on-policy evaluation is oftentimes expensive and has adverse impacts.",
"For instance, evaluating a new treatment option, a clinical policy, by administering it to patients requires rigorous human clinical trials, in which patients are exposed to risks of serious side effects.",
"As another example, an online advertising A/B testing can incur high cost for advertisers and bring them few gains.",
"Therefore, we need to utilize historic data to perform off-policy evaluation and learning that can enable safe exploration of the hypothesis space of policies before deploying them.There has been extensive studies on off-policy learning in the context of reinforcement learning and contextual bandits, including various methods such as Q learning BID33 ), doubly robust estimator BID8 ), self-normalized (Swaminathan & Joachims (2015b) ), etc.",
"A recently emerging direction of off-policy learning involves the use of logged interaction data with bandit feedback.",
"However, in this setting, we can only observe limited feedback, often in the form of a scalar reward or loss, for every action; a larger amount of information about other possibilities is never revealed, such as what reward we could have obtained had we taken another action, the best action we should have take, and the relationship between the change in policy and the change in reward.",
"For example, after an item is suggested to a user by an online recommendation system, although we can observe the user's subsequent interactions with this particular item, we cannot anticipate the user's reaction to other items that could have been the better options.Using historic data to perform off-policy learning in bandit feedback case faces a common challenge in counterfactual inference: How do we handle the distribution mismatch between the logging policy and a new policy and the induced generalization error?",
"To answer this question, BID34 derived the new counterfactual risk minimization framework, that added the sample variance as a regularization term into conventional empirical risk minimization objective.",
"However, the parametrization of policies in their work as linear stochastic models has limited representation power, and the computation of sample variance regularization requires iterating through all training samples.",
"Although a first-order approximation technique was proposed in the paper, deriving accurate and efficient end-to-end training algorithms under this framework still remains a challenging task.Our contribution in this paper is three-fold:1.",
"By drawing a connection to the generalization error bound of importance sampling BID6 ), we propose a new learning principle for off-policy learning with bandit feedback.",
"We explicitly regularize the generalization error of the new policy by minimizing the distribution divergence between it and the logging policy.",
"The proposed learning objective automatically trade off between emipircal risk and sample variance.",
"2. To enable end-to-end training, we propose to parametrize the policy as a neural network, and solves the divergence minimization problem using recent work on variational divergence minimization BID26 ) and Gumbel soft-max BID18 ) sampling.",
"3. Our experiment evaluation on benchmark datasets shows significant improvement in performance over conventional baselines, and case studies also corroborates the soundness of our theoretical proofs.",
"In this paper, we started from an intuition that explicitly regularizing variance can help improve the generalization performance of off-policy learning for logged bandit datasets, and proposed a new training principle inspired by learning bounds for importance sampling problems.",
"The theoretical discussion guided us to a training objective as the combination of importance reweighted loss and a regularization term of distribution divergence measuring the distribution match between the logging policy and the policy we are learning.",
"By applying variational divergence minimization and Gumbel soft-max sampling techniques, we are able to train neural network policies end-to-end to minimize the variance regularized objective.",
"Evaluations on benchmark datasets proved the effectiveness of our learning principle and training algorithm, and further case studies also verified our theoretical discussion.Limitations of the work mainly lies in the need for the propensity scores (the probability an action is taken by the logging policy), which may not always be available.",
"Learning to estimate propensity scores and plug the estimation into our training framework will increase the applicability of our algorithms.",
"For example, as suggested by BID6 , directly learning importance weights (the ratio between new policy probability to the logging policy probability) has comparable theoretical guarantees, which might be a good extension for the proposed algorithm.Although the work focuses on off-policy from logged data, the techniques and theorems may be extended to general supervised learning and reinforcement learning.",
"It will be interesting to study how A. PROOFS DISPLAYFORM0 We apply Lemma 1 to z, importance sampling weight function w(z) = p(z)/p 0 (z) = h(y|x)/h 0 (y|x), and loss l(z)/L, we have DISPLAYFORM1 Thus, we have DISPLAYFORM2 Proof.",
"For a single hypothesis denoted as δ with values DISPLAYFORM3 By Lemma 1, the variance can be bounded using Reni divergence as DISPLAYFORM4 Applying Bernstein's concentration bounds we have DISPLAYFORM5 σ 2 (Z)+ LM/3 ), we can obtain that with probability at least 1 − η, the following bounds for importance sampling of bandit learning holds DISPLAYFORM6 , where the second inequality comes from the fact that DISPLAYFORM7 sampled from logging policy h 0 ; regularization hyper-parameter λ Result: An optimized generator h * θ (y|x) that is an approximate minimizer of R(w) initialization; while Not Converged do / * Update discriminator * / Sample a mini-batch of 'fake' samples (x i ,ŷ i ) with x i from D andŷ i ∼ h θ t (y|x i ); Sample a mini-batch of 'real' samples (x i , y i ) from D ; Update w t+1 = w t + η w ∂F (T w , h θ )(10) ; / * Update generator * / Sample a mini-batch of m samples from D ; Sample a mini-batch of m 1 'fake' samples ; Estimate the generator gradient as g 2 = F (T w , h θ )(10) ; Update θ t+1 = θ t − η θ (g 1 + λg 2 ) ; end Algorithm 3: Minimizing Variance Regularized Risk -Co-Training Version"
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.11999999731779099,
0.2666666507720947,
0.27397260069847107,
0.1249999925494194,
0.18518517911434174,
0.1599999964237213,
0.0476190410554409,
0.12987013161182404,
0.20512819290161133,
0.08219178020954132,
0.22727271914482117,
0.21276594698429108,
0.11999999731779099,
0.07692307233810425,
0.3404255211353302,
0.09999999403953552,
0.1666666567325592,
0.14814814925193787,
0.12244897335767746,
0.29999998211860657,
0.19230768084526062,
0.1702127605676651,
0.14705881476402283,
0.04878048226237297,
0.24657534062862396,
0.06896550953388214,
0.09210526198148727
] | SyPMT6gAb | true | [
"For off-policy learning with bandit feedbacks, we propose a new variance regularized counterfactual learning algorithm, which has both theoretical foundations and superior empirical performance."
] |
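The Gumbel soft-max sampling referenced above has a compact form that may help make the training recipe concrete. The sketch below is a minimal NumPy illustration of the standard Gumbel-Softmax relaxation, not the paper's implementation; the temperature `tau`, the logit values, and the function name are assumptions introduced for the example.

```python
import numpy as np

def gumbel_softmax_sample(logits, tau=1.0, rng=None):
    """Draw a relaxed (differentiable) sample from a categorical distribution.

    logits: unnormalized log-probabilities over actions, shape (num_actions,)
    tau:    temperature; smaller values push the sample toward one-hot.
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(low=1e-10, high=1.0, size=logits.shape)
    gumbel_noise = -np.log(-np.log(u))          # Gumbel(0, 1) noise
    y = (logits + gumbel_noise) / tau
    y = y - y.max()                             # numerical stability
    return np.exp(y) / np.exp(y).sum()          # relaxed one-hot sample

# Example: relaxing a 4-action policy head (logits are made up for illustration)
logits = np.array([1.2, -0.3, 0.5, 0.0])
print(gumbel_softmax_sample(logits, tau=0.5, rng=np.random.default_rng(0)))
```

Lowering `tau` pushes the relaxed sample toward a one-hot action choice, which is what allows the policy sampling step to stay differentiable for end-to-end training.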
[
"We outline new approaches to incorporate ideas from deep learning into wave-based least-squares imaging.",
"The aim, and main contribution of this work, is the combination of handcrafted constraints with deep convolutional neural networks, as a way to harness their remarkable ease of generating natural images.",
"The mathematical basis underlying our method is the expectation-maximization framework, where data are divided in batches and coupled to additional \"latent\" unknowns.",
"These unknowns are pairs of elements from the original unknown space (but now coupled to a specific data batch) and network inputs.",
"In this setting, the neural network controls the similarity between these additional parameters, acting as a \"center\" variable.",
"The resulting problem amounts to a maximum-likelihood estimation of the network parameters when the augmented data model is marginalized over the latent variables.",
"In this work, we tested an inverse problem framework which includes hard constraints and deep priors.",
"Hard constraints are necessary in many problems, such as seismic imaging, where the unknowns must belong to a feasible set in order to ensure the numerical stability of the forward problem.",
"Deep priors, enforced through adherence to the range of a neural network, provide an additional, implicit type of regularization, as demonstrated by recent work [2, Dittmer et al. [3] ], and corroborated by our numerical results.",
"The resulting algorithm can be mathematically interpreted in light of expectation maximization methods.",
"Furthermore, connections to elastic averaging SGD [10] highlight potential computational benefits of a parallel (synchronous or asynchronous) implementation.",
"On a speculative note, we argue that the presented method, which combines stochastic optimization on the dual variable with on-the-fly estimation of the generative model's weights using Langevin dynamics, reaps information on the \"posterior\" distribution leveraging multiplicity in the data and the fact that the data is acquired over one and the same Earth model.",
"Our preliminary results seem consistent with a behavior to be expected from a \"posterior\" distribution.",
"b,c) sample \"prior\" (before training) and \"posterior\" distribution functions for two points in the model."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0
] | [
0.19999998807907104,
0.290909081697464,
0.2083333283662796,
0.2083333283662796,
0.09302324801683426,
0.1702127605676651,
0.1904761791229248,
0.22641508281230927,
0.13333332538604736,
0.05128204822540283,
0.09090908616781235,
0.34285715222358704,
0.25,
0.24390242993831635
] | Hyet2Q29IS | true | [
"We combine hard handcrafted constraints with a deep prior weak constraint to perform seismic imaging and reap information on the \"posterior\" distribution leveraging multiplicity in the data."
] |
[
"When translating natural language questions into SQL queries to answer questions from a database, contemporary semantic parsing models struggle to generalize to unseen database schemas. ",
"The generalization challenge lies in (a) encoding the database relations in an accessible way for the semantic parser, and (b) modeling alignment between database columns and their mentions in a given query. ",
"We present a unified framework, based on the relation-aware self-attention mechanism,to address schema encoding, schema linking, and feature representation within a text-to-SQL encoder.",
"On the challenging Spider dataset this framework boosts the exact match accuracy to 53.7%, compared to 47.4% for the previous state-of-the-art model unaugmented with BERT embeddings.",
"In addition, we observe qualitative improvements in the model’s understanding of schema linking and alignment.",
"The ability to effectively query databases with natural language has the potential to unlock the power of large datasets to the vast majority of users who are not proficient in query languages.",
"As such, a large body of research has focused on the task of translating natural language questions into queries that existing database software can execute.",
"The release of large annotated datasets containing questions and the corresponding database SQL queries has catalyzed progress in the field, by enabling the training of supervised learning models for the task.",
"In contrast to prior semantic parsing datasets (Finegan-Dollak et al., 2018) , new tasks such as WikiSQL (Zhong et al., 2017) and Spider (Yu et al., 2018b) pose the real-life challenge of generalization to unseen database schemas.",
"Every query is conditioned on a multi-table database schema, and the databases do not overlap between the train and test sets.",
"Schema generalization is challenging for three interconnected reasons.",
"First, any text-to-SQL semantic parsing model must encode a given schema into column and table representations suitable for decoding a SQL query that might involve any of the given columns or tables.",
"Second, these representations should encode all the information about the schema, including its column types, foreign key relations, and primary keys used for database joins.",
"Finally, the model must recognize natural language used to refer to database columns and tables, which might differ from the referential language seen in training.",
"The latter challenge is known as schema linking -aligning column/table references in the question to the corresponding schema columns/tables.",
"While the question of schema encoding has been studied in recent literature (Bogin et al., 2019b) , schema linking has been relatively less explored.",
"Consider the example in Figure 1 .",
"It illustrates the challenge of ambiguity in linking: while \"model\" in the question refers to car_names.model rather than model_list.model, \"cars\" actually refers to both cars_data and car_names (but not car_makers) for the purpose of table joining.",
"To resolve the column/table references properly, the semantic parser must take into account both the known schema relations (e.g. foreign keys) and the question context.",
"Prior work (Bogin et al., 2019b) addressed the schema representation problem by encoding the directed graph of foreign key relations among the columns with a graph neural network.",
"While effective, this approach has two important shortcomings.",
"First, it does not contextualize schema encoding with the question, thus making it difficult for the model to reason about schema linking after both the column representations and question word representations have been built.",
"Second, it limits information propagation during schema encoding to predefined relations in the schema such as foreign keys.",
"The advent of self-attentional mechanisms in natural language processing (Vaswani et al., 2017) shows that global reasoning is crucial to building effective representations of relational structures.",
"However, we would like any global reasoning to also take into account the aforementioned predefined schema relations.",
"In this work, we present a unified framework, called RAT-SQL, 1 for encoding relational structure in the database schema and a given question.",
"It uses relation-aware self-attention to combine global reasoning over the schema entities and question words with structured reasoning over predefined schema relations.",
"We then apply RAT-SQL to the problems of schema encoding and schema linking.",
"As a result, we obtain 53.7% exact match accuracy on the Spider test set.",
"At the time of writing, this result is the state of the art among models unaugmented with pretrained BERT embeddings.",
"In addition, we experimentally demonstrate that RAT-SQL enables the model to build more accurate internal representations of the question's true alignment with schema columns and tables.",
"Despite the abundance of research in semantic parsing of text to SQL, many contemporary models struggle to learn good representations for a given database schema as well as to properly link column/table references in the question.",
"These problems are related: to encode & use columns/tables from the schema, the model must reason about their role in the context of a given question.",
"In this work, we present a unified framework for addressing the schema encoding and linking challenges.",
"Thanks to relation-aware self-attention, it jointly learns schema and question word representations based on their alignment with each other and predefined schema relations.",
"Empirically, the RAT framework allows us to gain significant state of the art improvement on textto-SQL parsing.",
"Qualitatively, it provides a way to combine predefined hard schema relations and inferred soft self-attended relations in the same encoder architecture.",
"We foresee this joint representation learning being beneficial in many learning tasks beyond text-to-SQL, as long as the input has predefined structure.",
"A THE NEED FOR SCHEMA LINKING One natural question is how often does the decoder fail to select the correct column, even with the schema encoding and linking improvements we have made.",
"To answer this, we conducted an oracle experiment (see Table 3 ).",
"For \"oracle sketch\", at every grammar nonterminal the decoder is forced to make the correct choice so the final SQL sketch exactly matches that of the correct answer.",
"The rest of the decoding proceeds as if the decoder had made the choice on its own.",
"Similarly, \"oracle cols\" forces the decoder to output the correct column or table at terminal productions.",
"With both oracles, we see an accuracy of 99.4% which just verifies that our grammar is sufficient to answer nearly every question in the data set.",
"With just \"oracle sketch\", the accuracy is only 70.9%, which means 73.5% of the questions that RAT-SQL gets wrong and could get right have incorrect column or table selection.",
"Similarly, with just \"oracle cols\", the accuracy is 67.6%, which means that 82.0% of the questions that RAT-SQL gets wrong have incorrect structure.",
"In other words, most questions have both column and structure wrong, so both problems will continue to be important to work on for the future."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.04999999329447746,
0.17777776718139648,
0.15789473056793213,
0.0476190410554409,
0.25,
0.1395348757505417,
0.09756097197532654,
0.22727271914482117,
0.16326530277729034,
0.1111111044883728,
0,
0.21739129722118378,
0.09756097197532654,
0.1538461446762085,
0.11764705181121826,
0.21052631735801697,
0.17391304671764374,
0.1666666567325592,
0.09999999403953552,
0.1860465109348297,
0,
0.1304347813129425,
0.1764705777168274,
0.1860465109348297,
0.11764705181121826,
0.25641024112701416,
0.1666666567325592,
0.27586206793785095,
0.0624999962747097,
0.1764705777168274,
0.1428571343421936,
0.1702127605676651,
0.1463414579629898,
0.1818181723356247,
0.052631575614213943,
0.24242423474788666,
0.2702702581882477,
0.10810810327529907,
0.12765957415103912,
0,
0.09756097197532654,
0.1249999925494194,
0.0624999962747097,
0.13636362552642822,
0.12765957415103912,
0.09999999403953552,
0.09999999403953552
] | H1egcgHtvB | true | [
"State of the art in complex text-to-SQL parsing by combining hard and soft relational reasoning in schema/question encoding."
] |
[
"As our experience shows, humans can learn and deploy a myriad of different skills to tackle the situations they encounter daily.",
"Neural networks, in contrast, have a fixed memory capacity that prevents them from learning more than a few sets of skills before starting to forget them. \n",
"In this work, we make a step to bridge neural networks with human-like learning capabilities.",
"For this, we propose a model with a growing and open-bounded memory capacity that can be accessed based on the model’s current demands.",
"To test this system, we introduce a continual learning task based on language modelling where the model is exposed to multiple languages and domains in sequence, without providing any explicit signal on the type of input it is currently dealing with.",
"The proposed system exhibits improved adaptation skills in that it can recover faster than comparable baselines after a switch in the input language or domain.",
"In a classic cartoon by Gary Larson, a student raises his hand to ask the teacher: \"Mr. Osborne, may I be excused? My brain is full.\" (Larson & Martin, 2003) .",
"We laugh at this situation because we know it is absurd.",
"Human brains don't just get full.",
"Instead, they seem to be able to keep in their long-term memory massive amounts of information encoding well-acquired knowledge and skills.",
"Furthermore, the information stored in memory is not necessarily relevant at all times.",
"For instance, a person may have a phone call in French in the morning, then go about her daily errands in German, and later write an email in English.",
"Different linguistic knowledge will be required for each of these situations, and context alone, rather than some explicit signal, will dictate what is needed at each given moment.",
"Vanilla neural network models have been successfully deployed in various applications in the past.",
"However, they rely on fixed sized memories and suffer from the problem known as \"catastrophic forgetting\" (McCloskey & Cohen, 1989; Ratcliff, 1990) , which refers to the fact that previously acquired information is quickly forgotten as novel skills need to be mastered.",
"Earlier work attempted to correct this problem by looking for available capacity on a fixed-sized network that would allow encoding a new solution without affecting previously learned tasks (Kirkpatrick et al., 2017; Zenke et al., 2017; Serrà et al., 2018; Lopez-Paz & Ranzato, 2017; Fernando et al., 2017; Lee et al., 2017) .",
"The problem with this approach is that eventually, the system will run out of available capacity.",
"Instead, here we argue for developing models that can grow their internal capacity.",
"While some work has also relied on growing the model to face catastrophic forgetting (Rusu et al., 2016; Li & Hoiem, 2018; Aljundi et al., 2017) , they all rely, to the best of our knowledge, on an explicit signal identifying the task that the system is currently solving.",
"Indeed, most work dealing with catastrophic forgetting has evaluated the models on settings often making unrealistic assumptions.",
"Not only they typically provided the model with an explicit identifier for the task at hand, but also tasks featured unnatural properties, such as scrambled pixels, or categories that were incrementally added, but presented sequentially on blocks once and for all, and never encountered again during training.",
"Only recently, some work has started tackling continual learning in a more realistic task-agnostic way (Aljundi et al., 2019 ).",
"Yet, there are no standard publicly available datasets that can help the evaluation of continual learning systems on more natural settings.",
"In this paper, we make a two-fold contribution towards task agnostic continual learning.",
"First, we introduce a recurrent neural network that can grow its memory by creating new modules as training progresses.",
"Rather than using all modules simultaneously, or indexing them based on a task identification signal, our model learns to weight their contributions to adapt to the current context.",
"Second, we introduce to the community a multilingual/multidomain language modelling task with switching domains that we hope can fit this bill.",
"We propose two variants of it.",
"The first is a character-based language modelling benchmark with text written in 5 different languages that randomly switch between one another.",
"The second one is a word-based language modelling task, where the text oscillates between 4 different domains.",
"No segmentation signal is given when there is a switch, making the models having to discover it autonomously while they are evaluated for their adaptation skills.",
"Our experimental results show that our system can switch between different domains faster than comparable neural networks.",
"Furthermore, our model is very general because it does not make any assumption about the type of underlying neural network architecture and thus, it can easily be adopted for tackling other tasks in conjunction with any other neural network system.",
"We believe that developing more flexible forms of artificial intelligence will probably require flexible memory capabilities that can only be delivered by models capable of growth.",
"Here we have proposed a method based on growing full-fledged modules over time.",
"We explored a particular instantiation of this architecture in which modules are grown at a constant rate and consolidated into a long-term memory (LTM).",
"Once the model has reached a maximum size, memories can be still be consolidated into LTM by reinstating LTM modules back into STM (see Figure 1 ).",
"Furthermore, we introduced to the community two lifelong language modelling tasks.",
"One, characterbased and multilingual, and other, word-based on multiple domains.",
"Our experiments confirm the efficacy of our Growing LTM model, showing that it can learn to adapt much faster than comparable baselines without suffering in terms of its overall performance.",
"The proposed system is very flexible, allowing it to be used with any neural network architecture.",
"While here we have studied it in the lifelong language modeling setting, we believe that the system will also show promising results in other domains with similar requirements, such as robotics -where the model can learn to deal with different kinds of terrains-or image recognition -where it can learn different kinds of visual information depending on the contextual requirements (Rebuffi et al., 2017) .",
"In the future, mechanisms that exploit the structure of the input data for associating it with the relevant sets of models (Aljundi et al., 2017; Milan et al., 2016) can be explored.",
"Furthermore, we plan to study mechanisms that would allow the model to decide when to grow, rather than keeping a constant schedule.",
"In the long term, the model should be capable of deciding how to structure its long-term memory and whether or not to grow it, as Stack-RNNs do to grow the working memory.",
"Moreover, we are interested in exploring how communication between memories can be enabled through a central routing mechanism, in a similar fashion to the model proposed by Hafner et al. (2017) .",
"To conclude, in this work we have given a step -and we hope that more will follow-in providing neural networks with flexible memory structures.",
"We expect that further pursuing this goal will pave the way towards developing more general learning systems and, fundamentally, that in the future neural networks will no longer need to be excused from class just because their weights are full."
] | [
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1538461446762085,
0.1428571343421936,
0.21739129722118378,
0.3396226465702057,
0.52173912525177,
0.1090909019112587,
0.09836065024137497,
0.1428571343421936,
0,
0.11764705181121826,
0.09090908616781235,
0.0714285671710968,
0.14035087823867798,
0.09090908616781235,
0.11428570747375488,
0.1111111044883728,
0.08510638028383255,
0,
0.21917808055877686,
0.0833333283662796,
0.1621621549129486,
0.11538460850715637,
0.1538461446762085,
0.1818181723356247,
0.19999998807907104,
0.21052631735801697,
0.2745097875595093,
0.1621621549129486,
0.19230768084526062,
0.2083333283662796,
0.25,
0.0416666604578495,
0.21212120354175568,
0.07407406717538834,
0.1818181723356247,
0.15094339847564697,
0.072727270424366,
0.1428571343421936,
0.09999999403953552,
0.06666666269302368,
0.25531914830207825,
0.1463414579629898,
0.06896550953388214,
0.11764705181121826,
0.17543859779834747,
0.09999999403953552,
0.18518517911434174,
0.14705881476402283
] | rJxoi1HtPr | true | [
"We introduce a continual learning setup based on language modelling where no explicit task segmentation signal is given and propose a neural network model with growing long term memory to tackle it."
] |
[
"We propose to tackle a time series regression problem by computing temporal evolution of a probability density function to provide a probabilistic forecast.",
"A Recurrent Neural Network (RNN) based model is employed to learn a nonlinear operator for temporal evolution of a probability density function.",
"We use a softmax layer for a numerical discretization of a smooth probability density functions, which transforms a function approximation problem to a classification task.",
"Explicit and implicit regularization strategies are introduced to impose a smoothness condition on the estimated probability distribution.",
"A Monte Carlo procedure to compute the temporal evolution of the distribution for a multiple-step forecast is presented.",
"The evaluation of the proposed algorithm on three synthetic and two real data sets shows advantage over the compared baselines.",
"Application of the deep learning for manufacturing processes has attracted a great attention as one of the core technologies in Industry 4.0 BID15 .",
"In many manufacturing processes, e.g. blast furnace, smelter, and milling, the complexity of the overall system makes it almost impossible or impractical to develop a simulation model from the first principles.",
"Hence, system identification from sensor observations has been a long-standing research topic BID24 .",
"Still, when the observation is noisy and there is no prior knowledge on the underlying dynamics, there is only a very limited number of methods for the reconstruction of nonlinear dynamics.In this work, we consider the following class of problems, where the system is driven by a complex underlying dynamical system, e.g., ∂y ∂t = F(y(t), y(t − τ ), u(t)).Here",
", y(t) is a continuous process, F is a nonlinear operator, τ is a delay-time parameter, and u(t) is an exogenous forcing, such as control parameters. At",
"time step t, we then observe a noisy measurement of y(t) which can be defined by the following noise model DISPLAYFORM0 where ν t is a multiplicative and t is an additive noise process. In",
"FORMULA0 and FORMULA1 , we place no assumption on function F, do not assume any distributional properties of noises ν t and t , but assume the knowledge of the control parameters u(t).Since",
"the noise components, ν t and t , are stochastic processes, the observationŷ t is a random variable. In this",
"work, we are interested in computing temporal evolution of the probability density function (PDF) ofŷ, given the observations up to time step t, i.e., p(ŷ t+n | Y 0:t , U 0:t+n−1 ) for n ≥ 1, where Y 0:t = (ŷ 0 , · · · ,ŷ t ) is a trajectory of the past observations and U 0:t+n−1 = (u 0 , · · · , u t+n−1 ) consists of the history of the known control actions, U 0:t−1 , and a future control scenario, U t:t+n−1 . We show",
", in Section 3, a class of problems, where simple regression problem of forecasting the value ofŷ t+n is not sufficient or not possible, e.g., chaotic systems. Note that",
"the computation of time evolution of a PDF has been a long-standing topic in statistical physics. For a simple",
"Markov process, there are well-established theories based on the Fokker-Planck equation. However, it",
"is very difficult to extend those theories to a more general problem, such as delay-time dynamical systems, or apply it to complex nonlinear systems.Modeling of the system (1) has been extensively studied in the past, in particular, under the linearity assumptions on F and certain noise models, e.g., Gaussian t and ν t = 1 in (2). The approaches",
"based on auto-regressive processes BID18 and Kalman filter BID9 are good examples. Although these",
"methods do estimate the predictive probability distribution and enable the computation of the forecast uncertainty, the assumptions on the noise and linearity in many cases make it challenging to model real nonlinear dynamical systems.Recently, a nonlinear state-space model based on the Gaussian process, called the Gaussian Process State Space Model (GPSSM), has been extended for the identification of nonlinear system BID5 BID4 . GPSSM is capable",
"of representing a nonlinear system and is particularly advantageous when the size of the data set is relatively small that it is difficult to train a deep learning model. However, the joint",
"Gaussian assumption of GPSSM may restrict the representation capability for a complex non-Gaussian noise.A recent success of deep learning created a flurry of new approaches for time series modeling and prediction. The ability of deep",
"neural networks, such as RNN, to learn complex nonlinear spatiotemporal relationships in the data enabled these methods to outperform the classical time series approaches. For example, in the",
"recent works of BID20 BID11 ; BID3 , the authors proposed different variants of the RNN-based algorithms to perform time series predictions and showed their advantage over the traditional methods. Although encouraging",
", these approaches lack the ability to estimate the probability distribution of the predictions since RNN is a deterministic model and unable to fully capture the stochastic nature of the data.To enable RNN to model the stochastic properties of the data, BID2 augmented RNN with a latent random variable included in the hidden state and proposed to estimate the resulting model using variational inference. In a similar vein,",
"the works of BID0 ; BID14 extend the traditional Kalman filter to handle nonlinear dynamics when the inference becomes intractable. Their approach is",
"based on formulating the variational lower bound and optimizing it under the assumption of Gaussian posterior.Another recent line of works enabled stochasticity in the RNN-based models by drawing a connection between Bayesian variation inference and a dropout technique. In particular, BID6",
"showed that the model parameter uncertainty (which then leads to uncertainty in model predictions), that traditionally was estimated using variational inference, can be approximated using a dropout method (a random removal of some connections in the network structure). The prediction uncertainty",
"is then estimated by evaluating the model outputs at different realizations of the dropout weights. Following the ideas of BID6",
", BID27 proposed additional ways (besides modeling the parameter uncertainty) to quantify the forecast uncertainty in RNN, which included the model mis-specification error and the inherent noise of the data."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0
] | [
0.1666666567325592,
0.05405404791235924,
0.05405404791235924,
0.1818181723356247,
0.12121211737394333,
0.11428570747375488,
0.052631575614213943,
0.08695651590824127,
0,
0.028985504060983658,
0.052631575614213943,
0.08510638028383255,
0.045454539358615875,
0.060606054961681366,
0.1012658178806305,
0.045454539358615875,
0.1249999925494194,
0,
0.08571428060531616,
0.06666666269302368,
0.1764705777168274,
0.0952380895614624,
0.17391303181648254,
0.19999998807907104,
0.2222222238779068,
0.158730149269104,
0.05405404791235924,
0.11320754140615463,
0.11999999731779099,
0,
0.1428571343421936
] | BkDB51WR- | true | [
"Proposed RNN-based algorithm to estimate predictive distribution in one- and multi-step forecasts in time series prediction problems"
] |
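The softmax discretization of a smooth density described above can be illustrated with a short sketch. This is a hedged NumPy example under assumed bin ranges and counts, not the paper's code; it only shows how continuous observations become classification targets and how per-bin probabilities map back to a piecewise-constant PDF.

```python
import numpy as np

def discretize_targets(y, y_min, y_max, num_bins):
    """Map continuous observations to bin indices, turning regression into classification."""
    edges = np.linspace(y_min, y_max, num_bins + 1)
    # use interior edges only, so indices fall in 0 .. num_bins - 1
    labels = np.digitize(y, edges[1:-1])
    return labels, edges

def probs_to_pdf(class_probs, edges):
    """Convert per-bin probabilities into a piecewise-constant density estimate."""
    widths = np.diff(edges)
    return class_probs / widths

# Example with synthetic observations.
y = np.random.default_rng(0).normal(size=1000)
labels, edges = discretize_targets(y, y_min=-4.0, y_max=4.0, num_bins=32)
probs = np.bincount(labels, minlength=32) / labels.size
pdf = probs_to_pdf(probs, edges)
print(pdf.sum() * np.diff(edges)[0])  # integrates to (approximately) one
```

In the setting described above, the per-bin probabilities would come from the RNN's softmax layer rather than from empirical counts; the counts here are only a stand-in.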
[
"In cognitive systems, the role of a working memory is crucial for visual reasoning and decision making.",
"Tremendous progress has been made in understanding the mechanisms of the human/animal working memory, as well as in formulating different frameworks of artificial neural networks. ",
"In the case of humans, the visual working memory (VWM) task is a standard one in which the subjects are presented with a sequence of images, each of which needs to be identified as to whether it was already seen or not. \n\n",
"Our work is a study of multiple ways to learn a working memory model using recurrent neural networks that learn to remember input images across timesteps.",
"We train these neural networks to solve the working memory task by training them with a sequence of images in supervised and reinforcement learning settings.",
"The supervised setting uses image sequences with their corresponding labels.",
"The reinforcement learning setting is inspired by the popular view in neuroscience that the working memory in the prefrontal cortex is modulated by a dopaminergic mechanism.",
"We consider the VWM task as an environment that rewards the agent when it remembers past information and penalizes it for forgetting. \n \n",
"We quantitatively estimate the performance of these models on sequences of images from a standard image dataset (CIFAR-100).",
"Further, we evaluate their ability to remember and recall as they are increasingly trained over episodes.",
"Based on our analysis, we establish that a gated recurrent neural network model with long short-term memory units trained using reinforcement learning is powerful and more efficient in temporally consolidating the input spatial information. \n\n",
"This work is an initial analysis as a part of our ultimate goal to use artificial neural networks to model the behavior and information processing of the working memory of the brain and to use brain imaging data captured from human subjects during the VWM cognitive task to understand various memory mechanisms of the brain. \n"
] | [
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
0.1463414579629898,
0.1304347813129425,
0.1666666567325592,
0.21276594698429108,
0.20408162474632263,
0,
0.35555556416511536,
0.08888888359069824,
0.04878048226237297,
0.09999999403953552,
0.2711864411830902,
0.1230769157409668
] | Syl0NmtLIr | false | [
"LSTMs can more effectively model the working memory if they are learned using reinforcement learning, much like the dopamine system that modulates the memory in the prefrontal cortex "
] |
[
"Nonlinearity is crucial to the performance of a deep (neural) network (DN).\n",
"To date there has been little progress understanding the menagerie of available nonlinearities, but recently progress has been made on understanding the r\\^{o}le played by piecewise affine and convex nonlinearities like the ReLU and absolute value activation functions and max-pooling.\n",
"In particular, DN layers constructed from these operations can be interpreted as {\\em max-affine spline operators} (MASOs) that have an elegant link to vector quantization (VQ) and $K$-means.\n",
"While this is good theoretical progress, the entire MASO approach is predicated on the requirement that the nonlinearities be piecewise affine and convex, which precludes important activation functions like the sigmoid, hyperbolic tangent, and softmax.\n",
"{\\em This paper extends the MASO framework to these and an infinitely large class of new nonlinearities by linking deterministic MASOs with probabilistic Gaussian Mixture Models (GMMs).",
"}\n",
"We show that, under a GMM, piecewise affine, convex nonlinearities like ReLU, absolute value, and max-pooling can be interpreted as solutions to certain natural ``hard'' VQ inference problems, while sigmoid, hyperbolic tangent, and softmax can be interpreted as solutions to corresponding ``soft'' VQ inference problems.\n",
"We further extend the framework by hybridizing the hard and soft VQ optimizations to create a $\\beta$-VQ inference that interpolates between hard, soft, and linear VQ inference.\n",
"A prime example of a $\\beta$-VQ DN nonlinearity is the {\\em swish} nonlinearity, which offers state-of-the-art performance in a range of computer vision tasks but was developed ad hoc by experimentation.\n",
"Finally, we validate with experiments an important assertion of our theory, namely that DN performance can be significantly improved by enforcing orthogonality in its linear filters.\n",
"Deep (neural) networks (DNs) have recently come to the fore in a wide range of machine learning tasks, from regression to classification and beyond.",
"A DN is typically constructed by composing a large number of linear/affine transformations interspersed with up/down-sampling operations and simple scalar nonlinearities such as the ReLU, absolute value, sigmoid, hyperbolic tangent, etc.",
"BID13 .",
"Scalar nonlinearities are crucial to a DN's performance.",
"Indeed, without nonlinearity, the entire network would collapse to a simple affine transformation.",
"But to date there has been little progress understanding and unifying the menagerie of nonlinearities, with few reasons to choose one over another other than intuition or experimentation.Recently, progress has been made on understanding the rôle played by piecewise affine and convex nonlinearities like the ReLU, leaky ReLU, and absolute value activations and downsampling operations like max-, average-, and channel-pooling BID1 .",
"In particular, these operations can be interpreted as max-affine spline operators (MASOs) BID16 ; BID14 that enable a DN to find a locally optimized piecewise affine approximation to the prediction operator given training data.",
"A spline-based prediction is made in two steps.",
"First, given an input signal x, we determine which region of the spline's partition of the domain (the input signal space) it falls into.",
"Second, we apply to x the fixed (in this case affine) function that is assigned to that partition region to obtain the prediction y = f (x).The",
"key result of BID1 is any DN layer constructed from a combination of linear and piecewise affine and convex is a MASO, and hence the entire DN is merely a composition of MASOs.MASOs have the attractive property that their partition of the signal space (the collection of multidimensional \"knots\") is completely determined by their affine parameters (slopes and offsets). This",
"provides an elegant link to vector quantization (VQ) and K-means clustering. That",
"is, during learning, a DN implicitly constructs a hierarchical VQ of the training data that is then used for splinebased prediction. This",
"is good progress for DNs based on ReLU, absolute value, and max-pooling, but what about DNs based on classical, high-performing nonlinearities that are neither piecewise affine nor convex like the sigmoid, hyperbolic tangent, and softmax or fresh nonlinearities like the swish BID20 that has been shown to outperform others on a range of tasks?Contributions",
". In this paper",
", we address this gap in the DN theory by developing a new framework that unifies a wide range of DN nonlinearities and inspires and supports the development of new ones. The key idea",
"is to leverage the yinyang relationship between deterministic VQ/K-means and probabilistic Gaussian Mixture Models (GMMs) BID3 . Under a GMM,",
"piecewise affine, convex nonlinearities like ReLU and absolute value can be interpreted as solutions to certain natural hard inference problems, while sigmoid and hyperbolic tangent can be interpreted as solutions to corresponding soft inference problems. We summarize",
"our primary contributions as follows:Contribution 1: We leverage the well-understood relationship between VQ, K-means, and GMMs to propose the Soft MASO (SMASO) model, a probabilistic GMM that extends the concept of a deterministic MASO DN layer. Under the SMASO",
"model, hard maximum a posteriori (MAP) inference of the VQ parameters corresponds to conventional deterministic MASO DN operations that involve piecewise affine and convex functions, such as fully connected and convolution matrix multiplication; ReLU, leaky-ReLU, and absolute value activation; and max-, average-, and channelpooling. These operations",
"assign the layer's input signal (feature map) to the VQ partition region corresponding to the closest centroid in terms of the Euclidean distance, Contribution 2: A hard VQ inference contains no information regarding the confidence of the VQ region selection, which is related to the distance from the input signal to the region boundary. In response, we",
"develop a method for soft MAP inference of the VQ parameters based on the probability that the layer input belongs to a given VQ region. Switching from",
"hard to soft VQ inference recovers several classical and powerful nonlinearities and provides an avenue to derive completely new ones. We illustrate",
"by showing that the soft versions of ReLU and max-pooling are the sigmoid gated linear unit and softmax pooling, respectively. We also find",
"a home for the sigmoid, hyperbolic tangent, and softmax in the framework as a new kind of DN layer where the MASO output is the VQ probability.Contribution 3: We generalize hard and soft VQ to what we call β-VQ inference, where β ∈ (0, 1) is a free and learnable parameter. This parameter",
"interpolates the VQ from linear (β → 0), to probabilistic SMASO (β = 0.5), to deterministic MASO (β → 1). We show that the",
"β-VQ version of the hard ReLU activation is the swish nonlinearity, which offers state-of-the-art performance in a range of computer vision tasks but was developed ad hoc through experimentation BID20 .Contribution 4: Seen",
"through the MASO lens, current DNs solve a simplistic per-unit (per-neuron), independent VQ optimization problem at each layer. In response, we extend",
"the SMASO GMM to a factorial GMM that that supports jointly optimal VQ across all units in a layer. Since the factorial aspect",
"of the new model would make naïve VQ inference exponentially computationally complex, we develop a simple sufficient condition under which a we can achieve efficient, tractable, jointly optimal VQ inference. The condition is that the",
"linear \"filters\" feeding into any nonlinearity should be orthogonal. We propose two simple strategies",
"to learn approximately and truly orthogonal weights and show on three different datasets that both offer significant improvements in classification per-formance. Since orthogonalization can be applied",
"to an arbitrary DN, this result and our theoretical understanding are of independent interest. This paper is organized as follows. After",
"reviewing the theory of MASOs and VQ",
"for DNs in Section 2, we formulate the GMM-based extension to SMASOs in Section 3. Section 4 develops the hybrid β-VQ inference",
"with a special case study on the swish nonlinearity. Section 5 extends the SMASO to a factorial GMM",
"and shows the power of DN orthogonalization. We wrap up in Section 6 with directions for future",
"research. Proofs of the various results appear in several appendices",
"in the Supplementary Material."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.14814814925193787,
0.08510638028383255,
0.1860465109348297,
0.08888888359069824,
0.09756097197532654,
0.11764705181121826,
0.10526315122842789,
0.045454539358615875,
0,
0.21621620655059814,
0.13333332538604736,
0.1818181723356247,
0.07407406717538834,
0.0634920597076416,
0.043478257954120636,
0,
0,
0,
0.10526315122842789,
0.23076923191547394,
0.05714285373687744,
0.10169491171836853,
0,
0.1428571343421936,
0.12121211737394333,
0.09302324801683426,
0.0833333283662796,
0.072727270424366,
0.037735845893621445,
0.10810810327529907,
0.11764705181121826,
0.05714285373687744,
0.07017543166875839,
0.05882352590560913,
0.04444444179534912,
0.0555555522441864,
0.0624999962747097,
0.04651162400841713,
0,
0.05128204822540283,
0.05714285373687744,
0.0952380895614624,
0,
0.06666666269302368,
0.06451612710952759,
0,
0
] | Syxt2jC5FX | true | [
"Reformulate deep networks nonlinearities from a vector quantization scope and bridge most known nonlinearities together."
] |
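The claim above that the β-VQ version of the hard ReLU is the swish nonlinearity reduces to a one-line function. The sketch below uses the common definition swish(x) = x·sigmoid(βx) and only illustrates the interpolation behavior; the chosen β values and the approximate limits (x/2 for small β, ReLU for large β) are assumptions made for the example, not the authors' code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def beta_swish(x, beta):
    """Swish-style gate x * sigmoid(beta * x).

    beta -> 0      : approaches the linear map x / 2
    beta -> large  : approaches relu(x), the hard gating
    intermediate   : the soft, learnable interpolation discussed above
    """
    return x * sigmoid(beta * x)

x = np.linspace(-4.0, 4.0, 9)
print(beta_swish(x, beta=0.01))   # close to x / 2
print(beta_swish(x, beta=20.0))   # close to np.maximum(x, 0.0)
print(np.maximum(x, 0.0))         # hard ReLU for comparison
```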
[
"Engineered proteins offer the potential to solve many problems in biomedicine, energy, and materials science, but creating designs that succeed is difficult in practice.",
"A significant aspect of this challenge is the complex coupling between protein sequence and 3D structure, and the task of finding a viable design is often referred to as the inverse protein folding problem.",
"We develop generative models for protein sequences conditioned on a graph-structured specification of the design target.",
"Our approach efficiently captures the complex dependencies in proteins by focusing on those that are long-range in sequence but local in 3D space.",
"Our framework significantly improves upon prior parametric models of protein sequences given structure, and takes a step toward rapid and targeted biomolecular design with the aid of deep generative models.",
"A central goal for computational protein design is to automate the invention of protein molecules with defined structural and functional properties.",
"This field has seen tremendous progess in the past two decades BID14 , including the design of novel 3D folds BID20 , enzymes BID30 , and complexes BID4 .",
"However, the current practice often requires multiple rounds of trial-and-error, with first designs frequently failing BID19 BID28 .",
"Several of the challenges stem from the bottom-up nature of contemporary approaches that rely on both the accuracy of energy functions to describe protein physics as well as on the efficiency of sampling algorithms to explore the protein sequence and structure space.Here, we explore an alternative, top-down framework for protein design that directly learns a conditional generative model for protein sequences given a specification of the target structure, which is represented as a graph over the sequence elements.",
"Specifically, we augment the autoregressive self-attention of recent sequence models BID34 with graph-based descriptions of the 3D structure.",
"By composing multiple layers of structured self-attention, our model can effectively capture higher-order, interaction-based dependencies between sequence and structure, in contrast to previous parameteric approaches BID24 BID36 that are limited to only the first-order effects.The graph-structured conditioning of a sequence model affords several benefits, including favorable computational efficiency, inductive bias, and representational flexibility.",
"We accomplish the first two by leveraging a well-evidenced finding in protein science, namely that long-range dependencies in sequence are generally short-range in 3D space BID23 BID3 .",
"By making the graph and self-attention similarly sparse and localized in 3D space, we achieve computational scaling that is linear in sequence length.",
"Additionally, graph structured inputs offer representational flexibility, as they accomodate both coarse, 'flexible backbone' (connectivity and topology) as well as fine-grained (precise atom locations) descriptions of structure.We demonstrate the merits of our approach via a detailed empirical study.",
"Specifically, we evaluate our model at structural generalization to sequences of protein folds that were outside of the training set.",
"Our model achieves considerably improved generalization performance over the recent deep models of protein sequence given structure as well as structure-naïve language models.",
"We presented a new deep generative model to 'design' protein sequences given a graph specification of their structure.",
"Our model augments the traditional sequence-level self-attention of Transformers BID34 with relational 3D structural encodings and is able to leverage the spatial locality of dependencies in molecular structures for efficient computation.",
"When evaluated on unseen folds, the model achieves significantly improved perplexities over the state-of-the-art parametric generative models.",
"Our framework suggests the possibility of being able to efficiently design and engineer protein sequences with structurally-guided deep generative models, and underscores the central role of modeling sparse long-range dependencies in biological sequences."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0
] | [
0.09999999403953552,
0.13333332538604736,
0.24242423474788666,
0.21052631735801697,
0.22727271914482117,
0.1621621549129486,
0,
0.05882352590560913,
0.19178082048892975,
0.060606054961681366,
0.1515151411294937,
0.2857142686843872,
0.052631575614213943,
0.07547169178724289,
0.277777761220932,
0.15789473056793213,
0.4117647111415863,
0.21739129722118378,
0.060606054961681366,
0.260869562625885
] | SJgxrLLKOE | true | [
"We learn to conditionally generate protein sequences given structures with a model that captures sparse, long-range dependencies."
] |
[
"We provide a novel perspective on the forward pass through a block of layers in a deep network.",
"In particular, we show that a forward pass through a standard dropout layer followed by a linear layer and a non-linear activation is equivalent to optimizing a convex objective with a single iteration of a $\\tau$-nice Proximal Stochastic Gradient method.",
"We further show that replacing standard Bernoulli dropout with additive dropout is equivalent to optimizing the same convex objective with a variance-reduced proximal method.",
"By expressing both fully-connected and convolutional layers as special cases of a high-order tensor product, we unify the underlying convex optimization problem in the tensor setting and derive a formula for the Lipschitz constant $L$ used to determine the optimal step size of the above proximal methods.",
"We conduct experiments with standard convolutional networks applied to the CIFAR-10 and CIFAR-100 datasets and show that replacing a block of layers with multiple iterations of the corresponding solver, with step size set via $L$, consistently improves classification accuracy.",
"Deep learning has revolutionized computer vision and natural language processing and is increasingly applied throughout science and engineering BID20 .",
"This has motivated the mathematical analysis of various aspects of deep networks, such as the capacity and uniqueness of their representations BID28 BID24 and their global training convergence properties BID10 .",
"However, a complete characterization of deep networks remains elusive.",
"For example, Bernoulli dropout layers are known to improve generalization BID29 , but a thorough theoretical understanding of their behavior remains an open problem.",
"While basic dropout layers have proven to be effective, there are many other types of dropout with various desirable properties BID22 .",
"This raises many questions.",
"Can the fundamental block of layers that consists of a dropout layer followed by a linear transformation and a non-linear activation be further improved for better generalization?",
"Can the choice of dropout layer be made independently from the linear transformation and non-linear activation?",
"Are there systematic ways to propose new types of dropout?We",
"attempt to address some of these questions by establishing a strong connection between the forward pass through a block of layers in a deep network and the solution of convex optimization problems of the following form: DISPLAYFORM0 Note that when f i (a i x) = 1 2 (a i x − y i ) 2 and g(x) = x 2 2 , Eq. (1) is standard ridge regression. When",
"g(x) = x 1 , Eq. (1) has the form of LASSO regression.We show that a block of layers that consists of dropout followed by a linear transformation (fullyconnected or convolutional) and a non-linear activation has close connections to applying stochastic solvers to (1). Interestingly",
", the choice of the stochastic optimization algorithm gives rise to commonly used dropout layers, such as Bernoulli and additive dropout, and to a family of other types of dropout layers that have not been explored before. As a special",
"case, when the block in question does not include dropout, the stochastic algorithm reduces to a deterministic one.Our contributions can be summarized as follows. (i) We show",
"that a forward pass through a block that consists of Bernoulli dropout followed by a linear transformation and a non-linear activation is equivalent to a single iteration of τ -nice Proximal Stochastic Gradient, Prox-SG BID34 when it is applied to an instance of (1). We provide",
"various conditions on g that recover (either exactly or approximately) common non-linearities used in practice. (ii) We show",
"that the same block with an additive dropout instead of Bernoulli dropout is equivalent to a single iteration of mS2GD BID16 ) -a mini-batching form of variance-reduced SGD BID12 ) -applied to an instance of (1). (iii) By expressing",
"both fully-connected and convolutional layers (referred to as linear throughout) as special cases of a high-order tensor product BID2 , we derive a formula for the Lipschitz constant L of ∇F (x). As a consequence, we",
"can compute the optimal step size for the stochastic solvers that correspond to blocks of layers. We note that concurrent",
"work BID26 used a different analysis strategy to derive an equivalent result for computing the singular values of convolutional layers. (iv) We validate our theoretical",
"analysis experimentally by replacing blocks of layers in standard image classification networks with corresponding solvers and show that this improves the accuracy of the models.",
"We have presented equivalences between layers in deep networks and stochastic solvers, and have shown that this can be leveraged to improve accuracy.",
"The presented relationships open many doors for future work.",
"For instance, our framework shows an intimate relation between a dropout layer and the sampling S from the set [n 1 ] in a stochastic algorithm.",
"As a consequence, one can borrow theory from the stochastic optimization literature to propose new types of dropout layers.",
"For example, consider a serial importance sampling strategy with Prox-SG to solve (5) BID37 BID34 , where serial sampling is the sampling that satisfies Prob (i ∈ S, j ∈ S) = 0.",
"A serial importance sampling S from the set of functions f i ( X ) is the sampling such that Prob DISPLAYFORM0 i.e. each function from the set [n 1 ] is sampled with a probability proportional to the norm of the gradient of the function.",
"This sampling strategy is the optimal serial sampling S that maximizes the rate of convergence solving (5) BID37 .",
"From a deep layer perspective, performing Prox-SG with importance sampling for a single iteration is equivalent to a forward pass through the same block of layers with a new dropout layer.",
"Such a dropout layer will keep each input activation with a non-uniform probability proportional to the norm of the gradient.",
"This is in contrast to BerDropout p where all input activations are kept with an equal probability 1 − p.",
"Other types of dropout arise when considering non-serial importance sampling where |S| = τ > 1.In summary, we have presented equivalences between stochastic solvers on a particular class of convex optimization problems and a forward pass through a dropout layer followed by a linear layer and a non-linear activation.",
"Inspired by these equivalences, we have demonstrated empirically on multiple datasets and network architectures that replacing such network blocks with their corresponding stochastic solvers improves the accuracy of the model.",
"We hope that the presented framework will contribute to a principled understanding of the theory and practice of deep network architectures.A LEAKY RELU AS A PROXIMAL OPERATOR Proof.",
"The proximal operator is defined as Prox g (a) = arg min DISPLAYFORM1 Note that the problem is both convex and smooth.",
"The optimality conditions are given by: DISPLAYFORM2 Since the problem is separable in coordinates, we have: DISPLAYFORM3 The Leaky ReLU is defined as DISPLAYFORM4 which shows that Prox g is a generalized form of the Leaky ReLU with a shift of λ and a slope α = Proof.",
"The proximal operator is defined as Prox g (a) = arg min DISPLAYFORM5 Note that the function g(x) is elementwise separable, convex, and smooth.",
"By equating the gradient to zero and taking the positive solution of the resulting quadratic polynomial, we arrive at the closedform solution: DISPLAYFORM6 where denotes elementwise multiplication.",
"It is easy to see that this operator is close to zero for x i << 0, and close to x i for x i >> 0, with a smooth transition for small |x i |.Note",
"that the function Prox g (a) approximates the activation SoftPlus = log(1 + exp (a)) very well. An illustrative",
"example is shown in FIG2 ."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1666666567325592,
0.11320754140615463,
0.0952380895614624,
0.16949151456356049,
0.18518517911434174,
0.05405404791235924,
0.08888888359069824,
0.06896550953388214,
0.13636362552642822,
0.14999999105930328,
0,
0.1818181723356247,
0.11428570747375488,
0.06666666269302368,
0.19178082048892975,
0.17241378128528595,
0.2641509473323822,
0.1702127605676651,
0.10526315122842789,
0.10526315122842789,
0.07692307233810425,
0.11764705181121826,
0.2631579041481018,
0.13333332538604736,
0.1860465109348297,
0.4878048598766327,
0,
0.13636362552642822,
0.25641024112701416,
0.08163265138864517,
0.1111111044883728,
0.0555555522441864,
0.1304347813129425,
0.052631575614213943,
0.051282044500112534,
0.09677419066429138,
0.25,
0.30434781312942505,
0.09756097197532654,
0.06779660284519196,
0.09302324801683426,
0.09090908616781235,
0.13333332538604736,
0.052631575614213943,
0
] | ryxxCiRqYX | true | [
"A framework that links deep network layers to stochastic optimization algorithms; can be used to improve model accuracy and inform network design."
] |
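The LASSO instance of Eq. (1) mentioned above, with g(x) = ||x||_1, has a standard proximal gradient update whose non-smooth part is handled by soft-thresholding. The sketch below is a generic full-batch illustration rather than the paper's τ-nice Prox-SG; the L1 weight `lam`, the problem sizes, and the iteration count are assumptions, while the step size 1/L uses the standard Lipschitz constant L = ||A||_2^2 of the smooth term.

```python
import numpy as np

def soft_threshold(v, thresh):
    """Proximal operator of thresh * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - thresh, 0.0)

def prox_grad_step(x, A, y, lam, step):
    """One proximal gradient step on 0.5 * ||A x - y||^2 + lam * ||x||_1."""
    grad = A.T @ (A @ x - y)              # gradient of the smooth part
    return soft_threshold(x - step * grad, step * lam)

# Small random instance; sizes and lam are illustrative only.
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 5))
y = rng.normal(size=20)
L = np.linalg.norm(A, ord=2) ** 2         # Lipschitz constant of the gradient
x = np.zeros(5)
for _ in range(100):
    x = prox_grad_step(x, A, y, lam=0.1, step=1.0 / L)
print(x)
```

Replacing the full gradient with one computed on a random subset of the rows of A gives the stochastic variant that the equivalence above refers to.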
[
"Deep networks run with low precision operations at inference time offer power and space advantages over high precision alternatives, but need to overcome the challenge of maintaining high accuracy as precision decreases.",
"Here, we present a method for training such networks, Learned Step Size Quantization, that achieves the highest accuracy to date on the ImageNet dataset when using models, from a variety of architectures, with weights and activations quantized to 2-, 3- or 4-bits of precision, and that can train 3-bit models that reach full precision baseline accuracy.",
"Our approach builds upon existing methods for learning weights in quantized networks by improving how the quantizer itself is configured.",
"Specifically, we introduce a novel means to estimate and scale the task loss gradient at each weight and activation layer's quantizer step size, such that it can be learned in conjunction with other network parameters.",
"This approach works using different levels of precision as needed for a given system and requires only a simple modification of existing training code.",
"Deep networks are emerging as components of a number of revolutionary technologies, including image recognition (Krizhevsky et al., 2012) , speech recognition , and driving assistance (Xu et al., 2017) .",
"Unlocking the full promise of such applications requires a system perspective where task performance, throughput, energy-efficiency, and compactness are all critical considerations to be optimized through co-design of algorithms and deployment hardware.",
"Current research seeks to develop methods for creating deep networks that maintain high accuracy while reducing the precision needed to represent their activations and weights, thereby reducing the computation and memory required for their implementation.",
"The advantages of using such algorithms to create networks for low precision hardware has been demonstrated in several deployed systems (Esser et al., 2016; Jouppi et al., 2017; Qiu et al., 2016) .",
"It has been shown that low precision networks can be trained with stochastic gradient descent by updating high precision weights that are quantized, along with activations, for the forward and backward pass (Courbariaux et al., 2015; Esser et al., 2016) .",
"This quantization is defined by a mapping of real numbers to the set of discrete values supported by a given low precision representation (often integers with 8-bits or less).",
"We would like a mapping for each quantized layer that maximizes task performance, but it remains an open question how to optimally achieve this.",
"To date, most approaches for training low precision networks have employed uniform quantizers, which can be configured by a single step size parameter (the width of a quantization bin), though more complex nonuniform mappings have been considered (Polino et al., 2018) .",
"Early work with low precision deep networks used a simple fixed configuration for the quantizer (Hubara et al., 2016; Esser et al., 2016) , while starting with Rastegari et al. (2016) , later work focused on fitting the quantizer to the data, either based on statistics of the data distribution (Li & Liu, 2016; Cai et al., 2017; McKinstry et al., 2018) or seeking to minimize quantization error during training (Choi et al., 2018c; Zhang et al., 2018) .",
"Most recently, work has focused on using backpropagation with (Jung et al., 2018) , FAQ (McKinstry et al., 2018) , LQ-Nets (Zhang et al., 2018) , PACT (Choi et al., 2018b) , Regularization (Choi et al., 2018c) , and NICE (Baskin et al., 2018 stochastic gradient descent to learn a quantizer that minimizes task loss (Zhu et al., 2016; Mishra & Marr, 2017; Choi et al., 2018b; a; Jung et al., 2018; Baskin et al., 2018; Polino et al., 2018) .",
"While attractive for their simplicity, fixed mapping schemes based on user settings place no guarantees on optimizing network performance, and quantization error minimization schemes might perfectly minimize quantization error and yet still be non optimal if a different quantization mapping actually minimizes task error.",
"Learning the quantization mapping by seeking to minimize task loss is appealing to us as it directly seeks to improve on the metric of interest.",
"However, as the quantizer itself is discontinuous, such an approach requires approximating its gradient, which existing methods have done in a relatively coarse manner that ignore the impact of transitions between quantized states (Choi et al., 2018b; a; Jung et al., 2018) .",
"Here, we introduce a new way to learn the quantization mapping for each layer in a deep network, Learned",
"Step Size Quantization (LSQ), that improves on prior efforts with two key contributions.",
"First, we provide a simple way to approximate the gradient to the quantizer step size that is sensitive to quantized state transitions, arguably providing for finer grained optimization when learning the step size as a model parameter.",
"Second, we propose a simple heuristic to bring the magnitude of step size updates into better balance with weight updates, which we show improves convergence.",
"The overall approach is usable for quantizing both activations and weights, and works with existing methods for backpropagation and stochastic gradient descent.",
"Using LSQ to train several network architectures on the ImageNet dataset, we demonstrate significantly better accuracy than prior quantization approaches (Table 1 ) and, for the first time that we are aware of, demonstrate the milestone of 3-bit quantized networks reaching full precision network accuracy (Table 4) .",
"The results presented here demonstrate that on the ImageNet dataset across several network architectures, LSQ exceeds the performance of all prior approaches for creating quantized networks.",
"We found best performance when rescaling the quantizer step size loss gradient based on layer size and precision.",
"Interestingly, LSQ does not appear to minimize quantization error, whether measured using mean square error, mean absolute error, or Kullback-Leibler divergence.",
"The approach itself is simple, requiring only a single additional parameter per weight or activation layer.",
"Although our goal is to train low precision networks to achieve accuracy equal to their full precision counterparts, it is not yet clear whether this goal is achievable for 2-bit networks, which here reached accuracy several percent below their full precision counterparts.",
"However, we found that such 2-bit solutions for state-of-the-art networks are useful in that they can give the best accuracy for the given model size, for example, with an 8MB model size limit, a 2-bit ResNet-50 was better than a 4-bit ResNet-34 (Figure 3 ).",
"This work is a continuation of a trend towards steadily reducing the number of bits of precision necessary to achieve good performance across a range of network architectures on ImageNet.",
"While it is unclear how far it can be taken, it is noteworthy that the trend towards higher performance at lower precision strengthens the analogy between artificial neural networks and biological neural networks, which themselves employ synapses represented by perhaps a few bits of information (Bartol Jr et al., 2015) and single bit spikes that may be employed in small spatial and/or temporal ensembles to provide low bit width data representation.",
"Analogies aside, reducing network precision while maintaining high accuracy is a promising means of reducing model size and increasing throughput to provide performance advantages in real world deployed deep networks."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.21739129722118378,
0.2461538463830948,
0.2702702581882477,
0.07843136787414551,
0.1538461446762085,
0.0952380895614624,
0.08510638028383255,
0.21739129722118378,
0.21739129722118378,
0.22641508281230927,
0.23255813121795654,
0.1463414579629898,
0.21052631735801697,
0.2222222238779068,
0.029411761090159416,
0.07547169178724289,
0.1538461446762085,
0.14035087823867798,
0.17142856121063232,
0.06666666269302368,
0.25531914830207825,
0.09756097197532654,
0.0555555522441864,
0.28070175647735596,
0.3333333432674408,
0.1764705777168274,
0.05714285373687744,
0,
0.16326530277729034,
0.145454540848732,
0.1904761791229248,
0.17499999701976776,
0.17391303181648254
] | rkgO66VKDS | true | [
"A method for learning quantization configuration for low precision networks that achieves state of the art performance for quantized networks."
] |
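The LSQ row above describes learning each layer's quantizer step size jointly with the network weights, using a gradient estimate for the step size. The sketch below shows a uniform quantizer with a trainable step size and a straight-through style gradient; the exact gradient formula and the gradient rescaling heuristic of the published method are not reproduced here, so treat the class and its defaults as illustrative assumptions.

```python
import torch
import torch.nn as nn

class LearnedStepQuantizer(nn.Module):
    """Uniform quantizer with a learnable step size (illustrative sketch, not the official LSQ code)."""
    def __init__(self, bits=3, signed=True):
        super().__init__()
        self.qn = -(2 ** (bits - 1)) if signed else 0               # lower integer bound
        self.qp = 2 ** (bits - 1) - 1 if signed else 2 ** bits - 1  # upper integer bound
        self.step = nn.Parameter(torch.tensor(0.1))                 # step size, trained with the weights

    def forward(self, x):
        # Scale, clip and round to the integer grid, then rescale; the rounding is
        # bypassed for gradients (straight-through), so `self.step` receives a gradient.
        scaled = torch.clamp(x / self.step, self.qn, self.qp)
        rounded = torch.round(scaled)
        quantized = scaled + (rounded - scaled).detach()
        return quantized * self.step

q = LearnedStepQuantizer(bits=3)
x = torch.randn(4, requires_grad=True)
q(x).sum().backward()
print(q.step.grad)  # the step size receives a gradient and can be optimized by SGD alongside the weights
```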
[
"There have been several studies recently showing that strong natural language understanding (NLU) models are prone to relying on unwanted dataset biases without learning the underlying task, resulting in models which fail to generalize to out-of-domain datasets, and are likely to perform poorly in real-world scenarios.",
"We propose several learning strategies to train neural models which are more robust to such biases and transfer better to out-of-domain datasets.",
"We introduce an additional lightweight bias-only model which learns dataset biases and uses its prediction to adjust the loss of the base model to reduce the biases.",
"In other words, our methods down-weight the importance of the biased examples, and focus training on hard examples, i.e. examples that cannot be correctly classified by only relying on biases.",
"Our approaches are model agnostic and simple to implement. ",
"We experiment on large-scale natural language inference and fact verification datasets and their out-of-domain datasets and show that our debiased models significantly improve the robustness in all settings, including gaining 9.76 points on the FEVER symmetric evaluation dataset, 5.45 on the HANS dataset and 4.78 points on the SNLI hard set. ",
"These datasets are specifically designed to assess the robustness of models in the out-of-domain setting where typical biases in the training data do not exist in the evaluation set.\n",
"Recent neural models (Devlin et al., 2019; Radford et al., 2018; Chen et al., 2017) have achieved high and even near human-performance on several large-scale natural language understanding benchmarks.",
"However, it has been demonstrated that neural models tend to rely on existing idiosyncratic biases in the datasets, and leverage superficial correlations between the label and existing shortcuts in the training dataset to perform surprisingly well 1 , without learning the underlying task (Kaushik & Lipton, 2018; Gururangan et al., 2018; Poliak et al., 2018; Schuster et al., 2019; Niven & Kao, 2019; McCoy et al., 2019) .",
"For instance, natural language inference (NLI) consists of determining whether a hypothesis sentence (There is no teacher in the room) can be inferred from a premise sentence (Kids work at computers with a teacher's help) 2 (Dagan et al., 2006) .",
"However, recent work has demonstrated that large-scale NLI benchmarks contain annotation artifacts; certain words in the hypothesis are highly indicative of inference class that allow models with poor premise grounding to perform unexpectedly well (Poliak et al., 2018; Gururangan et al., 2018) .",
"As an example, in some NLI benchmarks, negation words such as \"nobody\", \"no\", and \"not\" in the hypothesis are often highly correlated with the contradiction label.",
"As a consequence, NLI models do not need to learn the true relationship between the premise and hypothesis and instead can rely on statistical cues, such as learning to link negation words with the contradiction label.",
"As a result of the existence of such biases, models exploiting statistical shortcuts during training often perform poorly on out-of-domain datasets, especially if they are carefully designed to limit the spurious cues.",
"To allow proper evaluation, recent studies have tried to create new evaluation datasets that do not contain such biases (Gururangan et al., 2018; Schuster et al., 2019) .",
"Unfortunately, it is hard to avoid spurious statistical cues in the construction of large-scale benchmarks, and collecting 1 We use biases, heuristic patterns or shortcuts interchangeably.",
"2 The given sentences are in the contradictory relation and the hypothesis cannot be inferred from the premise.",
"new datasets is costly (Sharma et al., 2018) .",
"It is therefore crucial to develop techniques to reduce the reliance on biases during the training of the neural models.",
"In this paper, we propose several end-to-end debiasing techniques to adjust the cross-entropy loss to reduce the biases learned from datasets, which work by down-weighting the biased examples so that the model focuses on learning hard examples.",
"Figure 1 illustrates an example of applying our strategy to prevent an NLI model from predicting the labels using existing biases in the hypothesis.",
"Our strategy involves adding a bias-only branch f B on top of the base model f M during training (In case of NLI, the bias-only model only uses the hypothesis).",
"We then compute the combination of the two models f C in a way to motivate the base model to learn different strategies than the ones used by the bias-only branch f B .",
"At the end of the training, we remove the bias-only classifier and use the predictions of the base model.",
"We propose three main debiasing strategies, detailed in Section 2.2.",
"In our first two proposed methods, the combination is done with an ensemble method which combines the predictions of the base and the bias-only models.",
"The training loss of the base model is then computed on the output of this combined model f C .",
"This has the effect of reducing the loss going from the combined model to the base model for the examples which the bias-only model classifies correctly.",
"For the third method, the bias-only predictions are used to directly weight the loss of the base model, explicitly modulating the loss depending on the accuracy of the bias-only model.",
"All strategies work by allowing the base model to focus on learning the hard examples, by preventing it from learning the biased examples.",
"Our approaches are simple and highly effective.",
"They require training a simple classifier on top of the base model.",
"Furthermore, our methods are model agnostic and general enough to be applicable for addressing common biases seen in several datasets in different domains.",
"We evaluate our models on challenging benchmarks in textual entailment and fact verification.",
"For entailment, we run extensive experiments on HANS (Heuristic Analysis for NLI Systems) (McCoy et al., 2019) , and hard NLI sets of Stanford Natural Language Inference (SNLI) (Bowman et al., 2015) and MultiNLI (MNLI) (Williams et al., 2018 ) datasets (Gururangan et al., 2018 .",
"We additionally construct hard MNLI datasets from MNLI development sets to facilitate the out-of-domain evaluation on this dataset 3 .",
"Furthermore, we evaluate our fact verification models on FEVER Symmetric test set (Schuster et al., 2019) .",
"The selected datasets are highly challenging and have been carefully designed to be unbiased to allow proper evaluation of the out-of-domain performance of the models.",
"We show that including our strategies on training baseline models including BERT (Devlin et al., 2019) provide substantial gain on out-of-domain performance in all the experiments.",
"In summary, we make the following contributions:",
"1) Proposing several debiasing strategies to train neural models that make them more robust to existing biases in the dataset.",
"2) An empirical evaluation of the proposed methods on two large-scale NLI benchmarks and obtaining substantial gain on their challenging out-of-domain data, including 5.45 points on HANS and 4.78 points on SNLI hard set.",
"3) Evaluating our models on fact verification, obtaining 9.76 points gain on FEVER symmetric test set, improving the results of prior work by 4.65 points.",
"To facilitate future work, we release our datasets and code.",
"We propose several novel techniques to reduce biases learned by neural models.",
"We introduce a bias-only model that is designed to capture biases and leverages the existing shortcuts in the datasets to succeed.",
"Our debiasing strategies then work by adjusting the cross-entropy loss based on the performance of this bias-only model to focus learning on the hard examples and down-weight the importance of the biased examples.",
"Our proposed debiasing techniques are model agnostic, simple and highly effective.",
"Extensive experiments show that our methods substantially improve the model robustness to domain-shift, including 9.76 points gain on FEVER symmetric test set, 5.45 on HANS dataset and 4.78 points on SNLI hard set."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.19354838132858276,
0.4285714328289032,
0.1818181723356247,
0.07999999821186066,
0.1249999925494194,
0.1818181723356247,
0.21276594698429108,
0.0833333283662796,
0.1111111044883728,
0.03333332762122154,
0.06451612710952759,
0.08695651590824127,
0.07407406717538834,
0.07692307233810425,
0.1249999925494194,
0.1666666567325592,
0.10526315122842789,
0.06451612710952759,
0.10256409645080566,
0.18518517911434174,
0.13636362552642822,
0,
0.20408162474632263,
0.0555555522441864,
0.25,
0.045454539358615875,
0,
0.04878048226237297,
0.04651162400841713,
0.09756097197532654,
0.06896551698446274,
0,
0.45454543828964233,
0.17142856121063232,
0.06896550953388214,
0.19999998807907104,
0,
0.22727271914482117,
0.2978723347187042,
0,
0.2926829159259796,
0.11320754140615463,
0,
0.1249999925494194,
0.29411762952804565,
0.2926829159259796,
0.2083333283662796,
0.12121211737394333,
0.072727270424366
] | SJlCK1rYwB | true | [
"We propose several general debiasing strategies to address common biases seen in different datasets and obtain substantial improved out-of-domain performance in all settings."
] |
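The row above describes a bias-only branch whose predictions adjust the base model's cross-entropy loss so that examples solvable by the bias alone are down-weighted. One common way to realize such a combination is a product-of-experts style ensemble in log space, sketched below; the paper proposes several combination strategies, and this particular form plus the tensor shapes are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def debiased_loss(base_logits, bias_logits, labels):
    # Combine the base model with a detached bias-only model in log space
    # (product-of-experts style); only the base model receives gradients, so
    # examples the bias-only model already gets right contribute less loss.
    combined = F.log_softmax(base_logits, dim=-1) + F.log_softmax(bias_logits.detach(), dim=-1)
    return F.cross_entropy(combined, labels)

base_logits = torch.randn(8, 3, requires_grad=True)  # e.g. a premise+hypothesis NLI model
bias_logits = torch.randn(8, 3)                      # e.g. a hypothesis-only (bias) model
labels = torch.randint(0, 3, (8,))
loss = debiased_loss(base_logits, bias_logits, labels)
loss.backward()
print(float(loss))
```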
[
"Reconstruction of few-view x-ray Computed Tomography (CT) data is a highly ill-posed problem.",
"It is often used in applications that require low radiation dose in clinical CT, rapid industrial scanning, or fixed-gantry CT.",
"Existing analytic or iterative algorithms generally produce poorly reconstructed images, severely deteriorated by artifacts and noise, especially when the number of x-ray projections is considerably low.",
"This paper presents a deep network-driven approach to address extreme few-view CT by incorporating convolutional neural network-based inference into state-of-the-art iterative reconstruction.",
"The proposed method interprets few-view sinogram data using attention-based deep networks to infer the reconstructed image.",
"The predicted image is then used as prior knowledge in the iterative algorithm for final reconstruction.",
"We demonstrate effectiveness of the proposed approach by performing reconstruction experiments on a chest CT dataset.",
"Computed Tomography (CT) reconstruction is an inverse problem where images are reconstructed from a collection of multiple x-ray projections known as sinogram.",
"Conventional CT imaging systems use densely sampled x-ray projections (roughly equal to one projection per detector column) with a full angular range (180-360 degrees).",
"Unlike the conventional CT setup, on the other hand, some CT systems use different imaging configurations that require rapid scanning or reduced radiation dose.",
"In those cases, the CT imaging uses a small number of x-ray projections, referred to as few-view CT.",
"Reconstructing images from a few x-ray projections becomes an extremely under-determined inverse problem, which results in significant image degradation.",
"The reconstructed images from extremely few-view sinogram measurement (10 views or less) are often characterized by severe artifacts and noise, even with state-of-the-art regularized iterative algorithms [1, 2, 3, 4, 5] as well as with the widely used Filtered Backprojection (FBP) [6] .",
"In recent years, deep learning-based approaches have been successfully applied to a number of image restoration, denoising, inpainting and other image processing applications.",
"Methods in this category use perceptual information as well as contextual features to improve the image quality.",
"In CT imaging applications, several deep convolutional neural network (CNN) approaches have been proposed to address different ill-conditioned CT reconstruction applications.",
"Methods in [7, 8, 9] proposed CNN-based approaches to improve the image quality for low-dose (sparse-view) imaging.",
"These approaches aim to infer the noise distribution to generate a cleaner image from the noisy image.",
"However, these methods do not employ the sinogram to ensure that the reconstructed image is consistent with the measurement.",
"Gupta et al. [10] proposed a method using a CNN-based projector for moderate sparse-view reconstruction (45 and 144 views).",
"Anirudh et al. [11] proposed a CNN-based sinogram completion approach to address limited-angle CT reconstruction.",
"In this paper, we present a CNN inference-based reconstruction algorithm to address extremely few-view CT imaging scenarios.",
"For the initial reconstruction, we employ a CNN-based inference model, based on CT-Net [11] , that directly uses the input measurement (few-view sinogram data) to predict the reconstructed image.",
"In the cases where the sinogram measurements are extremely undersampled, the images reconstructed by existing analytic and iterative methods may suffer from too much noise with little high frequency information, and the methods in [7, 8, 9 ] may repair the missing or noisy part with perceptually created, but incorrect content.",
"Thus, we pursue a method that directly uses the sinogram so that the reconstructed content is consistent with the input measurement, as an inverse problem.",
"Furthermore, instead of performing the sinogram completion in [11] optimized for limited-angle reconstruction, we propose to use the predicted image from the CNN inference model as an image prior in state-of-the-art iterative algorithms in order to improve the final reconstruction.",
"Our experiments on a chest CT dataset show that the proposed model outperforms existing analytical and state-of-the-art iterative algorithms as well as the sinogram completion."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
0.1538461446762085,
0.0624999962747097,
0,
0.34285715222358704,
0.13793103396892548,
0.13793103396892548,
0.27586206793785095,
0.11428570747375488,
0.1621621549129486,
0.05714285373687744,
0.2666666507720947,
0.1249999925494194,
0.07547169178724289,
0.11428570747375488,
0.06896550953388214,
0.24242423474788666,
0.06666666269302368,
0.14814814925193787,
0.06666666269302368,
0.12903225421905518,
0.3571428656578064,
0.7333333492279053,
0.09999999403953552,
0.0357142835855484,
0.05714285373687744,
0.1304347813129425,
0.1111111044883728
] | B1g-h7398H | true | [
"We present a CNN inference-based reconstruction algorithm to address extremely few-view CT. "
] |
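The row above describes using a CNN's prediction from the few-view sinogram as prior knowledge inside an iterative reconstruction. A hedged sketch of that final step is shown below: a least-squares data term plus a quadratic penalty pulling the solution toward the predicted image, minimized by plain gradient descent. The projection operator, the weight lam, and the step size are toy placeholders; the paper's actual iterative algorithm and regularizer are not specified in this excerpt.

```python
import numpy as np

def reconstruct_with_prior(A, sinogram, x_prior, lam=0.5, step=1e-3, iters=200):
    """Minimize ||A x - sinogram||^2 + lam * ||x - x_prior||^2 by gradient descent (illustrative)."""
    x = x_prior.copy()  # warm-start from the CNN-predicted image
    for _ in range(iters):
        grad = 2.0 * A.T @ (A @ x - sinogram) + 2.0 * lam * (x - x_prior)
        x -= step * grad
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(10, 64))                 # toy stand-in for a few-view projection operator
x_true = rng.normal(size=64)
sinogram = A @ x_true                         # noiseless few-view measurement
x_prior = x_true + 0.1 * rng.normal(size=64)  # pretend CNN prediction: close to the truth
x_hat = reconstruct_with_prior(A, sinogram, x_prior)
print("data residual:", np.linalg.norm(A @ x_hat - sinogram))
```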
[
"In open-domain dialogue intelligent agents should exhibit the use of knowledge, however there are few convincing demonstrations of this to date.",
"The most popular sequence to sequence models typically “generate and hope” generic utterances that can be memorized in the weights of the model when mapping from input utterance(s) to output, rather than employing recalled knowledge as context.",
"Use of knowledge has so far proved difficult, in part because of the lack of a supervised learning benchmark task which exhibits knowledgeable open dialogue with clear grounding.",
"To that end we collect and release a large dataset with conversations directly grounded with knowledge retrieved from Wikipedia. ",
"We then design architectures capable of retrieving knowledge, reading and conditioning on it, and finally generating natural responses.",
"Our best performing dialogue models are able to conduct knowledgeable discussions on open-domain topics as evaluated by automatic metrics and human evaluations, while our new benchmark allows for measuring further improvements in this important research direction.",
"Arguably, one of the key goals of AI, and the ultimate the goal of natural language research, is for humans to be able to talk to machines.",
"In order to get close to this goal, machines must master a number of skills: to be able to comprehend language, employ memory to retain and recall knowledge, to reason about these concepts together, and finally output a response that both fulfills functional goals in the conversation while simultaneously being captivating to their human speaking partner.",
"The current state-of-the-art approaches, sequence to sequence models of various kinds BID20 BID23 BID17 BID21 attempt to address some of these skills, but generally suffer from an inability to bring memory and knowledge to bear; as indicated by their name, they involve encoding an input sequence, providing limited reasoning by transforming their hidden state given the input, and then decoding to an output.",
"To converse intelligently on a given topic, a speaker clearly needs knowledge of that subject, and it is our contention here that more direct knowledge memory mechanisms need to be employed.",
"In this work we consider setups where this can be naturally measured and built.We consider the task of open-domain dialogue, where two speakers conduct open-ended chit-chat given an initial starting topic, and during the course of the conversation the topic can broaden or focus on related themes.",
"During such conversations, an interlocutor can glean new information and personal points of view from their speaking partner, while providing similarly themselves.",
"This is a challenging task as it requires several components not found in many standard models.",
"We design a set of architectures specifically for this goal that combine elements of Memory Network architectures BID19 to retrieve knowledge and read and condition on it, and Transformer architectures BID21 to provide state-of-the-art text representations and sequence models for generating outputs, which we term Transformer Memory Networks.As, to our knowledge, no public domain dataset of requisite scale exists, we build a supervised dataset of human-human conversations using crowd-sourced workers, first crowd-sourcing 1365 diverse discussion topics and then conversations involving 201, 999 utterances about them.",
"Each topic is connected to Wikipedia, and one of the humans (the wizard) is asked to link the knowledge they use to sentences from existing articles.",
"In this way, we have both a natural way to train a knowledgeable conversation agent, by employing a memory component that can recall and ground on this existing text, and a natural way to evaluate models that we build, by assessing their ability at locating and using such knowledge.Our Transformer Memory Network architectures, both in retrieval and generative versions, are tested in this setup using both automatic metrics and human evaluations.",
"We show their ability to execute engaging knowledgeable conversations with humans, compared to a number of baselines such as standard Memory Networks or Transformers.",
"Our new benchmark, publicly in ParlAI (http:// parl.ai/projects/wizard of wikipedia/), aims to encourage and measure further improvements in this important research direction.",
"In this work we build dialogue agents which are able to employ large memory systems containing encyclopedic knowledge about the world in order to conduct engaging open-domain conversations.",
"We develop a set of architectures, Transformer Memory Network models, that are capable of retrieving and attending to such knowledge and outputting a response, either in retrieval or generative modes.",
"To train and evaluate such models, we collect the Wizard of Wikipedia dataset, a large collection of open-domain dialogues grounded by Wikipedia knowledge, and demonstrated the effectiveness of our models in automatic and human experiments.",
"Our new publicly available benchmark aims to encourage further model exploration, and we expect such efforts will result in significant advances in this important research direction.There is much future work to be explored using our task and dataset.",
"Some of these include:",
"(i) bridging the gap between the engagingness of retrieval responses versus the ability of generative models to work on new knowledge and topics,",
"(ii) learning to retrieve and reason simultaneously rather than using a separate IR component; and",
"(iii) investigating the relationship between knowledge-grounded dialogue and existing QA tasks which also employ such IR systems.",
"The aim is for those strands to come together to obtain an engaging and knowledgeable conversational agent.",
"Examples of collected conversations from the dataset, where both wizard and apprentice are humans.",
"The wizard has access to an information retrieval system over Wikipedia, so that they can ask and answer questions, and make statements relevant to the discussion.",
"For each utterance, knowledge retrieval is performed based on dialogue history, giving ∼61 knowledge candidates per turn, with wizards clicking no sentence used 6.2% of the time.Assuming that a question contains a question mark or begins with 'how', 'why', 'who', 'where', 'what' or 'when' , in the dataset Apprentices ask questions in 13.9% of training set utterances, and answer questions (i.e., the Wizard has asked a question) 39.5% of the time, while saying new or follow-on statements (neither asking nor answering a question) 49.3% of the time.",
"Hence, the wizard and apprentice conduct conversations with a variety of dialogue acts."
] | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.05882352590560913,
0,
0.19999998807907104,
0.12121211737394333,
0.19354838132858276,
0.1599999964237213,
0,
0.032258059829473495,
0.030303027480840683,
0.0952380895614624,
0.11320754140615463,
0.0555555522441864,
0.13333332538604736,
0.12195121496915817,
0,
0.11940298229455948,
0.1621621549129486,
0.0555555522441864,
0.09756097197532654,
0.09756097197532654,
0.1395348757505417,
0.07999999821186066,
0,
0.11764705181121826,
0.0714285671710968,
0,
0.13333332538604736,
0,
0,
0.06818181276321411,
0.07407406717538834
] | r1l73iRqKm | true | [
"We build knowledgeable conversational agents by conditioning on Wikipedia + a new supervised task."
] |
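The row above describes Transformer Memory Networks that retrieve knowledge, attend over it, and condition the generated or retrieved response on the selected sentences. The snippet below is only a schematic of the attend-and-condition step (dot-product attention of a dialogue-context vector over encoded knowledge sentences); the encoders, the retrieval component, and the response decoder of the actual models are omitted, and all dimensions are invented.

```python
import torch
import torch.nn as nn

class KnowledgeAttention(nn.Module):
    """Dialogue context attends over candidate knowledge encodings (schematic only)."""
    def __init__(self, dim):
        super().__init__()
        self.out = nn.Linear(2 * dim, dim)  # fuse the context with the attended knowledge

    def forward(self, context, knowledge):
        # context: (batch, dim); knowledge: (batch, num_sentences, dim)
        scores = torch.einsum("bd,bkd->bk", context, knowledge)    # attention logits
        weights = scores.softmax(dim=-1)                           # distribution over knowledge sentences
        attended = torch.einsum("bk,bkd->bd", weights, knowledge)  # weighted knowledge summary
        return self.out(torch.cat([context, attended], dim=-1))    # knowledge-conditioned representation

layer = KnowledgeAttention(dim=16)
ctx = torch.randn(2, 16)       # encoded dialogue history
know = torch.randn(2, 5, 16)   # five encoded candidate knowledge sentences
print(layer(ctx, know).shape)  # torch.Size([2, 16])
```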
[
"We formulate a new problem at the intersection of semi-supervised learning and contextual bandits, motivated by several applications including clinical trials and dialog systems.",
"We demonstrate how contextual bandit and graph convolutional networks can be adjusted to the new problem formulation.",
"We then take the best of both approaches to develop multi-GCN embedded contextual bandit.",
"Our algorithms are verified on several real world datasets.",
"We formulate the problem of Online Partially Rewarded (OPR) learning.",
"Our problem is a synthesis of the challenges often considered in the semi-supervised and contextual bandit literature.",
"Despite a broad range of practical cases, we are not aware of any prior work addressing each of the corresponding components.Online: data incrementally collected and systems are required to take an action before they are allowed to observe any feedback from the environment.Partially: oftentimes there is no environment feedback available, e.g. a missing label Rewarded: instead of the true label, we can only hope to observe feedback indicating whether our prediction is good or bad (1 or 0 reward), the latter case obscuring the true label for learning.Practical scenarios that fall under the umbrella of OPR range from clinical trials to dialog orchestration.",
"In clinical trials, reward is partial, as patients may not return for followup evaluation.",
"When patients do return, if feedback on their treatment is negative, the best treatment, or true label, remains unknown.",
"In dialog systems, a user's query is often directed to a number of domain specific agents and the best response is returned.",
"If the user provides negative feedback to the returned response, the best available response is uncertain and moreover, users can also choose to not provide feedback.In many applications, obtaining labeled data requires a human expert or expensive experimentation, while unlabeled data may be cheaply collected in abundance.",
"Learning from unlabeled observations is the key challenge of semi-supervised learning BID2 .",
"We note that the problem of online semi-supervised leaning is rarely considered, with few exceptions BID14 BID13 .",
"In our setting, the problem is further complicated by the bandit-like feedback in place of labels, rendering existing semi-supervised approaches inapplicable.",
"We will however demonstrate how one of the recent approaches, Graph Convolutional Networks (GCN) BID9 , can be extended to our setting.The multi-armed bandit problem provides a solution to the exploration versus exploitation tradeoff while maximizing cumulative reward in an online learning setting.",
"In Linear Upper Confidence Bound (LINUCB) BID10 BID4 and in Contextual Thompson Sampling (CTS) BID0 , the authors assume a linear dependency between the expected reward of an action and its context.",
"However, these algorithms assume that the bandit can observe the reward at each iteration.",
"Several authors have considered variations of partial/corrupted rewards BID1 BID6 , but the case of entirely missing rewards has not been studied to the best of our knowledge.The rest of the paper is structured as follows.",
"In section 2, we formally define the Online Partially Rewarded learning setup and present two extensions to GCN to suit our problem setup.",
"Section 3 presents quantitative evidence of these methods applied to four datasets and analyses the learned latent space of these methods."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0
] | [
0.17142856121063232,
0.06896550953388214,
0.07692307233810425,
0.0952380895614624,
0.1818181723356247,
0.1428571343421936,
0.10869564861059189,
0.07692307233810425,
0,
0.1249999925494194,
0.0363636314868927,
0.1666666567325592,
0.20689654350280762,
0.0624999962747097,
0.11320754140615463,
0.0952380895614624,
0.07999999821186066,
0.09302325546741486,
0.1818181723356247,
0.13333332538604736
] | HkgrQE7luV | true | [
"Synthesis of GCN and LINUCB algorithms for online learning with missing feedbacks"
] |
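The row above builds on contextual bandits such as LINUCB, which assume a linear relation between an arm's expected reward and its context and add an upper-confidence bonus for exploration. Below is a compact, textbook-style per-arm LinUCB sketch; it only illustrates that ingredient and does not include the paper's graph-convolutional embedding or its handling of missing rewards.

```python
import numpy as np

class LinUCBArm:
    """One arm of a generic LinUCB-style contextual bandit (not the paper's algorithm)."""
    def __init__(self, dim, alpha=1.0):
        self.A = np.eye(dim)    # ridge-regularized design matrix
        self.b = np.zeros(dim)  # accumulated reward-weighted contexts
        self.alpha = alpha      # exploration strength

    def score(self, x):
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b                       # ridge estimate of the linear reward weights
        bonus = self.alpha * np.sqrt(x @ A_inv @ x)  # upper-confidence exploration bonus
        return theta @ x + bonus

    def update(self, x, reward):
        # Called only when a reward is actually observed (rewards can be missing in the OPR setting).
        self.A += np.outer(x, x)
        self.b += reward * x

arm = LinUCBArm(dim=4)
x = np.array([0.1, 0.5, -0.2, 1.0])
print(arm.score(x))        # optimistic score before any feedback
arm.update(x, reward=1.0)
print(arm.score(x))        # score after one observed reward
```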
[
" A collection of scientific papers is often accompanied by tags:\n keywords, topics, concepts etc., associated with each paper.\n ",
"Sometimes these tags are human-generated, sometimes they are\n machine-generated. ",
"We propose a simple measure of the consistency\n of the tagging of scientific papers: whether these tags are\n predictive for the citation graph links. ",
"Since the authors tend to\n cite papers about the topics close to those of their publications, a\n consistent tagging system could predict citations. ",
"We present an\n algorithm to calculate consistency, and experiments with human- and\n machine-generated tags. ",
"We show that augmentation, i.e. the combination\n of the manual tags with the machine-generated ones, can enhance the\n consistency of the tags. ",
"We further introduce cross-consistency,\n the ability to predict citation links between papers tagged by\n different taggers, e.g. manually and by a machine.\n ",
"Cross-consistency can be used to evaluate the tagging quality when\n the amount of labeled data is limited.",
"A part of a construction of a knowledge graph is the analysis of publications and adding to them tags: concept names, keywords, etc.",
"This often involves natural language processing or other machine learning methods BID8 .",
"To develop such methods one must have a measure of success: one should be able to determine whether the given tagging is \"good\" or \"bad\".",
"The most direct way to test the machine produced tags is to compare them to the tags produced by humans.",
"One creates a \"golden set\" of papers tagged by humans, and penalizes the algorithms for any deviation from these tags.",
"There are, however, certain problems with this approach.",
"First, human tagging is expensiveeven more so for scientific papers, where human taggers must have a specialized training just to understand what the papers are about.",
"Second, even the best human taggers' results are inconsistent.",
"This provides a natural limitation for this method BID7 .",
"The latter problem is exacerbated when the tagging dictionary is large.",
"For example, the popular US National Library of Medicine database of Medical Subject Headings (MeSH, https://www.nlm.nih.gov/mesh/) has just under 30 000 entries.",
"A superset of MeSH, Unified Medical Language System (UMLS, https://www.nlm.nih.gov/research/ umls/knowledge_sources/metathesaurus/release/statistics.html) contains a staggering amount of 3 822 832 distinct concepts.",
"It is doubtful a human can do a good job choosing the right tags from a dictionary so large.",
"A domain expert usually deals with a subsystem of the dictionary, covering her area of expertise.",
"This presents obvious difficulties for tagging papers about multidisciplinary research, that may require a combination of the efforts of several highly qualified taggers.",
"Another problem is the evaluation of tag augmentation.",
"Suppose we have papers tagged by humans, and we want to add machine-generated tags, for example, to improve the search function in the collection.",
"Do the new tags actually add to the quality or subtract from it?",
"How can we evaluate the result if our tags are by definition different from those produced by humans?Thus",
"a measure of the tagging quality other than a direct comparison with manually produced tags may be useful for the assessing the work of the tagging engines. This",
"is especially important for an ongoing quality control of an engine that continuously ingests and tags fresh publications. In this",
"paper we propose such a measure.The idea for this measure is inspired by the works on graph embeddings [Hamilton et al., 2018, Grover and BID3 . In these",
"works one tags graph nodes and compares different sets of tags. The usual",
"comparison criterion is whether the tags can predict graph edges: nodes connected by an edge should have similar tags, while nodes not connected by an edge should have dissimilar tags. To use this",
"approach we need to represent papers as nodes on a graph. A natural choice",
"is the citation graph: and edge from paper A to paper B means that paper A cites paper B. This leads to the following assumptions:1. Scientific papers",
"cited by the given paper A are more similar to A than the other (non cited) papers.2. A good tagging system",
"must reflect this.In other words, a good set of tags must be able to predict links on the citation graph, and the quality of the prediction reflects the quality of the tags. We will call this property",
"consistency: a good tagger consistently gives similar tags to similar papers. It is worth stressing that",
"consistency is just one component of the quality of a tagger. If a tagger consistently uses",
"keyword library instead of keyword bread BID0 , this measure would give it high marks, despite tags being obviously wrong. A way to overcome this deficiency",
"is to calculate cross-consistency with a known \"good\" tagger. For example, we can tag some papers",
"manually, and some papers using machine generated tags, and then predict citation links between these papers. This cross-consistency measures the",
"similarity between these taggers. This application is interesting because",
"it allows us to expand the number of labeled papers for evaluation of machine-based taggers. We can create a golden set of manually",
"tagged papers, and then generate tags for the papers in their reference lists, and the random samples using the machine-based tagger. Since a typical paper cites many publications",
", this approach significantly expands the quantity of data available for training and testing.To create a measure based on these ideas one should note that citation links strongly depend on the time the candidate for citation was published. Even a very relevant paper may not be cited",
"if it is too old or too new. In the first case the citing authors may prefer",
"a newer paper on the same topic. In the second case they may overlook the most recent",
"publications. Therefore we recast our assumptions in the following",
"way:A consistent tagging system should be able to predict citation links from a given paper to a set of simultaneously published papers.The rest of the paper is organized as follows. In Section 2 we discuss the algorithm to calculate the",
"consistency of the given tagging system. Experiments with this measure are discussed in Section",
"3. In Section 4 we present the conclusions.",
"First, there is clear difference between the consistency of the randomly generated tags and the real ones ( Figure 2 ).",
"As expected, the consistency of the random tags is concentrated at AUC = 0.5, with some outliers both above and below this value.",
"In contrast, the consistency of the real tags is almost always above AUC = 0.5.",
"An exception is tagging sources of low coverage like GNAT (see Table 1 ), where consistency is close to 0.5.",
"Obviously when the coverage is low, most positive and negative samples have zero overlap with their seed papers, which lowers AUC.",
"Unexpectedly, the consistency of high coverage machine generated sources like NEJI is on par with the human tags.Tags augmentation is explored on Figure 3 .",
"As expected, adding random tags to the manually generated ones does not noticeably change the consistency of the result.",
"However, adding \"real\" machine generated tags is improving our measure, which is another evidence that the measure itself is reasonable.The cross-consistency between manual tags and machine-generated ones is shown on Figure 4 .",
"Here we used different sources for seed papers and for samples.",
"While crossconsistency is lower than the internal consistency of each tagger, is still is significantly higher than for random tags.In conclusion, a simple measure of consistency of tagging: whether it is predictive for citation links in a knowledge graph,-seems to be informative about the tagging process and can be used, along with other measures, to assess and evaluate it.",
"Cross-consistency between different taggers can be used to estimate their similarity, especially when some taggers (e.g. manual tagging) are too expensive to run on a large set of papers.",
"Cross consistency between manual tags and NEJI generated ones.",
"X axis shows the source for the seed papers, Y axes shows the source for samples"
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.17142856121063232,
0.0833333283662796,
0.1666666567325592,
0.21621620655059814,
0.20689654350280762,
0.12121211737394333,
0.2631579041481018,
0.12903225421905518,
0.2857142686843872,
0,
0.20512820780277252,
0.19999998807907104,
0.2857142686843872,
0,
0.19999998807907104,
0.0833333283662796,
0.0833333283662796,
0.07999999821186066,
0.05128204822540283,
0.10256409645080566,
0.25,
0.19999998807907104,
0.1621621549129486,
0.08695651590824127,
0.2222222238779068,
0.2222222238779068,
0.1249999925494194,
0.1621621549129486,
0.11764705181121826,
0.1860465109348297,
0.14814814925193787,
0.14999999105930328,
0.27586206793785095,
0.4000000059604645,
0.47058823704719543,
0.2790697515010834,
0.5333333015441895,
0.2142857164144516,
0.20512820780277252,
0.25806450843811035,
0.1818181723356247,
0,
0.277777761220932,
0.4000000059604645,
0.14035087823867798,
0.13333332538604736,
0.19999998807907104,
0.0833333283662796,
0.2448979616165161,
0.13793103396892548,
0.08695651590824127,
0.1764705777168274,
0.15789473056793213,
0.13333332538604736,
0.05714285373687744,
0.1111111044883728,
0.10810810327529907,
0.1875,
0.13636362552642822,
0.1599999964237213,
0.19672130048274994,
0.1395348757505417,
0.1666666567325592,
0.07692307233810425
] | SyeD-b9T6m | true | [
"A good tagger gives similar tags to a given paper and the papers it cites"
] |
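The consistency measure described in the row above asks whether tag similarity predicts citation links: papers cited by a seed paper should share more tags with it than comparable non-cited papers, summarized as an AUC. A bare-bones sketch of that computation, using Jaccard overlap between tag sets as the similarity score, is shown below; the exact similarity function and the sampling of time-matched negatives are assumptions, not the paper's published algorithm.

```python
from sklearn.metrics import roc_auc_score

def jaccard(tags_a, tags_b):
    # Tag-set similarity between two papers.
    a, b = set(tags_a), set(tags_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def consistency_auc(seed_tags, cited_tags, uncited_tags):
    """AUC of tag similarity for cited (positive) vs. time-matched non-cited (negative) papers."""
    scores = [jaccard(seed_tags, t) for t in cited_tags + uncited_tags]
    labels = [1] * len(cited_tags) + [0] * len(uncited_tags)
    return roc_auc_score(labels, scores)

seed = ["neural networks", "citation graph", "tagging"]
cited = [["neural networks", "tagging"], ["citation graph", "metrics"]]
uncited = [["protein folding"], ["quantum computing", "metrics"]]
print(consistency_auc(seed, cited, uncited))  # 1.0 here: a perfectly consistent toy tagger
```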
[
"Recent research has intensively revealed the vulnerability of deep neural networks, especially for convolutional neural networks (CNNs) on the task of image recognition, through creating adversarial samples which `\"slightly\" differ from legitimate samples.",
"This vulnerability indicates that these powerful models are sensitive to specific perturbations and cannot filter out these adversarial perturbations.",
"In this work, we propose a quantization-based method which enables a CNN to filter out adversarial perturbations effectively.",
"Notably, different from prior work on input quantization, we apply the quantization in the intermediate layers of a CNN.",
"Our approach is naturally aligned with the clustering of the coarse-grained semantic information learned by a CNN.",
"Furthermore, to compensate for the loss of information which is inevitably caused by the quantization, we propose the multi-head quantization, where we project data points to different sub-spaces and perform quantization within each sub-space.",
"We enclose our design in a quantization layer named as the Q-Layer.",
"The results obtained on MNIST and Fashion-MNSIT datasets demonstrate that only adding one Q-Layer into a CNN could significantly improve its robustness against both white-box and black-box attacks.",
"In recent years, along with the massive success of deep neural networks (DNNs) witnessed in many research fields, we have also observed their impressive failures when confronted with adversarial examples, especially for image recognition tasks.",
"Prior work (Szegedy et al. (2014) ; Goodfellow et al. (2015) ) has demonstrated that an adversarial image can be easily synthesized by adding to a legitimate image a specifically crafted perturbation, which is typically imperceptible for human visual inspection.",
"The generated adversarial image, however, is strikingly effective for causing convolutional neural network (CNN) classifiers to make extreme confident misclassification results.",
"This vulnerability of DNNs has stimulated the unceasing arms race between research on both attacking (Goodfellow et al. (2015) ; Kurakin et al. (2017) ; Carlini & Wagner (2017) ; Moosavi-Dezfooli et al. (2016) ; Chen et al. (2017) ; Brendel et al. (2018) ) and defending (Madry et al. (2018) ; Samangouei et al. (2018b) ; Buckman et al. (2018) ; Zhang & Liang (2019) ) these powerful models.",
"Among much existing work and a large variety of defense methods, several prior studies (Xu et al. (2018) ; Buckman et al. (2018) ; Zhang & Liang (2019) ) have spent concerted efforts on defending adversarial attacks through input quantization.",
"The principle idea of these methods is to use quantization to filter out small-scale adversarial perturbations.",
"Recall that in prior work (Bau et al. (2017) ; Zeiler & Fergus (2014) ; Zhou et al. (2015) ), it has been shown that the shallow layers of a CNN mostly capture fine-grained features including lines and curves.",
"In the meantime, deeper layers learn coarse-grained yet semantically more critical features, which essentially discriminate different samples.",
"Especially for classification tasks, it is natural to expect samples with the same classification label to share similar semantic information.",
"As such, the semantic similarity between samples may be better revealed if we attend to their latent features learned by the intermediate layers of a CNN.",
"Here we hypothesize that data points with similar semantic information should be distributed densely in the latent feature space.",
"Thus, in order to more effectively filter out adversarial perturbations, we propose an alternative approach which quantizes the data representations embedded in the feature space produced by the intermediate layers of CNN classifiers.",
"Interestingly, there have been other studies that develop similar approaches but for different purposes.",
"For example, Wang et al. (2017; have applied k-means clustering on the intermediate feature maps of CNN models to discover explainable visual concepts.",
"Recent methods, including VQ-VAE (van den Oord et al. (2017) ) and SOM-VAE (Fortuin et al. (2019) ), were proposed to construct generative models for images and time-series data with discrete latent representations, which offer better explainability.",
"However, to the best of our knowledge, the approach of applying intermediate layer quantization for CNN models has not been explored in the context of defending adversarial examples.",
"In this work, we propose a quantization method that is realized by an extra intermediate layer, i.e., the quantization layer (Q-Layer).",
"Our Q-Layer can be easily integrated into any existing architecture of CNN models.",
"Specifically, the Q-Layer splits the mainstream of information that flows forward in a regular CNN model into two separate flows.",
"Both flows share the same information passed by layers before the Q-Layer, but differ in the subsequent networks after the Q-Layer.",
"These two flows produce two outputs, one is the quantized output, and the other is the Non-quantized output.",
"Specifically, the non-quantized path is introduced to facilitate the gradient-based training, and to regularize the quantization operation.",
"In the quantized path, we introduce non-differentiability to defend gradient-based attacks.",
"It is important to note that, while gradient-based attacks cannot be directly applied to the quantized network, they can still be conducted by following the nonquantized path.",
"Also, similar to most input transformation methods (Xu et al. (2018) ; Buckman et al. (2018) ) proposed for defending adversarial examples, our quantization will inevitably lose some feature information, which might be useful for classification.",
"In order to compensate for this loss of information, we further propose multi-head quantization, where we project data points to different sub-spaces and perform quantization within each sub-space.",
"In particular, we perform the projection by re-weighting the input-channels of CNN with trainable parameters.",
"This projection process can be interpreted as performing feature extraction from different points of view, hence help retain the overall effectiveness of our method without causing much performance degradation for the model to be protected.",
"Last but not least, our proposed method can be readily combined with other existing defenses, e.g., adversarial training (Goodfellow et al. (2015) ), to jointly improve the adversarial robustness of a protected CNN classifier.",
"In summary, we make the following contribution:",
"• We propose a quantization-based defense method for the adversarial example problem by designing a quantization Layer (Q-Layer) which can be integrated into existing architectures of CNN models.",
"Our implementation is online available 1 .",
"• We propose multi-head quantization to compensate for the possible information loss caused by the quantization process, and bring significant improvement to the adversarial robustness of an armed model under large perturbation.",
"• We evaluate our method under several representative attacks on MNIST and Fashion-MNIST datasets.",
"Our experiment results demonstrate that the adoption of the Q-Layer can significantly enhance the robustness of a CNN against both black-box and white-box attack, and the robustness can be further improved by combining our method with adversarial training.",
"2 RELATED WORK 2.1 ADVERSARIAL ATTACK Given a neural network classifier N with parameters denoted by w, N can be regarded as a function that takes an input x ∈ R dx and produces an classification label y, i.e., N (x;",
"w) = y or N",
"(x) = y for notation simplicity.",
"In principle, the goal of the adversarial attack is to create a perturbation δ ∈ R dx to be added to a legitimate sample x for creating an adversarial example, i.e., x + δ, which causes the target model N to produce a wrong classification result.",
"Depending on different threat models, adversarial attacks are categorized as black-box attacks or white-box attacks (Papernot et al. (2018) ).",
"Specifically, it is commonly assumed in the white-box attack scenario, that an attacker knows every detail of the target model.",
"This dramatically eases the generation of impactful adversarial examples, and has stimulated researchers to propose various white-box attack methods, including the fast gradient sign method (FGSM) (Goodfellow et al. (2015) ), the basic iterative method (BIM) (Kurakin et al. (2017) ), the Carlini-Wagner (CW) attack (Carlini & Wagner (2017) ), and DeepFool (Moosavi-Dezfooli et al. (2016) ).",
"On the contrary, in the black-box attack scenario, an attacker is typically assumed to be restricted for accessing detailed information, e.g., the architecture, values of parameters, training datasets, of the target model.",
"There have been many black-box attack methods proposed in prior work (Chen et al. (2017) ; Brendel et al. (2018) ; Papernot et al. (2016) ).",
"Representative black-box attacks typically exploit the transferability (Papernot et al. (2016) ) of the adversarial examples, hence is also referred to as transfer black-box attacks.",
"Explicitly, in transfer black-box attacks, an attacker can train and maintain a substitute model, then conduct white-box attacks on the substitute model to generate adversarial samples which retain a certain level of attack power to the target model.",
"Since both black-box and white-box attacks rely on the white-box assumption, in the following, we mainly introduce several representative white-box attacks, namely the FGSM, BIM and CW attacks, which are also employed in our experiments due to their wide adoption as the benchmark attack methods (Samangouei et al. (2018a; b) ).",
"Fast gradient sign method (FGSM) Goodfellow et al. (2015) proposed FGSM, in which δ is calculated by scaling the l ∞ norm of the gradient of the loss function L with respect to a legitimate input x as follows:",
"where represents the maximally allowed scale of perturbation.",
"This method represents a one-step approximation for the direction in the input space that affects the loss function most significantly.",
"Basic iterative method (BIM) Kurakin et al. (2017) proposed the BIM attack, which iteratively performs the FGSM hence generates more impactful adversarial examples at the expense of computational efficiency.",
"Carlini-Wagner (CW) attack Carlini & Wagner (2017) aimed to find the smallest perturbation to fool the target model, by solving the following optimization problem:",
"where c > 0 is a tunable positive constant and p represents different norms.",
"In our experiment, we consider l ∞ norm.",
"L is designed to satisfy that L(x, δ) < 0 if and only if N (x + δ) = N (x).",
"In this paper, we have designed and implemented a quantization layer (Q-Layer) to protection CNN classifiers from the adversarial attacks, and presented the experiment results which show that, by simply inserting one Q-Layer into a regular CNN, its adversarial robustness under both white-box and black-box attacks obtains significant improvement.",
"Moreover, we have combined our method in tandem with adversarial training.",
"The empirical results show that the Q-layer can make a CNN benefit more from adversarial training and even perform well under attacks with larger perturbations.",
"One limitation of this work is due to the uncertainty introduced by the random initialization of concept matrix.",
"This issue also exists in many other clustering algorithms.",
"In this work, we alleviate the impact of this issue by reactivating inactivate concepts.",
"Future work would pursue other approaches on constructing the concept matrix, e.g., regularizing the concept matrix with specific semantic constrains, and using the E-path as a learned index to retrieve information stored in the concept matrix, which acts as an external memory."
] | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.07547169178724289,
0.19512194395065308,
0.4878048598766327,
0.0476190410554409,
0.19999998807907104,
0.11320754140615463,
0.1111111044883728,
0.039215680211782455,
0.06896550953388214,
0.1666666567325592,
0.08888888359069824,
0,
0.06666666269302368,
0.20512820780277252,
0.03389830142259598,
0.04878048226237297,
0.0952380895614624,
0.16326530277729034,
0.09302324801683426,
0.25925925374031067,
0,
0.04255318641662598,
0.10344827175140381,
0.0833333283662796,
0.1304347813129425,
0.05405404791235924,
0.0476190410554409,
0,
0,
0.052631575614213943,
0.05714285373687744,
0.0833333283662796,
0.1428571343421936,
0.07999999821186066,
0.10526315122842789,
0.1428571343421936,
0.20338982343673706,
0,
0.31372547149658203,
0,
0.1538461446762085,
0.10526315122842789,
0.1818181723356247,
0.0952380895614624,
0,
0,
0.158730149269104,
0.0476190410554409,
0,
0.11764705181121826,
0.07407406717538834,
0,
0.1304347813129425,
0.14035087823867798,
0.05970148742198944,
0.16949151456356049,
0,
0.0952380895614624,
0.15686273574829102,
0.04444443807005882,
0.052631575614213943,
0,
0.0476190410554409,
0.11764705181121826,
0.17142856121063232,
0.16326530277729034,
0.14999999105930328,
0,
0,
0.2295081913471222
] | SJe7mC4twH | true | [
"We propose a quantization-based method which regularizes a CNN's learned representations to be automatically aligned with trainable concept matrix hence effectively filtering out adversarial perturbations."
] |
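The adversarial-attack background in the row above spells out FGSM: the perturbation is the sign of the input gradient of the loss, scaled by the budget ε. A standard, self-contained FGSM sketch follows (this is the well-known attack used for evaluation, not the paper's Q-Layer defense); the toy linear model is a placeholder for a CNN classifier.

```python
import torch
import torch.nn as nn

def fgsm(model, x, y, eps=0.1):
    """Fast Gradient Sign Method: x_adv = x + eps * sign(grad_x L(model(x), y))."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy stand-in for a CNN classifier
x = torch.rand(4, 1, 28, 28)
y = torch.randint(0, 10, (4,))
x_adv = fgsm(model, x, y)
print((x_adv - x).abs().max())  # per-pixel perturbation is bounded by eps
```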
[
"Invariant and equivariant networks have been successfully used for learning images, sets, point clouds, and graphs.",
"A basic challenge in developing such networks is finding the maximal collection of invariant and equivariant \\emph{linear} layers.",
"Although this question is answered for the first three examples (for popular transformations, at-least), a full characterization of invariant and equivariant linear layers for graphs is not known. \n\n",
"In this paper we provide a characterization of all permutation invariant and equivariant linear layers for (hyper-)graph data, and show that their dimension, in case of edge-value graph data, is $2$ and $15$, respectively.",
"More generally, for graph data defined on $k$-tuples of nodes, the dimension is the $k$-th and $2k$-th Bell numbers.",
"Orthogonal bases for the layers are computed, including generalization to multi-graph data.",
"The constant number of basis elements and their characteristics allow successfully applying the networks to different size graphs.",
"From the theoretical point of view, our results generalize and unify recent advancement in equivariant deep learning.",
"In particular, we show that our model is capable of approximating any message passing neural network.\n\n",
"Applying these new linear layers in a simple deep neural network framework is shown to achieve comparable results to state-of-the-art and to have better expressivity than previous invariant and equivariant bases.\n",
"We consider the problem of graph learning, namely finding a functional relation between input graphs (more generally, hyper-graphs) G and corresponding targets T , e.g., labels.",
"As graphs are common data representations, this task received quite a bit of recent attention in the machine learning community BID2 ; BID13 ; ; BID38 .More",
"specifically, a (hyper-)graph data point G = (V, A) consists of a set of n nodes V, and values A attached to its hyper-edges 1 . These",
"values are encoded in a tensor A. The order of the tensor A, or equivalently, the number of indices used to represent its elements, indicates the type of data it represents, as follows: First order tensor represents node-values where A i is the value of the i-th node; Second order tensor represents edge-values, where A ij is the value attached to the (i, j) edge",
"; in general, k-th order tensor encodes hyper-edge-values, where A i1,...,i k represents the value of the hyper-edge represented by (i 1 , . . . , i k ). For example",
", it is customary to represent a graph using a binary adjacency matrix A, where A ij equals one if vertex i is connected to vertex j and zero otherwise. We denote",
"the set of order-k tensors by R n k .The task at",
"hand is constructing a functional relation f (A ) ≈ T , where f is a neural network. If T = t is",
"a single output response then it is natural to ask that f is order invariant, namely it should produce the same output regardless of the node numbering used to encode A. For example, if we represent a graph using an adjacency matrix A = A ∈ R n×n , then for an arbitrary permutation matrix P and an arbitrary adjacency matrix A, the function f is order invariant if it satisfies f (P T AP ) = f (A). If the targets",
"T specify output response in a form of a tensor, T = T , then it is natural to ask that f is order equivariant, that is, f commutes with the renumbering of nodes operator acting on tensors. Using the above",
"adjacency matrix example, for every adjacency matrix A and Figure 1 : The full basis for equivariant linear layers for edge-value data A ∈ R n×n , for n = 5. The purely linear",
"15 basis elements, B µ , are represented by matrices n 2 × n 2 , and the 2 bias basis elements (right), C λ , by matrices n × n, see equation 9.every permutation matrix P , the function f is equivariant if it satisfies f (P T AP ) = P T f (A)P . To define invariance",
"and equivariance for functions acting on general tensors A ∈ R n k we use the reordering operator: P A is defined to be the tensor that results from renumbering the nodes V according to the permutation defined by P . Invariance now reads",
"as f (P A) = f (A); while equivariance means f (P A) = P f (A). Note that the latter",
"equivariance definition also holds for functions between different order tensors, f : R n k → R n l .Following the standard",
"paradigm of neural-networks where a network f is defined by alternating compositions of linear layers and non-linear activations, we set as a goal to characterize all linear invariant and equivariant layers. The case of node-value",
"input A = a ∈ R n was treated in the pioneering works of BID39 ; BID26 . These works characterize",
"all linear permutation invariant and equivariant operators acting on node-value (i.e., first order) tensors, R n . In particular it it shown",
"that the linear space of invariant linear operators L : R n → R is of dimension one, containing essentially only the sum operator, L(a) = α1T a. The space of equivariant",
"linear operators L : DISPLAYFORM0 The general equivariant tensor case was partially treated in where the authors make the observation that the set of standard tensor operators: product, element-wise product, summation, and contraction are all equivariant, and due to linearity the same applies to their linear combinations. However, these do not exhaust",
"nor provide a full and complete basis for all possible tensor equivariant linear layers.In this paper we provide a full characterization of permutation invariant and equivariant linear layers for general tensor input and output data. We show that the space of invariant",
"linear layers L : R n k → R is of dimension b(k), where b(k) is the k-th Bell number. The k-th Bell number is the number",
"of possible partitions of a set of size k; see inset for the case k = 3. Furthermore, the space of equivariant",
"linear layers DISPLAYFORM1 Remarkably, this dimension is independent of the size n of the node set V. This allows applying the same network on graphs of different sizes. For both types of layers we provide a",
"general formula for an orthogonal basis that can be readily used to build linear invariant or equivariant layers with maximal expressive power. Going back to the example of a graph",
"represented by an adjacency matrix A ∈ R n×n we have k = 2 and the linear invariant layers L : Figure 1 shows visualization of the basis to the linear equivariant layers acting on edge-value data such as adjacency matrices. DISPLAYFORM2 In BID12 the authors provide",
"an impressive generalization of the case of node-value data to several node sets, V 1 , V 2 , . . . , V m of sizes n 1 , n 2 , . . . , n m . Their goal is to learn interactions across",
"sets. That is, an input data point is a tensor A",
"∈ R n1×n2×···×nm that assigns a value to each element in the cartesian product V 1 × V 2 × · · · × V m . Renumbering the nodes in each node set using",
"permutation matrices P 1 , . . . , P m (resp.) results in a new tensor we denote by P 1:m A. Order invariance means f (P 1:m A) = f (A) and order equivariance is f (P 1:m A) = P 1:m f (A). BID12 introduce bases for linear invariant and",
"equivariant layers. Although the layers in BID12 satisfy the order",
"invariance and equivariance, they do not exhaust all possible such layers in case some node sets coincide. For example, if V 1 = V 2 they have 4 independent",
"learnable parameters where our model has the maximal number of 15 parameters.Our analysis allows generalizing the multi-node set case to arbitrary tensor data over V 1 × V 2 × · · · × V m . Namely, for data points in the form of a tensor A",
"∈ R n k 1 1 ×n k 2 2 ×···×n km m . The tensor A attaches a value to every element of",
"the Cartesian product DISPLAYFORM3 2 , that is, k 1 -tuple from V 1 , k 2 -tuple from V 2 and so forth. We show that the linear space of invariant linear",
"layers DISPLAYFORM4 , while the equivariant linear layers L : DISPLAYFORM5 We also provide orthogonal bases for these spaces. Note that, for clarity, the discussion above disregards",
"biases and features; we detail these in the paper.In appendix C we show that our model is capable of approximating any message-passing neural network as defined in BID9 which encapsulate several popular graph learning models. One immediate corollary is that the universal approximation",
"power of our model is not lower than message passing neural nets.In the experimental part of the paper we concentrated on possibly the most popular instantiation of graph learning, namely that of a single node set and edge-value data, e.g., with adjacency matrices. We created simple networks by composing our invariant or equivariant",
"linear layers in standard ways and tested the networks in learning invariant and equivariant graph functions: (i) We compared identical networks with our basis and the basis of BID12",
"and showed we can learn graph functions like trace, diagonal, and maximal singular vector. The basis in BID12 , tailored to the multi-set setting, cannot learn these",
"functions demonstrating it is not maximal in the graph-learning (i.e., multi-set with repetitions) scenario. We also demonstrate our representation allows extrapolation: learning on one",
"size graphs and testing on another size; (ii) We also tested our networks on a collection of graph learning datasets,",
"achieving results that are comparable to the state-of-the-art in 3 social network datasets."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.19354838132858276,
0.29411762952804565,
0.4651162624359131,
0.52173912525177,
0.29411762952804565,
0.2142857164144516,
0.1764705777168274,
0.1818181723356247,
0.060606054961681366,
0.2666666507720947,
0.1860465109348297,
0.1463414579629898,
0.19999998807907104,
0.13333332538604736,
0.045454539358615875,
0.13636362552642822,
0.1428571343421936,
0.05882352590560913,
0.19178082048892975,
0.08163265138864517,
0.380952388048172,
0.09999999403953552,
0.11538460850715637,
0,
0.0555555522441864,
0.35555556416511536,
0.11428570747375488,
0.2631579041481018,
0.2857142686843872,
0.1666666567325592,
0.5531914830207825,
0.22857142984867096,
0.23529411852359772,
0.17391303181648254,
0.35555556416511536,
0.24561403691768646,
0.0952380895614624,
0.14814814925193787,
0.04878048226237297,
0.22641508281230927,
0.1666666567325592,
0.09302324801683426,
0.1538461446762085,
0.1666666567325592,
0.21052631735801697,
0.19999998807907104,
0.145454540848732,
0.21212120354175568,
0.3589743673801422,
0.1463414579629898,
0,
0.22857142984867096,
0
] | Syx72jC9tm | true | [
"The paper provides a full characterization of permutation invariant and equivariant linear layers for graph data."
] |
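The preceding record states that the invariant and equivariant linear-layer bases have dimensions given by the k-th and 2k-th Bell numbers (2 and 15 for edge-value data, k = 2). The following sketch is illustrative and not taken from that paper; it simply checks those counts with the Bell-triangle recurrence in plain Python.

```python
# Illustrative check of the layer dimensions quoted in the record above:
# invariant layers have dimension b(k) and equivariant layers b(2k),
# which for edge-value graph data (k = 2) gives 2 and 15.

def bell_numbers(k_max):
    """Return [B_0, B_1, ..., B_{k_max}] computed with the Bell triangle."""
    bells = [1]   # B_0
    row = [1]
    for _ in range(k_max):
        new_row = [row[-1]]                      # new row starts with last entry of previous row
        for above in row:
            new_row.append(new_row[-1] + above)  # left neighbour + entry above
        bells.append(new_row[0])
        row = new_row
    return bells

b = bell_numbers(8)
k = 2  # edge-value (adjacency matrix) data
print("invariant basis size   b(k)  =", b[k])      # 2
print("equivariant basis size b(2k) =", b[2 * k])  # 15
```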
[
"In reinforcement learning, we can learn a model of future observations and rewards, and use it to plan the agent's next actions.",
"However, jointly modeling future observations can be computationally expensive or even intractable if the observations are high-dimensional (e.g. images).",
"For this reason, previous works have considered partial models, which model only part of the observation.",
"In this paper, we show that partial models can be causally incorrect: they are confounded by the observations they don't model, and can therefore lead to incorrect planning.",
"To address this, we introduce a general family of partial models that are provably causally correct, but avoid the need to fully model future observations.",
"The ability to predict future outcomes of hypothetical decisions is a key aspect of intelligence.",
"One approach to capture this ability is via model-based reinforcement learning (MBRL) (Munro, 1987; Werbos, 1987; Nguyen & Widrow, 1990; Schmidhuber, 1991) .",
"In this framework, an agent builds an internal representation s t by sensing an environment through observational data y t (such as rewards, visual inputs, proprioceptive information) and interacts with the environment by taking actions a t according to a policy π(a t |s t ).",
"The sensory data collected is used to build a model that typically predicts future observations y >t from past actions a ≤t and past observations y ≤t .",
"The resulting model may be used in various ways, e.g. for planning (Oh et al., 2015; Silver et al., 2017a) , generation of synthetic training data (Weber et al., 2017) , better credit assignment (Heess et al., 2015) , learning useful internal representations and belief states (Gregor et al., 2019; Guo et al., 2018) , or exploration via quantification of uncertainty or information gain (Pathak et al., 2017) .",
"Within MBRL, commonly explored methods include action-conditional, next-step models (Oh et al., 2015; Ha & Schmidhuber, 2018; Chiappa et al., 2017; Schmidhuber, 2010; Xie et al., 2016; Deisenroth & Rasmussen, 2011; Lin & Mitchell, 1992; Li et al., 2015; Diuk et al., 2008; Igl et al., 2018; Ebert et al., 2018; Kaiser et al., 2019; Janner et al., 2019) .",
"However, it is often not tractable to accurately model all the available information.",
"This is both due to the fact that conditioning on high-dimensional data such as images would require modeling and generating images in order to plan over several timesteps (Finn & Levine, 2017) , and to the fact that modeling images is challenging and may unnecessarily focus on visual details which are not relevant for acting.",
"These challenges have motivated researchers to consider simpler models, henceforth referred to as partial models, i.e. models which are neither conditioned on, nor generate the full set of observed data (Guo et al., 2018; Gregor et al., 2019; Amos et al., 2018) .",
"In this paper, we demonstrate that partial models will often fail to make correct predictions under a new policy, and link this failure to a problem in causal reasoning.",
"Prior to this work, there has been a growing interest in combining causal inference with RL research in the directions of non-model based bandit algorithms (Bareinboim et al., 2015; Forney et al., 2017; Zhang & Bareinboim, 2017; Lee & Bareinboim, 2018; Bradtke & Barto, 1996) and causal discovery with RL (Zhu & Chen, 2019) .",
"Contrary to previous works, in this paper we focus on model-based approaches and propose a novel framework for learning better partial models.",
"A key insight of our methodology is the fact that any piece of information about the state of the environment that is used by the policy to make a decision, but is not available to the model, acts as a confounding variable for that model.",
"As a result, the learned model is causally incorrect.",
"Using such a model to reason may lead to the wrong conclusions about the optimal course of action as we demonstrate in this paper.",
"We address these issues of partial models by combining general principles of causal reasoning, probabilistic modeling and deep learning.",
"Our contributions are as follows.",
"• We identify and clarify a fundamental problem of partial models from a causal-reasoning perspective and illustrate it using simple, intuitive Markov Decision Processes (MDPs) (Section 2).",
"• In order to tackle these shortcomings we examine the following question: What is the minimal information that we have to condition a partial model on such that it will be causally correct with respect to changes in the policy?",
"(Section 4) • We answer this question by proposing a family of viable solutions and empirically investigate their effects on models learned in illustrative environments (simple MDPs and 3D environments).",
"Our method is described in Section 4 and the experiments are in Section 5.",
"We have characterized and explained some of the issues of partial models in terms of causal reasoning.",
"We proposed a simple, yet effective, modification to partial models so that they can still make correct predictions under changes in the behavior policy, which we validated theoretically and experimentally.",
"The proposed modifications address the correctness of the model against policy changes, but don't address the correctness/robustness against other types of intervention in the environment.",
"We will explore these aspects in future work."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0
] | [
0.10526315122842789,
0.0555555522441864,
0.24242423474788666,
0.23255813121795654,
0.2380952388048172,
0.06451612710952759,
0.052631575614213943,
0.07407406717538834,
0.05128204822540283,
0.029411761090159416,
0.0363636314868927,
0.19999998807907104,
0.13333332538604736,
0.2181818187236786,
0.23255813121795654,
0.09836065024137497,
0.20512819290161133,
0.11999999731779099,
0.1538461446762085,
0.1538461446762085,
0.11428570747375488,
0,
0.0952380895614624,
0.2745097875595093,
0.1304347813129425,
0.13793103396892548,
0.3125,
0.25531914830207825,
0.1111111044883728,
0.07999999821186066
] | HyeG9yHKPr | true | [
"Causally correct partial models do not have to generate the whole observation to remain causally correct in stochastic environments."
] |
[
"In lifelong learning, the learner is presented with a sequence of tasks, incrementally building a data-driven prior which may be leveraged to speed up learning of a new task.",
"In this work, we investigate the efficiency of current lifelong approaches, in terms of sample complexity, computational and memory cost.",
"Towards this end, we first introduce a new and a more realistic evaluation protocol, whereby learners observe each example only once and hyper-parameter selection is done on a small and disjoint set of tasks, which is not used for the actual learning experience and evaluation.",
"Second, we introduce a new metric measuring how quickly a learner acquires a new skill.",
"Third, we propose an improved version of GEM (Lopez-Paz & Ranzato, 2017), dubbed Averaged GEM (A-GEM), which enjoys the same or even better performance as GEM, while being almost as computationally and memory efficient as EWC (Kirkpatrick et al., 2016) and other regularization-based methods.",
"Finally, we show that all algorithms including A-GEM can learn even more quickly if they are provided with task descriptors specifying the classification tasks under consideration.",
"Our experiments on several standard lifelong learning benchmarks demonstrate that A-GEM has the best trade-off between accuracy and efficiency",
"Intelligent systems, whether they are natural or artificial, must be able to quickly adapt to changes in the environment and to quickly learn new skills by leveraging past experiences.",
"While current learning algorithms can achieve excellent performance on a variety of tasks, they strongly rely on copious amounts of supervision in the form of labeled data.The lifelong learning (LLL) setting attempts at addressing this shortcoming, bringing machine learning closer to a more realistic human learning by acquiring new skills quickly with a small amount of training data, given the experience accumulated in the past.",
"In this setting, the learner is presented with a stream of tasks whose relatedness is not known a priori.",
"The learner has then the potential to learn more quickly a new task, if it can remember how to combine and re-use knowledge acquired while learning related tasks of the past.",
"Of course, for this learning setting to be useful, the model needs to be constrained in terms of amount of compute and memory required.",
"Usually this means that the learner should not be allowed to merely store all examples seen in the past (in which case this reduces the lifelong learning problem to a multitask problem) nor should the learner be engaged in computations that would not be feasible in real-time, as the goal is to quickly learn from a stream of data.Unfortunately, the established training and evaluation protocol as well as current algorithms for lifelong learning do not satisfy all the above desiderata, namely learning from a stream of data using limited number of samples, limited memory and limited compute.",
"In the most popular training paradigm, the learner does several passes over the data BID1 BID22 , while ideally the model should need only a handful of samples and these should be provided one-by-one in a single pass BID15 .",
"Moreover, when the learner has several hyper-parameters to tune, the current practice is to go over the sequence of tasks several times, each time with a different hyper-parameter value, again ignoring the requirement of learning from a stream of data and, strictly speaking, violating the assumption of the LLL scenario.",
"While some algorithms may work well in a single-pass setting, they unfortunately require a lot of computation BID15 or their memory scales with the number of tasks , which greatly impedes their actual deployment in practical applications.In this work, we propose an evaluation methodology and an algorithm that better match our desiderata, namely learning efficiently -in terms of training samples, time and memory -from a stream of tasks.",
"First, we propose a new learning paradigm, whereby the learner performs cross validation on a set of tasks which is disjoint from the set of tasks actually used for evaluation (Sec. 2) .",
"In this setting, the learner will have to learn and will be tested on an entirely new sequence of tasks and it will perform just a single pass over this data stream.",
"Second, we build upon GEM BID15 , an algorithm which leverages a small episodic memory to perform well in a single pass setting, and propose a small change to the loss function which makes GEM orders of magnitude faster at training time while maintaining similar performance; we dub this variant of GEM, A-GEM (Sec. 4).",
"Third, we explore the use of compositional task descriptors in order to improve the fewshot learning performance within LLL showing that with this additional information the learner can pick up new skills more quickly (Sec. 5).",
"Fourth, we introduce a new metric to measure the speed of learning, which is useful to quantify the ability of a learning algorithm to learn a new task (Sec. 3).",
"And finally, using our new learning paradigm and metric, we demonstrate A-GEM on a variety of benchmarks and against several representative baselines (Sec. 6).",
"Our experiments show that A-GEM has a better trade-off between average accuracy and computational/memory cost.",
"Moreover, all algorithms improve their ability to quickly learn a new task when provided with compositional task descriptors, and they do so better and better as they progress through the learning experience.",
"We studied the problem of efficient Lifelong Learning (LLL) in the case where the learner can only do a single pass over the input data stream.",
"We found that our approach, A-GEM, has the best tradeoff between average accuracy by the end of the learning experience and computational/memory cost.",
"Compared to the original GEM algorithm, A-GEM is about 100 times faster and has 10 times less memory requirements; compared to regularization based approaches, it achieves significantly higher average accuracy.",
"We also demonstrated that by using compositional task descriptors all methods can improve their few-shot performance, with A-GEM often being the best.Our detailed experiments reported in Appendix E also show that there is still a substantial performance gap between LLL methods, including A-GEM, trained in a sequential learning setting and the same network trained in a non-sequential multi-task setting, despite seeing the same data samples.",
"Moreover, while task descriptors do help in the few-shot learning regime, the LCA performance gap between different methods is very small; suggesting a poor ability of current methods to transfer knowledge even when forgetting has been eliminated.",
"Addressing these two fundamental issues will be the focus of our future research."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0
] | [
0.1702127605676651,
0.14999999105930328,
0.10169491171836853,
0.060606054961681366,
0.16129031777381897,
0.08510638028383255,
0.3499999940395355,
0.08510638028383255,
0.13333332538604736,
0.052631575614213943,
0.1599999964237213,
0.1904761791229248,
0.1882352977991104,
0.072727270424366,
0.09999999403953552,
0.202531635761261,
0.0833333283662796,
0.12244897335767746,
0.14705881476402283,
0.1090909019112587,
0.1818181723356247,
0.13636362552642822,
0.3888888955116272,
0.2448979616165161,
0.09090908616781235,
0.2380952388048172,
0.20408162474632263,
0.12987013161182404,
0.1428571343421936,
0
] | Hkf2_sC5FX | true | [
"An efficient lifelong learning algorithm that provides a better trade-off between accuracy and time/ memory complexity compared to other algorithms. "
] |
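The preceding record describes A-GEM. Below is a minimal numpy sketch of the A-GEM gradient correction it refers to: keep the current-task gradient when it agrees with the gradient computed on an episodic-memory batch, otherwise project it so the memory loss is not increased. The names and surrounding training loop are assumptions for illustration, not the authors' released code.

```python
import numpy as np

def agem_project(g, g_ref, eps=1e-12):
    """A-GEM gradient correction (sketch).

    g     : flattened gradient on the current task's mini-batch.
    g_ref : flattened gradient on a mini-batch drawn from the episodic memory.
    If the two gradients agree (non-negative dot product), g is used as-is;
    otherwise g is projected so it no longer increases the memory loss:
        g_tilde = g - (g . g_ref / ||g_ref||^2) * g_ref
    """
    dot = float(np.dot(g, g_ref))
    if dot >= 0.0:
        return g
    return g - (dot / (float(np.dot(g_ref, g_ref)) + eps)) * g_ref

# Toy check: after projection the corrected gradient is orthogonal to g_ref
# whenever the original dot product was negative.
g = np.array([1.0, -2.0, 0.5])
g_ref = np.array([0.5, 1.0, 0.0])
g_tilde = agem_project(g, g_ref)
print(np.dot(g_tilde, g_ref))  # ~0.0
```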
[
"Reduced precision computation is one of the key areas addressing the widening’compute gap’, driven by an exponential growth in deep learning applications.",
"In recent years, deep neural network training has largely migrated to 16-bit precision,with significant gains in performance and energy efficiency.",
"However, attempts to train DNNs at 8-bit precision have met with significant challenges, because of the higher precision and dynamic range requirements of back-propagation. ",
"In this paper, we propose a method to train deep neural networks using 8-bit floating point representation for weights, activations, errors, and gradients. ",
"We demonstrate state-of-the-art accuracy across multiple data sets (imagenet-1K, WMT16)and a broader set of workloads (Resnet-18/34/50, GNMT, and Transformer) than previously reported. ",
"We propose an enhanced loss scaling method to augment the reduced subnormal range of 8-bit floating point, to improve error propagation.",
"We also examine the impact of quantization noise on generalization, and propose a stochastic rounding technique to address gradient noise.",
"As a result of applying all these techniques, we report slightly higher validation accuracy compared to full precision baseline.",
"The unprecedented success of Deep Learning models in a variety of tasks including computer vision , machine translation and speech recognition (Graves et al., 2013; Hannun et al., 2014) has led to the proliferation of deeper and more complex models.",
"Algorithmic innovations such as large batch training (Keskar et al., 2016) and neural architecture search (Zoph & Le, 2016) have enabled models to scale on large compute cluster to accelerate training.",
"This enhanced performance has enabled the adoption of larger neural networks.",
"As a consequence, the computational requirements for training Deep Learning models have been growing at an exponential rate (Amodei & Hernandez) over the past few years, outperforming Moore's Law and hardware capabilities by a wide margin.",
"One of the promising areas of research to address this growing compute gap is to reduce the numeric precision requirements for deep learning.",
"Reduced precision methods exploit the inherent noise resilient properties of deep neural networks to improve compute efficiency, while minimizing the loss of model accuracy.",
"Recent studies (Micikevicius et al., 2017; Das et al., 2018) have shown that, deep neural networks can be trained using 16-bits of precision without any noticeable impact on validation accuracy across a wide range of networks.",
"Today, state-of-the-art training platforms support 16-bit precision in the form of high-performance systolic array or GEMM engine (General Matrix Multiply) implementations (Markidis et al., 2018; Köster et al., 2017a) .",
"There have been numerous attempts (Hubara et al., 2017; Zhou et al., 2016; De Sa et al., 2018; Wu et al., 2018; Cai et al., 2017) to train deep neural networks at lower precision (below 16-bits) with varying degrees of success.",
"With the abundance of 8-bit integer deep learning 'ops' deployed to accelerate inference tasks, much of the research into training methods have also focused on integer based fixed-point numeric formats (Zhou et al., 2016; De Sa et al., 2018; Wu et al., 2018) .",
"Training with 8-bit integers has been significantly more challenging because the dynamic range of such formats is not sufficient to represent error gradients during back-propagation.",
"More recently, Wang et al. (2018) have shown that 8-bit floating representation can be used to train convolutional neural networks, with the help of specialized chunk-based accumulation and stochastic rounding hardware.",
"While this method has shown promising results, it requires expensive stochastic rounding hardware built into the critical compute path making it unattractive for systolic array and GEMM accelerator implementations.",
"Our paper extends the state of the art in 8-bit floating point (FP8) training with the following key contributions:",
"• We propose a scalable training solution that eliminates the need for specialized hardware designs (Wang et al., 2018) , thereby enabling efficient MAC designs with higher compute density.",
"• We demonstrated state-of-the-art training results using 8-bit floating point representation (for weight, activation, error and gradient tensors), across multiple data sets (Imagenet-1K, WMT16) and a broader set of workloads (Resnet, GNMT, Transformer) than previously reported (Wang et al., 2018) .",
"• We propose enhanced loss scaling method to compensate for the reduced subnormal range of 8-bit floating point representation for improved error propagation leading to better model accuracy.",
"• We present a detailed study of the impact of quantization noise on model generalization and propose a stochastic rounding technique to address the gradient noise in the early epochs leading to better generalization.",
"We demonstrate state-of-the-art accuracy across multiple data sets (imagenet-1K, WMT16) and a broader set of workloads (Resnet-18/34/50, GNMT, Transformer) than previously reported.",
"We propose easy to implement and scalable solution for building FP8 compute primitives, eliminating the need for stochastic rounding hardware in the critical compute path, as proposed by Wang et al. (2018) , thereby reducing the cost and complexity of the MAC unit.",
"We explore issues around gradient underflow and quantization noise that arise as a result of using the proposed 8-bit numeric format for large scale neural network training.",
"We propose solutions to deal with these problems in the form of enhanced loss scaling and stochastic rounding."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0
] | [
0,
0.05882352590560913,
0.05405404791235924,
0.21052631735801697,
0.21621620655059814,
0.1764705777168274,
0.060606054961681366,
0,
0,
0.0476190447807312,
0,
0.0416666641831398,
0,
0,
0.08510638028383255,
0.0952380895614624,
0,
0.07843136787414551,
0.05128204822540283,
0.08888888359069824,
0,
0.25806450843811035,
0.0952380895614624,
0.40740740299224854,
0.19999998807907104,
0.04878048226237297,
0.2222222238779068,
0.039215683937072754,
0.19512194395065308,
0.0624999962747097
] | HJe88xBKPr | true | [
"We demonstrated state-of-the-art training results using 8-bit floating point representation, across Resnet, GNMT, Transformer."
] |
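The preceding record relies on enhanced loss scaling and stochastic rounding for 8-bit floating-point training. The snippet below is an illustrative numpy sketch of unbiased stochastic rounding on a uniform grid, used as a stand-in for a true FP8 format whose step size varies with the exponent; the grid step is an arbitrary assumption, and loss scaling is only summarized in a comment.

```python
import numpy as np

def stochastic_round(x, step):
    """Unbiased stochastic rounding of x to multiples of `step` (sketch).

    The fractional remainder becomes the probability of rounding up, so
    E[stochastic_round(x)] = x. A real FP8 format would instead round the
    mantissa, with a step that depends on each value's exponent.
    """
    scaled = x / step
    floor = np.floor(scaled)
    round_up = np.random.random_sample(x.shape) < (scaled - floor)
    return (floor + round_up) * step

# Loss scaling (used for the error/gradient tensors in the record above) follows
# the usual pattern: multiply the loss by a large constant before back-propagation
# so small error values stay representable, then divide the gradients by the same
# constant before the weight update.

x = np.random.randn(5).astype(np.float32)
print(x)
print(stochastic_round(x, step=1.0 / 16.0))  # quantised copy, unbiased on average
```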
[
"Loss functions play a crucial role in deep metric learning thus a variety of them have been proposed.",
"Some supervise the learning process by pairwise or tripletwise similarity constraints while others take the advantage of structured similarity information among multiple data points.",
"In this work, we approach deep metric learning from a novel perspective.",
"We propose instance cross entropy (ICE) which measures the difference between an estimated instance-level matching distribution and its ground-truth one.",
"ICE has three main appealing properties.",
"Firstly, similar to categorical cross entropy (CCE), ICE has clear probabilistic interpretation and exploits structured semantic similarity information for learning supervision.",
"Secondly, ICE is scalable to infinite training data as it learns on mini-batches iteratively and is independent of the training set size.",
"Thirdly, motivated by our relative weight analysis, seamless sample reweighting is incorporated.",
"It rescales samples’ gradients to control the differentiation degree over training examples instead of truncating them by sample mining.",
"In addition to its simplicity and intuitiveness, extensive experiments on three real-world benchmarks demonstrate the superiority of ICE.",
"Deep metric learning (DML) aims to learn a non-linear embedding function (a.k.a. distance metric) such that the semantic similarities over samples are well captured in the feature space (Tadmor et al., 2016; Sohn, 2016) .",
"Due to its fundamental function of learning discriminative representations, DML has diverse applications, such as image retrieval (Song et al., 2016) , clustering (Song et al., 2017) , verification (Schroff et al., 2015) , few-shot learning (Vinyals et al., 2016) and zero-shot learning (Bucher et al., 2016) .",
"A key to DML is to design an effective and efficient loss function for supervising the learning process, thus significant efforts have been made (Chopra et al., 2005; Schroff et al., 2015; Sohn, 2016; Song et al., 2016; Law et al., 2017; Wu et al., 2017) .",
"Some loss functions learn the embedding function from pairwise or triplet-wise relationship constraints (Chopra et al., 2005; Schroff et al., 2015; Tadmor et al., 2016) .",
"However, they are known to not only suffer from an increasing number of non-informative samples during training, but also incur considering only several instances per loss computation.",
"Therefore, informative sample mining strategies are proposed (Schroff et al., 2015; Wu et al., 2017; Wang et al., 2019b) .",
"Recently, several methods consider semantic relations among multiple examples to exploit their similarity structure (Sohn, 2016; Song et al., 2016; Law et al., 2017) .",
"Consequently, these structured losses achieve better performance than pairwise and triple-wise approaches.",
"In this paper, we tackle the DML problem from a novel perspective.",
"Specifically, we propose a novel loss function inspired by CCE.",
"CCE is well-known in classification problems owing to the fact that it has an intuitive probabilistic interpretation and achieves great performance, e.g., ImageNet classification (Russakovsky et al., 2015) .",
"However, since CCE learns a decision function which predicts the class label of an input, it learns class-level centres for reference (Zhang et al., 2018; Wang et al., 2017a) .",
"Therefore, CCE is not scalable to infinite classes and cannot generalise well when it is directly applied to DML (Law et al., 2017) .",
"With scalability and structured information in mind, we introduce instance cross entropy (ICE) for DML.",
"It learns an embedding function by minimising the cross entropy between a predicted instance-level matching distribution and its corresponding ground-truth.",
"In comparison with CCE, given a query, CCE aims to maximise its matching probability with the class-level context vector (weight vector) of its ground-truth class, whereas ICE targets at maximising its matching probability with it similar instances.",
"As ICE does not learn class-level context vectors, it is scalable to infinite training classes, which is an intrinsic demand of DML.",
"Similar to (Sohn, 2016; Song et al., 2016; Law et al., 2017; Goldberger et al., 2005; Wu et al., 2018) , ICE is a structured loss as it also considers all other instances in the mini-batch of a given query.",
"We illustrate ICE with comparison to other structured losses in Figure 1 .",
"A common challenge of instance-based losses is that many training examples become trivial as model improves.",
"Therefore, we integrate seamless sample reweighting into ICE, which functions similarly with various sample mining schemes (Sohn, 2016; Schroff et al., 2015; Shi et al., 2016; Wu et al., 2017) .",
"Existing mining methods require either separate time-consuming process, e.g., class mining (Sohn, 2016) , or distance thresholds for data pruning (Schroff et al., 2015; Shi et al., 2016; Wu et al., 2017) .",
"Instead, our reweighting scheme works without explicit data truncation and mining.",
"It is motivated by the relative weight analysis between two examples.",
"The current common practice of DML is to learn an angular embedding space by projecting all features to a unit hypersphere surface (Song et al., 2017; Law et al., 2017; MovshovitzAttias et al., 2017) .",
"We identify the challenge that without sample mining, informative training examples cannot be differentiated and emphasised properly because the relative weight between two samples is strictly bounded.",
"We address it by sample reweighting, which rescales samples' gradient to control the differentiation degree among them.",
"Finally, for intraclass compactness and interclass separability, most methods (Schroff et al., 2015; Song et al., 2016; Tadmor et al., 2016; Wu et al., 2017) use distance thresholds to decrease intraclass variances and increase interclass distances.",
"In contrast, we achieve the target from a perspective of instance-level matching probability.",
"Without any distance margin constraint, ICE makes no assumptions about the boundaries between different classes.",
"Therefore, ICE is easier to apply in applications where we have no prior knowledge about intraclass variances.",
"Our contributions are summarised: (1) We approach DML from a novel perspective by taking in the key idea of matching probability in CCE.",
"We introduce ICE, which is scalable to an infinite number of training classes and exploits structured information for learning supervision.",
"(2) A seamless sample reweighting scheme is derived for ICE to address the challenge of learning an embedding subspace by projecting all features to a unit hypersphere surface.",
"(3) We show the superiority of ICE by comparing with state-of-the-art methods on three real-world datasets.",
"We remark that Prototypical Networks, Matching Networks (Vinyals et al., 2016) and NCA are also scalable and do not require distance thresholds.",
"Therefore, they are illustrated and differentiated in Figure 1 .",
"Matching Networks are designed specifically for one-shot learning.",
"Similarly, (Triantafillou et al., 2017) design mAP-SSVM and mAP-DLM for few-shot learning, which directly optimises the retrieval performance mAP when multiple positives exist.",
"FastAP (Cakir et al., 2019) is similar to (Triantafillou et al., 2017) and optimises the ranked-based average precision.",
"Instead, ICE processes one positive at a time.",
"Beyond, the setting of few-shot learning is different from deep metric learning: Each mini-batch is a complete subtask and contains a support set as training data and a query set as validation data in few-shot learning.",
"Few-shot learning applies episodic training in practice.",
"Remarkably, TADAM formulates instances versus class centres and also has a metric scaling parameter for adjusting the impact of different class centres.",
"Contrastively, ICE adjusts the influence of other instances.",
"Furthermore, ours is not exactly distance metric scaling since we simply apply naive cosine similarity as the distance metric at the testing stage.",
"That is why we interpret it as a weighting scheme during training.",
"In this paper, we propose a novel instance-level softmax regression framework, named instance cross entropy, for deep metric learning.",
"Firstly, the proposed ICE has clear probability interpretation and exploits structured semantic similarity information among multiple instances.",
"Secondly, ICE is scalable to infinitely many classes, which is required by DML.",
"Thirdly, ICE has only one weight scaling hyper-parameter, which works as mining informative examples and can be easily selected via cross-validation.",
"Finally, distance thresholds are not applied to achieve intraclass compactness and interclass separability.",
"This indicates that ICE makes no assumptions about intraclass variances and the boundaries between different classes.",
"Therefore ICE owns general applicability."
] | [
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0,
0.04651162400841713,
0,
0.9756097793579102,
0,
0.1428571343421936,
0.09756097197532654,
0,
0.04999999329447746,
0.1538461446762085,
0.0357142798602581,
0.07547169178724289,
0.10344827175140381,
0.04651162400841713,
0.04255318641662598,
0,
0,
0.060606054961681366,
0.060606054961681366,
0.06451612710952759,
0.11999999731779099,
0.1249999925494194,
0.04651162400841713,
0.277777761220932,
0.5365853905677795,
0.1538461446762085,
0.0952380895614624,
0.037735845893621445,
0.060606054961681366,
0,
0.04347825422883034,
0,
0.0624999962747097,
0.1249999925494194,
0.03999999538064003,
0.1702127605676651,
0.15789473056793213,
0.0416666604578495,
0.1764705777168274,
0.1111111044883728,
0,
0.1395348757505417,
0.19512194395065308,
0.0833333283662796,
0.10810810327529907,
0.09302324801683426,
0.06666666269302368,
0,
0.13333332538604736,
0.10526315122842789,
0.06896550953388214,
0.0833333283662796,
0,
0.09756097197532654,
0.06896550953388214,
0.04878048226237297,
0,
0.19999998807907104,
0.10526315122842789,
0.060606054961681366,
0.1428571343421936,
0.05882352590560913,
0.1621621549129486,
0
] | BJeguTEKDB | true | [
"We propose instance cross entropy (ICE) which measures the difference between an estimated instance-level matching distribution and its ground-truth one. "
] |
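The preceding record describes instance cross entropy (ICE): for a query, maximise its matching probability with similar instances in the mini-batch under a softmax, with a single weight-scaling hyper-parameter and no class-level weight vectors. The numpy sketch below is one plausible reading of that idea for a single query and a single positive at a time; the names, the scaling value and the exact formulation are assumptions and may differ from the paper.

```python
import numpy as np

def ice_loss_single_query(query, positive, negatives, scale=16.0):
    """Instance cross entropy for one query/one positive (sketch).

    Embeddings are assumed L2-normalised (unit hypersphere), so dot products
    are cosine similarities. The matching probability of the query with its
    positive is a softmax over {positive} plus the negatives, sharpened by a
    weight-scaling factor `scale`; the loss is its negative log.
    """
    sims = np.array([query @ positive] + [query @ n for n in negatives])
    logits = scale * sims
    logits -= logits.max()                       # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])

def unit(v):
    return v / np.linalg.norm(v)

rng = np.random.default_rng(0)
q = unit(rng.normal(size=8))
p = unit(q + 0.1 * rng.normal(size=8))           # a similar instance
negs = [unit(rng.normal(size=8)) for _ in range(5)]
print(ice_loss_single_query(q, p, negs))
```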
[
"In model-based reinforcement learning, the agent interleaves between model learning and planning. ",
"These two components are inextricably intertwined.",
"If the model is not able to provide sensible long-term prediction, the executed planer would exploit model flaws, which can yield catastrophic failures.",
"This paper focuses on building a model that reasons about the long-term future and demonstrates how to use this for efficient planning and exploration.",
"To this end, we build a latent-variable autoregressive model by leveraging recent ideas in variational inference.",
"We argue that forcing latent variables to carry future information through an auxiliary task substantially improves long-term predictions.",
"Moreover, by planning in the latent space, the planner's solution is ensured to be within regions where the model is valid.",
"An exploration strategy can be devised by searching for unlikely trajectories under the model.",
"Our methods achieves higher reward faster compared to baselines on a variety of tasks and environments in both the imitation learning and model-based reinforcement learning settings.",
"Reinforcement Learning (RL) is an agent-oriented learning paradigm concerned with learning by interacting with an uncertain environment.",
"Combined with deep neural networks as function approximators, deep reinforcement learning (deep RL) algorithms recently allowed us to tackle highly complex tasks.",
"Despite recent success in a variety of challenging environment such as Atari games BID4 and the game of Go , it is still difficult to apply RL approaches in domains with high dimensional observation-action space and complex dynamics.Furthermore, most popular RL algorithms are model-free as they directly learn a value function BID34 or policy BID43 ) without trying to model or predict the environment's dynamics.",
"Model-free RL techniques often require large amounts of training data and can be expensive, dangerous or impossibly slow, especially for agents and robots acting in the real world.",
"On the other hand, model-based RL BID49 BID14 BID11 provides an alternative approach by learning an explicit representation of the underlying environment dynamics.",
"The principal component of model-based methods is to use an estimated model as an internal simulator for planning, hence limiting the need for interaction with the environment.",
"Unfortunately, when the dynamics are complex, it is not trivial to learn models that are accurate enough to later ensure stable and fast learning of a good policy.The most widely used techniques for model learning are based on one-step prediction.",
"Specifically, given an observation o t and an action a t at time t, a model is trained to predict the conditional distribution over the immediate next observation o t+1 , i.e p(o t+1 | o t , a t ).",
"Although computationally easy, the one-step prediction error is an inadequate proxy for the downstream performance of model-based methods as it does not account for how the model behaves when com-posed with itself.",
"In fact, one-step modelling errors can compound after multiple steps and can degrade the policy learning.",
"This is referred to as the compounding error phenomenon BID51 BID0 BID54 .",
"Other examples of models are autoregressive models such as recurrent neural networks BID32 that factorize naturally as log p θ (o t+1 , a t+1 , o t+2 , a t+2 , . . . | o t , a t ) = t log p θ (o t+1 , a t+1 | o 1 , a 1 , . . . o t , a t ).",
"Training autoregressive models using maximum likelihood results in 'teacher-forcing' that breaks the training over one-step decisions.",
"Such sequential models are known to suffer from accumulating errors as observed in BID30 .Our",
"key motivation is the following -a model of the environment should reason about (i.e. be trained to predict) long-term transition dynamics p θ (o t+1 , a t+1 , o t+2 , a t+2 , . . . | o t , a t ) and not just single step transitions p θ (o t+1 | o t , a t ). That",
"is, the model should predict what will happen in the long-term future, and not just the immediate future. We hypothesize",
"(and test) that such a model would exhibit less cascading of errors and would learn better feature embeddings for improved performance.One way to capture long-term transition dynamics is to use latent variables recurrent networks. Ideally, latent",
"variables could capture higher level structures in the data and help to reason about long-term transition dynamics. However, in practice",
"it is difficult for latent variables to capture higher level representation in the presence of a strong autoregressive model as shown in BID17 BID16 ; BID18 . To overcome this difficulty",
", we leverage recent advances in variational inference. In particular, we make use",
"of the recently proposed Z-forcing idea BID16 , which uses an auxiliary cost on the latent variable to predict the long-term future. Keeping in mind that more",
"accurate long-term prediction is better for planning, we use two ways to inject future information into latent variables. Firstly, we augment the dynamics",
"model with a backward recurrent network (RNN) such that the approximate posterior of latent variables depends on the summary of future information. Secondly, we force latent variables",
"to predict a summary of the future using an auxiliary cost that acts as a regularizer. Unlike one-step prediction, our approach",
"encourages the predicted future observations to remain grounded in the real observations.Injection of information about the future can also help in planning as it can be seen as injecting a plan for the future. In stochastic environment dynamics, unfolding",
"the dynamics model may lead to unlikely trajectories due to errors compounding at each step during rollouts.In this work, we make the following key contributions:1. We demonstrate that having an auxiliary loss",
"to predict the longer-term future helps in faster imitation learning. 2. We demonstrate that incorporating the latent",
"plan into dynamics model can be used for planning (for example Model Predictive Control) efficiently. We show the performance of the proposed method",
"as compared to existing state of the art RL methods. 3. We empirically observe that using the proposed",
"auxiliary loss could help in finding sub-goals in the partially observable 2D environment.",
"In this work we considered the challenge of model learning in model-based RL.",
"We showed how to train, from raw high-dimensional observations, a latent-variable model that is robust to compounding error.",
"The key insight in our approach involve forcing our latent variables to account for long-term future information.",
"We explain how we use the model for efficient planning and exploration.",
"Through experiments in various tasks, we demonstrate the benefits of such a model to provide sensible long-term predictions and therefore outperform baseline methods.",
"Mujoco Tasks We evaluate on 2 Mujoco tasks BID52 , the Reacher and the Half Cheetah task BID52 .",
"The Reacher tasks is an object manipulation task consist of manipulating a 7-DoF robotic arm to reach the goal, the agent is rewarded for the number of objects it reaches within a fixed number of steps.",
"The HalfCheetah task is continuous control task where the agent is awarded for the distance the robots moves.For both tasks, the experts are trained using Trust Region Policy Optimization (TRPO) BID43 .",
"We generate 10k expert trajectories for training the student model, all models are trained for 50 epochs.",
"For the HalfCheetah task, we chunk the trajectory (1000 timesteps) into 4 chunks of length 250 to save computation time.Car Racing task The Car Racing task BID28 ) is a continuous control task where each episode contains randomly generated trials.",
"The agent (car) is rewarded for visiting as many tiles as possible in the least amount of time possible.",
"The expert is trained using methods in BID19 .",
"We generate 10k trajectories from the expert.",
"For trajectories of length over 1000, we take the first 1000 steps.",
"Similarly to Section 5.1, we chunk the 1000 steps trajectory into 4 chunks of 250 for computation purposes.BabyAI The BabyAI environment is a POMDP 2D Minigrid envorinment BID10 with multiple tasks.",
"For our experiments, we use the PickupUnlock task consistent of 2 rooms, a key, an object to pick up and a door in between the rooms.",
"The agent starts off in the left room where it needs to find a key, it then needs to take the key to the door to unlock the next room, after which, the agent will move into the next room and find the object that it needs to pick up.",
"The rooms can be of different sizes and the difficulty increases as the size of the room increases.",
"We train all our models on room of size 15.",
"It is not trivial to train up a reinforcement learning expert on the PickupUnlock task on room size of 15.",
"We use curriculum learning with PPO BID44 for training our experts.",
"We start with a room size of 6 and increase the room size by 2 at each level of curriculum learning.We train the LSTM baseline and our model both using imitation learning.",
"The training data are 10k trajectories generated from the expert model.",
"We evaluate the both baseline and our model every 100 iterations on the real test environment (BabyAI environment) and we report the reward per episode.",
"Experiments are run 5 times with different random seeds and we report the average of the 5 runs.Wheeled locomotion We use the Wheeled locomotion with sparse rewards environment from (CoReyes et al., 2018) .",
"The robot is presented with multiple goals and must move sequentially in order to reach each reward.",
"The agent obtains a reward for every 3 goal it reaches and hence this is a task with sparse rewards.",
"We follow similar setup to BID13 , the number of explored trajectories for MPC is 2048, MPC re-plans at every 19 steps.",
"However, different from (Co-Reyes et al., 2018), we sample latent variables from our sequential prior which depends on the summary of the past events h t .",
"This is in comparison to BID13 , where the prior of the latent variables are fixed.",
"Experiments are run 3 times and average of the 3 runs are reported."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.17142856121063232,
0,
0.1860465109348297,
0.2666666507720947,
0.052631575614213943,
0.29999998211860657,
0.25,
0.1111111044883728,
0.1304347813129425,
0.0555555522441864,
0,
0.10256409645080566,
0.16326530277729034,
0.1395348757505417,
0.17391303181648254,
0.16949151456356049,
0.07692307233810425,
0.19607841968536377,
0.05405404791235924,
0.11764705181121826,
0.038461532443761826,
0.15789473056793213,
0.05405404791235924,
0.09677419066429138,
0.19999998807907104,
0.25,
0.19512194395065308,
0.23529411852359772,
0.05882352590560913,
0.2978723347187042,
0.40909090638160706,
0.2222222238779068,
0.1428571343421936,
0.18867923319339752,
0.07547169178724289,
0.2631579041481018,
0.13636362552642822,
0.1538461446762085,
0.11764705181121826,
0.22857142984867096,
0.10256409645080566,
0.31578946113586426,
0.1764705777168274,
0.13333332538604736,
0.05405404791235924,
0.11764705181121826,
0.12244897335767746,
0.15789473056793213,
0.06896550953388214,
0.20512819290161133,
0.13333332538604736,
0.06896551698446274,
0.05882352590560913,
0.1111111044883728,
0.08695651590824127,
0.1111111044883728,
0.0555555522441864,
0,
0.09756097197532654,
0.060606054961681366,
0.0416666604578495,
0.060606054961681366,
0.045454539358615875,
0.039215680211782455,
0.10256409645080566,
0.09756097197532654,
0.1395348757505417,
0.1702127605676651,
0.2702702581882477,
0.060606054961681366
] | SkgQBn0cF7 | true | [
"incorporating, in the model, latent variables that encode future content improves the long-term prediction accuracy, which is critical for better planning in model-based RL."
] |
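The preceding record plans with a learned latent-variable model by sampling candidate trajectories (2048 explored trajectories, re-planning every 19 steps). The sketch below shows a generic random-shooting MPC loop in that spirit, rolled out entirely in latent space; `dynamics_step` and `reward_model` are hypothetical stand-ins for the learned model, not the paper's networks.

```python
import numpy as np

def random_shooting_mpc(h0, dynamics_step, reward_model, action_dim,
                        horizon=19, n_candidates=2048, rng=None):
    """Return the first action of the best random action sequence (sketch).

    h0            : current latent/recurrent state of the learned model.
    dynamics_step : (h, a) -> next latent state, from the learned model.
    reward_model  : h -> predicted reward, from the learned model.
    The model is unrolled in latent space only, so planning never needs to
    reconstruct full observations.
    """
    rng = rng or np.random.default_rng()
    best_return, best_first_action = -np.inf, None
    for _ in range(n_candidates):
        h, total = h0, 0.0
        actions = rng.uniform(-1.0, 1.0, size=(horizon, action_dim))
        for a in actions:
            h = dynamics_step(h, a)
            total += reward_model(h)
        if total > best_return:
            best_return, best_first_action = total, actions[0]
    return best_first_action

# Tiny smoke test with dummy stand-ins for the learned model:
dyn = lambda h, a: 0.9 * h + 0.1 * a.sum()
rew = lambda h: -abs(h - 1.0)
print(random_shooting_mpc(0.0, dyn, rew, action_dim=2, n_candidates=64))
```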
[
"Detecting anomalies is of growing importance for various industrial applications and mission-critical infrastructures, including satellite systems.",
"Although there have been several studies in detecting anomalies based on rule-based or machine learning-based approaches for satellite systems, a tensor-based decomposition method has not been extensively explored for anomaly detection.",
"In this work, we introduce an Integrative Tensor-based Anomaly Detection (ITAD) framework to detect anomalies in a satellite system.",
"Because of the high risk and cost, detecting anomalies in a satellite system is crucial.",
"We construct 3rd-order tensors with telemetry data collected from Korea Multi-Purpose Satellite-2 (KOMPSAT-2) and calculate the anomaly score using one of the component matrices obtained by applying CANDECOMP/PARAFAC decomposition to detect anomalies.",
"Our result shows that our tensor-based approach can be effective in achieving higher accuracy and reducing false positives in detecting anomalies as compared to other existing approaches.",
"Due to the high maintenance cost as well as extreme risk in space, detecting anomalies in a satellite system is critical.",
"However, anomaly detection in a satellite system is challenging for several reasons.",
"First, anomalies occur due to complex system interactions from various factors inside and outside a satellite system.",
"For example, a sensor in one subsystem in a satellite system is often connected to several other types of sensors or resources in other subsystem modules.",
"Each sensor measurement is encapsulated as telemetry and downlinked to the ground station.",
"In order to identify anomalies, it is crucial to compare and understand not just one single telemetry but several telemetries as a whole.",
"However, most of the previous studies (Fuertes et al., 2016; Hundman et al., 2018; OMeara et al., 2016) on detecting anomalies in satellite systems have primarily focused on analyzing individual telemetry.",
"This can lead to a high false positives rate, because some instantaneous glitches may not be actual anomalies, but just trivial outliers (Yairi et al., 2017) .",
"Additionally, false positives can be costly, requiring much manual effort from operators to investigate and determine whether they are anomalies (Hundman et al., 2018) .",
"To reduce the false positives, analyzing a set of multiple telemetries as a whole can be more effective to determine true anomalies in a complex system.",
"To the best of our knowledge, this integrated approach for a satellite system has not been studied extensively in the past.",
"In order to address these challenges, we propose an Integrative Tensor-based Anomaly Detection (ITAD) framework for a satellite system, where a tensor can effectively capture a set of high dimensional data.",
"Specifically, we construct a 3rd-order tensor for entire telemetries in one subsystem and decompose it into component matrices, which captures the characteristics of multiple telemetries as a whole to detect anomalies.",
"We then conduct a cluster analysis on one component matrix in a decomposed tensor and calculate the anomaly score based on the distance between each telemetry sample and its cluster centroid.",
"Finally, we used the dynamic thresholding method (Hundman et al., 2018) to detect anomalies; the dynamic thresholding method changes the detection threshold value over time instead of using a fixed value for the entire dataset.",
"We performed experiments on our approach with a subset of real telemetries from the KOMPSAT-2 satellite, and verify that our approach can detect actual anomalies effectively and reduce false positives significantly, compared to other approaches.",
"Determining an appropriate rank-size r is an NP-complete problem (Håstad, 1990) , and there is no general algorithm to find it.",
"To choose r, we exploit the reconstruction error, which is proposed in the original CP research (Carroll and Chang, 1970; Harshman, 1970) .",
"However, there is a possibility to suffer from overfactoring and ultimately failing to obtain an optimal solution from this method.",
"To address this possibility, we plan to apply the Core Consistency Diagnostic (CORCONDIA) proposed by Bro and Kiers (2003) for determining the optimal rank r for our future work.",
"We believe that the CORCONDIA method, which assesses the core consistency and measures the similarity between the core array and theoretical super-diagonal array, can yield more accurate results.",
"Even though we use 10 months of real telemetry dataset, we do not have many anomalies, which is a realistic scenario.",
"Otherwise, i.e. if there are many anomalous events, most mission-critical systems would fail very quickly.",
"In the presence of a small number of anomalies, the main focus of our work is to reduce false positives to assist satellite operators to determine the true anomalies, as requested by KARI operators.",
"However, we agree that because of a small number of anomalies, current precision, and recall metrics would be very sensitive to anomaly events.",
"Missing one anomaly would result in a 33% drop in performance.",
"To partially address this issue, we are currently in the process of collecting more datasets with anomalies within a longer and plan to evaluate our tensor-based system with datasets with more anomalies.",
"Also, we believe we need to develop a better performance metric, which can capture the performance with a small number of anomalies.",
"Lastly, we are in the process of deploying our tensor-based anomaly detection method to THE KOMPSAT-2 satellite in the spring of 2020.",
"We plan to incorporate not only 88 telemetries we experimented in this research, but also other types of telemetries and subsystems to evaluate our integrative anomaly detection method.",
"In this work, we proposed an Integrative Tensor-based Anomaly Detection framework (ITAD) to detect anomalies using the KOMPSAT-2 satellite telemetry dataset, where our approach can analyze multiple telemetries simultaneously to detect anomalies.",
"Our ITAD achieves higher performance in precision and F1 score compared to other approaches.",
"We also demonstrate that the ITAD reduces the false positives significantly.",
"This reduction in FPs is because it can distinguish actual anomalies from trivial outliers by incorporating information from other telemetries at the same time.",
"In the future, we plan to improve our algorithm by applying the CORCONDIA method to avoid overfactoring and find an optimal rank r and incorporate and evaluate datasets with more anomalies.",
"We believe our work laid the first grounds using an integrated tensor-based detection mechanism for space anomaly detection.",
"Moreover, the result demonstrates that our proposed method can be applicable in a variety of multivariate time-series anomaly detection scenarios, which require low false positives as well as high accuracy.",
"A TENSOR DECOMPOSITION"
] | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1599999964237213,
0.15789473056793213,
0.5,
0.25,
0,
0,
0.2142857164144516,
0.380952388048172,
0.23999999463558197,
0.20000000298023224,
0,
0.06451612710952759,
0.0555555522441864,
0.0555555522441864,
0,
0.12121211737394333,
0.27586206793785095,
0.3684210479259491,
0.10526315122842789,
0.05714285373687744,
0.10526315122842789,
0.04878048598766327,
0,
0,
0.07407406717538834,
0.0555555522441864,
0,
0.06896550953388214,
0,
0.11428570747375488,
0.06451612710952759,
0.10526315122842789,
0.1111111044883728,
0.0714285671710968,
0.0714285671710968,
0,
0.2631579041481018,
0,
0,
0,
0,
0.07692307233810425,
0.052631575614213943,
0
] | HJeg46EKPr | true | [
"Integrative Tensor-based Anomaly Detection(ITAD) framework for a satellite system."
] |
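The row above walks through the ITAD pipeline step by step: build a 3rd-order telemetry tensor, apply CANDECOMP/PARAFAC (CP) decomposition, cluster one component matrix, score each sample by its distance to the cluster centroid, and flag anomalies with a dynamic threshold. A minimal sketch of that pipeline is below; it assumes the `tensorly` and `scikit-learn` libraries, synthetic data in place of KOMPSAT-2 telemetry, and illustrative choices for the rank, cluster count, and threshold window, none of which come from the paper.

```python
# Minimal ITAD-style sketch: CP decomposition + clustering + dynamic threshold.
# Synthetic stand-in for KOMPSAT-2 telemetry; rank/cluster/window values are illustrative.
import numpy as np
from tensorly.decomposition import parafac
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# 3rd-order tensor: (time windows) x (telemetries) x (features per window)
X = rng.normal(size=(200, 88, 6))
X[150] += 4.0  # inject one anomalous time window

weights, factors = parafac(X, rank=3)        # CP decomposition into component matrices
time_factor = factors[0]                     # component matrix along the time mode

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(time_factor)
centroids = km.cluster_centers_[km.labels_]
scores = np.linalg.norm(time_factor - centroids, axis=1)   # distance-based anomaly score

# Dynamic threshold: rolling mean + k * rolling std instead of one fixed cutoff
window, k = 30, 3.0
flags = []
for t in range(len(scores)):
    hist = scores[max(0, t - window):t + 1]
    flags.append(scores[t] > hist.mean() + k * hist.std())
print("flagged windows:", np.where(flags)[0])
```

The dynamic threshold shown here is a simple rolling mean-plus-k-sigma rule; the paper uses the more involved nonparametric method of Hundman et al. (2018).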
[
"Many real world tasks exhibit rich structure that is repeated across different parts of the state space or in time.",
"In this work we study the possibility of leveraging such repeated structure to speed up and regularize learning.",
"We start from the KL regularized expected reward objective which introduces an additional component, a default policy.",
"Instead of relying on a fixed default policy, we learn it from data.",
"But crucially, we restrict the amount of information the default policy receives, forcing it to learn reusable behaviors that help the policy learn faster.",
"We formalize this strategy and discuss connections to information bottleneck approaches and to the variational EM algorithm.",
"We present empirical results in both discrete and continuous action domains and demonstrate that, for certain tasks, learning a default policy alongside the policy can significantly speed up and improve learning.\n",
"Please watch the video demonstrating learned experts and default policies on several continuous control tasks ( https://youtu.be/U2qA3llzus8 ).",
"For many interesting reinforcement learning tasks, good policies exhibit similar behaviors in different contexts, behaviors that need to be modified only slightly or occasionally to account for the specific task at hand or to respond to information becoming available.",
"For example, a simulated humanoid in navigational tasks is usually required to walk -independently of the specific goal it is aiming for.",
"Similarly, an agent in a simulated maze tends to primarily move forward with occasional left/right turns at intersections.",
"This intuition has been explored across multiple fields, from cognitive science (e.g. BID22 to neuroscience and machine learning.",
"For instance, the idea of bounded rationality (e.g. BID46 ) emphasizes the cost of information processing and the presence of internal computational constraints.",
"This implies that the behavior of an agent minimizes the need to process information, and more generally trades off task reward with computational effort, resulting in structured repetitive patterns.",
"Computationally, these ideas can be modeled using tools from information and probability theory (e.g. BID50 BID32 BID47 BID40 BID33 BID49 , for instance, via constraints on the channel capacity between past states and future actions in a Markov decision process.",
"In this paper we explore this idea, starting from the KL regularized expected reward objective (e.g. BID51 BID52 BID19 BID36 BID23 BID48 , which encourages an agent to trade off expected reward against deviations from a prior or default distribution over trajectories.",
"We explore how this can be used to inject subjective knowledge into the learning problem by using an informative default policy that is learned alongside the agent policy This default policy encodes default behaviours that should be executed in multiple contexts in absence of addi-tional task information and the objective forces the learned policy to be structured in alignment with the default policy.To render this approach effective, we introduce an information asymmetry between the default and agent policies, preventing the default policy from accessing certain information in the state.",
"This prevents the default policy from collapsing to the agent's policy.",
"Instead, the default policy is forced to generalize across a subset of states, implementing a form of default behavior that is valid in the absence of the missing information, and thereby exerting pressure that encourages sharing of behavior across different parts of the state space.",
"FIG0 illustrates the proposed setup, with asymmetry imposed by hiding parts of the state from the default policy.",
"We investigate the proposed approach empirically on a variety of challenging problems including both continuous action problems such as controlling simulated high-dimensional physical embodied agents, as well as discrete action visual mazes.",
"We find that even when the agent and default policies are learned at the same time, significant speed-ups can be achieved on a range of tasks.",
"We consider several variations of the formulation, and discuss its connection to several ideas in the wider literature, including information bottleneck, and variational formulations of the EM algorithm for learning generative models.",
"In this work we studied the influence of learning the default policy in the KL-regularized RL objective.",
"Specifically we looked at the scenario where we enforce information asymmetry between the default policy and the main one.",
"In the continuous control, we showed empirically that in the case of sparse-reward tasks with complex walkers, there is a significant speed-up of learning compared to the baseline.",
"In addition, we found that there was no significant gain in dense-reward tasks and/or with simple walkers.",
"Moreover, we demonstrated that significant gains can be achieved in the discrete action spaces.",
"We provided evidence that these gains are mostly due to the information asymmetry between the agent and the default policy.",
"Best results are obtained when the default policy sees only a subset of information, allowing it to learn task-agnostic behaviour.",
"Furthermore, these default polices can be reused to significantly speed-up learning on new tasks."
] | [
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1428571343421936,
0.09999999403953552,
0.20512819290161133,
0.11428570747375488,
0.1904761791229248,
0.1621621549129486,
0.35999998450279236,
0.1463414579629898,
0.1428571343421936,
0.1860465109348297,
0.14999999105930328,
0.04878048226237297,
0.1428571343421936,
0.1599999964237213,
0.22580644488334656,
0.13114753365516663,
0.2222222238779068,
0.19354838132858276,
0.25925925374031067,
0.21052631735801697,
0.11999999731779099,
0.2978723347187042,
0.20408162474632263,
0.3243243098258972,
0.31578946113586426,
0.12765957415103912,
0.05128204822540283,
0.1666666567325592,
0.3499999940395355,
0.2380952388048172,
0.1111111044883728
] | S1lqMn05Ym | true | [
"Limiting state information for the default policy can improvement performance, in a KL-regularized RL framework where both agent and default policy are optimized together"
] |
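The rows above describe the KL-regularized expected reward objective with a default policy that is learned alongside the agent but sees only part of the state. Below is a toy PyTorch sketch of just that objective, using a REINFORCE-style update as a stand-in for the actor-critic machinery actually used; the observation split, network sizes, batch, and the coefficient alpha are illustrative assumptions rather than the paper's settings.

```python
# Toy sketch of the KL-regularized objective with an information-asymmetric
# default policy (REINFORCE-style stand-in; sizes and alpha are illustrative).
import torch
import torch.nn as nn
from torch.distributions import Categorical, kl_divergence

obs_dim, task_dim, n_actions, alpha = 8, 4, 5, 0.1
policy = nn.Sequential(nn.Linear(obs_dim + task_dim, 64), nn.Tanh(), nn.Linear(64, n_actions))
default = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, n_actions))  # no task info

obs = torch.randn(32, obs_dim)       # proprioception etc., visible to both policies
task = torch.randn(32, task_dim)     # task/goal features, hidden from the default policy
reward = torch.randn(32)             # placeholder returns

agent_logits = policy(torch.cat([obs, task], dim=-1))
default_logits = default(obs)
pi = Categorical(logits=agent_logits)

# Agent maximizes E[r] - alpha * KL(pi || pi0), treating the default policy as fixed here ...
kl_agent = kl_divergence(pi, Categorical(logits=default_logits.detach())).mean()
actions = pi.sample()
policy_loss = -(reward * pi.log_prob(actions)).mean() + alpha * kl_agent

# ... while the default policy is distilled toward the (stop-gradient) agent.
default_loss = kl_divergence(Categorical(logits=agent_logits.detach()),
                             Categorical(logits=default_logits)).mean()
(policy_loss + default_loss).backward()
```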
[
"When an image classifier makes a prediction, which parts of the image are relevant and why?",
"We can rephrase this question to ask: which parts of the image, if they were not seen by the classifier, would most change its decision?",
"Producing an answer requires marginalizing over images that could have been seen but weren't.",
"We can sample plausible image in-fills by conditioning a generative model on the rest of the image.",
"We then optimize to find the image regions that most change the classifier's decision after in-fill.",
"Our approach contrasts with ad-hoc in-filling approaches, such as blurring or injecting noise, which generate inputs far from the data distribution, and ignore informative relationships between different parts of the image.",
"Our method produces more compact and relevant saliency maps, with fewer artifacts compared to previous methods.",
"The decisions of powerful image classifiers are difficult to interpret.",
"Saliency maps are a tool for interpreting differentiable classifiers that, given a particular input example and output class, computes the sensitivity of the classification with respect to each input dimension.",
"BID3 and BID2 cast saliency computation an optimization problem informally described by the following question: which inputs, when replaced by an uninformative reference value, maximally change the classifier output?",
"Because these methods use heuristic reference values, e.g. blurred input BID3 or random colors BID2 , they ignore the context of the surrounding pixels, often producing unnatural in-filled images (Figure 2 ).",
"If we think of a saliency map as interrogating the neural network classifier, these approaches have to deal with a somewhat unusual question of how the classifier responds to images outside of its training distribution.To encourage explanations that are consistent with the data distribution, we modify the question at hand: which region, when replaced by plausible alternative values, would maximally change classifier output?",
"In this paper we provide a new model-agnostic framework for computing and visualizing feature importance of any differentiable classifier, based on variational Bernoulli dropout BID4 .",
"We marginalize out the masked region, conditioning the generative model on the non-masked parts of the image to sample counterfactual inputs that either change or preserve classifier behavior.",
"By leveraging a powerful in-filling conditional generative model we produce saliency maps on ImageNet that identify relevant and concentrated pixels better than existing methods.",
"We proposed FIDO, a new framework for explaining differentiable classifiers that uses adaptive Bernoulli dropout with strong generative in-filling to combine the best properties of recently proposed methods BID3 BID2 BID18 .",
"We compute saliency by marginalizing over plausible alternative inputs, revealing concentrated pixel areas that preserve label information.",
"By quantitative comparisons we find the FIDO saliency map provides more parsimonious explanations than existing methods.",
"FIDO provides novel but relevant explanations for the classifier in question by highlighting contextual information relevant to the prediction and consistent with the training distribution.",
"We released the code in PyTorch at https://github.",
"com/zzzace2000/FIDO-saliency."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0
] | [
0.05128204822540283,
0.1249999925494194,
0.10526315122842789,
0.307692289352417,
0.1538461446762085,
0,
0.09999999403953552,
0.05882352590560913,
0.07843136787414551,
0.11999999731779099,
0,
0.1818181723356247,
0.04081632196903229,
0.2857142686843872,
0.25,
0.2222222238779068,
0.7804877758026123,
0.04999999701976776,
0.1304347813129425,
0.0624999962747097
] | B1MXz20cYQ | true | [
"We compute saliency by using a strong generative model to efficiently marginalize over plausible alternative inputs, revealing concentrated pixel areas that preserve label information."
] |
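The rows above describe computing saliency by optimizing a Bernoulli dropout mask so that plausibly in-filled counterfactual inputs change the classifier's decision. The sketch below shows that optimization loop in PyTorch under heavy simplifications: the classifier is a dummy linear model, the "in-filler" is plain noise rather than a conditional generative model, and the relaxation temperature, step count, and sparsity weight are illustrative.

```python
# Minimal FIDO-style sketch: optimize per-pixel Bernoulli dropout logits so that
# in-filled counterfactuals lower the classifier's score for the predicted class.
# The classifier and the "in-filler" are dummy stand-ins; hyperparameters are illustrative.
import torch
import torch.nn as nn
from torch.distributions import RelaxedBernoulli

classifier = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))   # stand-in classifier
x = torch.rand(1, 1, 28, 28)                                       # input image
target = classifier(x).argmax(dim=1)

def infill(x, keep_mask):
    # Placeholder for a conditional generative in-filler: here, plain noise.
    return keep_mask * x + (1 - keep_mask) * torch.rand_like(x)

mask_logits = nn.Parameter(torch.zeros_like(x))    # per-pixel keep probabilities
opt = torch.optim.Adam([mask_logits], lr=0.05)

for _ in range(100):
    keep = RelaxedBernoulli(torch.tensor(0.1), logits=mask_logits).rsample()
    log_p = classifier(infill(x, keep)).log_softmax(dim=1)[0, target.item()]
    drop_frac = (1 - torch.sigmoid(mask_logits)).mean()   # penalize dropping many pixels (SDR-style)
    loss = log_p + 0.5 * drop_frac                         # minimize class log-prob after in-fill
    opt.zero_grad(); loss.backward(); opt.step()

saliency = 1 - torch.sigmoid(mask_logits.detach())   # high value: dropping this pixel hurts the class
```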
[
"This paper presents the Variation Network (VarNet), a generative model providing means to manipulate the high-level attributes of a given input.",
"The originality of our approach is that VarNet is not only capable of handling pre-defined attributes but can also learn the relevant attributes of the dataset by itself. ",
"These two settings can be easily combined which makes VarNet applicable for a wide variety of tasks.",
"Further, VarNet has a sound probabilistic interpretation which grants us with a novel way to navigate in the latent spaces as well as means to control how the attributes are learned.",
"We demonstrate experimentally that this model is capable of performing interesting input manipulation and that the learned attributes are relevant and interpretable.",
"We focus on the problem of generating variations of a given input in an intended way.",
"This means that given some input element x, which can be considered as a template, we want to generate transformed versions of x with different high-level attributes.",
"Such a mechanism is of great use in many domains such as image edition since it allows to edit images on a more abstract level and is of crucial importance for creative uses since it allows to generate new content.More precisely, given a dataset D = {(x (1) , m (1) ), . . . , (x (N ) , m (N ) )} of N labeled elements (x, m) ∈ X × M, where X stands for the input space and M for the metadata space, we would like to obtain a model capable of learning a relevant attribute space Ψ ⊂ R d for some integer d > 0 and meaningful attribute functions φ : X × M → Ψ that we can then use to control generation.In a great majority of the recent proposed methods BID13 ; BID16 , these attributes are assumed to be given.",
"We identify two shortcomings: labeled data is not always available and this approach de facto excludes attributes that can be hard to formulate in an absolute way.",
"The novelty of our approach is that these attributes can be either learned by the model (we name them free attributes) or imposed (fixed attributes).",
"This problem is an ill-posed one on many aspects.",
"Firstly, in the case of fixed attribute functions φ, there is no ground truth for variations since there is no x with two different attributes.",
"Secondly, it can be hard to determine if a learned free attribute is relevant.",
"However, we provide empirical evidence that our general approach is capable of learning such relevant attributes and that they can be used for generating meaningful variations.In this paper, we introduce the Variation Network (VarNet), a probabilistic neural network which provides means to manipulate an input by changing its high-level attributes.",
"Our model has a sound probabilistic interpretation which makes the variations obtained by changing the attributes statistically meaningful.",
"As a consequence, this probabilistic framework provides us with a novel mechanism to \"control\" or \"shape\" the learned free attributes which then gives interpretable controls over the variations.",
"This architecture is general and provides a wide range of choices for the design of the attribute function φ: we can combine both free and fixed attributes and the fixed attributes can be either continuous or discrete.Our contributions are the following:• A widely applicable encoder-decoder architecture which generalizes existing approaches BID11 ; BID14 ; BID13 The input x,x are in X , the input space and the metadata m is in M, the metadata space.",
"The latent template code z * lies in Z * , the template space, while the latent variable z lies in Z the latent space.",
"The variable u is sampled from a zero-mean unitvariance normal distribution.",
"Finally, the features φ(x, m) are in Ψ, the attribute space.",
"The Neural Autoregressive Flows (NAF) BID10 are represented using two arrows, one pointing to the center of the other one; this denotes the fact that the actual parameters of first neural network are obtained by feeding meta-parameters into a second neural network.",
"The discriminator D acts on Z * × Ψ.•",
"An easy-to-use framework: any encoder-decoder architecture can be easily transformed into a VarNet in order to provide it with controlled input manipulation capabilities,• A novel and statistically sound approach to navigate in the latent space,• Ways to control the behavior of the free learned attributes.The plan of this paper is the following: Sect. 2",
"presents the VarNet architecture together with its training algorithm. For",
"better clarity, we introduce separately all the components featured in our model and postpone the discussion about their interplay and the motivation behind our modeling choices in Sect. 3 and",
"Sect. 4 discusses",
"about the related works. In particular",
", we show that VarNet provides an interesting solution to many constrained generation problems already considered in the literature. Finally, we",
"illustrate in Appendix A the possibilities offered by our proposed model and show that its faculty to generate variations in an intended way is of particular interest.",
"We presented the Variation Network, a generative model able to vary attributes of a given input.",
"The novelty is that these attributes can be fixed or learned and have a sound probabilistic interpretation.",
"Many sampling schemes have been presented together with a detailed discussion and examples.",
"We hope that the flexibility in the design of the attribute function and the simplicity, from an implementation point of view, in transforming existing encoder-decoder architectures (it suffices to provide the encoder and decoder networks) will be of interest in many applications.For future work, we would like to extend our approach in two different ways: being able to deal with partially-given fixed attributes and handling discrete free attributes.",
"We also want to investigate the of use stochastic attribute functions φ.",
"Indeed, it appeared to us that using deterministic attribute functions was crucial and we would like to go deeper in the understanding of the interplay between all VarNet components."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.4285714328289032,
0.25531914830207825,
0.19999998807907104,
0.11999999731779099,
0.2790697515010834,
0.10526315122842789,
0.3199999928474426,
0.1705426275730133,
0.23999999463558197,
0.2978723347187042,
0.0624999962747097,
0.13333332538604736,
0.2702702581882477,
0.3661971688270569,
0.14999999105930328,
0.16326530277729034,
0.20512820780277252,
0.052631575614213943,
0.1764705777168274,
0,
0.13793103396892548,
0.060606054961681366,
0.28169015049934387,
0,
0.04255318641662598,
0,
0,
0.09302324801683426,
0.16326530277729034,
0.42105263471603394,
0.3499999940395355,
0.0555555522441864,
0.12820512056350708,
0.05714285373687744,
0.07999999821186066
] | ryfaViR9YX | true | [
"The Variation Network is a generative model able to learn high-level attributes without supervision that can then be used for controlled input manipulation."
] |
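The rows above describe VarNet's decomposition of an input into a template code z* and an attribute vector φ(x, m) that are recombined by a decoder to produce variations. The sketch below only illustrates that recombination interface with dummy linear networks in PyTorch; it deliberately omits VarNet's probabilistic machinery (the NAF flows, the discriminator on Z* × Ψ, and the training objective), so it should be read as the attribute-swap idea rather than the paper's architecture.

```python
# Attribute-swap interface only, with dummy linear networks; this is not VarNet's
# full architecture (no NAF flows, no discriminator, no training objective).
import torch
import torch.nn as nn

x_dim, m_dim, z_dim, attr_dim = 16, 3, 8, 4   # illustrative sizes

encoder = nn.Linear(x_dim, z_dim)                  # x -> template code z*
attribute = nn.Linear(x_dim + m_dim, attr_dim)     # (x, m) -> attribute vector in Psi
decoder = nn.Linear(z_dim + attr_dim, x_dim)       # (z*, psi) -> variation

x_template = torch.randn(1, x_dim)
x_source, m_source = torch.randn(1, x_dim), torch.randn(1, m_dim)

z_star = encoder(x_template)                                 # keep the template's content
psi = attribute(torch.cat([x_source, m_source], dim=-1))     # borrow another input's attributes
variation = decoder(torch.cat([z_star, psi], dim=-1))
print(variation.shape)                                       # torch.Size([1, 16])
```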
[
"Despite alarm over the reliance of machine learning systems on so-called spurious patterns in training data, the term lacks coherent meaning in standard statistical frameworks.",
"However, the language of causality offers clarity: spurious associations are those due to a common cause (confounding) vs direct or indirect effects.",
"In this paper, we focus on NLP, introducing methods and resources for training models insensitive to spurious patterns.",
"Given documents and their initial labels, we task humans with revise each document to accord with a counterfactual target label, asking that the revised documents be internally coherent while avoiding any gratuitous changes.",
"Interestingly, on sentiment analysis and natural language inference tasks, classifiers trained on original data fail on their counterfactually-revised counterparts and vice versa.",
"Classifiers trained on combined datasets perform remarkably well, just shy of those specialized to either domain.",
"While classifiers trained on either original or manipulated data alone are sensitive to spurious features (e.g., mentions of genre), models trained on the combined data are insensitive to this signal.",
"We will publicly release both datasets.",
"What makes a document's sentiment positive?",
"What makes a loan applicant creditworthy?",
"What makes a job candidate qualified?",
"What about a photograph truly makes it depict a dolphin?",
"Moreover, what does it mean for a feature to be relevant to such a determination?",
"Statistical learning offers one framework for approaching these questions.",
"First, we swap out the semantic question for a more readily answerable associative question.",
"For example, instead of asking what comprises a document's sentiment, we recast the question as which documents are likely to be labeled as positive (or negative)?",
"Then, in this associative framing, we interpret as relevant, those features that are most predictive of the label.",
"However, despite the rapid adoption and undeniable commercial success of associative learning, this framing seems unsatisfying.",
"Alongside deep learning's predictive wins, critical questions have piled up concerning spuriousness, artifacts, reliability, and discrimination, that the purely associative perspective appears ill-equipped to answer.",
"For example, in computer vision, researchers have found that deep neural networks rely on surface-level texture (Jo & Bengio, 2017; Geirhos et al., 2018) or clues in the image's background to recognize foreground objects even when that seems both unnecessary and somehow wrong: the beach is not what makes a seagull a seagull.",
"And yet researchers struggle to articulate precisely why models should not rely on such patterns.",
"In NLP, these issues have emerged as central concerns in the literature on annotation artifacts and bias (in the societal sense).",
"Across myriad tasks, researchers have demonstrated that models tend to rely on spurious associations (Poliak et al., 2018; Gururangan et al., 2018; Kaushik & Lipton, 2018; Kiritchenko & Mohammad, 2018) .",
"Notably, some models for question-answering tasks may not actually be sensitive to the choice of the question (Kaushik & Lipton, 2018) , while in Natural Language Inference (NLI), classifiers trained on hypotheses only (vs hypotheses and premises) perform surprisingly well (Poliak et al., 2018; Gururangan et al., 2018) .",
"However, papers seldom make clear what, if anything, spuriousness means within the standard supervised learning framework.",
"ML systems are trained to exploit the mutual information between features and a label to make accurate predictions.",
"Statistical learning does not offer a conceptual distinction between between spurious and non-spurious associations.",
"Causality, however, offers a coherent notion of spuriousness.",
"Spurious associations owe to common cause rather than to a (direct or indirect) causal path.",
"We might consider a factor of variation to be spuriously correlated with a label of interest if intervening upon it (counterfactually) would not impact the applicability of the label or vice versa.",
"While our paper does not rely on the mathematical machinery of causality, we draw inspiration from the underlying philosophy to design a new dataset creation procedure in which humans counterfactually augment datasets.",
"Returning to NLP, even though the raw data does not come neatly disentangled into manipulable factors, people nevertheless speak colloquially of editing documents to manipulate specific aspects (Hovy, 1987) .",
"For example, the following interventions seem natural:",
"(i) Revise the letter to make it more positive;",
"(ii) Edit the second sentence so that it appears to contradict the first.",
"The very notion of targeted revisions like",
"(i) suggests a generative process in which the sentiment is but one (manipulable) cause of the final document.",
"These edits might be thought of as intervening on sentiment while holding all upstream features constant.",
"However even if some other factor has no influence on sentiment, if they share some underlying common cause (confounding), then we might expect aspects of the final document to be predictive of sentiment owing to spurious association.",
"In this exploratory paper, we design a human-in-the-loop system for counterfactually manipulating documents.",
"Our hope is that by intervening only upon the factor of interest, we might disentangle the spurious and non-spurious associations, yielding classifiers that hold up better when spurious associations do not transport out of sample.",
"We employ crowd workers not to label documents, but rather to edit them, manipulating the text to make a targeted (counterfactual) class apply.",
"For sentiment analysis, we direct the worker: revise this negative movie review to make it positive, without making any gratuitous changes.",
"We might regard the second part of this directive as a sort of least action principle, ensuring that we perturb only those spans necessary to alter the applicability of the label.",
"For NLI, a 3-class classification task (entailment, contradiction, neutral), we ask the workers to modify the premise while keeping the hypothesis intact, and vice versa, seeking two sets of edits corresponding to each of the (two) counterfactual classes.",
"Using this platform, we collect thousands of counterfactually-manipulated examples for both sentiment analysis and NLI, extending the IMDb (Maas et al., 2011) and SNLI (Bowman et al., 2015) datasets, respectively.",
"The result is two new datasets (each an extension of a standard resource) that enable us to both probe fundamental properties of language and train classifiers less reliant on spurious signal.",
"We show that classifiers trained on original IMDb reviews fail on counterfactually-revised data and vice versa.",
"We further show that spurious correlations in these datasets are picked up by even linear models, however, augmenting the revised examples breaks up these correlations (e.g., genre ceases to be predictive of sentiment).",
"For a Bidirectional LSTM (Graves & Schmidhuber, 2005 ) trained on IMDb reviews, classification accuracy goes down from 79.3% to 55.7% when evaluated on original vs revised reviews.",
"The same classifier trained on revised reviews achieves an accuracy of 62.5% on original reviews compared to 89.1% on their revised counterparts.",
"These numbers go to 81.7% and 92.0% respectively when the classifier is retrained on the combined dataset.",
"Similar behavior is observed for linear classifiers.",
"We discovered that BERT (Devlin et al., 2019 ) is more resilient to such drops in performance on sentiment analysis.",
"Despite that, it appears to rely on spurious associations in SNLI hypotheses identified by Gururangan et al. (2018) .",
"We show that if fine-tuned on SNLI sentence pairs, BERT fails on pairs with revised premise and vice versa, experiencing more than a 30 point drop in accuracy.",
"However, fine-tuned on the combined set, it performs much better across all datasets.",
"Similarly, a Bi-LSTM trained on hypotheses alone can accurately classify 69% of the SNLI dataset but performs worse than the majority class baseline when evaluated on the revised dataset.",
"When trained on hypotheses only from the combined dataset, its performance is expectedly worse than simply selecting the majority class on both SNLI as well as the revised dataset.",
"By leveraging humans not only to provide labels but also to intervene upon the data, revising documents to alter the applicability of various labels, we are able to derive insights about the underlying semantic concepts.",
"Moreover we can leverage the augmented data to train classifiers less dependent on spurious associations.",
"Our study demonstrates the promise of leveraging human-in-the-loop feedback to disentangle the spurious and non-spurious associations, yielding classifiers that hold up better when spurious associations do not transport out of sample.",
"Our methods appear useful on both sentiment analysis and NLI, two contrasting tasks.",
"In sentiment analysis, expressions of opinion matter more than stated facts, while in NLI this is reversed.",
"SNLI poses another challenge in that it is a 3-class classification task using two input sentences.",
"In future work, we plan to extend these techniques, finding ways to leverage humans in the loop to build more robust systems for question answering and summarization (among others)."
] | [
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.2380952388048172,
0.19512194395065308,
0.1621621549129486,
0.3199999928474426,
0.052631575614213943,
0.11428570747375488,
0.17391303181648254,
0,
0,
0,
0,
0,
0.0624999962747097,
0,
0.0624999962747097,
0.13636362552642822,
0.10810810327529907,
0.05714285373687744,
0.09090908616781235,
0.11764705181121826,
0.11764705181121826,
0.1538461446762085,
0.17777776718139648,
0.12903225421905518,
0.05714285373687744,
0.1111111044883728,
0.1249999925494194,
0,
0.12121211737394333,
0.1304347813129425,
0.1599999964237213,
0.12765957415103912,
0.07692307233810425,
0.1428571343421936,
0.12903225421905518,
0,
0.1111111044883728,
0.05714285373687744,
0.1538461446762085,
0.0624999962747097,
0.11999999731779099,
0.09999999403953552,
0.14999999105930328,
0.08695651590824127,
0.11538460850715637,
0.04255318641662598,
0.12244897335767746,
0.05882352590560913,
0.15686273574829102,
0.0833333283662796,
0.10256409645080566,
0.1621621549129486,
0,
0.14999999105930328,
0.2702702581882477,
0.1304347813129425,
0.1249999925494194,
0.09090908616781235,
0.09090908616781235,
0.16326530277729034,
0.29411762952804565,
0.1702127605676651,
0.0624999962747097,
0.0555555522441864,
0.05714285373687744,
0.17391303181648254
] | Sklgs0NFvr | true | [
"Humans in the loop revise documents to accord with counterfactual labels, resulting resource helps to reduce reliance on spurious associations."
] |
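The rows above report that classifiers trained on the original or the revised reviews alone transfer poorly to the other set, while training on the combined data holds up on both. A minimal sketch of that comparison with a linear bag-of-words model is below, assuming scikit-learn; the tiny review lists are placeholders for the released original and counterfactually revised splits.

```python
# Sketch of the original-vs-combined comparison with a linear bag-of-words model.
# The tiny review lists are placeholders for the released original / revised splits.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

orig_texts = ["a heartfelt and moving film", "dull and lifeless from start to finish"]
orig_labels = [1, 0]
rev_texts = ["a hollow and unmoving film", "lively and gripping from start to finish"]
rev_labels = [0, 1]   # counterfactual revisions flip the label with minimal edits

def fit(texts, labels):
    return make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(texts, labels)

clf_orig = fit(orig_texts, orig_labels)                           # original data only
clf_comb = fit(orig_texts + rev_texts, orig_labels + rev_labels)  # combined data

# Evaluate both classifiers on held-out original and revised test sets; the reported
# finding is that clf_comb holds up on both while clf_orig degrades on revised data.
print(clf_comb.predict(["lively and gripping from start to finish"]))
```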
[
"Among multiple ways of interpreting a machine learning model, measuring the importance of a set of features tied to a prediction is probably one of the most intuitive way to explain a model.",
"In this paper, we establish the link between a set of features to a prediction with a new evaluation criteria, robustness analysis, which measures the minimum tolerance of adversarial perturbation.",
"By measuring the tolerance level for an adversarial attack, we can extract a set of features that provides most robust support for a current prediction, and also can extract a set of features that contrasts the current prediction to a target class by setting a targeted adversarial attack.",
"By applying this methodology to various prediction tasks across multiple domains, we observed the derived explanations are indeed capturing the significant feature set qualitatively and quantitatively.",
"With the significant progress of recent machine learning research, various machine learning models have been being rapidly adopted to countless real-world applications.",
"This rapid adaptation increasingly questions the machine learning model's credibility, fairness, and more generally interpretability.",
"In the line of this research, researchers have explored various notions of model interpretability.",
"Some researchers directly answer the trustability (Ribeiro et al., 2016) or the fairness of a model (Zhao et al., 2017) , while some other researchers seek to actually improve the model's performance by understanding the model's weak points (Koh & Liang, 2017) .",
"Even though the goal of such various model interpretability tasks varies, vast majority of them are built upon extracting relevant features for a prediction, so called feature-based explanation.",
"Feature-based explanation is commonly based on measuring the fidelity of the explanation to the model, which is essentially how close the sum of attribution scores for a set of features approximates the function value difference before and after removing the set of features.",
"Depending on their design, the fidelity-based attribution evaluation varies: completeness (Sundararajan et al., 2017) , sensitivity-n (Ancona et al., 2018) , infidelity (Yeh et al., 2019) , and causal local explanation metric (Plumb et al., 2018) .",
"The idea of smallest sufficient region (SSR) and smallest destroying region (SDR) (Fong & Vedaldi, 2017; Dabkowski & Gal, 2017 ) is worth noting because it considers the ranking of the feature attribution scores, not the actual score itself.",
"Intuitively, for a faithful attribution score, removing the most salient features would naturally lead to a large difference in prediction score.",
"Therefore, SDR-based evaluations measure how much the function value changes when the most high-valued salient features are removed.",
"Although the aforementioned attribution evaluations made success in many cases, setting features with an arbitrary reference values to zero-out the input is limited, in the sense that it only considers the prediction at the reference value while ignoring the rest of the input space.",
"Furthermore, the choice of reference value inherently introduces bias.",
"For example, if we set the feature value to 0 in rgb images, this introduces a bias in the attribution map that favors the bright pixels.",
"As a result, explanations that optimize upon such evaluations often omit important dark objects and the pertinent negative features in the image, which is the part of the image that does not contain object but is crucial to the prediction (Dhurandhar et al., 2018 ).",
"An alternative way to remove pixels is to use sampling from some predefined distribution or a generative model (Chang et al., 2018) , which nevertheless could still introduce some bias with respect to the defined distribution.",
"Moreover, they require a generative model that approximates the data distribution, which may not be available in certain domains.",
"In this paper, we remove such inherit bias by taking a different perspective on the input perturbation.",
"We start from an intuition that if a set of features are important to make a specific prediction, keeping them in the same values would preserve the prediction even though other irrelevant features are modified.",
"In other words, the model would be more sensitive on the changes of those important or relevant features than the ones that are not.",
"Unlike the foremost approaches including SDR and SSR that perturbs features to a specific reference point, we consider the minimum norm of perturbation to arbitrary directions, not just to a reference point, that can change model's prediction, also known as \"minimum adversarial perturbation\" in the literature (Goodfellow et al., 2014; Weng et al., 2018b) .",
"Based on this idea, we define new evaluation criteria to test the importance of a set of features.",
"By computing the minimum adversarial perturbation on the complementary set of features that can alter the model's decision, we could test the degree of importance of the set.",
"Although explicitly computing the importance value is NP-hard (Katz et al., 2017) , Carlini & Wagner (2017) and Madry et al. (2017) showed that the perturbations computed by adversarial attacks can serve as reasonably tight upper bounds, which lead to an efficient approximation for the proposed evaluation.",
"Furthermore, we can derive a new explanation framework by formulating the model explanation to a two-player min-max game between explanator and adversarial attacker.",
"The explanator aims to find a set of important features to maximize the minimum perturbation computed by the attacker.",
"This framework empirically performs much better than previous approaches quantitatively, with very inspiring examples.",
"To summarize our contributions:",
"• We define new evaluation criteria for feature-based explanations based on robustness analysis.",
"The evaluation criteria consider the worst case perturbations when a set of features are anchored, which does not introduce bias into the evaluation.",
"• We design efficient algorithms to generate explanations that maximize the proposed criteria, which perform favorably against baseline methods on the proposed evaluation criteria.",
"• Experiments in computer vision and NLP models demonstrate that the proposed explanation can indeed identify some important features that are not captured by previous methods.",
"Furthermore, our method is able to extract a set of features that contrasts the current prediction to a target class.",
"In this paper, we establish the link between a set of features to a prediction with a new evaluation criteria, robustness analysis, which measures the minimum tolerance of adversarial perturbation.",
"Furthermore, we develop a new explanation method to find important set of features to optimize this new criterion.",
"Experimental results demonstrate that the proposed new explanations are indeed capturing significant feature sets across multiple domains.",
"Figure 8 : Comparisons between our proposed methods under different criteria.",
"From left to right: untargeted Robustness-S r , targeted Robustness-S r , untargeted Robustness-S r , targeted Robustness-S r .",
"We omit points in the plot with value too high to fit in the scale of y-axis."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.145454540848732,
0.3214285671710968,
0.21875,
0.2181818187236786,
0.11999999731779099,
0.08888888359069824,
0.09302324801683426,
0.0937499925494194,
0.14035087823867798,
0.2950819730758667,
0.14035087823867798,
0.1269841194152832,
0.1599999964237213,
0.08510638028383255,
0.1230769157409668,
0.10256409645080566,
0.07547169178724289,
0.22857142984867096,
0.09677419066429138,
0.08163265138864517,
0.08510638028383255,
0.16393442451953888,
0.1538461446762085,
0.1599999964237213,
0.3404255211353302,
0.19607841968536377,
0.1944444328546524,
0.23529411852359772,
0.21276594698429108,
0,
0,
0.41860464215278625,
0.2745097875595093,
0.307692289352417,
0.1090909019112587,
0.1666666567325592,
0.3214285671710968,
0.17391303181648254,
0.12765957415103912,
0.04878048226237297,
0.05128204822540283,
0.17777776718139648
] | Hye4KeSYDr | true | [
"We propose new objective measurement for evaluating explanations based on the notion of adversarial robustness. The evaluation criteria further allows us to derive new explanations which capture pertinent features qualitatively and quantitatively."
] |
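The rows above define the evaluation: anchor a candidate set of important features, adversarially perturb only the remaining features, and take the smallest perturbation that flips the prediction as the robustness score (larger means the anchored set gives more robust support to the prediction). A toy PyTorch sketch is below; it uses a plain gradient search as a stand-in for the stronger attacks (e.g., PGD or C&W) used in practice, and the model, feature sets, step size, and iteration count are illustrative.

```python
# Toy robustness check for a feature set S: anchor the features in S, perturb only
# the remaining features, and report the smallest perturbation norm found that
# changes the prediction. A plain gradient search stands in for stronger attacks.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 3))
x = torch.randn(10)
y = model(x).argmax()

def min_perturbation(anchor_idx, steps=300, lr=0.05):
    free = torch.ones(10)
    free[list(anchor_idx)] = 0.0                      # anchored features cannot move
    delta = torch.zeros(10, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    best = float("inf")
    for _ in range(steps):
        logits = model(x + free * delta)
        if logits.argmax() != y:                      # prediction flipped: record the norm
            best = min(best, (free * delta).norm().item())
        margin = logits[y] - (logits - 1e9 * F.one_hot(y, 3)).max()
        loss = margin + 0.01 * (free * delta).pow(2).sum()   # push away from y, stay small
        opt.zero_grad(); loss.backward(); opt.step()
    return best

# A larger value means the anchored set supports the prediction more robustly.
print("anchor features [0, 1, 2]:", min_perturbation([0, 1, 2]))
print("anchor features [7, 8, 9]:", min_perturbation([7, 8, 9]))
```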
[
"Generative adversarial networks (GANs) have been shown to provide an effective way to model complex distributions and have obtained impressive results on various challenging tasks.",
"However, typical GANs require fully-observed data during training.",
"In this paper, we present a GAN-based framework for learning from complex, high-dimensional incomplete data.",
"The proposed framework learns a complete data generator along with a mask generator that models the missing data distribution.",
"We further demonstrate how to impute missing data by equipping our framework with an adversarially trained imputer.",
"We evaluate the proposed framework using a series of experiments with several types of missing data processes under the missing completely at random assumption.",
"Generative adversarial networks (GANs) BID0 provide a powerful modeling framework for learning complex high-dimensional distributions.",
"Unlike likelihood-based methods, GANs are referred to as implicit probabilistic models BID8 .",
"They represent a probability distribution through a generator that learns to directly produce samples from the desired distribution.",
"The generator is trained adversarially by optimizing a minimax objective together with a discriminator.",
"In practice, GANs have been shown to be very successful in a range of applications including generating photorealistic images BID3 .",
"Other than generating samples, many downstream tasks require a good generative model, such as image inpainting BID9 BID15 .Training",
"GANs normally requires access to a large collection of fully-observed data. However,",
"it is not always possible to obtain a large amount of fully-observed data. Missing",
"data is well-known to be prevalent in many real-world application domains where different data cases might have different missing entries. This arbitrary",
"missingness poses a significant challenge to many existing machine learning models.Following BID6 , the generative process for incompletely observed data can be described as shown below where x ∈ R n is a complete data vector and m ∈ {0, 1} n is a binary mask 2 that determines which entries in x to reveal: DISPLAYFORM0 Let x obs denote the observed elements of x, and x mis denote the missing elements according to the mask m. In addition, let",
"θ denote the unknown parameters of the data distribution, and φ denote the unknown parameters for the mask distribution, which are usually assumed to be independent of θ. In the standard",
"maximum likelihood setting, the unknown parameters are estimated by maximizing the 1 Our implementation is available at https://github.com/steveli/misgan 2 The complementm is usually referred to as the missing data indicator in the literature.following marginal likelihood, integrating over the unknown missing data values:p(x obs , m) = p θ (x obs , x mis )p φ (m|x obs , x mis )dx mis .Little & Rubin (",
"2014) characterize the missing data mechanism p φ (m|x obs , x mis ) in terms of independence relations between the complete data x = [x obs , x mis ] and the masks m:• Missing completely at random (MCAR): p φ (m|x) = p φ (m),• Missing at random (MAR): p φ (m|x) = p φ (m|x obs ),• Not missing at random (NMAR): m depends on x mis and possibly also x obs .Most work on incomplete",
"data assumes MCAR or MAR since under these assumptions p(x obs , m) can be factorized into p θ (x obs )p φ (m|x obs ). With such decoupling, the",
"missing data mechanism can be ignored when learning the data generating model while yielding correct estimates for θ. When p θ (x) does not admit",
"efficient marginalization over x mis , estimation of θ is usually performed by maximizing a variational lower bound, as shown below, using the EM algorithm or a more general approach BID6 Ghahramani & Jordan, 1994) :log p θ (x obs ) ≥ E q(xmis|xobs) [log p θ (x obs , x mis ) − log q(x mis |x obs )] .The primary contribution of",
"this paper is the development of a GAN-based framework for learning high-dimensional data distributions in the presence of incomplete observations. Our framework introduces an",
"auxiliary GAN for learning a mask distribution to model the missingness. The masks are used to \"mask",
"\" generated complete data by filling the indicated missing entries with a constant value. The complete data generator",
"is trained so that the resulting masked data are indistinguishable from real incomplete data that are masked similarly.Our framework builds on the ideas of AmbientGAN (Bora et al., 2018) . AmbientGAN modifies the discriminator",
"of a GAN to distinguish corrupted real samples from corrupted generated samples under a range of corruption processes (or measurement processes). For images, examples of the measurement",
"processes include random dropout, blur, block-patch, and so on. Missing data can be seen as a special type",
"of corruption, except that we have access to the missing pattern in addition to the corrupted measurements. Moreover, AmbientGAN assumes the measurement",
"process is known or parameterized only by a few parameters, which is not the case in general missing data problems.We provide empirical evidence that the proposed framework is able to effectively learn complex, highdimensional data distributions from highly incomplete data when the GAN generator incorporates suitable priors on the data generating process. We further show how the architecture can be",
"used to generate high-quality imputations.",
"This work presents and evaluates a highly flexible framework for learning standard GAN data generators in the presence of missing data.",
"Although we only focus on the MCAR case in this work, MisGAN can be easily extended to cases where the output of the data generator is provided to the mask generator.",
"These modifications can capture both MAR and NMAR mechanisms.",
"The question of learnability requires further investigation as the analysis in Section 3 no longer holds due to dependence between the transition matrix and the data distribution under MAR and NMAR.",
"We have tried this modified architecture in our experiments and it showed similar results as to the original MisGAN.",
"This suggests that the extra dependencies may not adversely affect learnability.",
"We leave the formal evaluation of this modified framework for future work.A PROOF OF THEOREM 1 AND THEOREM 2Let P be the finite set of feature values.",
"For the n-dimensional case, let M = {0, 1} n be the set of masks and I = P n be the set of all possible feature vectors.",
"Also let D M be the set of probability distributions on M, which implies m 0 and v∈I m(v) = 1 for all m ∈ M, where m(v) denotes the entry of m indexed by v.Given τ ∈ P and q ∈ D M , define the transformation DISPLAYFORM0 where is the entry-wise multiplication and 1{·} is the indicator function.Given m ∈ M, define an equivalent relation ∼ m on I by v ∼ m u iff v m = u m, and denote by [v] m the equivalence class containing v.Given q ∈ D M , let S q ⊂ M be the support of q, that is, DISPLAYFORM1 Given τ ∈ P and v ∈ I, let M τ,v denote the set of masks consistent with v in the sense that q(m) > 0 and v m = τm, that is, DISPLAYFORM2 Proof.",
"This is clear from the following equation DISPLAYFORM3 which can be obtained from (13) as follows, DISPLAYFORM4 Proposition",
"2. For any τ ∈ P, q ∈ D M and x ∈ R I , the vector T q,τ x determines the collection of marginals {x ([v] DISPLAYFORM5 Proof.",
"Fix τ ∈ P, q ∈ D M and x ∈ R I .",
"Since v m + τm ∈ [v] m , it suffices to show that we can solve for x ([v] m ) in terms of T q,τ x for m ∈ M τ,v = ∅.",
"We use induction on the size of M τ,v .First",
"consider the base case |M τ,v | = 1. Consider v 0 ∈ I with M τ,v0 = {m 0 }. By FORMULA0",
", DISPLAYFORM6 , which proves the base case. Now assume",
"we can solve for x ([v] m ) in terms of T q,τ x for m ∈ S q and v ∈ I with |M τ,v | ≤ k. Consider v",
"0 ∈ I with |M τ,v0 | = k + 1; if no such v 0 exists, the conclusion holds trivially. Let M τ,v0",
"= {m 0 , m 1 , . . . , m k }. We need to",
"show that T q,τ x determines x([v 0 ] m ) for = 0, 1, . . . , k. By (14) again",
", DISPLAYFORM7 Let m = k =0 m , which may or may not belong to S q . Note that DISPLAYFORM8",
"and hence DISPLAYFORM9 Plugging FORMULA0 into FORMULA0 yields DISPLAYFORM10 Note that DISPLAYFORM11 It follows from FORMULA0 and FORMULA0 Theorem 1 is a direct consequence of Proposition 1 and Proposition 2 as the collection of marginals {x ([v] m ) : v ∈ I, m ∈ S q } is independent of τ . Therefore, if x 1 , x",
"2 ∈ R I satisfy T q,τ0 x 1 = T q,τ0 x 2 for some τ 0 ∈ P, then T q,τ x 1 = T q,τ x 2 for all τ ∈ P. Theorem 1 is a special case when x 1 = 0.Moreover, Proposition 2 also shows that MisGAN overall learns the distribution p(x obs , m), as x([v] m ) is equivalent to p(x obs |m) and T q,τ x is essentially the distribution of f τ (x, m) under the optimally learned missingness q = p(m). Theorem 2 basically restates",
"Proposition 1 and Proposition 2. This is also true when τ",
"/ ∈ P according to Appendix B."
] | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0,
0.09090908616781235,
0.6206896305084229,
0.3333333134651184,
0.12903225421905518,
0.22857142984867096,
0.3448275923728943,
0,
0.2666666507720947,
0.07407406717538834,
0.05882352590560913,
0.060606054961681366,
0.1538461446762085,
0.1428571343421936,
0.11764705181121826,
0.13333332538604736,
0.17142856121063232,
0.05970148742198944,
0.09999999403953552,
0.0952380895614624,
0.21621620655059814,
0.0615384578704834,
0.5714285373687744,
0.3333333134651184,
0.19999998807907104,
0.2380952388048172,
0.17142856121063232,
0.1249999925494194,
0.060606054961681366,
0.1875,
0,
0.47058823704719543,
0.09999999403953552,
0,
0.1428571343421936,
0.060606054961681366,
0.1599999964237213,
0.1538461446762085,
0.05714285373687744,
0.04444444179534912,
0.19354838132858276,
0.04999999701976776,
0,
0.04651162400841713,
0.0833333283662796,
0.05714285373687744,
0.08695651590824127,
0.04999999701976776,
0.0555555522441864,
0,
0.05714285373687744,
0,
0.10344827175140381,
0.10810810327529907,
0.0833333283662796,
0
] | S1lDV3RcKm | true | [
"This paper presents a GAN-based framework for learning the distribution from high-dimensional incomplete data."
] |
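The rows above describe MisGAN's central trick: mask generated complete data with generated masks, fill the indicated missing entries with a constant τ, and ask a discriminator to compare the result to similarly masked real incomplete data. A small PyTorch sketch of the masking operator f_τ(x, m) = x ⊙ m + τ(1 − m) and the two-generator setup is below; the network shapes, τ = 0, and the random stand-in data are assumptions, and the actual GAN training loop (along with the separate mask discriminator and imputer) is omitted.

```python
# Sketch of MisGAN's masking operator f_tau(x, m) = x*m + tau*(1-m) and the
# data/mask generator pair; networks and tau are placeholders, training loop omitted.
import torch
import torch.nn as nn

n, z_dim, tau = 784, 64, 0.0

data_gen = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, n), nn.Sigmoid())
mask_gen = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, n), nn.Sigmoid())
disc = nn.Sequential(nn.Linear(n, 256), nn.ReLU(), nn.Linear(256, 1))

def f_tau(x, m, tau=tau):
    return x * m + tau * (1 - m)   # fill missing entries with the constant tau

z = torch.randn(32, z_dim)
fake_x = data_gen(z)
fake_m = mask_gen(torch.randn(32, z_dim))      # soft masks; hard masks need a relaxation

real_x = torch.rand(32, n)                      # observed values (missing entries arbitrary)
real_m = (torch.rand(32, n) > 0.5).float()      # observed missingness pattern

score_fake = disc(f_tau(fake_x, fake_m))        # discriminator sees masked fake data ...
score_real = disc(f_tau(real_x, real_m))        # ... vs. identically masked real data
# A separate mask discriminator comparing fake_m to real_m trains mask_gen to match p(m).
```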