Columns: Unnamed: 0 (int64, 0–1.83k); Clean_Title (string, 8–153 chars); Clean_Text (string, 330–2.26k chars); Clean_Summary (string, 53–295 chars).
1,600
MobileBERT: Task-Agnostic Compression of BERT by Progressive Knowledge Transfer
The recent development of Natural Language Processing has achieved great success using large pre-trained models with hundreds of millions of parameters. However, these models suffer from heavy model size and high latency such that we cannot directly deploy them to resource-limited mobile devices. In this paper, we propose MobileBERT for compressing and accelerating the popular BERT model. Like BERT, MobileBERT is task-agnostic; that is, it can be universally applied to various downstream NLP tasks via fine-tuning. MobileBERT is a slimmed version of BERT-LARGE augmented with bottleneck structures and a carefully designed balance between self-attentions and feed-forward networks. To train MobileBERT, we use a bottom-to-top progressive scheme to transfer the intrinsic knowledge of a specially designed Inverted Bottleneck BERT-LARGE teacher to it. Empirical studies show that MobileBERT is 4.3x smaller and 4.0x faster than the original BERT-BASE while achieving competitive results on well-known NLP benchmarks. On the natural language inference tasks of GLUE, MobileBERT incurs only a 0.6 GLUE score degradation, with 367 ms latency on a Pixel 3 phone. On the SQuAD v1.1/v2.0 question answering task, MobileBERT achieves a 90.0/79.2 dev F1 score, which is 1.5/2.1 higher than BERT-BASE.
We develop a task-agnostically compressed BERT, which is 4.3x smaller and 4.0x faster than BERT-BASE while achieving competitive performance on GLUE and SQuAD.
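The progressive transfer described above amounts to matching a compact student's layer outputs to a teacher's, one stage at a time, from the bottom layer upward. Below is a minimal, hedged sketch of that layer-wise feature-map transfer on toy linear "layers"; the bottleneck width, stage count, MSE objective, and fixed random inputs are illustrative assumptions, not the paper's exact training recipe.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
hidden, bottleneck, depth = 256, 64, 4

# Toy stand-ins: wide teacher layers, bottlenecked student layers (both hidden -> hidden).
teacher = [torch.nn.Linear(hidden, hidden) for _ in range(depth)]
student = [torch.nn.Sequential(torch.nn.Linear(hidden, bottleneck),
                               torch.nn.Linear(bottleneck, hidden)) for _ in range(depth)]
opt = torch.optim.Adam([p for m in student for p in m.parameters()], lr=1e-3)

def run(layers, x, upto):
    # Forward pass through layers 0..upto, returning the hidden states of layer `upto`.
    for layer in layers[:upto + 1]:
        x = torch.relu(layer(x))
    return x

x = torch.randn(32, hidden)                        # stand-in for token representations
# Bottom-to-top progressive knowledge transfer: train stage l to mimic teacher layer l.
for stage in range(depth):
    for step in range(50):
        with torch.no_grad():
            target = run(teacher, x, stage)
        pred = run(student, x, stage)
        loss = F.mse_loss(pred, target)            # feature-map transfer loss for this stage
        opt.zero_grad(); loss.backward(); opt.step()
    print(f"stage {stage}: transfer loss {loss.item():.4f}")
```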
1,601
On importance-weighted autoencoders
The importance weighted autoencoder (IWAE) is a popular variational-inference method which achieves a tighter evidence bound than standard variational autoencoders by optimising a multi-sample objective, i.e. an objective that is expressible as an integral over Monte Carlo samples. Unfortunately, IWAE crucially relies on the availability of reparametrisations and, even if these exist, the multi-sample objective leads to inference-network gradients which break down as the number of samples is increased. This breakdown can only be circumvented by removing high-variance score-function terms, either by heuristically ignoring them (the IWAE-STL gradient from Roeder et al.) or through an identity from Tucker et al. (the IWAE-DREG gradient). In this work, we argue that directly optimising the proposal distribution in importance sampling, as in the reweighted wake-sleep (RWS) algorithm from Bornschein & Bengio, is preferable to optimising IWAE-type multi-sample objectives. To formalise this argument, we introduce an adaptive importance-sampling framework termed adaptive importance sampling for learning (AISLE) which slightly generalises the RWS algorithm. We then show that AISLE admits IWAE-STL and IWAE-DREG as special cases.
We show that most variants of importance-weighted autoencoders can be derived in a more principled manner as special cases of adaptive importance-sampling approaches like the reweighted wake-sleep algorithm.
1,602
Provably Communication-efficient Data-parallel SGD via Nonuniform Quantization
As the size and complexity of models and datasets grow, so does the need for communication-efficient variants of stochastic gradient descent that can be deployed on clusters to perform model fitting in parallel. Alistarh et al. describe two variants of data-parallel SGD that quantize and encode gradients to lessen communication costs. For the first variant, QSGD, they provide strong theoretical guarantees. For the second variant, which we call QSGDinf, they demonstrate impressive empirical gains for distributed training of large neural networks. Building on their work, we propose an alternative scheme for quantizing gradients and show that it yields stronger theoretical guarantees than exist for QSGD while matching the empirical performance of QSGDinf.
NUQSGD closes the gap between the theoretical guarantees of QSGD and the empirical performance of QSGDinf.
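For context on the kind of scheme being improved here: a QSGD-style quantizer normalizes a gradient by its norm, stochastically rounds each coordinate's magnitude to a small set of levels, and transmits only the norm, the signs, and the level indices. The sketch below shows that baseline uniform-level scheme; the paper's contribution is a nonuniform placement of the levels, which is not reproduced, and the function names and level count are illustrative.

```python
import numpy as np

def quantize(g, num_levels=4, rng=np.random.default_rng(0)):
    """Stochastically quantize gradient vector g onto num_levels uniform levels
    per coordinate (QSGD-style baseline; the rounding is unbiased)."""
    norm = np.linalg.norm(g)
    if norm == 0:
        return np.zeros_like(g), 0.0
    scaled = np.abs(g) / norm * num_levels        # magnitude in [0, num_levels]
    lower = np.floor(scaled)
    prob_up = scaled - lower                       # probability of rounding up
    levels = lower + (rng.random(g.shape) < prob_up)
    return np.sign(g) * levels / num_levels, norm  # cheap to encode: sign + small integer

def dequantize(q, norm):
    return q * norm                                # unbiased estimate of the original gradient

g = np.random.default_rng(1).normal(size=10)
q, norm = quantize(g)
print(g)
print(dequantize(q, norm))
```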
1,603
Backpropamine: training self-modifying neural networks with differentiable neuromodulated plasticity
The impressive lifelong learning in animal brains is primarily enabled by plastic changes in synaptic connectivity. Importantly, these changes are not passive, but are actively controlled by neuromodulation, which is itself under the control of the brain. The resulting self-modifying abilities of the brain play an important role in learning and adaptation, and are a major basis for biological reinforcement learning. Here we show for the first time that artificial neural networks with such neuromodulated plasticity can be trained with gradient descent. Extending previous work on differentiable Hebbian plasticity, we propose a differentiable formulation for the neuromodulation of plasticity. We show that neuromodulated plasticity improves the performance of neural networks on both reinforcement learning and supervised learning tasks. In one task, neuromodulated plastic LSTMs with millions of parameters outperform standard LSTMs on a benchmark language modeling task. We conclude that differentiable neuromodulation of plasticity offers a powerful new framework for training neural networks.
Neural networks can be trained to modify their own connectivity, improving their online learning performance on challenging tasks.
1,604
Few-Shot Learning with Simplex
Deep learning has made remarkable achievements in many fields. However, learning the parameters of neural networks usually demands a large amount of labeled data. The algorithms of deep learning, therefore, encounter difficulties when applied to supervised learning where only a small amount of labeled data is available. This specific task is called few-shot learning. To address it, we propose a novel algorithm for few-shot learning using discrete geometry, in the sense that the samples in a class are modeled as a reduced simplex. The volume of the simplex is used for the measurement of class scatter. During testing, combined with the test sample and the points in the class, a new simplex is formed. Then the similarity between the test sample and the class can be quantized with the ratio of volumes of the new simplex to the original class simplex. Moreover, we present an approach to constructing simplices using local regions of feature maps yielded by convolutional neural networks. Experiments on Omniglot and miniImageNet verify the effectiveness of our simplex algorithm on few-shot learning.
A simplex-based geometric method is proposed to cope with few-shot learning problems.
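The geometric test described above, comparing the volume of a class simplex with and without the query point, can be written down directly. A minimal sketch follows, assuming Euclidean feature vectors and using the Gram-determinant formula for simplex content; the 5-shot/3-way toy data and the exact form of the scoring ratio are illustrative assumptions rather than the paper's construction.

```python
from math import factorial
import numpy as np

def simplex_content(points):
    """Content (generalized volume) of the simplex spanned by the rows of `points`:
    for k+1 points in R^d (k <= d), content = sqrt(det(A A^T)) / k!,
    where A stacks the edge vectors from the first vertex."""
    A = points[1:] - points[0]
    k = A.shape[0]
    gram = A @ A.T
    return np.sqrt(max(np.linalg.det(gram), 0.0)) / factorial(k)

def class_score(test_point, class_points):
    """Ratio of extended-simplex content to class-simplex content; smaller means
    the test point lies closer to the class's simplex."""
    base = simplex_content(class_points)
    extended = simplex_content(np.vstack([class_points, test_point]))
    return extended / (base + 1e-12)

rng = np.random.default_rng(0)
support = {c: rng.normal(loc=c, size=(5, 64)) for c in range(3)}   # toy 5-shot, 3-way features
query = rng.normal(loc=1, size=64)
pred = min(support, key=lambda c: class_score(query, support[c]))
print(pred)                                                        # most likely class 1 here
```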
1,605
Working memory facilitates reward-modulated Hebbian learning in recurrent neural networks
Reservoir computing is a powerful tool to explain how the brain learns temporal sequences, such as movements, but existing learning schemes are either biologically implausible or too inefficient to explain animal performance. We show that a network can learn complicated sequences with a reward-modulated Hebbian learning rule if the network of reservoir neurons is combined with a second network that serves as a dynamic working memory and provides a spatio-temporal backbone signal to the reservoir. In combination with the working memory, reward-modulated Hebbian learning of the readout neurons performs as well as FORCE learning, but with the advantage of a biologically plausible interpretation of both the learning rule and the learning paradigm.
We show that a working memory input to a reservoir network makes a local reward-modulated Hebbian rule perform as well as recursive least-squares (aka FORCE)
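A reward-modulated Hebbian readout update is a three-factor rule: exploration noise on the output is correlated with presynaptic reservoir activity, and that correlation is gated by how much the trial's reward exceeds a running baseline. The sketch below uses a fixed random trajectory as a stand-in for the reservoir-plus-working-memory "backbone"; the trajectory statistics, learning rate, and sine target are arbitrary choices, not the paper's setup, and this rule is a noisy gradient estimate that improves only slowly.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, T = 200, 50
# Fixed spatio-temporal "backbone": stand-in for reservoir states driven by working memory.
backbone = 0.1 * rng.normal(size=(T, n_res))
target = np.sin(2 * np.pi * np.arange(T) / T)

W_out = np.zeros(n_res)
lr, noise_std, reward_avg = 5e-3, 0.1, None

for episode in range(1000):
    elig, sq_err = np.zeros(n_res), 0.0
    for t in range(T):
        noise = noise_std * rng.normal()
        z = W_out @ backbone[t] + noise            # readout with exploration noise
        elig += noise * backbone[t]                # correlate noise with presynaptic activity
        sq_err += (z - target[t]) ** 2
    reward = -sq_err                               # reward = negative tracking error
    if reward_avg is None:
        reward_avg = reward
    # Three-factor Hebbian update: (reward - baseline) gates the eligibility trace.
    W_out += lr * (reward - reward_avg) * elig
    reward_avg += 0.1 * (reward - reward_avg)

print(round(sq_err, 3))   # tracking error typically decreases over episodes (noisily)
```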
1,606
STCN: Stochastic Temporal Convolutional Networks
Convolutional architectures have recently been shown to be competitive on many sequence modelling tasks when compared to the de-facto standard of recurrent neural networks while providing computational and modelling advantages due to inherent parallelism. However, currently, there remains a performance gap to more expressive stochastic RNN variants, especially those with several layers of dependent random variables. In this work, we propose stochastic temporal convolutional networks, a novel architecture that combines the computational advantages of temporal convolutional networks with the representational power and robustness of stochastic latent spaces. In particular, we propose a hierarchy of stochastic latent variables that captures temporal dependencies at different time-scales. The architecture is modular and flexible due to the decoupling of the deterministic and stochastic layers. We show that the proposed architecture achieves state of the art log-likelihoods across several tasks. Finally, the model is capable of predicting high-quality synthetic samples over a long-range temporal horizon in modelling of handwritten text.
We combine the computational advantages of temporal convolutional architectures with the expressiveness of stochastic latent variables.
1,607
D3PG: Deep Differentiable Deterministic Policy Gradients
Over the last decade, two competing control strategies have emerged for solving complex control tasks with high efficacy. Model-based control algorithms, such as model-predictive control and trajectory optimization, peer into the gradients of underlying system dynamics in order to solve control tasks with high sample efficiency. However, like all gradient-based numerical optimization methods, model-based control methods are sensitive to initializations and are prone to becoming trapped in local minima. Deep reinforcement learning, on the other hand, can somewhat alleviate these issues by exploring the solution space through sampling, at the expense of computational cost. In this paper, we present a hybrid method that combines the best aspects of gradient-based methods and DRL. We base our algorithm on the deep deterministic policy gradients algorithm and propose a simple modification that uses true gradients from a differentiable physical simulator to increase the convergence rate of both the actor and the critic. We demonstrate our algorithm on seven 2D robot control tasks, with the most complex one being a differentiable half cheetah with hard contact constraints. Empirical results show that our method boosts the performance of DDPG without sacrificing its robustness to local minima.
We propose a novel method that leverages the gradients from differentiable simulators to improve the performance of RL for robotics control
1,608
Bayesian Hypernetworks
We propose Bayesian hypernetworks: a framework for approximate Bayesian inference in neural networks. A Bayesian hypernetwork, h, is a neural network which learns to transform a simple noise distribution, p(ε) = N(0, I), to a distribution q(θ) := q(h(ε)) over the parameters θ of another neural network. We train q with variational inference, using an invertible h to enable efficient estimation of the variational lower bound on the posterior p(θ | D) via sampling. In contrast to most methods for Bayesian deep learning, Bayesian hypernets can represent a complex multimodal approximate posterior with correlations between parameters, while enabling cheap iid sampling of q. In practice, Bayesian hypernets provide a better defense against adversarial examples than dropout, and also exhibit competitive performance on a suite of tasks which evaluate model uncertainty, including regularization, active learning, and anomaly detection.
We propose Bayesian hypernetworks: a framework for approximate Bayesian inference in neural networks.
1,609
Deep Innovation Protection
Evolutionary-based optimization approaches have recently shown promising results in domains such as Atari and robot locomotion but less so in solving 3D tasks directly from pixels. This paper presents a method called Deep Innovation Protection that allows training complex world models end-to-end for such 3D environments. The main idea behind the approach is to employ multiobjective optimization to temporally reduce the selection pressure on specific components in a world model, allowing other components to adapt. We investigate the emergent representations of these evolved networks, which learn a model of the world without the need for a specific forward-prediction loss.
Deep Innovation Protection allows evolving complex world models end-to-end for 3D tasks.
1,610
MACER: Attack-free and Scalable Robust Training via Maximizing Certified Radius
Adversarial training is one of the most popular ways to learn robust models but is usually attack-dependent and time costly. In this paper, we propose the MACER algorithm, which learns robust models without using adversarial training but performs better than all existing provable l2-defenses. Recent work shows that randomized smoothing can be used to provide a certified l2 radius to smoothed classifiers, and our algorithm trains provably robust smoothed classifiers via MAximizing the CErtified Radius. The attack-free characteristic makes MACER faster to train and easier to optimize. In our experiments, we show that our method can be applied to modern deep neural networks on a wide range of datasets, including Cifar-10, ImageNet, MNIST, and SVHN. For all tasks, MACER spends less training time than state-of-the-art adversarial training algorithms, and the learned models achieve a larger average certified radius.
We propose MACER: a provable defense algorithm that trains robust models by maximizing the certified radius. It does not use adversarial training but performs better than all existing provable l2-defenses.
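For reference, the certified l2 radius being maximized comes from randomized smoothing: if the smoothed classifier's top-class probability under Gaussian input noise of scale σ is p_A and the runner-up's is p_B, the prediction is certified within radius σ/2 · (Φ⁻¹(p_A) − Φ⁻¹(p_B)). Below is a minimal Monte-Carlo sketch of evaluating (not training toward) that radius; the toy classifier, sample count, and absence of statistical confidence bounds are simplifications.

```python
import numpy as np
from scipy.stats import norm

def certified_radius(classify, x, sigma=0.25, n_samples=1000, rng=np.random.default_rng(0)):
    """Plug-in estimate of the l2 certified radius of the Gaussian-smoothed classifier.
    `classify` maps an input vector to a class id. This is an estimate, not the
    statistically rigorous certification procedure."""
    counts = {}
    for _ in range(n_samples):
        y = classify(x + sigma * rng.normal(size=x.shape))
        counts[y] = counts.get(y, 0) + 1
    ranked = sorted(counts.items(), key=lambda kv: -kv[1])
    p_a = ranked[0][1] / n_samples
    p_b = ranked[1][1] / n_samples if len(ranked) > 1 else 1.0 - p_a
    if p_a <= p_b:
        return ranked[0][0], 0.0
    radius = 0.5 * sigma * (norm.ppf(p_a) - norm.ppf(p_b))
    return ranked[0][0], radius

clf = lambda v: int(v[0] > 0)                      # toy base classifier: sign of first coordinate
print(certified_radius(clf, np.array([0.3, -0.1])))
```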
1,611
Guide Actor-Critic for Continuous Control
Actor-critic methods solve reinforcement learning problems by updating a parameterized policy known as an actor in a direction that increases an estimate of the expected return known as a critic. However, existing actor-critic methods only use values or gradients of the critic to update the policy parameter. In this paper, we propose a novel actor-critic method called the guide actor-critic (GAC). GAC first learns a guide actor that locally maximizes the critic and then it updates the policy parameter based on the guide actor by supervised learning. Our main theoretical contributions are twofold. First, we show that GAC updates the guide actor by performing second-order optimization in the action space where the curvature matrix is based on the Hessians of the critic. Second, we show that the deterministic policy gradient method is a special case of GAC when the Hessians are ignored. Through experiments, we show that our method is a promising reinforcement learning method for continuous control.
This paper proposes a novel actor-critic method that uses Hessians of a critic to update an actor.
1,612
Reject Illegal Inputs: Scaling Generative Classifiers with Supervised Deep Infomax
Deep InfoMax is an unsupervised representation learning framework that maximizes the mutual information between the inputs and the outputs of an encoder, while probabilistic constraints are imposed on the outputs. In this paper, we propose Supervised Deep InfoMax (SDIM), which introduces supervised probabilistic constraints to the encoder outputs. The supervised probabilistic constraints are equivalent to a generative classifier on high-level data representations, where class conditional log-likelihoods of samples can be evaluated. Unlike other works building generative classifiers with conditional generative models, SDIMs scale to complex datasets, and can achieve comparable performance with discriminative counterparts. With SDIM, we could perform classification with rejection. Instead of always reporting a class label, SDIM only makes predictions when test samples' largest logits surpass some pre-chosen thresholds; otherwise they will be deemed out of the data distribution and be rejected. Our experiments show that SDIM with the rejection policy can effectively reject illegal inputs, including out-of-distribution samples and adversarial examples.
Scale generative classifiers to complex datasets, and evaluate their effectiveness at rejecting illegal inputs, including out-of-distribution samples and adversarial examples.
1,613
Scaling Laws for the Principled Design, Initialization, and Preconditioning of ReLU Networks
In this work, we describe a set of rules for the design and initialization of well-conditioned neural networks, guided by the goal of naturally balancing the diagonal blocks of the Hessian at the start of training. We show how our measure of conditioning of a block relates to another natural measure of conditioning, the ratio of weight gradients to the weights. We prove that for a ReLU-based deep multilayer perceptron, a simple initialization scheme using the geometric mean of the fan-in and fan-out satisfies our scaling rule. For more sophisticated architectures, we show how our scaling principle can be used to guide design choices to produce well-conditioned neural networks, reducing guess-work.
A theory for initialization and scaling of ReLU neural network layers
1,614
Thieves on Sesame Street! Model Extraction of BERT-based APIs
We study the problem of model extraction in natural language processing, in which an adversary with only query access to a victim model attempts to reconstruct a local copy of that model. Assuming that both the adversary and victim model fine-tune a large pretrained language model such as BERT, we show that the adversary does not need any real training data to successfully mount the attack. In fact, the attacker need not even use grammatical or semantically meaningful queries: we show that random sequences of words coupled with task-specific heuristics form effective queries for model extraction on a diverse set of NLP tasks including natural language inference and question answering. Our work thus highlights an exploit only made feasible by the shift towards transfer learning methods within the NLP community: for a query budget of a few hundred dollars, an attacker can extract a model that performs only slightly worse than the victim model. Finally, we study two defense strategies against model extraction—membership classification and API watermarking—which while successful against some adversaries can also be circumvented by more clever ones.
Outputs of modern NLP APIs on nonsensical text provide strong signals about model internals, allowing adversaries to steal the APIs.
1,615
SEARNN: Training RNNs with global-local losses
We propose SEARNN, a novel training algorithm for recurrent neural networks inspired by the "learning to search" approach to structured prediction. RNNs have been widely successful in structured prediction applications such as machine translation or parsing, and are commonly trained using maximum likelihood estimation. Unfortunately, this training loss is not always an appropriate surrogate for the test error: by only maximizing the ground truth probability, it fails to exploit the wealth of information offered by structured losses. Further, it introduces discrepancies between training and predicting that may hurt test performance. Instead, SEARNN leverages test-alike search space exploration to introduce global-local losses that are closer to the test error. We first demonstrate improved performance over MLE on two different tasks: OCR and spelling correction. Then, we propose a subsampling strategy to enable SEARNN to scale to large vocabulary sizes. This allows us to validate the benefits of our approach on a machine translation task.
We introduce SEARNN, a novel algorithm for RNN training, inspired by the learning to search approach to structured prediction, in order to avoid the limitations of MLE training.
1,616
Progressive Reinforcement Learning with Distillation for Multi-Skilled Motion Control
Deep reinforcement learning has demonstrated increasing capabilities for continuous control problems, including agents that can move with skill and agility through their environment. An open problem in this setting is that of developing good strategies for integrating or merging policies for multiple skills, where each individual policy is a specialist in a specific skill and its associated state distribution. We extend policy distillation methods to the continuous action setting and leverage this technique to combine expert policies, as evaluated in the domain of simulated bipedal locomotion across different classes of terrain. We also introduce an input injection method for augmenting an existing policy network to exploit new input features. Lastly, our method uses transfer learning to assist in the efficient acquisition of new skills. The combination of these methods allows a policy to be incrementally augmented with new skills. We compare our progressive learning and integration via distillation method against three alternative baselines.
A continual learning method that uses distillation to combine expert policies and transfer learning to accelerate learning new skills.
1,617
MoET: Interpretable and Verifiable Reinforcement Learning via Mixture of Expert Trees
Deep Reinforcement Learning (DRL) has led to many recent breakthroughs on complex control tasks, such as defeating the best human player in the game of Go. However, decisions made by the DRL agent are not explainable, hindering its applicability in safety-critical settings. Viper, a recently proposed technique, constructs a decision tree policy by mimicking the DRL agent. Decision trees are interpretable, as each action made can be traced back to the decision rule path that led to it. However, one global decision tree approximating the DRL policy has significant limitations with respect to the geometry of decision boundaries. We propose MoET, a more expressive, yet still interpretable model based on Mixture of Experts, consisting of a gating function that partitions the state space, and multiple decision tree experts that specialize on different partitions. We propose a training procedure to support non-differentiable decision tree experts and integrate it into the imitation learning procedure of Viper. We evaluate our algorithm on four OpenAI Gym environments, and show that the policy constructed in such a way is more performant and better mimics the DRL agent by lowering mispredictions and increasing the reward. We also show that MoET policies are amenable to verification using off-the-shelf automated theorem provers such as Z3.
Explainable reinforcement learning model using novel combination of mixture of experts with non-differentiable decision tree experts.
1,618
A Stochastic Derivative Free Optimization Method with Momentum
We consider the problem of unconstrained minimization of a smooth objective function in the setting where only function evaluations are possible. We propose and analyze a stochastic zeroth-order method with heavy ball momentum. In particular, we propose SMTP, a momentum version of the stochastic three-point method (STP) of Bergou et al. We show new complexity results for non-convex, convex and strongly convex functions. We test our method on a collection of continuous control tasks in several MuJoCo (Todorov et al.) environments with varying difficulty and compare against STP, other state-of-the-art derivative-free optimization algorithms and against policy gradient methods. SMTP significantly outperforms STP and all other methods that we considered in our numerical experiments. Our second contribution is SMTP with importance sampling, which we call SMTP_IS. We provide convergence analysis of this method for non-convex, convex and strongly convex objectives.
We develop and analyze a new derivative free optimization algorithm with momentum and importance sampling with applications to continuous control.
1,619
SHREWD: Semantic Hierarchy Based Relational Embeddings For Weakly-Supervised Deep Hashing
Using class labels to represent class similarity is a typical approach to training deep hashing systems for retrieval; samples from the same or different classes take binary 1 or 0 similarity values. This similarity does not model the full rich knowledge of semantic relations that may be present between data points. In this work we build upon the idea of using semantic hierarchies to form distance metrics between all available sample labels; for example, cat to dog has a smaller distance than cat to guitar. We combine this type of semantic distance into a loss function to promote similar distances between the deep neural network embeddings. We also introduce an empirical Kullback-Leibler divergence loss term to promote binarization and uniformity of the embeddings. We test the resulting SHREWD method and demonstrate improvements in hierarchical retrieval scores using compact, binary hash codes instead of real valued ones, and show that in a weakly supervised hashing setting we are able to learn competitively without explicitly relying on class labels, but instead on similarities between labels.
We propose a new method for training deep hashing for image retrieval using only a relational distance metric between samples
1,620
Local Editing of Cross-Surface Mappings with Iterative Least Squares Conformal Maps
In this paper, we propose a novel approach to improve a given surface mapping through local refinement. The approach receives an established mapping between two surfaces and follows four phases: inspection of the mapping and creation of a sparse set of landmarks in mismatching regions; segmentation with a low-distortion region-growing process based on flattening the segmented parts; optimization of the deformation of segmented parts to align the landmarks in the planar parameterization domain; and aggregation of the mappings from segments to update the surface mapping. In addition, we propose a new method to deform the mesh in order to meet constraints. We incrementally adjust the cotangent weights for the constraints and apply the deformation in a fashion that guarantees that the deformed mesh will be free of flipped faces and will have low conformal distortion. Our new deformation approach, Iterative Least Squares Conformal Mapping, outperforms other low-distortion deformation methods. The approach is general, and we tested it by improving the mappings from different existing surface mapping methods. We also tested its effectiveness by editing the mappings for a variety of 3D objects.
We propose a novel approach to improve a given cross-surface mapping through local refinement with a new iterative method to deform the mesh in order to meet user constraints.
1,621
REPRESENTATION COMPRESSION AND GENERALIZATION IN DEEP NEURAL NETWORKS
Understanding the groundbreaking performance of Deep Neural Networks is one of the greatest challenges to the scientific community today. In this work, we introduce an information theoretic viewpoint on the behavior of deep networks' optimization processes and their generalization abilities. We do so by studying the Information Plane, the plane of the mutual information between the input variable and the desired label, for each hidden layer. Specifically, we show that the training of the network is characterized by a rapid increase in the mutual information (MI) between the layers and the target label, followed by a longer decrease in the MI between the layers and the input variable. Further, we explicitly show that these two fundamental information-theoretic quantities correspond to the generalization error of the network, as a result of introducing a new generalization bound that is exponential in the representation compression. The analysis focuses on typical patterns of large-scale problems. For this purpose, we introduce a novel analytic bound on the mutual information between consecutive layers in the network. An important consequence of our analysis is a super-linear boost in training time with the number of non-degenerate hidden layers, demonstrating the computational benefit of the hidden layers.
Introduce an information theoretic viewpoint on the behavior of deep networks optimization processes and their generalization abilities
1,622
Learning concise representations for regression by evolving networks of trees
We propose and study a method for learning interpretable representations for the task of regression. Features are represented as networks of multi-type expression trees comprised of activation functions common in neural networks in addition to other elementary functions. Differentiable features are trained via gradient descent, and the performance of features in a linear model is used to weight the rate of change among subcomponents of each representation. The search process maintains an archive of representations with accuracy-complexity trade-offs to assist in generalization and interpretation. We compare several stochastic optimization approaches within this framework. We benchmark these variants on 100 open-source regression problems in comparison to state-of-the-art machine learning approaches. Our main finding is that this approach produces the highest average test scores across problems while producing representations that are orders of magnitude smaller than the next best performing method. We also report a negative result in which attempts to directly optimize the disentanglement of the representation result in more highly correlated features.
Representing the network architecture as a set of syntax trees and optimizing their structure leads to accurate and concise regression models.
1,623
Toward Understanding the Impact of Staleness in Distributed Machine Learning
Most distributed machine learning systems store a copy of the model parameters locally on each machine to minimize network communication. In practice, in order to reduce synchronization waiting time, these copies of the model are not necessarily updated in lock-step, and can become stale. Despite much development in large-scale ML, the effect of staleness on the learning efficiency is inconclusive, mainly because it is challenging to control or monitor the staleness in complex distributed environments. In this work, we study the convergence behaviors of a wide array of ML models and algorithms under delayed updates. Our extensive experiments reveal the rich diversity of the effects of staleness on the convergence of ML algorithms and offer insights into seemingly contradictory reports in the literature. The empirical findings also inspire a new convergence analysis of SGD in non-convex optimization under staleness, matching the best-known convergence rate.
Empirical and theoretical study of the effects of staleness in non-synchronous execution on machine learning algorithms.
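Staleness of the kind studied above is easy to reproduce in miniature: apply each SGD update using a gradient that was computed several iterations earlier. The sketch below does this on a toy quadratic; the objective, step size, and delay values are arbitrary illustrations rather than the paper's experimental setup, and large delays visibly slow down or destabilize convergence.

```python
import numpy as np

def stale_sgd(staleness, steps=200, lr=0.1, dim=10, seed=0):
    """SGD on f(w) = 0.5 ||w||^2 where each update uses the gradient computed
    `staleness` iterations ago (a crude model of non-synchronous training)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=dim)
    history = [w.copy()]
    for _ in range(steps):
        w_stale = history[max(0, len(history) - 1 - staleness)]   # stale parameter copy
        grad = w_stale + 0.1 * rng.normal(size=dim)               # noisy gradient of 0.5||w||^2
        w = w - lr * grad
        history.append(w.copy())
    return 0.5 * np.dot(w, w)

for s in [0, 4, 16, 64]:
    print(s, stale_sgd(s))    # larger staleness -> slower or unstable convergence
```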
1,624
Non-linear System Identification from Partial Observations via Iterative Smoothing and Learning
System identification is the process of building a mathematical model of an unknown system from measurements of its inputs and outputs. It is a key step for model-based control, estimator design, and output prediction. This work presents an algorithm for non-linear offline system identification from partial observations, i.e. situations in which the system's full state is not directly observable. The algorithm presented, called SISL, iteratively infers the system's full state through non-linear optimization and then updates the model parameters. We test our algorithm on a simulated system of coupled Lorenz attractors, showing our algorithm's ability to identify high-dimensional systems that prove intractable for particle-based approaches. We also use SISL to identify the dynamics of an aerobatic helicopter. By augmenting the state with unobserved fluid states, we learn a model that predicts the acceleration of the helicopter better than state-of-the-art approaches.
This work presents a scalable algorithm for non-linear offline system identification from partial observations.
1,625
On Stochastic Sign Descent Methods
Various gradient compression schemes have been proposed to mitigate the communication cost in distributed training of large scale machine learning models. Sign-based methods, such as signSGD, have recently been gaining popularity because of their simple compression rule and connection to adaptive gradient methods, like ADAM. In this paper, we perform a general analysis of sign-based methods for non-convex optimization. Our analysis is built on intuitive bounds on success probabilities and does not rely on special noise distributions nor on the boundedness of the variance of stochastic gradients. Extending the theory to the distributed setting within a parameter server framework, we assure exponentially fast variance reduction with respect to the number of nodes, maintaining 1-bit compression in both directions and using small mini-batch sizes. We validate our theoretical findings experimentally.
General analysis of sign-based methods (e.g. signSGD) for non-convex optimization, built on intuitive bounds on success probabilities.
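The 1-bit-in-both-directions scheme referred to above is signSGD with majority vote: workers send only the signs of their stochastic gradients, and the server broadcasts back the sign of the coordinate-wise vote. A minimal sketch on a toy quadratic; the worker count, noise level, and step size are arbitrary choices for illustration.

```python
import numpy as np

def sign_sgd_majority_step(w, grads_per_worker, lr=0.01):
    """One distributed signSGD step with majority vote: each worker sends
    sign(gradient) (1 bit per coordinate), the server returns the sign of the sum."""
    worker_signs = np.sign(grads_per_worker)          # shape (n_workers, dim)
    vote = np.sign(worker_signs.sum(axis=0))          # 1-bit compression both ways
    return w - lr * vote

rng = np.random.default_rng(0)
w = rng.normal(size=5)
for _ in range(100):
    # toy objective f(w) = 0.5 ||w||^2; each of 8 workers sees a noisy gradient
    grads = w + 0.5 * rng.normal(size=(8, 5))
    w = sign_sgd_majority_step(w, grads)
print(w)   # coordinates shrink toward 0 (to within roughly the step size)
```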
1,626
Variance Regularized Counterfactual Risk Minimization via Variational Divergence Minimization
Off-policy learning, the task of evaluating and improving policies using historic data collected from a logging policy, is important because on-policy evaluation is usually expensive and has adverse impacts. One of the major challenges of off-policy learning is to derive counterfactual estimators that also have low variance and thus low generalization error. In this work, inspired by learning bounds for importance sampling problems, we present a new counterfactual learning principle for off-policy learning with bandit feedback. Our method regularizes the generalization error by minimizing the distribution divergence between the logging policy and the new policy, and removes the need for iterating through all training samples to compute sample variance regularization in prior work. With neural network policies, our end-to-end training algorithms using variational divergence minimization show significant improvement over conventional baseline algorithms and are also consistent with our theoretical results.
For off-policy learning with bandit feedback, we propose a new variance regularized counterfactual learning algorithm, which has both theoretical foundations and superior empirical performance.
1,627
Learned imaging with constraints and uncertainty quantification
We outline new approaches to incorporate ideas from deep learning into wave-based least-squares imaging. The aim, and main contribution of this work, is the combination of handcrafted constraints with deep convolutional neural networks, as a way to harness their remarkable ease of generating natural images. The mathematical basis underlying our method is the expectation-maximization framework, where data are divided in batches and coupled to additional "latent" unknowns. These unknowns are pairs of elements from the original unknown space and network inputs. In this setting, the neural network controls the similarity between these additional parameters, acting as a "center" variable. The resulting problem amounts to a maximum-likelihood estimation of the network parameters when the augmented data model is marginalized over the latent variables.
We combine hard handcrafted constraints with a deep prior weak constraint to perform seismic imaging and reap information on the "posterior" distribution leveraging multiplicity in the data.
1,628
RAT-SQL: Relation-Aware Schema Encoding and Linking for Text-to-SQL Parsers
When translating natural language questions into SQL queries to answer questions from a database, contemporary semantic parsing models struggle to generalize to unseen database schemas. The generalization challenge lies in encoding the database relations in an accessible way for the semantic parser, and modeling alignment between database columns and their mentions in a given query. We present a unified framework, based on the relation-aware self-attention mechanism, to address schema encoding, schema linking, and feature representation within a text-to-SQL encoder. On the challenging Spider dataset this framework boosts the exact match accuracy to 53.7%, compared to 47.4% for the previous state-of-the-art model unaugmented with BERT embeddings. In addition, we observe qualitative improvements in the model's understanding of schema linking and alignment.
State of the art in complex text-to-SQL parsing by combining hard and soft relational reasoning in schema/question encoding.
1,629
Task-agnostic Continual Learning via Growing Long-Term Memory Networks
As our experience shows, humans can learn and deploy a myriad of different skills to tackle the situations they encounter daily. Neural networks, in contrast, have a fixed memory capacity that prevents them from learning more than a few sets of skills before starting to forget them. In this work, we make a step to bridge neural networks with human-like learning capabilities. For this, we propose a model with a growing and open-bounded memory capacity that can be accessed based on the model's current demands. To test this system, we introduce a continual learning task based on language modelling where the model is exposed to multiple languages and domains in sequence, without providing any explicit signal on the type of input it is currently dealing with. The proposed system exhibits improved adaptation skills in that it can recover faster than comparable baselines after a switch in the input language or domain.
We introduce a continual learning setup based on language modelling where no explicit task segmentation signal is given and propose a neural network model with growing long term memory to tackle it.
1,630
Learning temporal evolution of probability distribution with Recurrent Neural Network
We propose to tackle a time series regression problem by computing the temporal evolution of a probability density function to provide a probabilistic forecast. A Recurrent Neural Network (RNN) based model is employed to learn a nonlinear operator for the temporal evolution of a probability density function. We use a softmax layer for a numerical discretization of a smooth probability density function, which transforms a function approximation problem into a classification task. Explicit and implicit regularization strategies are introduced to impose a smoothness condition on the estimated probability distribution. A Monte Carlo procedure to compute the temporal evolution of the distribution for a multiple-step forecast is presented. The evaluation of the proposed algorithm on three synthetic and two real data sets shows advantage over the compared baselines.
Proposed RNN-based algorithm to estimate predictive distribution in one- and multi-step forecasts in time series prediction problems
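The softmax-discretization trick mentioned above, turning density estimation into classification over bins, is straightforward to illustrate. A minimal sketch follows; the bin grid and the stand-in "model output" are chosen arbitrarily, whereas in the paper the logits would come from the RNN at each forecast step.

```python
import numpy as np

bins = np.linspace(-3.0, 3.0, 61)                 # 60 cells discretizing the support
centers = 0.5 * (bins[:-1] + bins[1:])
width = bins[1] - bins[0]

def to_class(y):
    """Map a continuous target to its bin index (regression -> classification)."""
    return int(np.clip(np.digitize(y, bins) - 1, 0, len(centers) - 1))

def to_density(logits):
    """A softmax over bins, divided by the bin width, is a piecewise-constant pdf."""
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return p / width

# Stand-in for the per-bin logits an RNN head would output at one forecast step.
logits = -0.5 * ((centers - 1.2) / 0.4) ** 2
pdf = to_density(logits)
print(to_class(1.17), pdf.sum() * width)          # bin index of a target, and pdf integrates to 1
```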
1,631
Modelling Working Memory using Deep Recurrent Reinforcement Learning
In cognitive systems, the role of a working memory is crucial for visual reasoning and decision making. Tremendous progress has been made in understanding the mechanisms of the human/animal working memory, as well as in formulating different frameworks of artificial neural networks. In the case of humans, the visual working memory (VWM) task is a standard one in which the subjects are presented with a sequence of images, each of which needs to be identified as to whether it was already seen or not. Our work is a study of multiple ways to learn a working memory model using recurrent neural networks that learn to remember input images across timesteps. We train these neural networks to solve the working memory task by training them with a sequence of images in supervised and reinforcement learning settings. The supervised setting uses image sequences with their corresponding labels. The reinforcement learning setting is inspired by the popular view in neuroscience that the working memory in the prefrontal cortex is modulated by a dopaminergic mechanism. We consider the VWM task as an environment that rewards the agent when it remembers past information and penalizes it for forgetting. We quantitatively estimate the performance of these models on sequences of images from a standard image dataset. Further, we evaluate their ability to remember and recall as they are increasingly trained over episodes. Based on our analysis, we establish that a gated recurrent neural network model with long short-term memory units trained using reinforcement learning is powerful and more efficient in temporally consolidating the input spatial information. This work is an initial analysis as a part of our ultimate goal to use artificial neural networks to model the behavior and information processing of the working memory of the brain and to use brain imaging data captured from human subjects during the VWM cognitive task to understand various memory mechanisms of the brain.
LSTMs can more effectively model the working memory if they are learned using reinforcement learning, much like the dopamine system that modulates the memory in the prefrontal cortex
1,632
From Hard to Soft: Understanding Deep Network Nonlinearities via Vector Quantization and Statistical Inference
Nonlinearity is crucial to the performance of a deep network (DN). To date there has been little progress understanding the menagerie of available nonlinearities, but recently progress has been made on understanding the role played by piecewise affine and convex nonlinearities like the ReLU and absolute value activation functions and max-pooling. In particular, DN layers constructed from these operations can be interpreted as max-affine spline operators (MASOs) that have an elegant link to vector quantization (VQ) and K-means. While this is good theoretical progress, the entire MASO approach is predicated on the requirement that the nonlinearities be piecewise affine and convex, which precludes important activation functions like the sigmoid, hyperbolic tangent, and softmax. We show that, under a GMM, piecewise affine, convex nonlinearities like ReLU, absolute value, and max-pooling can be interpreted as solutions to certain natural hard VQ inference problems, while sigmoid, hyperbolic tangent, and softmax can be interpreted as solutions to corresponding soft VQ inference problems. We further extend the framework by hybridizing the hard and soft VQ optimizations to create a β-VQ inference that interpolates between hard, soft, and linear VQ inference. A prime example of a β-VQ DN nonlinearity is the swish nonlinearity, which offers state-of-the-art performance in a range of computer vision tasks but was developed ad hoc by experimentation. Finally, we validate with experiments an important assertion of our theory, namely that DN performance can be significantly improved by enforcing orthogonality in its linear filters.
Reformulate deep network nonlinearities from a vector quantization perspective and bridge most known nonlinearities together.
1,633
Generative Models for Graph-Based Protein Design
Engineered proteins offer the potential to solve many problems in biomedicine, energy, and materials science, but creating designs that succeed is difficult in practice. A significant aspect of this challenge is the complex coupling between protein sequence and 3D structure, and the task of finding a viable design is often referred to as the inverse protein folding problem. We develop generative models for protein sequences conditioned on a graph-structured specification of the design target. Our approach efficiently captures the complex dependencies in proteins by focusing on those that are long-range in sequence but local in 3D space. Our framework significantly improves upon prior parametric models of protein sequences given structure, and takes a step toward rapid and targeted biomolecular design with the aid of deep generative models.
We learn to conditionally generate protein sequences given structures with a model that captures sparse, long-range dependencies.
1,634
Deep Layers as Stochastic Solvers
We provide a novel perspective on the forward pass through a block of layers in a deep network. In particular, we show that a forward pass through a standard dropout layer followed by a linear layer and a non-linear activation is equivalent to optimizing a convex objective with a single iteration of a τ-nice Proximal Stochastic Gradient method. We further show that replacing standard Bernoulli dropout with additive dropout is equivalent to optimizing the same convex objective with a variance-reduced proximal method. By expressing both fully-connected and convolutional layers as special cases of a high-order tensor product, we unify the underlying convex optimization problem in the tensor setting and derive a formula for the Lipschitz constant used to determine the optimal step size of the above proximal methods. We conduct experiments with standard convolutional networks applied to the CIFAR-10 and CIFAR-100 datasets and show that replacing a block of layers with multiple iterations of the corresponding solver, with step size set via the derived Lipschitz constant, consistently improves classification accuracy.
A framework that links deep network layers to stochastic optimization algorithms; can be used to improve model accuracy and inform network design.
1,635
LEARNED STEP SIZE QUANTIZATION
Deep networks run with low precision operations at inference time offer power and space advantages over high precision alternatives, but need to overcome the challenge of maintaining high accuracy as precision decreases. Here, we present a method for training such networks, Learned Step Size Quantization, that achieves the highest accuracy to date on the ImageNet dataset when using models, from a variety of architectures, with weights and activations quantized to 2-, 3- or 4-bits of precision, and that can train 3-bit models that reach full precision baseline accuracy. Our approach builds upon existing methods for learning weights in quantized networks by improving how the quantizer itself is configured. Specifically, we introduce a novel means to estimate and scale the task loss gradient at each weight and activation layer's quantizer step size, such that it can be learned in conjunction with other network parameters. This approach works using different levels of precision as needed for a given system and requires only a simple modification of existing training code.
A method for learning quantization configuration for low precision networks that achieves state of the art performance for quantized networks.
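A sketch of the core mechanism: the quantizer's step size is an ordinary trainable parameter, rounding is bridged with a straight-through estimator, and the gradient flowing into the step size is rescaled per layer. This follows the general LSQ recipe, but the bit-width, the exact scale factor, and the toy usage below are illustrative choices, not a verified reimplementation of the paper.

```python
import torch

class LearnedStepQuantizer(torch.nn.Module):
    """Minimal learned-step-size quantizer for activations (unsigned levels).
    The step size is learned jointly with the network; rounding uses a
    straight-through estimator, and the step-size gradient is scaled per layer."""
    def __init__(self, bits=3):
        super().__init__()
        self.q_min, self.q_max = 0, 2 ** bits - 1
        self.step = torch.nn.Parameter(torch.tensor(1.0))

    def forward(self, x):
        g = 1.0 / (x.numel() * self.q_max) ** 0.5            # per-layer gradient scale
        step = self.step * g + (self.step - self.step * g).detach()  # same value, scaled grad
        v = torch.clamp(x / step, self.q_min, self.q_max)
        v_q = v + (v.round() - v).detach()                    # straight-through estimator
        return v_q * step

q = LearnedStepQuantizer(bits=3)
x = torch.randn(8, 16, requires_grad=True)
q(x).sum().backward()
print(q.step.grad, x.grad.abs().max())   # both the step size and the input receive gradients
```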
1,636
Simple but effective techniques to reduce dataset biases
There have been several studies recently showing that strong natural language understanding models are prone to relying on unwanted dataset biases without learning the underlying task, resulting in models which fail to generalize to out-of-domain datasets, and are likely to perform poorly in real-world scenarios. We propose several learning strategies to train neural models which are more robust to such biases and transfer better to out-of-domain datasets. We introduce an additional lightweight bias-only model which learns dataset biases and uses its prediction to adjust the loss of the base model to reduce the biases. In other words, our methods down-weight the importance of the biased examples, and focus training on hard examples, i.e. examples that cannot be correctly classified by only relying on biases. Our approaches are model agnostic and simple to implement. We experiment on large-scale natural language inference and fact verification datasets and their out-of-domain datasets and show that our debiased models significantly improve the robustness in all settings, including gaining 9.76 points on the FEVER symmetric evaluation dataset, 5.45 on the HANS dataset and 4.78 points on the SNLI hard set. These datasets are specifically designed to assess the robustness of models in the out-of-domain setting where typical biases in the training data do not exist in the evaluation set.
We propose several general debiasing strategies to address common biases seen in different datasets and obtain substantial improved out-of-domain performance in all settings.
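One common way to let a bias-only model "adjust the loss of the base model", as described above, is a product-of-experts style combination in log space, so that examples the bias-only model already explains contribute little gradient. The sketch below shows that variant only; the paper explores several strategies, and the toy logits and ensembling details here are illustrative assumptions.

```python
import numpy as np

def log_softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

def debiased_nll(main_logits, bias_logits, labels):
    """Combine the main model with a frozen bias-only model in log space;
    bias_logits are treated as constants (no gradient flows through them)."""
    combined = log_softmax(main_logits + bias_logits)
    return -combined[np.arange(len(labels)), labels].mean()

rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=4)
main_logits = rng.normal(size=(4, 3))
# A confident bias-only model: large logit on the (biased) correct class.
bias_logits = np.where(np.eye(3)[labels].astype(bool), 5.0, -5.0)
print(debiased_nll(main_logits, bias_logits, labels))
# The combined loss on bias-explained examples is near zero, so training pressure
# shifts to the hard examples the bias-only model cannot solve.
```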
1,637
Extreme Few-view CT Reconstruction using Deep Inference
Reconstruction of few-view x-ray Computed Tomography data is a highly ill-posed problem. It is often used in applications that require low radiation dose in clinical CT, rapid industrial scanning, or fixed-gantry CT. Existing analytic or iterative algorithms generally produce poorly reconstructed images, severely deteriorated by artifacts and noise, especially when the number of x-ray projections is considerably low. This paper presents a deep network-driven approach to address extreme few-view CT by incorporating convolutional neural network-based inference into state-of-the-art iterative reconstruction. The proposed method interprets few-view sinogram data using attention-based deep networks to infer the reconstructed image. The predicted image is then used as prior knowledge in the iterative algorithm for final reconstruction. We demonstrate effectiveness of the proposed approach by performing reconstruction experiments on a chest CT dataset.
We present a CNN inference-based reconstruction algorithm to address extremely few-view CT.
1,638
Wizard of Wikipedia: Knowledge-Powered Conversational Agents
In open-domain dialogue intelligent agents should exhibit the use of knowledge, however there are few convincing demonstrations of this to date. The most popular sequence to sequence models typically “generate and hope” generic utterances that can be memorized in the weights of the model when mapping from input utterance to output, rather than employing recalled knowledge as context. Use of knowledge has so far proved difficult, in part because of the lack of a supervised learning benchmark task which exhibits knowledgeable open dialogue with clear grounding. To that end we collect and release a large dataset with conversations directly grounded with knowledge retrieved from Wikipedia. We then design architectures capable of retrieving knowledge, reading and conditioning on it, and finally generating natural responses. Our best performing dialogue models are able to conduct knowledgeable discussions on open-domain topics as evaluated by automatic metrics and human evaluations, while our new benchmark allows for measuring further improvements in this important research direction.
We build knowledgeable conversational agents by conditioning on Wikipedia + a new supervised task.
1,639
Online Semi-Supervised Learning with Bandit Feedback
We formulate a new problem at the intersection of semi-supervised learning and contextual bandits, motivated by several applications including clinical trials and dialog systems. We demonstrate how contextual bandit and graph convolutional networks can be adjusted to the new problem formulation. We then take the best of both approaches to develop a multi-GCN embedded contextual bandit. Our algorithms are verified on several real world datasets.
Synthesis of GCN and LinUCB algorithms for online learning with missing feedback.
1,640
How to measure the consistency of the tagging of scientific papers?
A collection of scientific papers is often accompanied by tags: keywords, topics, concepts etc., associated with each paper. Sometimes these tags are human-generated, sometimes they are machine-generated. We propose a simple measure of the consistency of the tagging of scientific papers: whether these tags are predictive for the citation graph links. Since the authors tend to cite papers about the topics close to those of their publications, a consistent tagging system could predict citations. We present an algorithm to calculate consistency, and experiments with human- and machine-generated tags. We show that augmentation, i.e. the combination of the manual tags with the machine-generated ones, can enhance the consistency of the tags. We further introduce cross-consistency, the ability to predict citation links between papers tagged by different taggers, e.g. manually and by a machine. Cross-consistency can be used to evaluate the tagging quality when the amount of labeled data is limited.
A good tagger gives similar tags to a given paper and the papers it cites
1,641
Defensive Quantization Layer For Convolutional Network Against Adversarial Attack
Recent research has intensively revealed the vulnerability of deep neural networks, especially for convolutional neural networks on the task of image recognition, through creating adversarial samples which "slightly" differ from legitimate samples. This vulnerability indicates that these powerful models are sensitive to specific perturbations and cannot filter out these adversarial perturbations. In this work, we propose a quantization-based method which enables a CNN to filter out adversarial perturbations effectively. Notably, different from prior work on input quantization, we apply the quantization in the intermediate layers of a CNN. Our approach is naturally aligned with the clustering of the coarse-grained semantic information learned by a CNN. Furthermore, to compensate for the loss of information which is inevitably caused by the quantization, we propose multi-head quantization, where we project data points to different sub-spaces and perform quantization within each sub-space. We enclose our design in a quantization layer named the Q-Layer. The results obtained on MNIST and Fashion-MNIST datasets demonstrate that only adding one Q-Layer into a CNN could significantly improve its robustness against both white-box and black-box attacks.
We propose a quantization-based method which regularizes a CNN's learned representations to be automatically aligned with trainable concept matrix hence effectively filtering out adversarial perturbations.
1,642
Invariant and Equivariant Graph Networks
Invariant and equivariant networks have been successfully used for learning images, sets, point clouds, and graphs. A basic challenge in developing such networks is finding the maximal collection of invariant and equivariant linear layers. Although this question is answered for the first three examples, a full characterization of invariant and equivariant linear layers for graphs is not known. In this paper we provide a characterization of all permutation invariant and equivariant linear layers for graph data, and show that their dimension, in the case of edge-value graph data, is 2 and 15, respectively. More generally, for graph data defined on k-tuples of nodes, the dimension is the k-th and 2k-th Bell numbers. Orthogonal bases for the layers are computed, including generalization to multi-graph data. The constant number of basis elements and their characteristics allow successfully applying the networks to different size graphs. From the theoretical point of view, our results generalize and unify recent advancement in equivariant deep learning. In particular, we show that our model is capable of approximating any message passing neural network. Applying these new linear layers in a simple deep neural network framework is shown to achieve comparable results to state-of-the-art and to have better expressivity than previous invariant and equivariant bases.
The paper provides a full characterization of permutation invariant and equivariant linear layers for graph data.
1,643
Causally Correct Partial Models for Reinforcement Learning
In reinforcement learning, we can learn a model of future observations and rewards, and use it to plan the agent's next actions. However, jointly modeling future observations can be computationally expensive or even intractable if the observations are high-dimensional. For this reason, previous works have considered partial models, which model only part of the observation. In this paper, we show that partial models can be causally incorrect: they are confounded by the observations they don't model, and can therefore lead to incorrect planning. To address this, we introduce a general family of partial models that are provably causally correct, but avoid the need to fully model future observations.
Causally correct partial models do not have to generate the whole observation to remain causally correct in stochastic environments.
1,644
Efficient Lifelong Learning with A-GEM
In lifelong learning, the learner is presented with a sequence of tasks, incrementally building a data-driven prior which may be leveraged to speed up learning of a new task. In this work, we investigate the efficiency of current lifelong approaches, in terms of sample complexity, computational and memory cost. Towards this end, we first introduce a new and more realistic evaluation protocol, whereby learners observe each example only once and hyper-parameter selection is done on a small and disjoint set of tasks, which is not used for the actual learning experience and evaluation. Second, we introduce a new metric measuring how quickly a learner acquires a new skill. Third, we propose an improved version of GEM, dubbed Averaged GEM (A-GEM), which enjoys the same or even better performance as GEM, while being almost as computationally and memory efficient as EWC and other regularization-based methods. Finally, we show that all algorithms including A-GEM can learn even more quickly if they are provided with task descriptors specifying the classification tasks under consideration. Our experiments on several standard lifelong learning benchmarks demonstrate that A-GEM has the best trade-off between accuracy and efficiency.
An efficient lifelong learning algorithm that provides a better trade-off between accuracy and time/ memory complexity compared to other algorithms.
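The core of A-GEM is a single gradient projection. Below is a minimal sketch, assuming flattened gradient vectors `g` (computed on the current task) and `g_ref` (averaged over a random batch from the episodic memory); the surrounding training loop and memory management are omitted.

```python
import numpy as np

def agem_project(g, g_ref, eps=1e-12):
    """A-GEM constraint: keep the component of g that does not increase
    the loss on the episodic memory (i.e. enforce <g_proj, g_ref> >= 0)."""
    dot = float(np.dot(g, g_ref))
    if dot >= 0.0:
        return g                     # no conflict with the memory gradient
    # remove the conflicting component along g_ref
    return g - (dot / (float(np.dot(g_ref, g_ref)) + eps)) * g_ref
```

The projected gradient is then unflattened back into the parameter tensors before the optimizer step, which is what keeps the per-step cost close to plain SGD.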
1,645
Mixed Precision Training With 8-bit Floating Point
Reduced precision computation is one of the key areas addressing the widening 'compute gap', driven by an exponential growth in deep learning applications. In recent years, deep neural network training has largely migrated to 16-bit precision, with significant gains in performance and energy efficiency. However, attempts to train DNNs at 8-bit precision have met with significant challenges, because of the higher precision and dynamic range requirements of back-propagation. In this paper, we propose a method to train deep neural networks using 8-bit floating point representation for weights, activations, errors, and gradients. We demonstrate state-of-the-art accuracy across multiple data sets and a broader set of workloads than previously reported. We propose an enhanced loss scaling method to augment the reduced subnormal range of 8-bit floating point, to improve error propagation. We also examine the impact of quantization noise on generalization, and propose a stochastic rounding technique to address gradient noise. As a result of applying all these techniques, we report slightly higher validation accuracy compared to the full precision baseline.
We demonstrated state-of-the-art training results using 8-bit floating point representation, across Resnet, GNMT, Transformer.
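The enhanced loss scaling described above is specific to the paper, but the underlying mechanism can be illustrated with ordinary static loss scaling. A hedged PyTorch sketch follows; `model`, `loss`, and `optimizer` are placeholders, and the chosen scale of 1024 is illustrative only.

```python
import torch

def scaled_backward_step(model, loss, optimizer, loss_scale=1024.0):
    """Static loss scaling: scale the loss up before backprop so small
    gradients stay representable in a low-precision format, then unscale
    the gradients before the weight update."""
    optimizer.zero_grad()
    (loss * loss_scale).backward()
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p.grad.div_(loss_scale)
    optimizer.step()
```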
1,646
INSTANCE CROSS ENTROPY FOR DEEP METRIC LEARNING
Loss functions play a crucial role in deep metric learning thus a variety of them have been proposed.Some supervise the learning process by pairwise or tripletwise similarity constraints while others take the advantage of structured similarity information among multiple data points.In this work, we approach deep metric learning from a novel perspective.We propose instance cross entropy which measures the difference between an estimated instance-level matching distribution and its ground-truth one.ICE has three main appealing properties.Firstly, similar to categorical cross entropy, ICE has clear probabilistic interpretation and exploits structured semantic similarity information for learning supervision.Secondly, ICE is scalable to infinite training data as it learns on mini-batches iteratively and is independent of the training set size.Thirdly, motivated by our relative weight analysis, seamless sample reweighting is incorporated.It rescales samples’ gradients to control the differentiation degree over training examples instead of truncating them by sample mining.In addition to its simplicity and intuitiveness, extensive experiments on three real-world benchmarks demonstrate the superiority of ICE.
We propose instance cross entropy (ICE) which measures the difference between an estimated instance-level matching distribution and its ground-truth one.
1,647
Modeling the Long Term Future in Model-Based Reinforcement Learning
In model-based reinforcement learning, the agent interleaves between model learning and planning. These two components are inextricably intertwined. If the model is not able to provide sensible long-term predictions, the executed planner would exploit model flaws, which can yield catastrophic failures. This paper focuses on building a model that reasons about the long-term future and demonstrates how to use this for efficient planning and exploration. To this end, we build a latent-variable autoregressive model by leveraging recent ideas in variational inference. We argue that forcing latent variables to carry future information through an auxiliary task substantially improves long-term predictions. Moreover, by planning in the latent space, the planner's solution is ensured to be within regions where the model is valid. An exploration strategy can be devised by searching for unlikely trajectories under the model. Our method achieves higher reward faster compared to baselines on a variety of tasks and environments in both the imitation learning and model-based reinforcement learning settings.
incorporating, in the model, latent variables that encode future content improves the long-term prediction accuracy, which is critical for better planning in model-based RL.
1,648
Integrative Tensor-based Anomaly Detection System For Satellites
Detecting anomalies is of growing importance for various industrial applications and mission-critical infrastructures, including satellite systems.Although there have been several studies in detecting anomalies based on rule-based or machine learning-based approaches for satellite systems, a tensor-based decomposition method has not been extensively explored for anomaly detection.In this work, we introduce an Integrative Tensor-based Anomaly Detection framework to detect anomalies in a satellite system.Because of the high risk and cost, detecting anomalies in a satellite system is crucial.We construct 3rd-order tensors with telemetry data collected from Korea Multi-Purpose Satellite-2 and calculate the anomaly score using one of the component matrices obtained by applying CANDECOMP/PARAFAC decomposition to detect anomalies.Our result shows that our tensor-based approach can be effective in achieving higher accuracy and reducing false positives in detecting anomalies as compared to other existing approaches.
Integrative Tensor-based Anomaly Detection (ITAD) framework for a satellite system.
1,649
Information asymmetry in KL-regularized RL
Many real world tasks exhibit rich structure that is repeated across different parts of the state space or in time.In this work we study the possibility of leveraging such repeated structure to speed up and regularize learning.We start from the KL regularized expected reward objective which introduces an additional component, a default policy.Instead of relying on a fixed default policy, we learn it from data.But crucially, we restrict the amount of information the default policy receives, forcing it to learn reusable behaviors that help the policy learn faster.We formalize this strategy and discuss connections to information bottleneck approaches and to the variational EM algorithm.We present empirical results in both discrete and continuous action domains and demonstrate that, for certain tasks, learning a default policy alongside the policy can significantly speed up and improve learning.Please watch the video demonstrating learned experts and default policies on several continuous control tasks.
Limiting state information for the default policy can improve performance, in a KL-regularized RL framework where both agent and default policy are optimized together.
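As an illustration only (not the paper's exact estimator), a KL-regularized policy-gradient surrogate with a learned default policy might look as follows; `logits_pi0` would come from a default policy that sees only a restricted part of the state, and the split of gradients between the two policies is left to the caller.

```python
import torch
from torch.distributions import Categorical, kl_divergence

def kl_regularized_loss(logits_pi, logits_pi0, log_prob_actions, returns, alpha=0.1):
    """Policy-gradient term plus a KL penalty pulling the policy toward an
    information-restricted default policy."""
    pi = Categorical(logits=logits_pi)
    pi0 = Categorical(logits=logits_pi0)
    pg_term = -(log_prob_actions * returns).mean()   # REINFORCE-style surrogate
    kl_term = kl_divergence(pi, pi0).mean()          # KL(pi || pi0)
    return pg_term + alpha * kl_term
```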
1,650
Explaining Image Classifiers by Counterfactual Generation
When an image classifier makes a prediction, which parts of the image are relevant and why? We can rephrase this question to ask: which parts of the image, if they were not seen by the classifier, would most change its decision? Producing an answer requires marginalizing over images that could have been seen but weren't. We can sample plausible image in-fills by conditioning a generative model on the rest of the image. We then optimize to find the image regions that most change the classifier's decision after in-fill. Our approach contrasts with ad-hoc in-filling approaches, such as blurring or injecting noise, which generate inputs far from the data distribution, and ignore informative relationships between different parts of the image. Our method produces more compact and relevant saliency maps, with fewer artifacts compared to previous methods.
We compute saliency by using a strong generative model to efficiently marginalize over plausible alternative inputs, revealing concentrated pixel areas that preserve label information.
1,651
Variation Network: Learning High-level Attributes for Controlled Input Manipulation
This paper presents the Variation Network (VarNet), a generative model providing means to manipulate the high-level attributes of a given input. The originality of our approach is that VarNet is not only capable of handling pre-defined attributes but can also learn the relevant attributes of the dataset by itself. These two settings can be easily combined, which makes VarNet applicable to a wide variety of tasks. Further, VarNet has a sound probabilistic interpretation, which grants us a novel way to navigate the latent spaces as well as means to control how the attributes are learned. We demonstrate experimentally that this model is capable of performing interesting input manipulation and that the learned attributes are relevant and interpretable.
The Variation Network is a generative model able to learn high-level attributes without supervision that can then be used for controlled input manipulation.
1,652
Learning The Difference That Makes A Difference With Counterfactually-Augmented Data
Despite alarm over the reliance of machine learning systems on so-called spurious patterns in training data, the term lacks coherent meaning in standard statistical frameworks. However, the language of causality offers clarity: spurious associations are those due to a common cause versus direct or indirect effects. In this paper, we focus on NLP, introducing methods and resources for training models insensitive to spurious patterns. Given documents and their initial labels, we task humans with revising each document to accord with a counterfactual target label, asking that the revised documents be internally coherent while avoiding any gratuitous changes. Interestingly, on sentiment analysis and natural language inference tasks, classifiers trained on original data fail on their counterfactually-revised counterparts and vice versa. Classifiers trained on combined datasets perform remarkably well, just shy of those specialized to either domain. While classifiers trained on either original or manipulated data alone are sensitive to spurious features, models trained on the combined data are insensitive to this signal. We will publicly release both datasets.
Humans in the loop revise documents to accord with counterfactual labels; the resulting resource helps to reduce reliance on spurious associations.
1,653
Evaluations and Methods for Explanation through Robustness Analysis
Among multiple ways of interpreting a machine learning model, measuring the importance of a set of features tied to a prediction is probably one of the most intuitive ways to explain a model. In this paper, we establish the link between a set of features and a prediction with a new evaluation criterion, robustness analysis, which measures the minimum tolerance of adversarial perturbation. By measuring the tolerance level for an adversarial attack, we can extract a set of features that provides the most robust support for a current prediction, and we can also extract a set of features that contrasts the current prediction to a target class by setting a targeted adversarial attack. By applying this methodology to various prediction tasks across multiple domains, we observe that the derived explanations indeed capture the significant feature set qualitatively and quantitatively.
We propose a new objective measurement for evaluating explanations based on the notion of adversarial robustness. The evaluation criterion further allows us to derive new explanations which capture pertinent features qualitatively and quantitatively.
1,654
MisGAN: Learning from Incomplete Data with Generative Adversarial Networks
Generative adversarial networks have been shown to provide an effective way to model complex distributions and have obtained impressive results on various challenging tasks.However, typical GANs require fully-observed data during training.In this paper, we present a GAN-based framework for learning from complex, high-dimensional incomplete data.The proposed framework learns a complete data generator along with a mask generator that models the missing data distribution.We further demonstrate how to impute missing data by equipping our framework with an adversarially trained imputer.We evaluate the proposed framework using a series of experiments with several types of missing data processes under the missing completely at random assumption.
This paper presents a GAN-based framework for learning the distribution from high-dimensional incomplete data.
1,655
An Inter-Layer Weight Prediction and Quantization for Deep Neural Networks based on Smoothly Varying Weight Hypothesis
Due to a resource-constrained environment, network compression has become an important part of deep neural networks research. In this paper, we propose a new compression method, Inter-Layer Weight Prediction (ILWP) with quantization, which quantizes the predicted residuals between the weights in all convolution layers based on an inter-frame prediction method in conventional video coding schemes. Furthermore, we found a phenomenon, the Smoothly Varying Weight Hypothesis (SVWH): the weights in adjacent convolution layers share strong similarity in shapes and values, i.e., the weights tend to vary smoothly along the layers. Based on SVWH, we propose a second ILWP and quantization method which quantizes the predicted residuals between the weights in adjacent convolution layers. Since the predicted weight residuals tend to follow Laplace distributions with very low variance, the weight quantization can be applied more effectively, thus producing more zero weights and enhancing the weight compression ratio. In addition, we propose a new inter-layer loss for eliminating non-texture bits, which enables us to more effectively store only texture bits. That is, the proposed loss regularizes the weights such that the collocated weights between the adjacent two layers have the same values. Finally, we propose an ILWP with an inter-layer loss and quantization method. Our comprehensive experiments show that the proposed method achieves a much higher weight compression rate at the same accuracy level compared with the previous quantization-based compression methods in deep neural networks.
We propose a new compression method, Inter-Layer Weight Prediction (ILWP) with quantization, which quantizes the predicted residuals between the weights in convolution layers.
1,656
FLOPs as a Direct Optimization Objective for Learning Sparse Neural Networks
There exists a plethora of techniques for inducing structured sparsity in parametric models during the optimization process, with the final goal of resource-efficient inference.However, to the best of our knowledge, none target a specific number of floating-point operations as part of a single end-to-end optimization objective, despite reporting FLOPs as part of the results.Furthermore, a one-size-fits-all approach ignores realistic system constraints, which differ significantly between, say, a GPU and a mobile phone -- FLOPs on the former incur less latency than on the latter; thus, it is important for practitioners to be able to specify a target number of FLOPs during model compression.In this work, we extend a state-of-the-art technique to directly incorporate FLOPs as part of the optimization objective and show that, given a desired FLOPs requirement, different neural networks can be successfully trained for image classification.
We extend a state-of-the-art technique to directly incorporate FLOPs as part of the optimization objective, and we show that, given a desired FLOPs requirement, different neural networks are successfully trained.
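A hedged sketch of the general idea, assuming a sparsity-gated network where `gate_probs` are relaxed (differentiable) keep-probabilities per unit and `flops_per_unit` gives the cost attributed to each unit; the exact parameterization used in the paper differs.

```python
import torch

def flops_regularized_loss(task_loss, gate_probs, flops_per_unit,
                           target_flops, lam=1e-2):
    """Penalize deviation of the expected FLOPs (a differentiable function
    of the gate probabilities) from a user-specified budget."""
    expected_flops = sum((p * f).sum() for p, f in zip(gate_probs, flops_per_unit))
    return task_loss + lam * torch.abs(expected_flops - target_flops)
```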
1,657
Hierarchical Image-to-image Translation with Nested Distributions Modeling
Unpaired image-to-image translation among category domains has achieved remarkable success in past decades. Recent studies mainly focus on two challenges. For one thing, such translation is inherently multimodal due to variations of domain-specific information. For another, existing multimodal approaches have limitations in handling more than two domains, i.e. they have to independently build one model for every pair of domains. To address these problems, we propose the Hierarchical Image-to-image Translation method, which jointly formulates the multimodal and multi-domain problem in a semantic hierarchy structure, and can further control the uncertainty of the multimodal translation. Specifically, we regard the domain-specific variations as the result of the multi-granularity property of domains, and one can control the granularity of the multimodal translation by dividing a domain with large variations into multiple subdomains which capture local and fine-grained variations. With the assumption of a Gaussian prior, variations of domains are modeled in a common space such that translations can further be done among multiple domains within one model. To learn such a complicated space, we propose to leverage the inclusion relation among domains to constrain distributions of parent and children to be nested. Experiments on several datasets validate the promising results and competitive performance against the state of the art.
Granularity-controlled multi-domain and multimodal image-to-image translation method.
1,658
Generalized Tensor Models for Recurrent Neural Networks
Recurrent Neural Networks are very successful at solving challenging problems with sequential data.However, this observed efficiency is not yet entirely explained by theory.It is known that a certain class of multiplicative RNNs enjoys the property of depth efficiency --- a shallow network of exponentially large width is necessary to realize the same score function as computed by such an RNN.Such networks, however, are not very often applied to real life tasks.In this work, we attempt to reduce the gap between theory and practice by extending the theoretical analysis to RNNs which employ various nonlinearities, such as Rectified Linear Unit, and show that they also benefit from properties of universality and depth efficiency.Our theoretical results are verified by a series of extensive computational experiments.
Analysis of expressivity and generality of recurrent neural networks with ReLU nonlinearities using Tensor-Train decomposition.
1,659
ADef: an Iterative Algorithm to Construct Adversarial Deformations
While deep neural networks have proven to be a powerful tool for many recognition and classification tasks, their stability properties are still not well understood.In the past, image classifiers have been shown to be vulnerable to so-called adversarial attacks, which are created by additively perturbing the correctly classified image.In this paper, we propose the ADef algorithm to construct a different kind of adversarial attack created by iteratively applying small deformations to the image, found through a gradient descent step.We demonstrate our results on MNIST with convolutional neural networks and on ImageNet with Inception-v3 and ResNet-101.
We propose a new, efficient algorithm to construct adversarial examples by means of deformations, rather than additive perturbations.
1,660
Variational Discriminator Bottleneck: Improving Imitation Learning, Inverse RL, and GANs by Constraining Information Flow
Adversarial learning methods have been proposed for a wide range of applications, but the training of adversarial models can be notoriously unstable. Effectively balancing the performance of the generator and discriminator is critical, since a discriminator that achieves very high accuracy will produce relatively uninformative gradients. In this work, we propose a simple and general technique to constrain information flow in the discriminator by means of an information bottleneck. By enforcing a constraint on the mutual information between the observations and the discriminator's internal representation, we can effectively modulate the discriminator's accuracy and maintain useful and informative gradients. We demonstrate that our proposed variational discriminator bottleneck leads to significant improvements across three distinct application areas for adversarial learning algorithms. Our primary evaluation studies the applicability of the VDB to imitation learning of dynamic continuous control skills, such as running. We show that our method can learn such skills directly from raw video demonstrations, substantially outperforming prior adversarial imitation learning methods. The VDB can also be combined with adversarial inverse reinforcement learning to learn parsimonious reward functions that can be transferred and re-optimized in new settings. Finally, we demonstrate that VDB can train GANs more effectively for image generation, improving upon a number of prior stabilization methods.
Regularizing adversarial learning with an information bottleneck, applied to imitation learning, inverse reinforcement learning, and generative adversarial networks.
1,661
Improving Robustness Without Sacrificing Accuracy with Patch Gaussian Augmentation
Deploying machine learning systems in the real world requires both high accuracy on clean data and robustness to naturally occurring corruptions.While architectural advances have led to improved accuracy, building robust models remains challenging, involving major changes in training procedure and datasets. Prior work has argued that there is an inherent trade-off between robustness and accuracy, as exemplified by standard data augmentation techniques such as Cutout, which improves clean accuracy but not robustness, and additive Gaussian noise, which improves robustness but hurts accuracy.We introduce Patch Gaussian, a simple augmentation scheme that adds noise to randomly selected patches in an input image. Models trained with Patch Gaussian achieve state of the art on the CIFAR-10 and ImageNet Common Corruptions benchmarks while also maintaining accuracy on clean data.We find that this augmentation leads to reduced sensitivity to high frequency noise while retaining the ability to take advantage of relevant high frequency information in the image.We show it can be used in conjunction with other regularization methods and data augmentation policies such as AutoAugment. Finally, we find that the idea of restricting perturbations to patches can also be useful in the context of adversarial learning, yielding models without the loss in accuracy that is found with unconstrained adversarial training.
Simple augmentation method overcomes robustness/accuracy trade-off observed in literature and opens questions about the effect of training distribution on out-of-distribution generalization.
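The augmentation itself is simple enough to sketch. Below is a minimal NumPy version, assuming images scaled to [0, 1]; the paper additionally samples the patch size and noise scale from ranges, which is omitted here.

```python
import numpy as np

def patch_gaussian(image, patch_size=16, sigma=0.3, rng=None):
    """Add Gaussian noise only inside a randomly centered square patch.
    image: float array of shape (H, W, C) with values in [0, 1]."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    cy, cx = int(rng.integers(h)), int(rng.integers(w))
    half = patch_size // 2
    y0, y1 = max(0, cy - half), min(h, cy + half)
    x0, x1 = max(0, cx - half), min(w, cx + half)
    out = image.copy()
    noise = rng.normal(0.0, sigma, size=out[y0:y1, x0:x1].shape)
    out[y0:y1, x0:x1] = np.clip(out[y0:y1, x0:x1] + noise, 0.0, 1.0)
    return out
```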
1,662
Mixture Density Networks Find Viewpoint the Dominant Factor for Accurate Spatial Offset Regression
Offset regression is a standard method for spatial localization in many vision tasks, including human pose estimation, object detection, and instance segmentation. However, if high localization accuracy is crucial for a task, convolutional neural networks applied to offset regression usually struggle to deliver. This can be attributed to the locality of the convolution operation, exacerbated by variance in scale, clutter, and viewpoint. An even more fundamental issue is the multi-modality of real-world images. As a consequence, they cannot be approximated adequately using a single-mode model. Instead, we propose to use mixture density networks (MDNs) for offset regression, allowing the model to manage various modes efficiently and to learn to predict the full conditional density of the outputs given the input. On 2D human pose estimation in the wild, which requires accurate localisation of body keypoints, we show that this yields significant improvement in localization accuracy. In particular, our experiments reveal viewpoint variation as the dominant multi-modal factor. Further, by carefully initializing MDN parameters, we do not face any instabilities in training, which is known to be a big obstacle for widespread deployment of MDNs. The method can be readily applied to any task with a spatial regression component. Our findings highlight the multi-modal nature of real-world vision, and the significance of explicitly accounting for viewpoint variation, at least when spatial localization is concerned.
We use mixture density networks to do full conditional density estimation for spatial offset regression and apply it to the human pose estimation task.
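A generic mixture density head for 2-D offsets, given only as a hedged sketch; the paper's architecture and the careful initialization it relies on are not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MDNHead(nn.Module):
    """Predicts K Gaussian components (log-weights, means, scales) per input."""
    def __init__(self, in_dim, n_components=5, out_dim=2):
        super().__init__()
        self.k, self.d = n_components, out_dim
        self.pi = nn.Linear(in_dim, n_components)
        self.mu = nn.Linear(in_dim, n_components * out_dim)
        self.log_sigma = nn.Linear(in_dim, n_components * out_dim)

    def forward(self, h):
        log_pi = F.log_softmax(self.pi(h), dim=-1)                 # (B, K)
        mu = self.mu(h).view(-1, self.k, self.d)                   # (B, K, D)
        sigma = self.log_sigma(h).view(-1, self.k, self.d).exp()   # (B, K, D)
        return log_pi, mu, sigma

def mdn_nll(log_pi, mu, sigma, target):
    """Negative log-likelihood of target offsets under the predicted mixture."""
    comp = torch.distributions.Normal(mu, sigma)
    log_prob = comp.log_prob(target.unsqueeze(1)).sum(-1)          # (B, K)
    return -torch.logsumexp(log_pi + log_prob, dim=-1).mean()
```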
1,663
Visualizing Music Transformer
Like language, music can be represented as a sequence of discrete symbols that form a hierarchical syntax, with notes being roughly like characters and motifs of notes like words. Unlike text, however, music relies heavily on repetition on multiple timescales to build structure and meaning. The Music Transformer has shown compelling results in generating music with structure. In this paper, we introduce a tool for visualizing self-attention on polyphonic music with an interactive pianoroll. We use Music Transformer as both a descriptive tool and a generative model. For the former, we use it to analyze existing music to see if the resulting self-attention structure corroborates with the musical structure known from music theory. For the latter, we inspect the model's self-attention during generation, in order to understand how past notes affect future ones. We also compare and contrast the attention structure of regular attention to that of relative attention, and examine its impact on the resulting generated music. For example, for the JSB Chorales dataset, a model trained with relative attention is more consistent in attending to all the voices in the preceding timestep and the chords before, and at cadences to the beginning of a phrase, allowing it to create an arc. We hope that our analyses will offer more evidence for relative self-attention as a powerful inductive bias for modeling music. We invite the reader to explore our video animations of music attention and to interact with the visualizations at https://storage.googleapis.com/nips-workshop-visualization/index.html.
Visualizing the differences between regular and relative attention for Music Transformer.
1,664
Three factors influencing minima in SGD
We study the statistical properties of the endpoint of stochastic gradient descent.We approximate SGD as a stochastic differential equation and consider its Boltzmann Gibbs equilibrium distribution under the assumption of isotropic variance in loss gradients..Through this analysis, we find that three factors – learning rate, batch size and the variance of the loss gradients – control the trade-off between the depth and width of the minima found by SGD, with wider minima favoured by a higher ratio of learning rate to batch size. In the equilibrium distribution only the ratio of learning rate to batch size appears, implying that it’s invariant under a simultaneous rescaling of each by the same amount.We experimentally show how learning rate and batch size affect SGD from two perspectives: the endpoint of SGD and the dynamics that lead up to it. For the endpoint, the experiments suggest the endpoint of SGD is similar under simultaneous rescaling of batch size and learning rate, and also that a higher ratio leads to flatter minima, both findings are consistent with our theoretical analysis. We note experimentally that the dynamics also seem to be similar under the same rescaling of learning rate and batch size, which we explore showing that one can exchange batch size and learning rate in a cyclical learning rate schedule. Next, we illustrate how noise affects memorization, showing that high noise levels lead to better generalization. Finally, we find experimentally that the similarity under simultaneous rescaling of learning rate and batch size breaks down if the learning rate gets too large or the batch size gets too small.
Three factors (batch size, learning rate, gradient noise) change in predictable way the properties (e.g. sharpness) of minima found by SGD.
1,665
A closer look at the word analogy problem
Although word analogy problems have become a standard tool for evaluating word vectors, little is known about why word vectors are so good at solving these problems.In this paper, I attempt to further our understanding of the subject, by developing a simple, but highly accurate generative approach to solve the word analogy problem for the case when all terms involved in the problem are nouns.My results demonstrate the ambiguities associated with learning the relationship between a word pair, and the role of the training dataset in determining the relationship which gets most highlighted.Furthermore, my results show that the ability of a model to accurately solve the word analogy problem may not be indicative of a model’s ability to learn the relationship between a word pair the way a human does.
Simple generative approach to solve the word analogy problem which yields insights into word relationships, and the problems with estimating them
1,666
Rethinking the Hyperparameters for Fine-tuning
Fine-tuning from pre-trained ImageNet models has become the de-facto standard for various computer vision tasks. Current practices for fine-tuning typically involve selecting an ad-hoc choice of hyper-parameters and keeping them fixed to values normally used for training from scratch. This paper re-examines several common practices of setting hyper-parameters for fine-tuning. Our findings are based on extensive empirical evaluation for fine-tuning on various transfer learning benchmarks. While prior works have thoroughly investigated learning rate and batch size, momentum for fine-tuning is a relatively unexplored parameter. We find that picking the right value for momentum is critical for fine-tuning performance and connect it with previous theoretical findings. Optimal hyper-parameters for fine-tuning, in particular the effective learning rate, are not only dataset-dependent but also sensitive to the similarity between the source domain and target domain. This is in contrast to hyper-parameters for training from scratch. Reference-based regularization that keeps models close to the initial model does not necessarily apply to "dissimilar" datasets. Our findings challenge common practices of fine-tuning and encourage deep learning practitioners to rethink the hyper-parameters for fine-tuning.
This paper re-examines several common practices of setting hyper-parameters for fine-tuning.
1,667
DELTA: DEEP LEARNING TRANSFER USING FEATURE MAP WITH ATTENTION FOR CONVOLUTIONAL NETWORKS
Transfer learning through fine-tuning a pre-trained neural network with an extremely large dataset, such as ImageNet, can significantly accelerate training, while the accuracy is frequently bottlenecked by the limited dataset size of the new target task. To solve the problem, some regularization methods, constraining the outer layer weights of the target network using the starting point as references, have been studied. In this paper, we propose a novel regularized transfer learning framework DELTA, namely DEep Learning Transfer using Feature Map with Attention. Instead of constraining the weights of the neural network, DELTA aims to preserve the outer layer outputs of the target network. Specifically, in addition to minimizing the empirical loss, DELTA intends to align the outer layer outputs of two networks, through constraining a subset of feature maps that are precisely selected by attention that has been learned in a supervised learning manner. We evaluate DELTA with the state-of-the-art algorithms, including L2 and L2-SP. The experimental results show that our proposed method outperforms these baselines with higher accuracy for new tasks.
improving deep transfer learning with regularization using attention based feature maps
1,668
Multi-Dimensional Explanation of Reviews
Neural models have achieved considerable improvement for many natural language processing tasks, but they offer little transparency, and interpretability comes at a cost. In some domains, automated predictions without justifications have limited applicability. Recently, progress has been made regarding single-aspect sentiment analysis for reviews, where the ambiguity of a justification is minimal. In this context, a justification, or mask, consists of word sequences from the input text, which suffice to make the prediction. Existing models cannot handle more than one aspect in a single training run and induce binary masks that might be ambiguous. In our work, we propose a neural model that predicts multi-aspect sentiments for reviews and generates a probabilistic multi-dimensional mask simultaneously, in an unsupervised and multi-task learning manner. Our evaluation shows that, on three datasets in the beer and hotel domains, our model outperforms strong baselines and generates masks that are strong feature predictors, meaningful, and interpretable.
Neural model predicting multi-aspect sentiments and generating a probabilistic multi-dimensional mask simultaneously. Model outperforms strong baselines and generates masks that are: strong feature predictors, meaningful, and interpretable.
1,669
Alpha-divergence bridges maximum likelihood and reinforcement learning in neural sequence generation
Neural sequence generation is commonly approached by using maximum-likelihood estimation or reinforcement learning. However, it is known that they have their own shortcomings; ML presents training/testing discrepancy, whereas RL suffers from sample inefficiency. We point out that it is difficult to resolve all of the shortcomings simultaneously because of a tradeoff between ML and RL. In order to counteract these problems, we propose an objective function for sequence generation using α-divergence, which leads to an ML-RL integrated method that exploits better parts of ML and RL. We demonstrate that the proposed objective function generalizes ML and RL objective functions because it includes both as its special cases. We provide a proposition stating that the difference between the RL objective function and the proposed one monotonically decreases with increasing α. Experimental results on machine translation tasks show that minimizing the proposed objective function achieves better sequence generation performance than ML-based methods.
Propose new objective function for neural sequence generation which integrates ML-based and RL-based objective functions.
1,670
Siamese Capsule Networks
Capsule Networks have shown encouraging results on benchmark computer vision datasets such as MNIST, CIFAR and smallNORB. However, they are yet to be tested on tasks where the entities detected inherently have more complex internal representations, where there are very few instances per class to learn from, and where point-wise classification is not suitable. Hence, this paper carries out experiments on face verification in both controlled and uncontrolled settings that together address these points. In doing so we introduce Siamese Capsule Networks, a new variant that can be used for pairwise learning tasks. We find that the model improves over baselines in the few-shot learning setting, suggesting that capsule networks are efficient at learning discriminative representations when given few samples. We find that Siamese Capsule Networks perform well against strong baselines on both pairwise learning datasets when trained using a contrastive loss with normalized capsule-encoded pose features, yielding best results in the few-shot learning setting where image pairs in the test set contain unseen subjects.
A pairwise learned capsule network that performs well on face verification tasks given limited labeled data
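The contrastive loss referenced above is the standard pairwise form; a minimal sketch on pooled pair embeddings, with the capsule-specific encoding left out, is given below.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(emb_a, emb_b, same_identity, margin=1.0):
    """Pull matching pairs together and push non-matching pairs apart
    up to a margin. same_identity is a float tensor of 0/1 labels."""
    d = F.pairwise_distance(emb_a, emb_b)
    pos = same_identity * d.pow(2)
    neg = (1.0 - same_identity) * F.relu(margin - d).pow(2)
    return (pos + neg).mean()
```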
1,671
The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning
In this work we present a new agent architecture, called Reactor, which combines multiple algorithmic and architectural contributions to produce an agent with higher sample-efficiency than Prioritized Dueling DQN and Categorical DQN, while giving better run-time performance than A3C. Our first contribution is a new policy evaluation algorithm called Distributional Retrace, which brings multi-step off-policy updates to the distributional reinforcement learning setting. The same approach can be used to convert several classes of multi-step policy evaluation algorithms designed for expected value evaluation into distributional ones. Next, we introduce the β-leave-one-out policy gradient algorithm which improves the trade-off between variance and bias by using action values as a baseline. Our final algorithmic contribution is a new prioritized replay algorithm for sequences, which exploits the temporal locality of neighboring observations for more efficient replay prioritization. Using the Atari 2600 benchmarks, we show that each of these innovations contributes to both the sample efficiency and final agent performance. Finally, we demonstrate that Reactor reaches state-of-the-art performance after 200 million frames and less than a day of training.
Reactor combines multiple algorithmic and architectural contributions to produce an agent with higher sample-efficiency than Prioritized Dueling DQN while giving better run-time performance than A3C.
1,672
Parsing-based Approaches for Verification and Recognition of Hierarchical Plans
Hierarchical planning, in particular, Hierarchical Task Networks, was proposed as a method to describe plans by decomposition of tasks to sub-tasks until primitive tasks, actions, are obtained.Plan verification assumes a complete plan as input, and the objective is finding a task that decomposes to this plan.In plan recognition, a prefix of the plan is given and the objective is finding a task that decomposes to the plan with the given prefix.This paper describes how to verify and recognize plans using a common method known from formal grammars, by parsing.
The paper describes methods to verify and recognize HTN plans by parsing of attribute grammars.
1,673
EXPLORING NEURAL ARCHITECTURE SEARCH FOR LANGUAGE TASKS
Neural architecture search, the task of finding neural architectures automatically, has recently emerged as a promising approach for unveiling better models over human-designed ones. However, most success stories are for vision tasks and have been quite limited for text, except for a small language modeling setup. In this paper, we explore NAS for text sequences at scale, by first focusing on the task of language translation and later extending to reading comprehension. Starting from standard sequence-to-sequence models for translation, we conduct extensive searches over the recurrent cells and attention similarity functions across two translation tasks, IWSLT English-Vietnamese and WMT German-English. We report challenges in performing cell searches as well as demonstrate initial success on attention searches with translation improvements over strong baselines. In addition, we show that results on attention searches are transferable to reading comprehension on the SQuAD dataset.
We explore neural architecture search for language tasks. Recurrent cell search is challenging for NMT, but attention mechanism search works. The result of attention search on translation is transferable to reading comprehension.
1,674
Understanding and Improving Interpolation in Autoencoders via an Adversarial Regularizer
Autoencoders provide a powerful framework for learning compressed representations by encoding all of the information needed to reconstruct a data point in a latent code.In some cases, autoencoders can "interpolate": By decoding the convex combination of the latent codes for two datapoints, the autoencoder can produce an output which semantically mixes characteristics from the datapoints.In this paper, we propose a regularization procedure which encourages interpolated outputs to appear more realistic by fooling a critic network which has been trained to recover the mixing coefficient from interpolated data.We then develop a simple benchmark task where we can quantitatively measure the extent to which various autoencoders can interpolate and show that our regularizer dramatically improves interpolation in this setting.We also demonstrate empirically that our regularizer produces latent codes which are more effective on downstream tasks, suggesting a possible link between interpolation abilities and learning useful representations.
We propose a regularizer that improves interpolation in autoencoders and show that it also improves the learned representation for downstream tasks.
1,675
From Here to There: Video Inbetweening Using Direct 3D Convolutions
We consider the problem of generating plausible and diverse video sequences, when we are only given a start and an end frame.This task is also known as inbetweening, and it belongs to the broader area of stochastic video generation, which is generally approached by means of recurrent neural networks.In this paper, we propose instead a fully convolutional model to generate video sequences directly in the pixel domain.We first obtain a latent video representation using a stochastic fusion mechanism that learns how to incorporate information from the start and end frames.Our model learns to produce such latent representation by progressively increasing the temporal resolution, and then decode in the spatiotemporal domain using 3D convolutions.The model is trained end-to-end by minimizing an adversarial loss.Experiments on several widely-used benchmark datasets show that it is able to generate meaningful and diverse in-between video sequences, according to both quantitative and qualitative evaluations.
This paper presents a method for stochastically generating in-between video frames from given key frames, using direct 3D convolutions.
1,676
Weakly-supervised Knowledge Graph Alignment with Adversarial Learning
Aligning knowledge graphs from different sources or languages, which aims to align both the entity and relation, is critical to a variety of applications such as knowledge graph construction and question answering.Existing methods of knowledge graph alignment usually rely on a large number of aligned knowledge triplets to train effective models.However, these aligned triplets may not be available or are expensive to obtain for many domains.Therefore, in this paper we study how to design fully-unsupervised methods or weakly-supervised methods, i.e., to align knowledge graphs without or with only a few aligned triplets.We propose an unsupervised framework based on adversarial training, which is able to map the entities and relations in a source knowledge graph to those in a target knowledge graph.This framework can be further seamlessly integrated with existing supervised methods, where only a limited number of aligned triplets are utilized as guidance.Experiments on real-world datasets prove the effectiveness of our proposed approach in both the weakly-supervised and unsupervised settings.
This paper studies weakly-supervised knowledge graph alignment with adversarial training frameworks.
1,677
Meta-Learning to Guide Segmentation
There are myriad kinds of segmentation, and ultimately the "right" segmentation of a given scene is in the eye of the annotator. Standard approaches require large amounts of labeled data to learn just one particular kind of segmentation. As a first step towards relieving this annotation burden, we propose the problem of guided segmentation: given varying amounts of pixel-wise labels, segment unannotated pixels by propagating supervision locally and non-locally. We propose guided networks, which extract a latent task representation---guidance---from variable amounts and classes of pixel supervision and optimize our architecture end-to-end for fast, accurate, and data-efficient segmentation by meta-learning. To span the few-shot and many-shot learning regimes, we examine guidance from as little as one pixel per concept to as much as 1000+ images, and compare to full gradient optimization at both extremes. To explore generalization, we analyze guidance as a bridge between different levels of supervision to segment classes as the union of instances. Our segmentor concentrates different amounts of supervision of different types of classes into an efficient latent representation, non-locally propagates this supervision across images, and can be updated quickly and cumulatively when given more supervision.
We propose a meta-learning approach for guiding visual segmentation tasks from varying amounts of supervision.
1,678
LOGAN: Latent Optimisation for Generative Adversarial Networks
Training generative adversarial networks requires balancing of delicate adversarial dynamics. Even with careful tuning, training may diverge or end up in a bad equilibrium with dropped modes. In this work, we introduce a new form of latent optimisation inspired by the CS-GAN and show that it improves adversarial dynamics by enhancing interactions between the discriminator and the generator. We develop supporting theoretical analysis from the perspectives of differentiable games and stochastic approximation. Our experiments demonstrate that latent optimisation can significantly improve GAN training, obtaining state-of-the-art performance for the ImageNet dataset. Our model achieves an Inception Score of 148 and a Fréchet Inception Distance of 3.4, an improvement of 17% and 32% in IS and FID respectively, compared with the baseline BigGAN-deep model with the same architecture and number of parameters.
Latent optimisation improves adversarial training dynamics. We present both theoretical analysis and state-of-the-art image generation with ImageNet 128x128.
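In its simplest (non-natural-gradient) form, latent optimisation nudges the latent toward higher discriminator scores before the usual GAN updates; a hedged PyTorch sketch follows, with `generator` and `discriminator` as placeholder modules and a step size chosen for illustration only.

```python
import torch

def latent_step(z, generator, discriminator, alpha=0.9):
    """One gradient-ascent step on the discriminator score w.r.t. the latent."""
    z = z.clone().detach().requires_grad_(True)
    score = discriminator(generator(z)).sum()
    grad_z, = torch.autograd.grad(score, z)
    return (z + alpha * grad_z).detach()
```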
1,679
Theoretical properties of the global optimizer of two-layer Neural Network
In this paper, we study the problem of optimizing a two-layer artificial neural network that best fits a training dataset. We look at this problem in the setting where the number of parameters is greater than the number of sampled points. We show that, for a wide class of differentiable activation functions, arbitrary first-order optimal solutions satisfy global optimality provided the hidden layer is non-singular. We essentially show that these non-singular hidden layer matrices satisfy a "good" property for this broad class of activation functions. Techniques involved in proving this result inspire us to look at a new algorithmic framework, where in between two gradient steps on the hidden layer, we add a stochastic gradient descent step on the output layer. In this new algorithmic framework, we extend our earlier result and show that for all finite iterations the hidden layer satisfies the "good" property mentioned earlier, therefore partially explaining the success of noisy gradient methods and addressing the issue of data independence of our earlier result. Both of these results are easily extended to hidden layers given by a flat matrix from that of a square matrix. Results are applicable even if the network has more than one hidden layer, provided all inner hidden layers are arbitrary, satisfy non-singularity, all activations are from the given class of differentiable functions, and optimization is only with respect to the outermost hidden layer. Separately, we also study the smoothness properties of the objective function and show that it is actually Lipschitz smooth, i.e., its gradients do not change sharply. We use the smoothness properties to guarantee asymptotic convergence to a first-order optimal solution.
This paper studies theoretical properties of first-order optimal points of a two-layer neural network in the over-parametrized case.
1,680
Nonlinear Channels Aggregation Networks for Deep Action Recognition
We introduce the concept of channel aggregation in ConvNet architecture, a novel compact representation of CNN features useful for explicitly modeling the nonlinear channel encoding, especially when the new unit is embedded inside of deep architectures for action recognition. The channel aggregation is based on multiple-channel features of ConvNet and aims at finding the optimal convergence path at fast speed. We name our proposed convolutional architecture “nonlinear channels aggregation networks” and its new layer “nonlinear channels aggregation layer” (NCAL). We theoretically motivate channels aggregation functions and empirically study their effect on convergence speed and classification accuracy. Another contribution in this work is an efficient and effective implementation of the NCAL, speeding it up by orders of magnitude. We evaluate its performance on the standard benchmarks UCF101 and HMDB51, and experimental results demonstrate that this formulation not only obtains fast convergence but also stronger generalization capability without sacrificing performance.
An architecture that enables CNNs trained on video sequences to converge rapidly.
1,681
Black-Box Adversarial Attack with Transferable Model-based Embedding
We present a new method for black-box adversarial attack. Unlike previous methods that combined transfer-based and score-based methods by using the gradient or initialization of a surrogate white-box model, this new method tries to learn a low-dimensional embedding using a pretrained model, and then performs efficient search within the embedding space to attack an unknown target network. The method produces adversarial perturbations with high-level semantic patterns that are easily transferable. We show that this approach can greatly improve the query efficiency of black-box adversarial attack across different target network architectures. We evaluate our approach on MNIST, ImageNet and Google Cloud Vision API, resulting in a significant reduction in the number of queries. We also attack adversarially defended networks on CIFAR10 and ImageNet, where our method not only reduces the number of queries, but also improves the attack success rate.
We present a new method that combines transfer-based and score-based black-box adversarial attacks, improving the success rate and query efficiency of black-box adversarial attacks across different network architectures.
1,682
A Neuro-AI Interface: Learning DNNs from the Human Brain
Deep neural networks are inspired by the human brain, and the interconnection between the two has been widely studied in the literature. However, it is still an open question whether DNNs are able to make decisions like the brain. Previous work has demonstrated that DNNs, trained by matching the neural responses from inferior temporal cortex in the monkey's brain, are able to achieve human-level performance on image object recognition tasks. This indicates that neural dynamics can provide informative knowledge to help DNNs accomplish specific tasks. In this paper, we introduce the concept of a neuro-AI interface, which aims to use human neural responses as supervised information for helping AI systems solve a task that is difficult when using traditional machine learning strategies. In order to deliver the idea of neuro-AI interfaces, we focus on deploying it to one of the fundamental problems in generative adversarial networks: designing a proper evaluation metric to evaluate the quality of images produced by GANs.
Describe a neuro-AI interface technique to evaluate generative adversarial networks
1,683
A Scalable Risk-based Framework for Rigorous Autonomous Vehicle Evaluation
While recent developments in autonomous vehicle technology highlight substantial progress, we lack tools for rigorous and scalable testing.Real-world testing, the de facto evaluation environment, places the public in danger, and, due to the rare nature of accidents, will require billions of miles in order to statistically validate performance claims.We implement a simulation framework that can test an entire modern autonomous driving system, including, in particular, systems that employ deep-learning perception and control algorithms.Using adaptive sampling methods to accelerate rare-event probability evaluation, we estimate the probability of an accident under a base distribution governing standard traffic behavior.We demonstrate our framework on a highway scenario.
Using adaptive sampling methods to accelerate rare-event probability evaluation, we estimate the probability of an accident under a base distribution governing standard traffic behavior.
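The core estimator can be illustrated with plain (non-adaptive) importance sampling on a toy 1-D example; everything here, including the Gaussian base distribution and the shifted proposal, is a stand-in for the adaptive scheme described in the paper.

```python
import numpy as np
from scipy import stats

def failure_probability(simulate, threshold, n=100_000, shift=3.0, seed=0):
    """Estimate P(simulate(x) >= threshold) for x ~ N(0, 1) using a proposal
    shifted toward the failure region, then reweighting by p(x)/q(x)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(loc=shift, scale=1.0, size=n)                      # samples from q
    w = stats.norm.pdf(x, 0.0, 1.0) / stats.norm.pdf(x, shift, 1.0)   # importance weights
    failed = (simulate(x) >= threshold).astype(float)
    return float(np.mean(failed * w))

# e.g. failure_probability(lambda x: x, threshold=4.0) is roughly 3.2e-5
```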
1,684
Learning To Avoid Negative Transfer in Few Shot Transfer Learning
Many tasks in natural language understanding require learning relationships between two sequences for various tasks such as natural language inference, paraphrasing and entailment. These aforementioned tasks are similar in nature, yet they are often modeled individually. Knowledge transfer can be effective for closely related tasks, which is usually carried out using parameter transfer in neural networks. However, transferring all parameters, some of which are irrelevant for a target task, can lead to sub-optimal results and can have a negative effect on performance, referred to as negative transfer. Hence, this paper focuses on the transferability of both instances and parameters across natural language understanding tasks by proposing an ensemble-based transfer learning method in the context of few-shot learning. Our main contribution is a method for mitigating negative transfer across tasks when using neural networks, which involves dynamically bagging small recurrent neural networks trained on different subsets of the source task(s). We present a straightforward yet novel approach for incorporating these networks into a target task for few-shot learning by using a decaying parameter chosen according to the slope changes of a smoothed spline error curve at sub-intervals during training. Our proposed method shows improvements over hard and soft parameter sharing transfer methods in the few-shot learning case and shows competitive performance against models that are trained given full supervision on the target task, from only a few examples.
A dynamic bagging approach to avoiding negative transfer in neural network few-shot transfer learning.
1,685
Modeling treatment events in disease progression
The ability to quantify and predict progression of a disease is fundamental for selecting an appropriate treatment. Many clinical metrics cannot be acquired frequently either because of their cost or because they are inconvenient or harmful to a patient. In such scenarios, in order to estimate individual trajectories of disease progression, it is advantageous to leverage similarities between patients, i.e. the covariance of trajectories, and find a latent representation of progression. Most existing methods for estimating trajectories do not account for events in-between observations, which dramatically decreases their adequacy for clinical practice. In this study, we develop a machine learning framework named Coordinatewise-Soft-Impute (CSI) for analyzing disease progression from sparse observations in the presence of confounding events. CSI is guaranteed to converge to the global minimum of the corresponding optimization problem. Experimental results also demonstrate the effectiveness of CSI using both simulated and real datasets.
A novel matrix completion based algorithm to model disease progression with events
1,686
The Missing Ingredient in Zero-Shot Neural Machine Translation
Multilingual Neural Machine Translation systems are capable of translating between multiple source and target languages within a single system.An important indicator of generalization within these systems is the quality of zero-shot translation - translating between language pairs that the system has never seen during training.However, until now, the zero-shot performance of multilingual models has lagged far behind the quality that can be achieved by using a two step translation process that pivots through an intermediate language.In this work, we diagnose why multilingual models under-perform in zero shot settings.We propose explicit language invariance losses that guide an NMT encoder towards learning language agnostic representations.Our proposed strategies significantly improve zero-shot translation performance on WMT English-French-German and on the IWSLT 2017 shared task, and for the first time, match the performance of pivoting approaches while maintaining performance on supervised directions.
Simple similarity constraints on top of multilingual NMT enables high quality translation between unseen language pairs for the first time.
1,687
Finite Depth and Width Corrections to the Neural Tangent Kernel
We prove the precise scaling, at finite depth and width, for the mean and variance of the neural tangent kernel in a randomly initialized ReLU network. The standard deviation is exponential in the ratio of network depth to width. Thus, even in the limit of infinite overparameterization, the NTK is not deterministic if depth and width simultaneously tend to infinity. Moreover, we prove that for such deep and wide networks, the NTK has a non-trivial evolution during training by showing that the mean of its first SGD update is also exponential in the ratio of network depth to width. This is in sharp contrast to the regime where depth is fixed and network width is very large. Our results suggest that, unlike relatively shallow and wide networks, deep and wide ReLU networks are capable of learning data-dependent features even in the so-called lazy training regime.
The neural tangent kernel in a randomly initialized ReLU net has non-trivial fluctuations as long as the depth and width are comparable.
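A quick way to probe the depth-to-width effect empirically is to compute the diagonal NTK entry K(x, x) = ||grad_theta f(x)||^2 of a randomly initialized ReLU network over many draws of the initialization and compare its spread for different depth/width ratios. The architecture, the default PyTorch initialization, and the sample sizes below are arbitrary choices for illustration, not the parameterization used in the theory.

import torch
import torch.nn as nn

def ntk_diag(depth, width, x, n_draws=200):
    """Empirical mean/std of K(x, x) = ||grad_theta f(x)||^2 over random inits."""
    values = []
    for _ in range(n_draws):
        layers, d_in = [], x.numel()
        for _ in range(depth):
            layers += [nn.Linear(d_in, width), nn.ReLU()]
            d_in = width
        layers += [nn.Linear(d_in, 1)]
        net = nn.Sequential(*layers)
        out = net(x).squeeze()
        grads = torch.autograd.grad(out, list(net.parameters()))
        values.append(sum(g.pow(2).sum() for g in grads).item())
    values = torch.tensor(values)
    return values.mean().item(), values.std().item()

x = torch.randn(1, 16)
# compare a shallow/wide net with a net whose depth is comparable to its width
print(ntk_diag(depth=3, width=64, x=x))
print(ntk_diag(depth=32, width=64, x=x))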
1,688
Tensor Decompositions for Temporal Knowledge Base Completion
Most algorithms for representation learning and link prediction in relational data have been designed for static data. However, the data they are applied to usually evolves with time, such as friend graphs in social networks or user interactions with items in recommender systems. This is also the case for knowledge bases, which contain facts that are valid only at certain points in time. For the problem of link prediction under temporal constraints, i.e., answering queries that include a temporal constraint, we propose a solution inspired by the canonical decomposition of tensors of order 4. We introduce new regularization schemes and present an extension of ComplEx that achieves state-of-the-art performance. Additionally, we propose a new dataset for knowledge base completion constructed from Wikidata, larger than previous benchmarks by an order of magnitude, as a new reference for evaluating temporal and non-temporal link prediction methods.
We propose new tensor decompositions and associated regularizers to obtain state-of-the-art performance on temporal knowledge base completion.
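As a concrete illustration of an order-4, ComplEx-style factorization with time, one option is to modulate the relation embedding by a timestamp embedding elementwise before taking the usual ComplEx score. The parameterization and regularizers in the paper may differ; this sketch only shows the general shape of such a model.

import torch

def temporal_complex_score(e_s, w_r, e_o, w_t):
    """Re(<e_s, w_r * w_t, conj(e_o)>) for complex embeddings stored as
    (..., 2*rank) tensors with real and imaginary halves concatenated."""
    rank = e_s.shape[-1] // 2
    s_re, s_im = e_s[..., :rank], e_s[..., rank:]
    r_re, r_im = w_r[..., :rank], w_r[..., rank:]
    o_re, o_im = e_o[..., :rank], e_o[..., rank:]
    t_re, t_im = w_t[..., :rank], w_t[..., rank:]
    # complex elementwise product of relation and time embeddings
    rt_re = r_re * t_re - r_im * t_im
    rt_im = r_re * t_im + r_im * t_re
    # real part of the trilinear ComplEx product
    return ((s_re * rt_re - s_im * rt_im) * o_re
            + (s_re * rt_im + s_im * rt_re) * o_im).sum(dim=-1)

# toy usage: score one (subject, relation, object, timestamp) quadruple
rank = 4
score = temporal_complex_score(*(torch.randn(2 * rank) for _ in range(4)))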
1,689
Beyond Greedy Ranking: Slate Optimization via List-CVAE
The conventional approach to solving the recommendation problem greedily ranks individual document candidates by prediction scores. However, this method fails to optimize the slate as a whole, and hence often struggles to capture biases caused by the page layout and document interdependencies. The slate recommendation problem aims to directly find the optimally ordered subset of documents that best serves users' interests. Solving this problem is hard due to the combinatorial explosion of document candidates and their display positions on the page. Therefore, we propose a paradigm shift from the traditional viewpoint of solving a ranking problem to a direct slate generation framework. In this paper, we introduce List Conditional Variational Auto-Encoders (List-CVAE), which learn the joint distribution of documents on the slate conditioned on user responses, and directly generate full slates. Experiments on simulated and real-world data show that List-CVAE outperforms greedy ranking methods consistently on various scales of document corpora.
We used a CVAE-type model structure to learn to directly generate slates/whole pages for recommendation systems.
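A minimal conditional-VAE forward pass for slate generation, conditioned on a user-response vector, is sketched below. Layer sizes, how the condition is injected, and the reconstruction target are assumptions for illustration; they are not taken from the paper.

import torch
import torch.nn as nn

class SlateCVAE(nn.Module):
    """Encode a whole slate (slate_size x doc_dim, flattened) together with the
    user-response condition, sample a latent code, and decode a full slate."""
    def __init__(self, slate_size, doc_dim, cond_dim, latent_dim=16, hidden=128):
        super().__init__()
        in_dim = slate_size * doc_dim
        self.encoder = nn.Sequential(nn.Linear(in_dim + cond_dim, hidden), nn.ReLU())
        self.to_mu = nn.Linear(hidden, latent_dim)
        self.to_logvar = nn.Linear(hidden, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, in_dim))
        self.slate_size, self.doc_dim = slate_size, doc_dim

    def forward(self, slate, cond):
        flat = slate.flatten(start_dim=1)
        h = self.encoder(torch.cat([flat, cond], dim=-1))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
        recon = self.decoder(torch.cat([z, cond], dim=-1))
        recon = recon.view(-1, self.slate_size, self.doc_dim)
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1).mean()
        return recon, kl

# toy usage: slates of 5 documents with 8-dim embeddings, 3-dim response condition
model = SlateCVAE(slate_size=5, doc_dim=8, cond_dim=3)
recon, kl = model(torch.randn(4, 5, 8), torch.randn(4, 3))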
1,690
EvoNet: A Neural Network for Predicting the Evolution of Dynamic Graphs
Neural networks for structured data like graphs have been studied extensively in recent years. To date, the bulk of research activity has focused mainly on static graphs. However, most real-world networks are dynamic since their topology tends to change over time. Predicting the evolution of dynamic graphs is a task of high significance in the area of graph mining. Despite its practical importance, the task has not been explored in depth so far, mainly due to its challenging nature. In this paper, we propose a model that predicts the evolution of dynamic graphs. Specifically, we use a graph neural network along with a recurrent architecture to capture the temporal evolution patterns of dynamic graphs. Then, we employ a generative model which predicts the topology of the graph at the next time step and constructs a graph instance that corresponds to that topology. We evaluate the proposed model on several artificial datasets following common network evolving dynamics, as well as on real-world datasets. Results demonstrate the effectiveness of the proposed model.
Combining graph neural networks with an RNN-based graph generative model, we propose a novel architecture that is able to learn from a sequence of evolving graphs and predict the graph topology at future timesteps.
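The overall pipeline described above (graph encoder, recurrent state, decoder for the next topology) can be sketched in a few lines. The single mean-pooled GCN layer, GRU cell, and dense edge-logit decoder below are placeholder choices for illustration, not the model's actual components.

import torch
import torch.nn as nn

class EvolvingGraphPredictor(nn.Module):
    """Encode each graph snapshot, update a recurrent state over time, and
    decode logits for every possible edge of the next snapshot."""
    def __init__(self, n_nodes, feat_dim, hidden=64):
        super().__init__()
        self.gcn = nn.Linear(feat_dim, hidden)            # one-layer GCN weight
        self.rnn = nn.GRUCell(hidden, hidden)
        self.edge_decoder = nn.Linear(hidden, n_nodes * n_nodes)
        self.n_nodes = n_nodes

    def forward(self, adjs, feats):
        """adjs: list of (n, n) adjacency matrices; feats: list of (n, d) features."""
        state = torch.zeros(1, self.rnn.hidden_size)
        for A, X in zip(adjs, feats):
            A_hat = A + torch.eye(self.n_nodes)            # add self-loops
            deg = A_hat.sum(dim=1, keepdim=True).clamp(min=1.0)
            H = torch.relu(self.gcn((A_hat / deg) @ X))    # normalized propagation
            graph_emb = H.mean(dim=0, keepdim=True)        # pool nodes -> graph vector
            state = self.rnn(graph_emb, state)
        return self.edge_decoder(state).view(self.n_nodes, self.n_nodes)

# toy usage: 3 snapshots of a 6-node graph with 4-dim node features
model = EvolvingGraphPredictor(n_nodes=6, feat_dim=4)
adjs = [torch.bernoulli(torch.full((6, 6), 0.3)) for _ in range(3)]
feats = [torch.randn(6, 4) for _ in range(3)]
next_edge_logits = model(adjs, feats)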
1,691
Learning to Decompose Compound Questions with Reinforcement Learning
In knowledge-based question answering, a fundamental problem is to relax the assumption on answerable questions from simple questions to compound questions. Traditional approaches first detect the topic entity mentioned in a question and then traverse the knowledge graph to find relations forming a multi-hop path to the answers, whereas we propose a novel approach that leverages simple-question answerers to answer compound questions. Our model consists of two parts: a novel learning-to-decompose agent that learns a policy to decompose a compound question into simple questions, and three independent simple-question answerers that classify the corresponding relation for each simple question. Experiments demonstrate that our model learns complex rules of compositionality as a stochastic policy, which enables simple neural networks to achieve state-of-the-art results on WebQuestions and MetaQA. We analyze the interpretable decomposition process as well as the generated partitions.
We propose a learning-to-decompose agent that helps simple-question answerers answer compound questions over a knowledge graph.
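A skeletal REINFORCE-style decomposition policy: each question token is stochastically assigned to one of a few simple sub-questions, and the log-probability of the sampled partition is weighted by the reward obtained from the downstream simple-question answerers. All module names and the reward interface are assumptions for illustration.

import torch
import torch.nn as nn

class DecomposePolicy(nn.Module):
    """Assign every token of a compound question to one of n_parts sub-questions."""
    def __init__(self, vocab_size, n_parts=3, emb_dim=64, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_parts)

    def forward(self, token_ids):
        h, _ = self.encoder(self.embed(token_ids))         # (batch, len, hidden)
        dist = torch.distributions.Categorical(logits=self.head(h))
        assignment = dist.sample()                          # (batch, len) partition ids
        return assignment, dist.log_prob(assignment).sum(dim=1)

def reinforce_loss(log_prob, reward, baseline=0.0):
    """Policy-gradient loss; the reward comes from the simple-question answerers."""
    return -((reward - baseline) * log_prob).mean()

# toy usage
policy = DecomposePolicy(vocab_size=1000)
tokens = torch.randint(0, 1000, (2, 9))                     # two compound questions
assignment, log_prob = policy(tokens)
loss = reinforce_loss(log_prob, reward=torch.tensor([1.0, 0.0]))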
1,692
Matrix Product Operator Restricted Boltzmann Machines
A restricted Boltzmann machine (RBM) learns a probabilistic distribution over its input samples and has numerous uses such as dimensionality reduction, classification and generative modeling. Conventional RBMs accept vectorized data, which discards potentially important structural information in the original tensor input. Matrix-variate and tensor-variate RBMs, named MvRBM and TvRBM, have been proposed, but both are restrictive by construction. This work presents the matrix product operator (MPO) RBM, which utilizes a tensor network generalization of Mv/TvRBM, preserves input formats in both the visible and hidden layers, and results in higher expressive power. A novel training algorithm integrating contrastive divergence and an alternating optimization procedure is also developed.
We propose a general tensor-based RBM model which greatly compresses the model while keeping strong expressive capacity.
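The core idea of an MPO parameterization is that the dense RBM weight matrix is represented by a chain of small 4-way cores. The sketch below contracts such a chain back into the full matrix, which is enough to see how the format compresses parameters; the paper's training procedure is not shown, and the core shapes are made up.

import numpy as np

def mpo_to_matrix(cores):
    """Contract MPO cores G_k of shape (r_{k-1}, m_k, n_k, r_k), with r_0 = r_K = 1,
    into the full (prod m_k) x (prod n_k) weight matrix."""
    result = cores[0]
    for core in cores[1:]:
        result = np.tensordot(result, core, axes=([-1], [0]))
    result = result.squeeze(axis=(0, -1))           # drop the trivial boundary bonds
    row_dims = result.shape[0::2]
    col_dims = result.shape[1::2]
    perm = list(range(0, result.ndim, 2)) + list(range(1, result.ndim, 2))
    return result.transpose(perm).reshape(int(np.prod(row_dims)), int(np.prod(col_dims)))

# toy usage: a 16x16 visible-to-hidden weight matrix built from three small cores
rng = np.random.default_rng(0)
shapes = [(1, 4, 4, 3), (3, 2, 2, 3), (3, 2, 2, 1)]          # row/col factors: 4*2*2 = 16
cores = [rng.normal(size=s) for s in shapes]
W = mpo_to_matrix(cores)
print(W.shape, sum(c.size for c in cores), W.size)           # (16, 16), 96 MPO params vs 256 dense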
1,693
Robust Reinforcement Learning for Autonomous Driving
Autonomous driving is still considered an “unsolved problem” given its inherent variability and the fact that many processes associated with its development, such as vehicle control and scene recognition, remain open issues. Although reinforcement learning algorithms have achieved notable results in games and in some robotic manipulation tasks, this technique has not been widely scaled up to the more challenging real-world applications such as autonomous driving. In this work, we propose a deep reinforcement learning algorithm embedding an actor-critic architecture with multi-step returns to achieve better robustness of the agent's learning strategies when acting in complex and unstable environments. The experiment is conducted with the Carla simulator, which offers customizable and realistic urban driving conditions. The developed deep actor RL guided by a policy-evaluator critic distinctly surpasses the performance of a standard deep RL agent.
An actor-critic reinforcement learning approach with multi-step returns applied to autonomous driving with Carla simulator.
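The multi-step return used by such an actor-critic agent can be written compactly: accumulate n observed rewards and bootstrap from the critic's value estimate of the state reached after n steps. A generic sketch, not tied to the paper's exact update, follows.

import numpy as np

def n_step_return(rewards, value_bootstrap, gamma=0.99):
    """G_t = r_t + gamma*r_{t+1} + ... + gamma^{n-1}*r_{t+n-1} + gamma^n * V(s_{t+n}),
    computed for a single rollout segment of length n."""
    g = value_bootstrap
    for r in reversed(rewards):
        g = r + gamma * g
    return g

# toy usage: 5-step return with the critic's value of the last state as bootstrap
print(n_step_return(rewards=[0.1, 0.0, -0.2, 0.0, 1.0], value_bootstrap=0.5))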
1,694
A Classification-Based Perspective on GAN Distributions
A fundamental, and still largely unanswered, question in the context of Generative Adversarial Networks (GANs) is whether GANs are actually able to capture the key characteristics of the datasets they are trained on. The current approaches to examining this issue require significant human supervision, such as visual inspection of sampled images, and often offer only fairly limited scalability. In this paper, we propose new techniques that employ a classification-based perspective to evaluate synthetic GAN distributions and their capability to accurately reflect the essential properties of the training data. These techniques require only minimal human supervision and can easily be scaled and adapted to evaluate a variety of state-of-the-art GANs on large, popular datasets. They also indicate that GANs have significant problems in reproducing the distributional properties of the training dataset. In particular, the diversity of such synthetic data is orders of magnitude smaller than that of the original data.
We propose new methods for evaluating and quantifying the quality of synthetic GAN distributions from the perspective of classification tasks
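One instantiation of the classification-based perspective is to train a classifier on labeled synthetic samples and test it on held-out real data: if the synthetic distribution misses modes or lacks diversity, accuracy on real data drops. The classifier choice and the data interface below are illustrative assumptions, not the paper's exact protocol.

import numpy as np
from sklearn.linear_model import LogisticRegression

def synthetic_vs_real_accuracy(fake_x, fake_y, real_x, real_y):
    """Train on GAN samples (with labels, e.g. from a conditional GAN) and
    evaluate on real held-out data; low accuracy signals missing diversity/modes."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(fake_x.reshape(len(fake_x), -1), fake_y)
    return clf.score(real_x.reshape(len(real_x), -1), real_y)

# toy usage with random stand-ins for image features
rng = np.random.default_rng(0)
fake_x, fake_y = rng.normal(size=(500, 32)), rng.integers(0, 10, size=500)
real_x, real_y = rng.normal(size=(200, 32)), rng.integers(0, 10, size=200)
print(synthetic_vs_real_accuracy(fake_x, fake_y, real_x, real_y))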
1,695
A Deep Learning Approach for Survival Clustering without End-of-life Signals
The goal of survival clustering is to map subjects to clusters ranging from low-risk to high-risk. Existing survival methods assume the presence of clear end-of-life signals or introduce them artificially using a pre-defined timeout. In this paper, we forego this assumption and introduce a loss function that differentiates between the empirical lifetime distributions of the clusters using a modified Kuiper statistic. We learn a deep neural network by optimizing this loss; the network performs a soft clustering of users into survival groups. We apply our method to a social network dataset with over 1M subjects, and show significant improvement in C-index compared to alternatives.
The goal of survival clustering is to map subjects into clusters. Without end-of-life signals, this is a challenging task. To address it, we propose a new loss function based on a modified Kuiper statistic.
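The Kuiper statistic between two empirical lifetime distributions is the sum of the largest positive and largest negative deviations between their empirical CDFs. A plain, non-differentiable version is sketched below; the paper's loss modifies this statistic so it can train a neural network.

import numpy as np

def kuiper_statistic(lifetimes_a, lifetimes_b):
    """V = max(F_a - F_b) + max(F_b - F_a), with F the empirical CDFs evaluated
    on the pooled set of observed lifetimes."""
    grid = np.sort(np.concatenate([lifetimes_a, lifetimes_b]))
    F_a = np.searchsorted(np.sort(lifetimes_a), grid, side="right") / len(lifetimes_a)
    F_b = np.searchsorted(np.sort(lifetimes_b), grid, side="right") / len(lifetimes_b)
    return np.max(F_a - F_b) + np.max(F_b - F_a)

# toy usage: lifetimes drawn from two different risk groups
rng = np.random.default_rng(0)
print(kuiper_statistic(rng.exponential(1.0, 300), rng.exponential(2.0, 300)))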
1,696
A Copula approach for hyperparameter transfer learning
Bayesian optimization (BO) is a popular methodology to tune the hyperparameters of expensive black-box functions. Despite its success, standard BO focuses on a single task at a time and is not designed to leverage information from related functions, such as tuning performance metrics of the same algorithm across multiple datasets. In this work, we introduce a novel approach to achieve transfer learning across different datasets as well as different metrics. The main idea is to regress the mapping from hyperparameters to metric quantiles with a semi-parametric Gaussian Copula distribution, which provides robustness against the different scales or outliers that can occur in different tasks. We introduce two methods to leverage this estimation: a Thompson sampling strategy as well as a Gaussian Copula process that uses this quantile estimate as a prior. We show that these strategies can combine the estimation of multiple metrics such as runtime and accuracy, steering the optimization toward cheaper hyperparameters for the same level of accuracy. Experiments on an extensive set of hyperparameter tuning tasks demonstrate significant improvements over state-of-the-art methods.
We show how using semi-parametric prior estimations can speed up HPO significantly across datasets and metrics.
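The Copula idea hinges on mapping each task's metric observations through their empirical quantiles into a common Gaussian scale, where a GP or Thompson-sampling surrogate can then be fit. A bare-bones version of that transform is sketched below; the semi-parametric estimator in the paper is more refined.

import numpy as np
from scipy.stats import norm

def to_gaussian_copula_scale(metric_values):
    """Map raw metric values (e.g. validation errors from one task) to standard
    normal scores via their empirical quantiles, making tasks with different
    scales or outliers comparable."""
    n = len(metric_values)
    ranks = np.argsort(np.argsort(metric_values)) + 1        # ranks 1..n
    quantiles = ranks / (n + 1.0)                             # keep away from 0 and 1
    return norm.ppf(quantiles)

# toy usage: metric values on very different scales become comparable scores
print(to_gaussian_copula_scale(np.array([1.2, 3.4, 0.9, 150.0, 2.2])))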
1,697
An Etching Latte Art Support System by Tracing the Making Procedure Based on Projection Mapping
It is difficult for beginners in etching latte art to make well-balanced patterns using two fluids with different viscosities, such as foamed milk and syrup. Even when watching videos that show the making procedure, it is difficult for them to keep the pattern balanced, so well-balanced etching latte art cannot be made easily. In this paper, we propose a system that supports beginners in making well-balanced etching latte art by projecting the making procedure directly onto a cappuccino. The experimental results show the improvement achieved by using our system. We also discuss the similarity between the produced etching latte art and the design templates using background subtraction.
We have developed an etching latte art support system that projects the making procedure directly onto a cappuccino to help beginners make well-balanced etching latte art.
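The similarity check mentioned at the end of the abstract can be approximated by simple background subtraction: binarize the photographed latte art and the design template against the background and count how many pixels agree. The thresholding scheme and image interface below are assumptions for illustration, not the authors' exact measure.

import numpy as np

def pattern_similarity(photo_gray, template_gray, threshold=0.5):
    """Binarize both grayscale images (values in [0, 1]) against the background
    and return the fraction of pixels on which the foreground masks agree."""
    photo_fg = photo_gray > threshold
    template_fg = template_gray > threshold
    return float(np.mean(photo_fg == template_fg))

# toy usage with random stand-ins for 64x64 grayscale images
rng = np.random.default_rng(0)
print(pattern_similarity(rng.random((64, 64)), rng.random((64, 64))))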
1,698
Learning Temporal Coherence via Self-Supervision for GAN-based Video Generation
We focus on temporal self-supervision for GAN-based video generation tasks. While adversarial training successfully yields generative models for a variety of areas, temporal relationships in the generated data are much less explored. This is crucial for sequential generation tasks, e.g. video super-resolution and unpaired video translation. For the former, state-of-the-art methods often favor simpler norm losses such as L2 over adversarial training. However, their averaging nature easily leads to temporally smooth results with an undesirable lack of spatial detail. For unpaired video translation, existing approaches modify the generator networks to form spatio-temporal cycle consistencies. In contrast, we focus on improving the learning objectives and propose a temporally self-supervised algorithm. For both tasks, we show that temporal adversarial learning is key to achieving temporally coherent solutions without sacrificing spatial detail. We also propose a novel Ping-Pong loss to improve long-term temporal consistency. It effectively prevents recurrent networks from accumulating artifacts temporally without suppressing detailed features. Additionally, we propose a first set of metrics to quantitatively evaluate the accuracy as well as the perceptual quality of the temporal evolution. A series of user studies confirms the rankings computed with these metrics.
We propose temporal self-supervision for learning stable temporal functions with GANs.
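A Ping-Pong-style consistency term can be formed by running the recurrent generator over the clip forward and over the reversed clip, and penalizing the difference between the two outputs for each frame. The L2 penalty and the generator interface below are illustrative assumptions rather than the paper's exact formulation.

import torch

def ping_pong_loss(generate, frames):
    """generate: callable mapping a list of input frames to a list of output frames
    (a recurrent generator). frames: list of (C, H, W) tensors.
    Outputs of the forward pass and the reversed ('pong') pass should agree."""
    forward_out = generate(frames)
    backward_out = generate(frames[::-1])[::-1]          # run on reversed clip, re-align
    return torch.stack([
        (f - b).pow(2).mean() for f, b in zip(forward_out, backward_out)
    ]).mean()

# toy usage with an identity "generator"
clip = [torch.randn(3, 16, 16) for _ in range(5)]
print(ping_pong_loss(lambda xs: [x * 1.0 for x in xs], clip))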
1,699
Bridging HMMs and RNNs through Architectural Transformations
A distinct commonality between HMMs and RNNs is that they both learn hidden representations for sequential data. In addition, it has been noted that the backward computation of the Baum-Welch algorithm for HMMs is a special case of the back-propagation algorithm used for neural networks. Do these observations suggest that, despite their many apparent differences, HMMs are a special case of RNNs? In this paper, we investigate a series of architectural transformations between HMMs and RNNs, both through theoretical derivations and empirical hybridization, to answer this question. In particular, we investigate three key design factors, namely the independence assumptions between the hidden states and the observation, the placement of the softmax, and the use of non-linearity, in order to pin down their empirical effects. We present a comprehensive empirical study to provide insights on the interplay between expressivity and interpretability with respect to language modeling and parts-of-speech induction.
Are HMMs a special case of RNNs? We investigate a series of architectural transformations between HMMs and RNNs, both through theoretical derivations and empirical hybridization and provide new insights.
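The connection the abstract investigates is easy to see in code: the HMM forward recursion updates a hidden belief vector with a matrix multiplication followed by an elementwise, observation-dependent reweighting, which is structurally one particular choice of RNN cell. A minimal sketch with made-up toy parameters follows.

import numpy as np

def hmm_forward_step(alpha_prev, transition, emission, obs):
    """One step of the HMM forward algorithm, written as an RNN-style cell:
    alpha_t is proportional to (alpha_{t-1} @ A) * B[:, obs_t]."""
    alpha = (alpha_prev @ transition) * emission[:, obs]
    return alpha / alpha.sum()                        # normalize (scaled forward pass)

# toy usage: 2 hidden states, 3 observation symbols
A = np.array([[0.9, 0.1], [0.2, 0.8]])                # transition probabilities
B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])      # emission probabilities
alpha = np.array([0.5, 0.5])                          # initial state distribution
for obs in [0, 2, 1]:
    alpha = hmm_forward_step(alpha, A, B, obs)
print(alpha)                                          # filtered posterior over hidden states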