Dataset schema:
- Unnamed: 0: int64, values 0 to 1.83k
- Clean_Title: string, lengths 8 to 153
- Clean_Text: string, lengths 330 to 2.26k
- Clean_Summary: string, lengths 53 to 295
100
Multitask Soft Option Learning
We present Multitask Soft Option Learning, a hierarchical multi-task framework based on Planning-as-Inference. MSOL extends the concept of Options, using separate variational posteriors for each task, regularized by a shared prior. The learned soft-options are temporally extended, allowing a higher-level master policy to train faster on new tasks by making decisions with lower frequency. Additionally, MSOL allows fine-tuning of soft-options for new tasks without unlearning previously useful behavior, and avoids problems with local minima in multitask training. We demonstrate empirically that MSOL significantly outperforms both hierarchical and flat transfer-learning baselines in challenging multi-task environments.
In Hierarchical RL, we introduce the notion of a 'soft', i.e. adaptable, option and show that this helps learning in multitask settings.
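A schematic form of the objective suggested by this abstract (my reading; the temperature β and the notation are assumptions, not the paper's): each task-specific posterior policy is trained for return while a KL term keeps it close to the shared option prior, which is what makes the learned options transferable:

$$\max_{\pi_0,\,\pi_{1:T}}\;\sum_{t=1}^{T}\mathbb{E}_{\pi_t}\!\Big[\sum_{k}\gamma^{k}r_{k}\Big]\;-\;\beta\sum_{t=1}^{T}\mathrm{KL}\big(\pi_t\,\|\,\pi_0\big)$$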
101
Guided variational autoencoder for disentanglement learning
We propose an algorithm, guided variational autoencoder, that is able to learn a controllable generative model by performing latent representation disentanglement learning. The learning objective is achieved by providing signal to the latent encoding/embedding in VAE without changing its main backbone architecture, hence retaining the desirable properties of the VAE. We design an unsupervised and a supervised strategy in Guided-VAE and observe enhanced modeling and controlling capability over the vanilla VAE. In the unsupervised strategy, we guide the VAE learning by introducing a lightweight decoder that learns latent geometric transformation and principal components; in the supervised strategy, we use an adversarial excitation and inhibition mechanism to encourage the disentanglement of the latent variables. Guided-VAE enjoys its transparency and simplicity for the general representation learning task, as well as disentanglement learning. In a number of experiments we observe improved synthesis/sampling for representation learning, better disentanglement for classification, and reduced classification errors in meta learning.
Learning a controllable generative model by performing latent representation disentanglement learning.
102
Learning to Generate Filters for Convolutional Neural Networks
Conventionally, convolutional neural networks process different images with the same set of filters. However, the variations across images pose a challenge to this approach. In this paper, we propose to generate sample-specific filters for convolutional layers in the forward pass. Since the filters are generated on-the-fly, the model becomes more flexible and can better fit the training data compared to traditional CNNs. In order to obtain sample-specific features, we extract the intermediate feature maps from an autoencoder. As filters are usually high dimensional, we propose to learn a set of coefficients instead of a set of filters. These coefficients are used to linearly combine the base filters from a filter repository to generate the final filters for a CNN. The proposed method is evaluated on the MNIST, MTFL and CIFAR10 datasets. Experiment results demonstrate that the classification accuracy of the baseline model can be improved by using the proposed filter generation method.
dynamically generate filters conditioned on the input image for CNNs in each forward pass
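A minimal sketch of the described scheme, under my own assumptions about shapes and interfaces (the class name, repository size, and softmax mixing are illustrative, not the paper's exact design): per-sample coefficients, predicted from an autoencoder feature, linearly combine a shared repository of base filters, and the resulting sample-specific filters are applied via a grouped convolution.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConv(nn.Module):
    def __init__(self, in_ch, out_ch, k=3, num_base=8, feat_dim=64):
        super().__init__()
        self.k = k
        # Repository of num_base base filters, shared across all samples.
        self.base = nn.Parameter(0.01 * torch.randn(num_base, out_ch, in_ch, k, k))
        # Maps an autoencoder feature of the input image to mixing coefficients.
        self.coeff = nn.Linear(feat_dim, num_base)

    def forward(self, x, feat):
        # x: (B, in_ch, H, W); feat: (B, feat_dim) sample-specific code.
        alpha = torch.softmax(self.coeff(feat), dim=1)               # (B, K)
        filters = torch.einsum('bk,koihw->boihw', alpha, self.base)  # per-sample filters
        B, _, H, W = x.shape
        # Grouped conv applies each sample's own filters to that sample only.
        out = F.conv2d(x.reshape(1, -1, H, W),
                       filters.reshape(-1, *filters.shape[2:]),
                       groups=B, padding=self.k // 2)
        return out.reshape(B, -1, out.shape[-2], out.shape[-1])
```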
103
Doubly Nested Network for Resource-Efficient Inference
We propose a new anytime neural network which allows partial evaluation by subnetworks with different widths as well as depths. Compared to conventional anytime networks, which offer only depth controllability, the increased architectural diversity leads to higher resource utilization and consequent performance improvement under various and dynamic resource budgets. We highlight architectural features to make our scheme feasible as well as efficient, and show its effectiveness in image classification tasks.
We propose a new anytime neural network which allows partial evaluation by subnetworks with different widths as well as depths.
104
Learning to Make Generalizable and Diverse Predictions for Retrosynthesis
We propose a new model for making generalizable and diverse retrosynthetic reaction predictions. Given a target compound, the task is to predict the likely chemical reactants to produce the target. This generative task can be framed as a sequence-to-sequence problem by using the SMILES representations of the molecules. Building on top of the popular Transformer architecture, we propose two novel pre-training methods that construct relevant auxiliary tasks for our problem. Furthermore, we incorporate a discrete latent variable model into the architecture to encourage the model to produce a diverse set of alternative predictions. On the 50k subset of reaction examples from the United States patent literature benchmark dataset, our model greatly improves performance over the baseline, while also generating predictions that are more diverse.
We propose a new model for making generalizable and diverse retrosynthetic reaction predictions.
105
Trust-PCL: An Off-Policy Trust Region Method for Continuous Control
Trust region methods, such as TRPO, are often used to stabilize policy optimization algorithms in reinforcement learning. While current trust region strategies are effective for continuous control, they typically require a large amount of on-policy interaction with the environment. To address this problem, we propose an off-policy trust region method, Trust-PCL, which exploits an observation that the optimal policy and state values of a maximum reward objective with a relative-entropy regularizer satisfy a set of multi-step pathwise consistencies along any path. The introduction of relative entropy regularization allows Trust-PCL to maintain optimization stability while exploiting off-policy data to improve sample efficiency. When evaluated on a number of continuous control tasks, Trust-PCL significantly improves the solution quality and sample efficiency of TRPO.
We extend recent insights related to softmax consistency to achieve state-of-the-art results in continuous control.
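For background, the multi-step pathwise consistency this line of work builds on (the softmax consistency of PCL, of which Trust-PCL is the relative-entropy variant; the exact form here is my recollection of the entropy-regularized case, with temperature τ) says the optimal policy and values satisfy, along any sub-trajectory of length d:

$$V^{*}(s_0)\;=\;\gamma^{d}V^{*}(s_d)\;+\;\sum_{i=0}^{d-1}\gamma^{i}\big(r(s_i,a_i)-\tau\log\pi^{*}(a_i\mid s_i)\big)$$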
106
Can Deep Reinforcement Learning solve Erdos-Selfridge-Spencer Games?
Deep reinforcement learning has achieved many recent successes, but our understanding of its strengths and limitations is hampered by the lack of rich environments in which we can fully characterize optimal behavior, and correspondingly diagnose individual actions against such a characterization. Here we consider a family of combinatorial games, arising from work of Erdos, Selfridge, and Spencer, and we propose their use as environments for evaluating and comparing different approaches to reinforcement learning. These games have a number of appealing features: they are challenging for current learning approaches, but they form a low-dimensional, simply parametrized environment where there is a linear closed form solution for optimal behavior from any state, and the difficulty of the game can be tuned by changing environment parameters in an interpretable way. We use these Erdos-Selfridge-Spencer games not only to compare different algorithms, but also to compare approaches based on supervised and reinforcement learning, to analyze the power of multi-agent approaches in improving performance, and to evaluate generalization to environments outside the training set.
We adapt a family of combinatorial games with tunable difficulty and an optimal policy expressible as a linear network, developing them as a rich environment for reinforcement learning, showing contrasts in performance with supervised learning, and analyzing multi-agent learning and generalization.
107
ODIN: Outlier Detection In Neural Networks
Adoption of deep learning in safety-critical systems raises the need for understanding what deep neural networks do not understand. Several methodologies to estimate model uncertainty have been proposed, but these methodologies constrain either how the neural network is trained or how it is constructed. We present Outlier Detection In Neural networks (ODIN), an assumption-free method for detecting outlier observations during prediction, based on principles widely used in manufacturing process monitoring. By using a linear approximation of the hidden layer manifold, we add prediction-time outlier detection to models after training without altering architecture or training. We demonstrate that ODIN efficiently detects outliers during prediction on Fashion-MNIST, ImageNet-synsets and speech command recognition.
An add-on method for deep learning to detect outliers during prediction-time
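A minimal sketch of the idea as I read it, assuming plain PCA as the linear approximation of the hidden-layer manifold (function names and the 99th-percentile threshold are my choices, not necessarily the paper's): fit a principal subspace on training-set activations, then flag predictions whose activations have an unusually large residual off that subspace.

```python
import numpy as np

def fit_monitor(train_acts, n_components=32):
    # train_acts: (n, d) hidden-layer activations collected on training data.
    mu = train_acts.mean(axis=0)
    X = train_acts - mu
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    P = Vt[:n_components]                       # principal subspace (n_components, d)
    resid = X - (X @ P.T) @ P                   # residual off the subspace
    thresh = np.quantile(np.linalg.norm(resid, axis=1), 0.99)
    return mu, P, thresh

def is_outlier(act, mu, P, thresh):
    x = act - mu
    r = x - (x @ P.T) @ P
    return np.linalg.norm(r) > thresh           # large residual => likely outlier
```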
108
A Simple Fully Connected Network for Composing Word Embeddings from Characters
This work introduces a simple network for producing character aware word embeddings. Position agnostic and position aware character embeddings are combined to produce an embedding vector for each word. The learned word representations are shown to be very sparse and facilitate improved results on language modeling tasks, despite using markedly fewer parameters, and without the need to apply dropout. A final experiment suggests that weight sharing contributes to sparsity, increases performance, and prevents overfitting.
A fully connected architecture is used to produce word embeddings from character representations, outperforms traditional embeddings and provides insight into sparsity and dropout.
109
Attacking Binarized Neural Networks
Neural networks with low-precision weights and activations offer compelling efficiency advantages over their full-precision equivalents. The two most frequently discussed benefits of quantization are reduced memory consumption, and a faster forward pass when implemented with efficient bitwise operations. We propose a third benefit of very low-precision neural networks: improved robustness against some adversarial attacks, and in the worst case, performance that is on par with full-precision models. We focus on the very low-precision case where weights and activations are both quantized to ±1, and note that stochastically quantizing weights in just one layer can sharply reduce the impact of iterative attacks. We observe that non-scaled binary neural networks exhibit a similar effect to the original procedure that led to gradient masking, and a false notion of security. We address this by conducting both black-box and white-box experiments with binary models that do not artificially mask gradients.
We conduct adversarial attacks against binarized neural networks and show that we reduce the impact of the strongest attacks, while maintaining comparable accuracy in a black-box setting
110
Empirical observations on the instability of aligning word vector spaces with GANs
Unsupervised bilingual dictionary induction (UBDI) is useful for unsupervised machine translation and for cross-lingual transfer of models into low-resource languages. One approach to UBDI is to align word vector spaces in different languages using generative adversarial networks (GANs) with linear generators, achieving state-of-the-art performance for several language pairs. For some pairs, however, GAN-based induction is unstable or completely fails to align the vector spaces. We focus on cases where linear transformations provably exist, but the performance of GAN-based UBDI depends heavily on the model initialization. We show that the instability depends on the shape and density of the vector sets, but not on noise; it is the result of local optima, but neither over-parameterization nor changing the batch size or the learning rate consistently reduces instability. Nevertheless, we can stabilize GAN-based UBDI through best-of-N model selection, based on an unsupervised stopping criterion.
An empirical investigation of GAN-based alignment of word vector spaces, focusing on cases where linear transformations provably exist, but training is unstable.
111
Dual-Component Deep Domain Adaptation: A New Approach for Cross Project Software Vulnerability Detection
Owing to the ubiquity of computer software, software vulnerability detection (SVD) has become an important problem in the software industry and in the field of computer security. One of the most crucial issues in SVD is coping with the scarcity of labeled vulnerabilities in projects that require the laborious manual labeling of code by software security experts. One possible way to address this is to employ deep domain adaptation, which has recently witnessed enormous success in transferring learning from structural labeled to unlabeled data sources. The general idea is to map both source and target data into a joint feature space and close the discrepancy gap of those data in this joint feature space. The generative adversarial network (GAN) is a technique that attempts to bridge the discrepancy gap and also emerges as a building block to develop deep domain adaptation approaches with state-of-the-art performance. However, deep domain adaptation approaches using the GAN principle to close the discrepancy gap are subject to the mode collapsing problem, which negatively impacts the predictive performance. Our aim in this paper is to propose the Dual Generator-Discriminator Deep Code Domain Adaptation Network for tackling the problem of transfer learning from labeled to unlabeled software projects in the context of SVD, in order to resolve the mode collapsing problem faced in previous approaches. The experimental results on real-world software projects show that our proposed method outperforms state-of-the-art baselines by a wide margin.
Our aim in this paper is to propose a new approach for tackling the problem of transfer learning from labeled to unlabeled software projects in the context of SVD in order to resolve the mode collapsing problem faced in previous approaches.
112
Fast Node Embeddings: Learning Ego-Centric Representations
Representation learning is one of the foundations of Deep Learning and allowed important improvements on several Machine Learning tasks, such as Neural Machine Translation, Question Answering and Speech Recognition. Recent works have proposed new methods for learning representations for nodes and edges in graphs. Several of these methods are based on the SkipGram algorithm, and they usually process a large number of multi-hop neighbors in order to produce the context from which node representations are learned. In this paper, we propose an effective and also efficient method for generating node embeddings in graphs that employs a restricted number of permutations over the immediate neighborhood of a node as context to generate its representation, thus producing ego-centric representations. We present a thorough evaluation showing that our method outperforms state-of-the-art methods in six different datasets related to the problems of link prediction and node classification, being one to three orders of magnitude faster than baselines when generating node embeddings for very large graphs.
A faster method for generating node embeddings that employs a number of permutations over a node's immediate neighborhood as context to generate its representation.
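A sketch of the context-generation step described above, under my reading (the SkipGram training itself is standard and omitted): each of a few random permutations of a node's immediate neighbors serves as one training context for that node.

```python
import random

def ego_contexts(graph, node, n_perms=4):
    # graph: dict mapping node -> list of immediate neighbors.
    contexts = []
    for _ in range(n_perms):
        neigh = list(graph[node])
        random.shuffle(neigh)                 # one permutation = one context window
        contexts.append([node] + neigh)
    return contexts                           # feed these to any SkipGram trainer
```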
113
Autoencoder-based Initialization for Recurrent Neural Networks with a Linear Memory
Orthogonal recurrent neural networks address the vanishing gradient problem by parameterizing the recurrent connections using an orthogonal matrix. This class of models is particularly effective at solving tasks that require the memorization of long sequences. We propose an alternative solution based on explicit memorization using linear autoencoders for sequences. We show how a recently proposed recurrent architecture, the Linear Memory Network, composed of a nonlinear feedforward layer and a separate linear recurrence, can be used to solve hard memorization tasks. We propose an initialization schema that sets the weights of a recurrent architecture to approximate a linear autoencoder of the input sequences, which can be found with a closed-form solution. The initialization schema can be easily adapted to any recurrent architecture. We argue that this approach is superior to a random orthogonal initialization due to the autoencoder, which allows the memorization of long sequences even before training. The empirical analysis shows that our approach achieves competitive results against alternative orthogonal models, and the LSTM, on sequential MNIST, permuted MNIST and TIMIT.
We show how to initialize recurrent architectures with the closed-form solution of a linear autoencoder for sequences. We show the advantages of this approach compared to orthogonal RNNs.
114
Understanding and Improving Sequence-Labeling NER with Self-Attentive LSTMs
This paper improves upon the line of research that formulates named entity recognition as a sequence-labeling problem. We use so-called black-box long short-term memory encoders to achieve state-of-the-art results while providing insightful understanding of what the auto-regressive model learns with a parallel self-attention mechanism. Specifically, we decouple the sequence-labeling problem of NER into entity chunking, e.g., Barack_B Obama_E was_O elected_O, and entity typing, e.g., Barack_PERSON Obama_PERSON was_NONE elected_NONE, and analyze how the model learns to, or has difficulties in, capturing text patterns for each of the subtasks. The insights we gain then lead us to explore a more sophisticated deep cross-Bi-LSTM encoder, which proves better at capturing global interactions given both empirical results and a theoretical justification.
We provide insightful understanding of sequence-labeling NER and propose to use two types of cross structures, both of which bring theoretical and empirical improvements.
115
RelWalk -- A Latent Variable Model Approach to Knowledge Graph Embedding
Knowledge Graph Embedding (KGE) is the task of jointly learning entity and relation embeddings for a given knowledge graph. Existing methods for learning KGEs can be seen as a two-stage process where entities and relations in the knowledge graph are represented using some linear algebraic structures, and a scoring function is defined that evaluates the strength of a relation that holds between two entities using the corresponding relation and entity embeddings. Unfortunately, prior proposals for the scoring functions in the first step have been heuristically motivated, and it is unclear as to how the scoring functions in KGEs relate to the generation process of the underlying knowledge graph. To address this issue, we propose a generative account of the KGE learning task. Specifically, given a knowledge graph represented by a set of relational triples, where the semantic relation R holds between the two entities h and t, we extend the random walk model of word embeddings to KGE. We derive a theoretical relationship between the joint probability p and the embeddings of h, R and t. Moreover, we show that marginal loss minimisation, a popular objective used by much prior work in KGE, follows naturally from the log-likelihood ratio maximisation under the probabilities estimated from the KGEs according to our theoretical relationship. We propose a learning objective motivated by the theoretical analysis to learn KGEs from a given knowledge graph. The KGEs learnt by our proposed method obtain state-of-the-art performance on FB15K237 and WN18RR benchmark datasets, providing empirical evidence in support of the theory.
We present a theoretically proven generative model of knowledge graph embedding.
116
Scaling shared model governance via model splitting
Currently the only techniques for sharing governance of a deep learning model are homomorphic encryption and secure multiparty computation. Unfortunately, neither of these techniques is applicable to the training of large neural networks due to their large computational and communication overheads. As a scalable technique for shared model governance, we propose splitting a deep learning model between multiple parties. This paper empirically investigates the security guarantee of this technique, which is introduced as the problem of model completion: Given the entire training data set or an environment simulator, and a subset of the parameters of a trained deep learning model, how much training is required to recover the model's original performance? We define a metric for evaluating the hardness of the model completion problem and study it empirically in both supervised learning on ImageNet and reinforcement learning on Atari and DeepMind Lab. Our experiments show that the model completion problem is harder in reinforcement learning than in supervised learning because of the unavailability of the trained agent's trajectories, and its hardness depends not primarily on the number of parameters of the missing part, but more so on their type and location. Our results suggest that model splitting might be a feasible technique for shared model governance in some settings where training is very expensive.
We study empirically how hard it is to recover missing parts of trained models
117
Variational Domain Adaptation
This paper proposes variational domain adaptation, a unified, scalable, simple framework for learning multiple distributions through variational inference. Unlike existing methods for domain transfer through deep generative models, such as StarGAN and UFDN, variational domain adaptation has three advantages. Firstly, samples from the target are not required. Instead, the framework requires one known source as a prior and binary discriminators that discriminate the target domain from the others. Consequently, the framework regards a target as a posterior that can be explicitly formulated through Bayesian inference, as exhibited by a further proposed model, the dual variational autoencoder (DualVAE). Secondly, the framework is scalable to large-scale domains. Just as a VAE encodes a sample as a mode on a latent space, DualVAE encodes a domain as a mode on the dual latent space, named the domain embedding. It reformulates the posterior with a natural pairing, which can be expanded to uncountably infinite domains, such as continuous domains, as well as interpolation. Thirdly, DualVAE converges quickly without sophisticated automatic/manual hyperparameter search in comparison to GANs, as it requires only one additional parameter relative to the VAE. Through numerical experiments, we demonstrate the three benefits on a multi-domain image generation task on CelebA with up to 60 domains, and show that DualVAE achieves state-of-the-art performance, outperforming StarGAN and UFDN.
This paper proposes variational domain adaptation, a unified, scalable, simple framework for learning multiple distributions through variational inference
118
Adversarial Training and Provable Defenses: Bridging the Gap
We propose a new method to train neural networks based on a novel combination of adversarial training and provable defenses. The key idea is to model training as a procedure which includes both the verifier and the adversary. In every iteration, the verifier aims to certify the network using convex relaxation while the adversary tries to find inputs inside that convex relaxation which cause verification to fail. We experimentally show that this training method is promising and achieves the best of both worlds: it produces a model with state-of-the-art accuracy and certified robustness on the challenging CIFAR-10 dataset with a 2/255 L-infinity perturbation. This is a significant improvement over the currently known best results of 68.3% accuracy and 53.9% certified robustness, achieved using a 5 times larger network than our work.
We propose a novel combination of adversarial training and provable defenses which produces a model with state-of-the-art accuracy and certified robustness on CIFAR-10.
119
Learning to Represent Programs with Graphs
Learning tasks on source code have been considered recently, but most work has tried to transfer natural language methods and does not capitalize on the unique opportunities offered by code's known syntax. For example, long-range dependencies induced by using the same variable or function in distant locations are often not considered. We propose to use graphs to represent both the syntactic and semantic structure of code and use graph-based deep learning methods to learn to reason over program structures. In this work, we present how to construct graphs from source code and how to scale Gated Graph Neural Networks training to such large graphs. We evaluate our method on two tasks: VarNaming, in which a network attempts to predict the name of a variable given its usage, and VarMisuse, in which the network learns to reason about selecting the correct variable that should be used at a given program location. Our comparison to methods that use less structured program representations shows the advantages of modeling known structure, and suggests that our models learn to infer meaningful names and to solve the VarMisuse task in many cases. Additionally, our testing showed that VarMisuse identifies a number of bugs in mature open-source projects.
Programs have structure that can be represented as graphs, and graph neural networks can learn to find bugs on such graphs
120
Overfitting Detection of Deep Neural Networks without a Hold Out Set
Overfitting is a ubiquitous problem in neural network training and is usually mitigated using a holdout data set. Here we challenge this rationale and investigate criteria for overfitting without using a holdout data set. Specifically, we train a model for a fixed number of epochs multiple times with varying fractions of randomized labels and for a range of regularization strengths. A properly trained model should not be able to attain an accuracy greater than the fraction of properly labeled data points. Otherwise the model overfits. We introduce two criteria for detecting overfitting and one to detect underfitting. We analyze early stopping, the regularization factor, and network depth. In safety critical applications we are interested in models and parameter settings which perform well and are not likely to overfit. The methods of this paper allow characterizing and identifying such models.
We introduce and analyze several criteria for detecting overfitting.
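A sketch of the randomized-label criterion exactly as the abstract states it, assuming a generic scikit-learn-style train/predict interface (the fractions and interface are illustrative):

```python
import numpy as np

def overfits(train_fn, X, y, fracs=(0.2, 0.5, 0.8), n_classes=10, seed=0):
    rng = np.random.default_rng(seed)
    for f in fracs:
        y_noisy = y.copy()
        idx = rng.choice(len(y), size=int(f * len(y)), replace=False)
        y_noisy[idx] = rng.integers(0, n_classes, size=len(idx))  # randomize a fraction f
        model = train_fn(X, y_noisy)                   # fixed number of epochs
        train_acc = np.mean(model.predict(X) == y_noisy)
        if train_acc > 1 - f:     # exceeds the share of properly labeled points
            return True           # the model is fitting noise, i.e. overfitting
    return False
```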
121
Perception-Aware Point-Based Value Iteration for Partially Observable Markov Decision Processes
Partially observable Markov decision processes (POMDPs) are a widely-used framework to model decision-making with uncertainty about the environment and under stochastic outcomes. In conventional POMDP models, the observations that the agent receives originate from a fixed, known distribution. However, in a variety of real-world scenarios the agent has an active role in its perception by selecting which observations to receive. Due to the combinatorial nature of such a selection process, it is computationally intractable to integrate the perception decision with the planning decision. To prevent such expansion of the action space, we propose a greedy strategy for observation selection that aims to minimize the uncertainty in state. We develop a novel point-based value iteration algorithm that incorporates the greedy strategy to achieve near-optimal uncertainty reduction for sampled belief points. This in turn enables the solver to efficiently approximate the reachable subspace of the belief simplex by essentially separating computations related to perception from planning. Lastly, we implement the proposed solver and demonstrate its performance and computational advantage in a range of robotic scenarios where the robot simultaneously performs active perception and planning.
We develop a point-based value iteration solver for POMDPs with active perception and planning tasks.
122
Deep Decoder: Concise Image Representations from Untrained Non-convolutional Networks
Deep neural networks, in particular convolutional neural networks, have become highly effective tools for compressing images and solving inverse problems including denoising, inpainting, and reconstruction from few and noisy measurements. This success can be attributed in part to their ability to represent and generate natural images well. Contrary to classical tools such as wavelets, image-generating deep neural networks have a large number of parameters---typically a multiple of their output dimension---and need to be trained on large datasets. In this paper, we propose an untrained simple image model, called the deep decoder, which is a deep neural network that can generate natural images from very few weight parameters. The deep decoder has a simple architecture with no convolutions and fewer weight parameters than the output dimensionality. This underparameterization enables the deep decoder to compress images into a concise set of network weights, which we show is on par with wavelet-based thresholding. Further, underparameterization provides a barrier to overfitting, allowing the deep decoder to have state-of-the-art performance for denoising. The deep decoder is simple in the sense that each layer has an identical structure that consists of only one upsampling unit, pixel-wise linear combination of channels, ReLU activation, and channelwise normalization. This simplicity makes the network amenable to theoretical analysis, and it sheds light on the aspects of neural networks that enable them to form effective signal representations.
We introduce an underparameterized, nonconvolutional, and simple deep neural network that can, without training, effectively represent natural images and solve image processing tasks like compression and denoising competitively.
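Since the layer structure is spelled out exactly (upsampling, pixel-wise linear combination of channels, ReLU, channel-wise normalization), here is a minimal sketch; channel counts are illustrative, and the 1x1 convolution is just the pixel-wise channel mix, not a spatial convolution:

```python
import torch.nn as nn

def deep_decoder(channels=(64, 64, 64, 64), out_ch=3):
    layers = []
    for c_in, c_out in zip(channels[:-1], channels[1:]):
        layers += [
            nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
            nn.Conv2d(c_in, c_out, kernel_size=1, bias=False),  # pixel-wise channel mix
            nn.ReLU(),
            nn.BatchNorm2d(c_out),   # stand-in for channel-wise normalization
        ]
    layers.append(nn.Conv2d(channels[-1], out_ch, kernel_size=1))
    return nn.Sequential(*layers)
```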
123
Understanding Deep Neural Networks with Rectified Linear Units
In this paper we investigate the family of functions representable by deep neural networks with rectified linear units (ReLU DNNs). We give an algorithm to train a ReLU DNN with one hidden layer to global optimality with runtime polynomial in the data size, albeit exponential in the input dimension. Further, we improve on the known lower bounds on size for approximating a ReLU deep net function by a shallower ReLU net. Our gap theorems hold for smoothly parametrized families of hard functions, contrary to countable, discrete families known in the literature. An example consequence of our gap theorems is the following: for every natural number there exists a function representable by a ReLU DNN with hidden layers and total size, such that any ReLU DNN with at most hidden layers will require at least total nodes. Finally, for the family of DNNs with ReLU activations, we show a new lower bound on the number of affine pieces, which is larger than previous constructions in certain regimes of the network architecture, and most distinctively our lower bound is demonstrated by an explicit construction of a family of functions attaining this scaling. Our construction utilizes the theory of zonotopes from polyhedral theory.
This paper 1) characterizes functions representable by ReLU DNNs, 2) formally studies the benefit of depth in such architectures, 3) gives an algorithm to implement empirical risk minimization to global optimality for two layer ReLU nets.
124
Assessing the scalability of biologically-motivated deep learning algorithms and architectures
The backpropagation of error (BP) algorithm is often said to be impossible to implement in a real brain. The recent success of deep networks in machine learning and AI, however, has inspired a number of proposals for understanding how the brain might learn across multiple layers, and hence how it might implement or approximate BP. As of yet, none of these proposals have been rigorously evaluated on tasks where BP-guided deep learning has proved critical, or in architectures more structured than simple fully-connected networks. Here we present the first results on scaling up a biologically motivated model of deep learning to datasets which need deep networks with appropriate architectures to achieve good performance. We present results on CIFAR-10 and ImageNet. For CIFAR-10 we show that our algorithm, a straightforward, weight-transport-free variant of difference target-propagation (DTP) modified to remove backpropagation from the penultimate layer, is competitive with BP in training deep networks with locally defined receptive fields that have untied weights. For ImageNet we find that both DTP and our algorithm perform significantly worse than BP, opening questions about whether different architectures or algorithms are required to scale these approaches. Our results and implementation details help establish baselines for biologically motivated deep learning schemes going forward.
Benchmarks for biologically plausible learning algorithms on complex datasets and architectures
125
LEARNING TO SHARE: SIMULTANEOUS PARAMETER TYING AND SPARSIFICATION IN DEEP LEARNING
Deep neural networks (DNNs) usually contain millions, maybe billions, of parameters/weights, making both storage and computation very expensive. This has motivated a large body of work to reduce the complexity of the neural network by using sparsity-inducing regularizers. Another well-known approach for controlling the complexity of DNNs is parameter sharing/tying, where certain sets of weights are forced to share a common value. Some forms of weight sharing are hard-wired to express certain invariances, with a notable example being the shift-invariance of convolutional layers. However, there may be other groups of weights that may be tied together during the learning process, thus further reducing the complexity of the network. In this paper, we adopt a recently proposed sparsity-inducing regularizer, named GrOWL, which encourages sparsity and, simultaneously, learns which groups of parameters should share a common value. GrOWL has been proven effective in linear regression, being able to identify and cope with strongly correlated covariates. Unlike standard sparsity-inducing regularizers, GrOWL not only eliminates unimportant neurons by setting all the corresponding weights to zero, but also explicitly identifies strongly correlated neurons by tying the corresponding weights to a common value. This ability of GrOWL motivates the following two-stage procedure: use GrOWL regularization in the training process to simultaneously identify significant neurons and groups of parameters that should be tied together; retrain the network, enforcing the structure that was unveiled in the previous phase, i.e., keeping only the significant neurons and enforcing the learned tying structure. We evaluate the proposed approach on several benchmark datasets, showing that it can dramatically compress the network with slight or even no loss on generalization performance.
We have proposed using the recent GrOWL regularizer for simultaneous parameter sparsity and tying in DNN learning.
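A hedged sketch of a GrOWL-style penalty as I understand it (the linear-decay weight schedule is one common OWL choice, not necessarily the paper's): an ordered-weighted sum over the sorted L2 norms of weight groups, so the largest, most correlated groups are penalized the most and pushed toward a common value.

```python
import torch

def growl_penalty(W, lam1=1e-3, lam2=1e-4):
    # W: (n_groups, group_dim) weight matrix, one row per group (e.g. per neuron).
    norms = W.norm(dim=1)                               # group L2 norms
    sorted_norms, _ = torch.sort(norms, descending=True)
    n = sorted_norms.numel()
    # Non-increasing OWL weights: larger norms receive larger penalties.
    weights = lam1 + lam2 * torch.arange(n - 1, -1, -1, dtype=W.dtype)
    return (weights * sorted_norms).sum()               # add to the task loss
```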
126
On Weight-Sharing and Bilevel Optimization in Architecture Search
Weight-sharing—the simultaneous optimization of multiple neural networks using the same parameters—has emerged as a key component of state-of-the-art neural architecture search. However, its success is poorly understood and often found to be surprising. We argue that, rather than just being an optimization trick, the weight-sharing approach is induced by the relaxation of a structured hypothesis space, and introduces new algorithmic and theoretical challenges as well as applications beyond neural architecture search. Algorithmically, we show how the geometry of ERM for weight-sharing requires greater care when designing gradient-based minimization methods and apply tools from non-convex non-Euclidean optimization to give general-purpose algorithms that adapt to the underlying structure. We further analyze the learning-theoretic behavior of the bilevel optimization solved by practical weight-sharing methods. Next, using kernel configuration and NLP feature selection as case studies, we demonstrate how weight-sharing applies to the architecture search generalization of NAS and effectively optimizes the resulting bilevel objective. Finally, we use our optimization analysis to develop a simple exponentiated gradient method for NAS that aligns with the underlying optimization geometry and matches state-of-the-art approaches on CIFAR-10.
An analysis of the learning and optimization structures of architecture search in neural networks and beyond.
127
Practical lossless compression with latent variables using bits back coding
Deep latent variable models have seen recent success in many data domains. Lossless compression is an application of these models which, despite having the potential to be highly useful, has yet to be implemented in a practical manner. We present 'Bits Back with ANS', a scheme to perform lossless compression with latent variable models at a near optimal rate. We demonstrate this scheme by using it to compress the MNIST dataset with a variational auto-encoder model, achieving compression rates superior to standard methods with only a simple VAE. Given that the scheme is highly amenable to parallelization, we conclude that with a sufficiently high quality generative model this scheme could be used to achieve substantial improvements in compression rate with acceptable running time. We make our implementation available open source at https://github.com/bits-back/bits-back .
We do lossless compression of large image datasets using a VAE, beating existing compression algorithms.
128
Latent Question Reformulation and Information Accumulation for Multi-Hop Machine Reading
Multi-hop text-based question-answering is a current challenge in machine comprehension. This task requires to sequentially integrate facts from multiple passages to answer complex natural language questions. In this paper, we propose a novel architecture, called the Latent Question Reformulation Network (LQR-net), a multi-hop and parallel attentive network designed for question-answering tasks that require reasoning capabilities. LQR-net is composed of an association of reading modules and reformulation modules. The purpose of the reading module is to produce a question-aware representation of the document. From this document representation, the reformulation module extracts essential elements to calculate an updated representation of the question. This updated question is then passed to the following hop. We evaluate our architecture on the HotpotQA question-answering dataset, designed to assess multi-hop reasoning capabilities. Our model achieves competitive results on the public leaderboard and outperforms the best current models in terms of Exact Match and F1 score. Finally, we show that an analysis of the sequential reformulations can provide interpretable reasoning paths.
In this paper, we propose the Latent Question Reformulation Network (LQR-net), a multi-hop and parallel attentive network designed for question-answering tasks that require reasoning capabilities.
129
Explaining Time Series by Counterfactuals
We propose a method to automatically compute the importance of features at every observation in time series, by simulating counterfactual trajectories given previous observations. We define the importance of each observation as the change in the model output caused by replacing the observation with a generated one. Our method can be applied to arbitrarily complex time series models. We compare the generated feature importance to existing methods like sensitivity analyses, feature occlusion, and other explanation baselines to show that our approach generates more precise explanations and is less sensitive to noise in the input signals.
Explaining Multivariate Time Series Models by finding important observations in time using Counterfactuals
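The definition translates almost directly into code; this sketch assumes a black-box model returning a scalar and a generative model that samples an observation conditioned on the past (both interfaces are illustrative):

```python
import numpy as np

def importance(model, generator, x):
    # x: (T, d) multivariate series; model(x) -> scalar output;
    # generator(past) -> (d,) counterfactual observation given the past.
    base = model(x)
    scores = np.zeros(x.shape[0])
    for t in range(x.shape[0]):
        x_cf = x.copy()
        x_cf[t] = generator(x[:t])            # replace observation t with a generated one
        scores[t] = abs(model(x_cf) - base)   # output change = importance of x[t]
    return scores
```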
130
Unsupervised Domain Adaptation through Self-Supervision
This paper addresses unsupervised domain adaptation, the setting where labeled training data is available on a source domain, but the goal is to have good performance on a target domain with only unlabeled data. Like much of previous work, we seek to align the learned representations of the source and target domains while preserving discriminability. The way we accomplish alignment is by learning to perform auxiliary self-supervised tasks on both domains simultaneously. Each self-supervised task brings the two domains closer together along the direction relevant to that task. Training this jointly with the main task classifier on the source domain is shown to successfully generalize to the unlabeled target domain. The presented objective is straightforward to implement and easy to optimize. We achieve state-of-the-art results on four out of seven standard benchmarks, and competitive results on segmentation adaptation. We also demonstrate that our method composes well with another popular pixel-level adaptation method.
We use self-supervision on both domains to align them for unsupervised domain adaptation.
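A sketch of the joint objective, assuming rotation prediction as the auxiliary self-supervised task (one plausible choice; the paper may use others) and image batches of shape (B, C, H, W):

```python
import torch
import torch.nn.functional as F

def joint_loss(feat, cls_head, rot_head, xs, ys, xt):
    # Main task: supervised classification on the labeled source domain only.
    main = F.cross_entropy(cls_head(feat(xs)), ys)
    aux = 0.0
    for x in (xs, xt):                        # self-supervision on BOTH domains
        k = torch.randint(0, 4, (x.size(0),), device=x.device)
        x_rot = torch.stack([torch.rot90(img, int(r), dims=(-2, -1))
                             for img, r in zip(x, k)])
        aux = aux + F.cross_entropy(rot_head(feat(x_rot)), k)
    return main + aux                         # the shared feat() aligns the two domains
```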
131
Maximally Consistent Sampling and the Jaccard Index of Probability Distributions
We introduce simple, efficient algorithms for computing a MinHash of a probability distribution, suitable for both sparse and dense data, with equivalent running times to the state of the art for both cases. The collision probability of these algorithms is a new measure of the similarity of positive vectors which we investigate in detail. We describe the sense in which this collision probability is optimal for any Locality Sensitive Hash based on sampling. We argue that this similarity measure is more useful for probability distributions than the similarity pursued by other algorithms for weighted MinHash, and is the natural generalization of the Jaccard index.
The minimum of a set of exponentially distributed hashes has a very useful collision probability that generalizes the Jaccard Index to probability distributions.
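A sketch of the kind of sampler the summary describes, under my assumptions (the hash construction and naming are illustrative): derive a deterministic Exp(1) variable per index from a shared hash, scale by the index's probability, and output the argmin, so two similar distributions tend to select the same index.

```python
import hashlib
import math

def exp_hash(key, i):
    # Deterministic Exp(1) variable from a hash of (key, index), via inverse CDF.
    h = hashlib.sha256(f"{key}:{i}".encode()).digest()
    u = int.from_bytes(h[:8], "big") / 2**64        # uniform in [0, 1)
    return -math.log(1.0 - u)

def minhash(p, key="seed"):
    # p: dict mapping index -> probability. Indices with larger mass divide
    # their exponential by more, so they win the argmin proportionally often.
    return min(p, key=lambda i: exp_hash(key, i) / p[i])
```

Two distributions collide exactly when the same index attains the minimum, and that collision probability is the similarity measure the abstract analyzes.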
132
Graph Neural Networks with Generated Parameters for Relation Extraction
Recently, progress has been made towards improving relational reasoning in the machine learning field. Among existing models, graph neural networks (GNNs) are one of the most effective approaches for multi-hop relational reasoning. In fact, multi-hop relational reasoning is indispensable in many natural language processing tasks such as relation extraction. In this paper, we propose to generate the parameters of graph neural networks according to natural language sentences, which enables GNNs to process relational reasoning on unstructured text inputs. We verify GP-GNNs in relation extraction from text. Experimental results on a human-annotated dataset and two distantly supervised datasets show that our model achieves significant improvements compared to the baselines. We also perform a qualitative analysis to demonstrate that our model could discover more accurate relations by multi-hop relational reasoning.
A graph neural network model with parameters generated from natural language, which can perform multi-hop reasoning.
133
Online Meta-Critic Learning for Off-Policy Actor-Critic Methods
Off-Policy Actor-Critic methods have proven successful in a variety of continuous control tasks. Normally, the critic's action-value function is updated using temporal-difference, and the critic in turn provides a loss for the actor that trains it to take actions with higher expected return. In this paper, we introduce a novel and flexible meta-critic that observes the learning process and meta-learns an additional loss for the actor that accelerates and improves actor-critic learning. Compared to the vanilla critic, the meta-critic network is explicitly trained to accelerate the learning process; and compared to existing meta-learning algorithms, the meta-critic is rapidly learned online for a single task, rather than slowly over a family of tasks. Crucially, our meta-critic framework is designed for off-policy based learners, which currently provide state-of-the-art reinforcement learning sample efficiency. We demonstrate that online meta-critic learning leads to improvements in a variety of continuous control environments when combined with contemporary Off-PAC methods DDPG, TD3 and the state-of-the-art SAC.
We present Meta-Critic, an auxiliary critic module for off-policy actor-critic methods that can be meta-learned online during single task learning.
134
Non-vacuous Generalization Bounds at the ImageNet Scale: a PAC-Bayesian Compression Approach
Modern neural networks are highly overparameterized, with capacity to substantially overfit to training data. Nevertheless, these networks often generalize well in practice. It has also been observed that trained networks can often be compressed to much smaller representations. The purpose of this paper is to connect these two empirical observations. Our main technical result is a generalization bound for compressed networks based on the compressed size that, combined with off-the-shelf compression algorithms, leads to state-of-the-art generalization guarantees. In particular, we provide the first non-vacuous generalization guarantees for realistic architectures applied to the ImageNet classification problem. Additionally, we show that compressibility of models that tend to overfit is limited. Empirical results show that an increase in overfitting increases the number of bits required to describe a trained network.
We obtain non-vacuous generalization bounds on ImageNet-scale deep neural networks by combining an original PAC-Bayes bound and an off-the-shelf neural network compression method.
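For background, a common PAC-Bayes route to such guarantees (my summary from memory, not the paper's exact statement): bound the gap between the empirical and true risk of a stochastic predictor Q against a prior P, then note that a network encoded in |c| bits lets the KL term be bounded by roughly |c| ln 2:

$$\mathrm{kl}\big(\hat{L}(Q)\,\|\,L(Q)\big)\;\le\;\frac{\mathrm{KL}(Q\,\|\,P)+\ln\frac{2\sqrt{n}}{\delta}}{n},\qquad \mathrm{KL}(Q\,\|\,P)\;\lesssim\;|c|\ln 2,$$

so better off-the-shelf compression (smaller |c|) directly tightens the generalization guarantee.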
135
Adversarial Gain
Adversarial examples can be defined as inputs to a model which induce a mistake -- where the model output is different than that of an oracle, perhaps in surprising or malicious ways. Original models of adversarial attacks are primarily studied in the context of classification and computer vision tasks. While several attacks have been proposed in natural language processing settings, they often vary in defining the parameters of an attack and what a successful attack would look like. The goal of this work is to propose a unifying model of adversarial examples suitable for NLP tasks in both generative and classification settings. We define the notion of adversarial gain: based in control theory, it is a measure of the change in the output of a system relative to the perturbation of the input presented to the learner. This definition, as we show, can be used under different feature spaces and distance conditions to determine attack or defense effectiveness across different intuitive manifolds. This notion of adversarial gain not only provides a useful way for evaluating adversaries and defenses, but can act as a building block for future work in robustness under adversaries due to its rooted nature in stability and manifold theory.
We propose an alternative measure for determining effectiveness of adversarial attacks in NLP models according to a distance measure-based method like incremental L2-gain in control theory.
136
Decoupling the Layers in Residual Networks
We propose a Warped Residual Network (WarpNet) using a parallelizable warp operator for forward and backward propagation to distant layers that trains faster than the original residual neural network. We apply a perturbation theory on residual networks and decouple the interactions between residual units. The resulting warp operator is a first order approximation of the output over multiple layers. The first order perturbation theory exhibits properties such as binomial path lengths and exponential gradient scaling found experimentally by Veit et al. We demonstrate through an extensive performance study that the proposed network achieves comparable predictive performance to the original residual network with the same number of parameters, while achieving a significant speed-up on the total training time. As WarpNet performs model parallelism in residual network training in which weights are distributed over different GPUs, it offers speed-up and capability to train larger networks compared to original residual networks.
We propose the Warped Residual Network using a parallelizable warp operator for forward and backward propagation to distant layers that trains faster than the original residual neural network.
137
On Evaluating Explainability Algorithms
A plethora of methods attempting to explain predictions of black-box models have been proposed by the Explainable Artificial Intelligence community. Yet, measuring the quality of the generated explanations is largely unexplored, making quantitative comparisons non-trivial. In this work, we propose a suite of multifaceted metrics that enables us to objectively compare explainers based on the correctness, consistency, as well as the confidence of the generated explanations. These metrics are computationally inexpensive, do not require model-retraining and can be used across different data modalities. We evaluate them on common explainers such as Grad-CAM, SmoothGrad, LIME and Integrated Gradients. Our experiments show that the proposed metrics reflect qualitative observations reported in earlier works.
We propose a suite of metrics that capture desired properties of explainability algorithms and use it to objectively compare and evaluate such methods
138
On the Confidence of Neural Network Predictions for some NLP Tasks
Neural networks are known to produce unexpected results on inputs that are far from the training distribution. One approach to tackle this problem is to detect the samples on which the trained network can not answer reliably. ODIN is a recently proposed method for out-of-distribution detection that does not modify the trained network and achieves good performance for various image classification tasks. In this paper we adapt ODIN for sentence classification and word tagging tasks. We show that the scores produced by ODIN can be used as a confidence measure for the predictions on both in-distribution and out-of-distribution datasets.
A recent out-of-distribution detection method helps to measure the confidence of RNN predictions for some NLP tasks
139
The Laplacian in RL: Learning Representations with Efficient Approximations
The smallest eigenvectors of the graph Laplacian are well-known to provide a succinct representation of the geometry of a weighted graph. In reinforcement learning (RL), where the weighted graph may be interpreted as the state transition process induced by a behavior policy acting on the environment, approximating the eigenvectors of the Laplacian provides a promising approach to state representation learning. However, existing methods for performing this approximation are ill-suited in general RL settings for two main reasons: First, they are computationally expensive, often requiring operations on large matrices. Second, these methods lack adequate justification beyond simple, tabular, finite-state settings. In this paper, we present a fully general and scalable method for approximating the eigenvectors of the Laplacian in a model-free RL context. We systematically evaluate our approach and empirically show that it generalizes beyond the tabular, finite-state setting. Even in tabular, finite-state settings, its ability to approximate the eigenvectors outperforms previous proposals. Finally, we show the potential benefits of using a Laplacian representation learned using our method in goal-achieving RL tasks, providing evidence that our technique can be used to significantly improve the performance of an RL agent.
We propose a scalable method to approximate the eigenvectors of the Laplacian in the reinforcement learning context and we show that the learned representations can improve the performance of an RL agent.
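A sketch of the kind of model-free spectral objective the abstract points to (my simplification; the paper's exact repulsive/orthogonality term may differ): pull embeddings of consecutive states together, while penalizing deviation of the feature Gram matrix from identity to avoid collapse.

```python
import torch

def laplacian_loss(f_s, f_s_next, beta=1.0):
    # f_s, f_s_next: (B, d) embeddings of state pairs sampled from transitions.
    attract = ((f_s - f_s_next) ** 2).sum(dim=1).mean()   # Dirichlet energy term
    gram = f_s.T @ f_s / f_s.shape[0]                      # (d, d) feature covariance
    eye = torch.eye(f_s.shape[1], device=f_s.device)
    ortho = ((gram - eye) ** 2).sum()                      # keep features near-orthonormal
    return attract + beta * ortho
```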
140
PocketFlow: An Automated Framework for Compressing and Accelerating Deep Neural Networks
Deep neural networks are widely used in various domains, but the prohibitive computational complexity prevents their deployment on mobile devices. Numerous model compression algorithms have been proposed; however, it is often difficult and time-consuming to choose proper hyper-parameters to obtain an efficient compressed model. In this paper, we propose an automated framework for model compression and acceleration, namely PocketFlow. This is an easy-to-use toolkit that integrates a series of model compression algorithms and embeds a hyper-parameter optimization module to automatically search for the optimal combination of hyper-parameters. Furthermore, the compressed model can be converted into the TensorFlow Lite format and easily deployed on mobile devices to speed-up the inference. PocketFlow is now open-source and publicly available at https://github.com/Tencent/PocketFlow.
We propose PocketFlow, an automated framework for model compression and acceleration, to facilitate deep learning models' deployment on mobile devices.
141
AmbientGAN: Generative models from lossy measurements
Generative models provide a way to model structure in complex distributions and have been shown to be useful for many tasks of practical interest. However, current techniques for training generative models require access to fully-observed samples. In many settings, it is expensive or even impossible to obtain fully-observed samples, but economical to obtain partial, noisy observations. We consider the task of learning an implicit generative model given only lossy measurements of samples from the distribution of interest. We show that the true underlying distribution can be provably recovered even in the presence of per-sample information loss for a class of measurement models. Based on this, we propose a new method of training Generative Adversarial Networks which we call AmbientGAN. On three benchmark datasets, and for various measurement models, we demonstrate substantial qualitative and quantitative improvements. Generative models trained with our method can obtain -x higher inception scores than the baselines.
How to learn GANs from noisy, distorted, partial observations
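The core trick is that the discriminator only ever sees measurements, never full samples; a conceptual training step might look as follows (measure() stands for the known lossy measurement process, and all other interfaces are generic GAN plumbing, not AmbientGAN-specific API):

```python
import torch
import torch.nn.functional as F

def ambient_gan_step(G, D, measure, z, y_real, d_opt, g_opt):
    y_fake = measure(G(z))          # generate, then simulate the lossy measurement
    real_logits, fake_logits = D(y_real), D(y_fake.detach())
    d_loss = (F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
              + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()
    fake_logits = D(y_fake)         # generator must fool D through the measurement
    g_loss = F.binary_cross_entropy_with_logits(fake_logits, torch.ones_like(fake_logits))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```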
142
Traditional and Heavy Tailed Self Regularization in Neural Network Models
Random Matrix Theory is applied to analyze the weight matrices of Deep Neural Networks, including both production quality, pre-trained models such as AlexNet and Inception, and smaller models trained from scratch, such as LeNet5 and a miniature-AlexNet. Empirical and theoretical results clearly indicate that the empirical spectral density of DNN layer matrices displays signatures of traditionally-regularized statistical models, even in the absence of exogenously specifying traditional forms of regularization, such as Dropout or Weight Norm constraints. Building on recent results in RMT, most notably its extension to Universality classes of Heavy-Tailed matrices, we develop a theory to identify 5+1 Phases of Training, corresponding to increasing amounts of Implicit Self-Regularization. For smaller and/or older DNNs, this Implicit Self-Regularization is like traditional Tikhonov regularization, in that there is a "size scale" separating signal from noise. For state-of-the-art DNNs, however, we identify a novel form of Heavy-Tailed Self-Regularization, similar to the self-organization seen in the statistical physics of disordered systems. This implicit Self-Regularization can depend strongly on the many knobs of the training process. By exploiting the generalization gap phenomena, we demonstrate that we can cause a small model to exhibit all 5+1 phases of training simply by changing the batch size.
See the abstract. (For the revision, the paper is identical, except for a 59-page Supplementary Material, which can serve as a stand-alone technical report version of the paper.)
143
Explanation-Based Attention for Semi-Supervised Deep Active Learning
We introduce an attention mechanism to improve feature extraction for deep active learning in the semi-supervised setting. The proposed attention mechanism is based on recent methods to visually explain predictions made by DNNs. We apply the proposed explanation-based attention to MNIST and SVHN classification. The conducted experiments show accuracy improvements for the original and class-imbalanced datasets with the same number of training examples and faster long-tail convergence compared to uncertainty-based methods.
We introduce an attention mechanism to improve feature extraction for deep active learning (AL) in the semi-supervised setting.
144
Barcodes as summary of objective functions' topology
We apply canonical forms of gradient complexes (barcodes) to explore neural network loss surfaces. We present an algorithm for calculating the objective function's barcodes of minima. Our experiments confirm two principal observations: the barcodes of minima are located in a small lower part of the range of values of the objective function, and increasing the neural network's depth brings down the minima's barcodes. This has natural implications for neural network learning and the ability to generalize.
We apply canonical forms of gradient complexes (barcodes) to explore neural networks loss surfaces.
145
AMPNet: Asynchronous Model-Parallel Training for Dynamic Neural Networks
New types of compute hardware in development and entering the market hold the promise of revolutionizing deep learning in a manner as profound as GPUs. However, existing software frameworks and training algorithms for deep learning have yet to evolve to fully leverage the capability of the new wave of silicon. In particular, models that exploit structured input via complex and instance-dependent control flow are difficult to accelerate using existing algorithms and hardware that typically rely on minibatching. We present an asynchronous model-parallel (AMP) training algorithm that is specifically motivated by training on networks of interconnected devices. Through an implementation on multi-core CPUs, we show that AMP training converges to the same accuracy as conventional synchronous training algorithms in a similar number of epochs, but utilizes the available hardware more efficiently, even for small minibatch sizes, resulting in shorter overall training times. Our framework opens the door for scaling up a new class of deep learning models that cannot be efficiently trained today.
Using asynchronous gradient updates to accelerate dynamic neural network training
146
Reward Design in Cooperative Multi-agent Reinforcement Learning for Packet Routing
In cooperative multi-agent reinforcement learning, how to design a suitable reward signal to accelerate learning and stabilize convergence is a critical problem. The global reward signal assigns the same global reward to all agents without distinguishing their contributions, while the local reward signal provides different local rewards to each agent based solely on individual behavior. Both reward assignment approaches have shortcomings: the former might encourage lazy agents, while the latter might produce selfish agents. In this paper, we study the reward design problem in cooperative MARL based on packet routing environments. Firstly, we show that the above two reward signals are prone to produce suboptimal policies. Then, inspired by some observations and considerations, we design some mixed reward signals, which can be used off-the-shelf to learn better policies. Finally, we turn the mixed reward signals into their adaptive counterparts, which achieve the best results in our experiments. Other reward signals are also discussed in this paper. As reward design is a very fundamental problem in RL and especially in MARL, we hope that MARL researchers can rethink the rewards used in their systems.
We study the reward design problem in cooperative MARL based on packet routing environments. The experimental results remind us to design rewards carefully, as they are really important for guiding agent behavior.
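As an illustration of the mixed reward signals this entry describes, here is a minimal sketch assuming a simple convex combination of the global and local rewards; the paper's actual mixed and adaptive signals may be constructed differently, and the function names are hypothetical.

def mixed_rewards(global_reward, local_rewards, alpha=0.5):
    # Convex combination of the shared global reward and each agent's own
    # local reward; alpha trades off lazy agents (global only) against
    # selfish agents (local only).
    return [alpha * global_reward + (1.0 - alpha) * r for r in local_rewards]

def adaptive_alpha(step, total_steps):
    # Hypothetical adaptive schedule: rely on local rewards early so
    # individual behavior gets off the ground, then shift weight toward
    # the global signal to encourage cooperation.
    return min(1.0, step / float(total_steps))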
147
Learning to Solve Linear Inverse Problems in Imaging with Neumann Networks
Recent advances have illustrated that it is often possible to learn to solve linear inverse problems in imaging from training data, with learned solvers that can outperform more traditional regularized least squares solutions. Along these lines, we present some extensions of the Neumann network, a recently introduced end-to-end learned architecture inspired by a truncated Neumann series expansion of the solution map to a regularized least squares problem. Here we summarize the Neumann network approach, and show that it has a form compatible with the optimal reconstruction function for a given inverse problem. We also investigate an extension of the Neumann network that incorporates a more sample-efficient patch-based regularization approach.
Neumann networks are an end-to-end, sample-efficient learning approach to solving linear inverse problems in imaging that are compatible with the MSE optimal approach and admit an extension to patch-based learning.
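For context, a minimal numpy sketch of the truncated Neumann series solver for regularized least squares, which is the expansion the Neumann network unrolls; in the learned architecture the fixed Tikhonov regularizer below is replaced by a trained network applied at each term.

import numpy as np

def neumann_solve(A, y, eta=0.05, lam=0.1, K=50):
    # Approximates x* = (A^T A + lam I)^{-1} A^T y via the Neumann series
    #   x ~ eta * sum_{k=0}^{K} (I - eta (A^T A + lam I))^k A^T y,
    # which converges when eta is small enough that the series matrix
    # has spectral radius below one.
    n = A.shape[1]
    M = np.eye(n) - eta * (A.T @ A + lam * np.eye(n))
    term = eta * (A.T @ y)
    x = term.copy()
    for _ in range(K):
        term = M @ term
        x = x + term
    return x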
148
Global-to-local Memory Pointer Networks for Task-Oriented Dialogue
End-to-end task-oriented dialogue is challenging since knowledge bases are usually large, dynamic and hard to incorporate into a learning framework.We propose the global-to-local memory pointer networks to address this issue.In our model, a global memory encoder and a local memory decoder are proposed to share external knowledge.The encoder encodes dialogue history, modifies global contextual representation, and generates a global memory pointer.The decoder first generates a sketch response with unfilled slots.Next, it passes the global memory pointer to filter the external knowledge for relevant information, then instantiates the slots via the local memory pointers.We empirically show that our model can improve copy accuracy and mitigate the common out-of-vocabulary problem.As a result, GLMP is able to improve over the previous state-of-the-art models in both simulated bAbI Dialogue dataset and human-human Stanford Multi-domain Dialogue dataset on automatic and human evaluation.
GLMP: Global memory encoder (context RNN, global pointer) and local memory decoder (sketch RNN, local pointer) that share external knowledge (MemNN) are proposed to strengthen response generation in task-oriented dialogue.
149
ACE: Artificial Checkerboard Enhancer to Induce and Evade Adversarial Attacks
The checkerboard phenomenon is one of the well-known visual artifacts in the computer vision field.The origins and solutions of checkerboard artifacts in the pixel space have been studied for a long time, but their effects on the gradient space have rarely been investigated.In this paper, we revisit the checkerboard artifacts in the gradient space which turn out to be the weak point of a network architecture.We explore image-agnostic property of gradient checkerboard artifacts and propose a simple yet effective defense method by utilizing the artifacts.We introduce our defense module, dubbed Artificial Checkerboard Enhancer, which induces adversarial attacks on designated pixels.This enables the model to deflect attacks by shifting only a single pixel in the image with a remarkable defense rate.We provide extensive experiments to support the effectiveness of our work for various attack scenarios using state-of-the-art attack methods.Furthermore, we show that ACE is even applicable to large-scale datasets including ImageNet dataset and can be easily transferred to various pretrained networks.
We propose a novel artificial checkerboard enhancer (ACE) module which guides attacks to a pre-specified pixel space and successfully defends against them with a simple padding operation.
150
Convergence Behaviour of Some Gradient-Based Methods on Bilinear Zero-Sum Games
Min-max formulations have attracted great attention in the ML community due to the rise of deep generative models and adversarial methods, and understanding the dynamics of gradient algorithms for solving such formulations has been a grand challenge.As a first step, we restrict to bilinear zero-sum games and give a systematic analysis of popular gradient updates, for both simultaneous and alternating versions.We provide exact conditions for their convergence and find the optimal parameter setup and convergence rates.In particular, our results offer formal evidence that alternating updates converge "better" than simultaneous ones.
We systematically analyze the convergence behaviour of popular gradient algorithms for solving bilinear games, with both simultaneous and alternating updates.
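A toy numpy sketch of the two update schemes analyzed, on the bilinear game min_x max_y x^T A y: with plain gradient descent-ascent, simultaneous updates spiral outward while the alternating version stays bounded, matching the paper's qualitative conclusion (exact step-size conditions and rates are in the paper).

import numpy as np

def simultaneous_gda(A, x, y, eta=0.1, steps=1000):
    # Both players update from the same old iterates.
    for _ in range(steps):
        gx, gy = A @ y, A.T @ x
        x, y = x - eta * gx, y + eta * gy
    return x, y

def alternating_gda(A, x, y, eta=0.1, steps=1000):
    # Player 2 reacts to player 1's already-updated iterate.
    for _ in range(steps):
        x = x - eta * (A @ y)
        y = y + eta * (A.T @ x)
    return x, y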
151
Cross-Linked Variational Autoencoders for Generalized Zero-Shot Learning
Most approaches in generalized zero-shot learning rely on cross-modal mapping between an image feature space and a class embedding space or on generating artificial image features.However, learning a shared cross-modal embedding by aligning the latent spaces of modality-specific autoencoders is shown to be promising in zero-shot learning.While following the same direction, we also take artificial feature generation one step further and propose a model where a shared latent space of image features and class embeddings is learned by aligned variational autoencoders, for the purpose of generating latent features to train a softmax classifier.We evaluate our learned latent features on conventional benchmark datasets and establish a new state of the art on generalized zero-shot as well as on few-shot learning.Moreover, our results on ImageNet with various zero-shot splits show that our latent features generalize well in large-scale settings.
We use VAEs to learn a shared latent space embedding between image features and attributes and thereby achieve state-of-the-art results in generalized zero-shot learning.
152
Spatial Information is Overrated for Image Classification
Intuitively, image classification should profit from using spatial information.Recent work, however, suggests that this might be overrated in standard CNNs.In this paper, we are pushing the envelope and aim to further investigate the reliance on and necessity of spatial information.We propose and analyze three methods, namely Shuffle Conv, GAP+FC and 1x1 Conv, that destroy spatial information during both training and testing phases.We extensively evaluate these methods on several object recognition datasets with a wide range of CNN architectures.Interestingly, we consistently observe that spatial information can be completely deleted from a significant number of layers with no or only small performance drops.
Spatial information in the last layers is not necessary for good classification accuracy.
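A minimal PyTorch sketch of a spatial-information-destroying operation in the spirit of the Shuffle Conv described above (the paper's exact shuffling scheme may differ, e.g. in whether channels share a permutation):

import torch

def shuffle_spatial(x):
    # Randomly permute the H*W spatial locations of a feature map so any
    # subsequent convolution sees the content but not the layout; here one
    # shared permutation is applied across channels of the whole batch.
    n, c, h, w = x.shape
    idx = torch.randperm(h * w, device=x.device)
    return x.reshape(n, c, h * w)[:, :, idx].reshape(n, c, h, w)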
153
Unlabeled Disentangling of GANs with Guided Siamese Networks
Disentangling underlying generative factors of a data distribution is important for interpretability and generalizable representations.In this paper, we introduce two novel disentangling methods.Our first method, Unlabeled Disentangling GAN, decomposes the latent noise by generating similar/dissimilar image pairs and it learns a distance metric on these pairs with siamese networks and a contrastive loss.This pairwise approach provides consistent representations for similar data points.Our second method modifies the UD-GAN with user-defined guidance functions, which restrict the information that goes into the siamese networks.This constraint helps UD-GAN-G to focus on the desired semantic variations in the data.We show that both our methods outperform existing unsupervised approaches in quantitative metrics that measure semantic accuracy of the learned representations.In addition, we illustrate that simple guidance functions we use in UD-GAN-G allow us to directly capture the desired variations in the data.
We use Siamese Networks to guide and disentangle the generation process in GANs without labeled data.
154
Predicted Variables in Programming
We present Predicted Variables, an approach to making machine learning a first class citizen in programming languages.There is a growing divide in approaches to building systems: using human experts on the one hand, and using behavior learned from data on the other hand.PVars aim to make using ML in programming easier by hybridizing the two.We leverage the existing concept of variables and create a new type, a predicted variable.PVars are akin to native variables with one important distinction: PVars determine their value using ML when evaluated.We describe PVars and their interface, how they can be used in programming, and demonstrate the feasibility of our approach on three algorithmic problems: binary search, QuickSort, and caches.We show experimentally that PVars are able to improve over the commonly used heuristics and lead to a better performance than the original algorithms.As opposed to previous work applying ML to algorithmic problems, PVars have the advantage that they can be used within the existing frameworks and do not require the existing domain knowledge to be replaced.PVars allow for a seamless integration of ML into existing systems and algorithms.Our PVars implementation currently relies on standard Reinforcement Learning methods.To learn faster, PVars use the heuristic function, which they are replacing, as an initial function.We show that PVars quickly pick up the behavior of the initial function and then improve performance beyond that without ever performing substantially worse -- allowing for a safe deployment in critical applications.
We present Predicted Variables, an approach to making machine learning a first class citizen in programming languages.
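To make the idea concrete, here is a hypothetical sketch of a predicted variable inside binary search; the PVar interface below (initial_fn, value, feedback) is invented for illustration and is not the authors' actual API.

class PVar:
    # Hypothetical predicted variable: behaves like the heuristic it
    # replaces until a learned model is available.
    def __init__(self, initial_fn):
        self.initial_fn = initial_fn
        self.model = None  # an RL policy would be trained and stored here

    def value(self, *obs):
        return self.initial_fn(*obs) if self.model is None else self.model(*obs)

    def feedback(self, reward):
        pass  # the RL update from observed rewards would happen here

def binary_search(arr, target):
    split = PVar(initial_fn=lambda lo, hi: (lo + hi) // 2)
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = split.value(lo, hi)
        if arr[mid] == target:
            split.feedback(1.0)   # reward finding the target
            return mid
        split.feedback(-0.01)     # small cost for each extra probe
        if arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1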
155
Unsupervised Hierarchical Video Prediction
Much recent research has been devoted to video prediction and generation, but mostly for short-scale time horizons. The hierarchical video prediction method by Villegas et al. is an example of a state-of-the-art method for long-term video prediction. However, their method has limited applicability in practical settings as it requires a ground truth pose at training time. This paper presents a long-term hierarchical video prediction model that does not have such a restriction. We show that the network learns its own higher-level structure that works better in cases where the ground truth pose does not fully capture all of the information needed to predict the next frame. This method gives sharper results than other video prediction methods which do not require a ground truth pose, and its efficiency is shown on the Human3.6M and Robot Pushing datasets.
We show ways to train a hierarchical video prediction model without needing pose labels.
156
Classification in the dark using tactile exploration
Combining information from different sensory modalities to execute goal directed actions is a key aspect of human intelligence. Specifically, human agents are very easily able to translate the task communicated in one sensory domain into a representation that enables them to complete this task when they can only sense their environment using a separate sensory modality. In order to build agents with similar capabilities, in this work we consider the problem of retrieving a target object from a drawer. The agent is provided with an image of a previously unseen object and it explores objects in the drawer using only tactile sensing to retrieve the object that was shown in the image without receiving any visual feedback. Success at this task requires close integration of visual and tactile sensing. We present a method for performing this task in a simulated environment using an anthropomorphic hand. We hope that future research in the direction of combining sensory signals for acting will find object retrieval from a drawer to be a useful benchmark problem.
In this work, we study the problem of learning representations to identify novel objects by exploring objects using tactile sensing. The key point here is that the query is provided in the image domain.
157
Scaling Hierarchical Coreference with Homomorphic Compression
Locality sensitive hashing schemes such as SimHash provide compact representations of multisets from which similarity can be estimated. However, in certain applications, we need to estimate the similarity of dynamically changing sets. In this case, we need the representation to be a homomorphism so that the hash of unions and differences of sets can be computed directly from the hashes of operands. We propose two representations that have this property for cosine similarity, and make substantial progress on a third representation for Jaccard similarity. We employ these hashes to compress the sufficient statistics of a conditional random field coreference model and study how this compression affects our ability to compute similarities as entities are split and merged during inference. We also provide novel statistical analysis of SimHash to help justify it as an estimator inside a CRF, showing that the bias and variance reduce quickly with the number of bits. On a problem of author coreference, we find that our SimHash scheme allows scaling the hierarchical coreference algorithm by an order of magnitude without degrading its statistical performance or the model's coreference accuracy, as long as we employ at least 128 or 256 bits. Angle-preserving random projections further improve the coreference quality, potentially allowing even fewer dimensions to be used.
We employ linear homomorphic compression schemes to represent the sufficient statistics of a conditional random field model of coreference and this allows us to scale inference and improve speed by an order of magnitude.
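A small numpy sketch of why storing the real-valued random projections (rather than their sign bits) makes a SimHash-style cosine sketch homomorphic: unions and differences of multisets become additions and subtractions of sketches, with signs taken only at comparison time. The estimator form below follows standard SimHash analysis and is not necessarily the paper's exact construction.

import numpy as np

rng = np.random.default_rng(0)
DIM, BITS = 1000, 256
P = rng.standard_normal((BITS, DIM))  # fixed random projection directions

def sketch(counts):
    # Linear map, hence a homomorphism: sketch(a + b) = sketch(a) + sketch(b).
    return P @ counts

def cosine_estimate(s1, s2):
    # Standard SimHash estimate from the fraction of agreeing signs.
    agree = np.mean(np.sign(s1) == np.sign(s2))
    return np.cos(np.pi * (1.0 - agree))

# Merging two entities during coreference inference is a vector addition;
# splitting is a subtraction -- no access to the raw multisets is needed.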
158
Formal Limitations on the Measurement of Mutual Information
Motivated by applications to unsupervised learning, we consider the problem of measuring mutual information. Recent analysis has shown that naive kNN estimators of mutual information have serious statistical limitations motivating more refined methods. In this paper we prove that serious statistical limitations are inherent to any measurement method. More specifically, we show that any distribution-free high-confidence lower bound on mutual information cannot be larger than $O(\ln N)$, where $N$ is the size of the data sample. We also analyze the Donsker-Varadhan lower bound on KL divergence in particular and show that, when simple statistical considerations are taken into account, this bound can never produce a high-confidence value larger than $\ln N$. While large high-confidence lower bounds are impossible, in practice one can use estimators without formal guarantees. We suggest expressing mutual information as a difference of entropies and using cross entropy as an entropy estimator. We observe that, although cross entropy is only an upper bound on entropy, cross-entropy estimates converge to the true cross entropy at the rate of $O(1/\sqrt{N})$.
We give a theoretical analysis of the measurement and optimization of mutual information.
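For reference, the Donsker-Varadhan representation analyzed above, and the difference-of-entropies decomposition the authors suggest estimating with cross entropy (standard forms, stated here for context rather than in the paper's exact notation):

\[
\mathrm{KL}(P\,\|\,Q) \;=\; \sup_{f}\;\Big(\mathbb{E}_{x\sim P}[f(x)] \;-\; \log \mathbb{E}_{x\sim Q}\big[e^{f(x)}\big]\Big),
\qquad
I(X;Y) \;=\; H(Y) \;-\; H(Y\mid X),
\]

with each entropy term upper-bounded by a cross entropy, \(H(P, q) = \mathbb{E}_{P}[-\log q(x)] \ge H(P)\).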
159
Neuron Hierarchical Networks
In this paper, we propose a neural network framework called the neuron hierarchical network (NHN), which evolves beyond the hierarchy in layers and concentrates on the hierarchy of neurons. We observe mass redundancy in the weights of both handcrafted and randomly searched architectures. Inspired by the development of human brains, we prune low-sensitivity neurons in the model and add new neurons to the graph, so that the relations between individual neurons are emphasized and the existence of layers is weakened. We propose a process to discover the best base model by random architecture search, and to discover the best locations and connections of the added neurons by evolutionary search. Experiment results show that the NHN achieves higher test accuracy on Cifar-10 than state-of-the-art handcrafted and randomly searched architectures, while requiring much fewer parameters and less searching time.
By breaking the layer hierarchy, we propose a 3-step approach to the construction of neuron-hierarchy networks that outperform NAS, SMASH and hierarchical representation with fewer parameters and shorter searching time.
160
Learning To Simulate
Simulation is a useful tool in situations where training data for machine learning models is costly to annotate or even hard to acquire.In this work, we propose a reinforcement learning-based method for automatically adjusting the parameters of any simulator, thereby controlling the distribution of synthesized data in order to maximize the accuracy of a model trained on that data.In contrast to prior art that hand-crafts these simulation parameters or adjusts only parts of the available parameters, our approach fully controls the simulator with the actual underlying goal of maximizing accuracy, rather than mimicking the real data distribution or randomly generating a large volume of data.We find that our approach quickly converges to the optimal simulation parameters in controlled experiments and can indeed discover good sets of parameters for an image rendering simulator in actual computer vision applications.
We propose an algorithm that automatically adjusts parameters of a simulation engine to generate training data for a neural network such that validation accuracy is maximized.
161
Noise Regularization for Conditional Density Estimation
Modelling statistical relationships beyond the conditional mean is crucial in many settings.Conditional density estimation aims to learn the full conditional probability density from data.Though highly expressive, neural network based CDE models can suffer from severe over-fitting when trained with the maximum likelihood objective.Due to the inherent structure of such models, classical regularization approaches in the parameter space are rendered ineffective.To address this issue, we develop a model-agnostic noise regularization method for CDE that adds random perturbations to the data during training.We demonstrate that the proposed approach corresponds to a smoothness regularization and prove its asymptotic consistency.In our experiments, noise regularization significantly and consistently outperforms other regularization methods across seven data sets and three CDE models.The effectiveness of noise regularization makes neural network based CDE the preferable method over previous non- and semi-parametric approaches, even when training data is scarce.
A model-agnostic regularization scheme for neural network-based conditional density estimation.
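A minimal sketch of the model-agnostic noise regularization scheme described above, assuming Gaussian perturbations of both inputs and targets drawn fresh for every mini-batch (the bandwidth parameters are illustrative):

import numpy as np

def noisy_batch(x, y, h_x=0.1, h_y=0.1, rng=np.random.default_rng()):
    # Perturb conditioning variables and targets with small Gaussian noise
    # before each maximum-likelihood step; larger bandwidths h_x, h_y give
    # stronger smoothing of the learned conditional density.
    return (x + h_x * rng.standard_normal(x.shape),
            y + h_y * rng.standard_normal(y.shape))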
162
PatchVAE: Learning Local Latent Codes for Recognition
Unsupervised representation learning holds the promise of exploiting large amount of available unlabeled data to learn general representations.A promising technique for unsupervised learning is the framework of Variational Auto-encoders.However, unsupervised representations learned by VAEs are significantly outperformed by those learned by supervising for recognition.Our hypothesis is that to learn useful representations for recognition the model needs to be encouraged to learn about repeating and consistent patterns in data.Drawing inspiration from the mid-level representation discovery work, we propose PatchVAE, that reasons about images at patch level.Our key contribution is a bottleneck formulation in a VAE framework that encourages mid-level style representations.Our experiments demonstrate that representations learned by our method perform much better on the recognition tasks compared to those learned by vanilla VAEs.
A patch-based bottleneck formulation in a VAE framework that learns unsupervised representations better suited for visual recognition.
163
Stabilizing Gradients for Deep Neural Networks via Efficient SVD Parameterization
Vanishing and exploding gradients are two of the main obstacles in training deep neural networks, especially in capturing long range dependencies in recurrent neural networks.In this paper, we present an efficient parametrization of the transition matrix of an RNN that allows us to stabilize the gradients that arise in its training.Specifically, we parameterize the transition matrix by its singular value decomposition, which allows us to explicitly track and control its singular values.We attain efficiency by using tools that are common in numerical linear algebra, namely Householder reflectors for representing the orthogonal matrices that arise in the SVD.By explicitly controlling the singular values, our proposed svdRNN method allows us to easily solve the exploding gradient problem and we observe that it empirically solves the vanishing gradient issue to a large extent.We note that the SVD parameterization can be used for any rectangular weight matrix, hence it can be easily extended to any deep neural network, such as a multi-layer perceptron.Theoretically, we demonstrate that our parameterization does not lose any expressive power, and show how it potentially makes the optimization process easier.Our extensive experimental results also demonstrate that the proposed framework converges faster, and has good generalization, especially when the depth is large.
To solve the gradient vanishing/exploding problems, we propose an efficient parametrization of the transition matrix of an RNN that loses no expressive power, converges faster, and has good generalization.
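A numpy sketch of the SVD parameterization via Householder reflectors: the transition matrix is built as W = U diag(sigma) V^T with U and V products of reflectors, so the singular values can be clamped directly to keep gradients stable. This is a simplified illustration of the construction, not the authors' implementation.

import numpy as np
from functools import reduce

def householder(v):
    # Orthogonal reflector H = I - 2 v v^T / ||v||^2.
    v = v / np.linalg.norm(v)
    return np.eye(len(v)) - 2.0 * np.outer(v, v)

def svd_transition(us, sigmas, vs, margin=0.1):
    # Clamp singular values into [1 - margin, 1 + margin] to rule out
    # exploding gradients and limit vanishing ones.
    sigmas = np.clip(sigmas, 1.0 - margin, 1.0 + margin)
    U = reduce(np.matmul, [householder(u) for u in us])
    V = reduce(np.matmul, [householder(v) for v in vs])
    return U @ np.diag(sigmas) @ V.T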
164
Total Style Transfer with a Single Feed-Forward Network
Recent image style transferring methods achieved arbitrary stylization with input content and style images.To transfer the style of an arbitrary image to a content image, these methods used a feed-forward network with a lowest-scaled feature transformer or a cascade of the networks with a feature transformer of a corresponding scale.However, their approaches did not consider either multi-scaled style in their single-scale feature transformer or dependency between the transformed feature statistics across the cascade networks.This shortcoming resulted in generating partially and inexactly transferred style in the generated images.To overcome this limitation of partial style transfer, we propose a total style transferring method which transfers multi-scaled feature statistics through a single feed-forward process.First, our method transforms multi-scaled feature maps of a content image into those of a target style image by considering both inter-channel correlations in each single scaled feature map and inter-scale correlations between multi-scaled feature maps.Second, each transformed feature map is inserted into the decoder layer of the corresponding scale using skip-connection.Finally, the skip-connected multi-scaled feature maps are decoded into a stylized image through our trained decoder network.
A style transfer method that transforms multi-scaled feature statistics in a single feed-forward network, avoiding the partial style transfer of single-scale or cascaded approaches.
165
I Know the Feeling: Learning to Converse with Empathy
Beyond understanding what is being discussed, human communication requires an awareness of what someone is feeling.One challenge for dialogue agents is recognizing feelings in the conversation partner and replying accordingly, a key communicative skill that is trivial for humans.Research in this area is made difficult by the paucity of suitable publicly available datasets both for emotion and dialogues.This work proposes a new task for empathetic dialogue generation and EmpatheticDialogues, a dataset of 25k conversations grounded in emotional situations to facilitate training and evaluating dialogue systems.Our experiments indicate that dialogue models that use our dataset are perceived to be more empathetic by human evaluators, while improving on other metrics as well, compared to models merely trained on large-scale Internet conversation data.We also present empirical comparisons of several ways to improve the performance of a given model by leveraging existing models or datasets without requiring lengthy re-training of the full model.
We improve existing dialogue systems for responding to people sharing personal stories, incorporating emotion prediction representations and also release a new benchmark and dataset of empathetic dialogues.
166
Economy Statistical Recurrent Units For Inferring Nonlinear Granger Causality
Granger causality is a widely-used criterion for analyzing interactions in large-scale networks.As most physical interactions are inherently nonlinear, we consider the problem of inferring the existence of pairwise Granger causality between nonlinearly interacting stochastic processes from their time series measurements.Our proposed approach relies on modeling the embedded nonlinearities in the measurements using a component-wise time series prediction model based on Statistical Recurrent Units.We make a case that the network topology of Granger causal relations is directly inferrable from a structured sparse estimate of the internal parameters of the SRU networks trained to predict the processes’ time series measurements.We propose a variant of SRU, called economy-SRU, which, by design has considerably fewer trainable parameters, and therefore less prone to overfitting.The economy-SRU computes a low-dimensional sketch of its high-dimensional hidden state in the form of random projections to generate the feedback for its recurrent processing.Additionally, the internal weight parameters of the economy-SRU are strategically regularized in a group-wise manner to facilitate the proposed network in extracting meaningful predictive features that are highly time-localized to mimic real-world causal events.Extensive experiments are carried out to demonstrate that the proposed economy-SRU based time series prediction model outperforms the MLP, LSTM and attention-gated CNN-based time series models considered previously for inferring Granger causality.
A new recurrent neural network architecture for detecting pairwise Granger causality between nonlinearly interacting time series.
167
Stochastic Training of Graph Convolutional Networks
Graph convolutional networks (GCNs) are powerful deep neural networks for graph-structured data. However, GCN computes nodes' representations recursively from their neighbors, making the receptive field size grow exponentially with the number of layers. Previous attempts on reducing the receptive field size by subsampling neighbors do not have any convergence guarantee, and their receptive field size per node is still in the order of hundreds. In this paper, we develop a preprocessing strategy and two control variate based algorithms to further reduce the receptive field size. Our algorithms are guaranteed to converge to GCN's local optimum regardless of the neighbor sampling size. Empirical results show that our algorithms have a convergence speed per epoch similar to the exact algorithm even using only two neighbors per node. The time consumption of our algorithm on the Reddit dataset is only one fifth of that of previous neighbor sampling algorithms.
A control variate based stochastic training algorithm for graph convolutional networks whose receptive field can be reduced to two neighbors per node.
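A simplified dense-matrix sketch of the control variate idea: keep stale historical activations for all neighbors and propagate only the recent change through the sampled edges, so the estimator's variance depends on how much activations move rather than on the sampling size. Sparse implementations and the rescaling of the sampled adjacency are glossed over here.

import numpy as np

def cv_aggregate(A, A_sampled, H, H_hist):
    # A_sampled: subsampled (and appropriately rescaled) adjacency with a
    # couple of neighbors per node; H_hist: stored activations from earlier
    # iterations. The exact aggregation A @ H is approximated by propagating
    # only the change (H - H_hist) through the sampled edges.
    return A_sampled @ (H - H_hist) + A @ H_hist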
168
Instant Quantization of Neural Networks using Monte Carlo Methods
Low bit-width integer weights and activations are very important for efficient inference, especially with respect to lower power consumption.We propose to apply Monte Carlo methods and importance sampling to sparsify and quantize pre-trained neural networks without any retraining.We obtain sparse, low bit-width integer representations that approximate the full precision weights and activations.The precision, sparsity, and complexity are easily configurable by the amount of sampling performed.Our approach, called Monte Carlo Quantization, is linear in both time and space, while the resulting quantized sparse networks show minimal accuracy loss compared to the original full-precision networks.Our method either outperforms or achieves results competitive with methods that do require additional training on a variety of challenging tasks.
Monte Carlo methods for quantizing pre-trained models without any additional training.
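A minimal numpy sketch of the importance-sampling step: indices are sampled in proportion to weight magnitude, and the visit counts become sparse integer weights sharing one scale. This is a schematic reading of the abstract, not the authors' exact procedure.

import numpy as np

def mc_quantize(w, num_samples, rng=np.random.default_rng()):
    # Sample indices with probability proportional to |w_i|; unvisited
    # weights are pruned, and precision grows with num_samples.
    p = np.abs(w) / np.abs(w).sum()
    counts = np.bincount(rng.choice(len(w), size=num_samples, p=p),
                         minlength=len(w))
    scale = np.abs(w).sum() / num_samples  # shared scale for the integers
    return np.sign(w) * counts, scale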
169
INFORMATION MAXIMIZATION AUTO-ENCODING
We propose the Information Maximization Autoencoder, an information theoretic approach to simultaneously learn continuous and discrete representations in an unsupervised setting.Unlike the Variational Autoencoder framework, IMAE starts from a stochastic encoder that seeks to map each input data to a hybrid discrete and continuous representation with the objective of maximizing the mutual information between the data and their representations.A decoder is included to approximate the posterior distribution of the data given their representations, where a high fidelity approximation can be achieved by leveraging the informative representations. We show that the proposed objective is theoretically valid and provides a principled framework for understanding the tradeoffs regarding informativeness of each representation factor, disentanglement of representations, and decoding quality.
An information-theoretic approach for unsupervised learning of a hybrid of discrete and continuous representations.
170
Improving generalization by regularizing in $L^2$ function space
Learning rules for neural networks necessarily include some form of regularization. Most regularization techniques are conceptualized and implemented in the space of parameters. However, it is also possible to regularize in the space of functions. Here, we propose to measure networks in an $L^2$ Hilbert space, and test a learning rule that regularizes the distance a network can travel through $L^2$-space each update. This approach is inspired by the slow movement of gradient descent through parameter space as well as by the natural gradient, which can be derived from a regularization term upon functional change. The resulting learning rule, which we call Hilbert-constrained gradient descent (HCGD), is thus closely related to the natural gradient but regularizes a different and more calculable metric over the space of functions. Experiments show that HCGD is efficient and leads to considerably better generalization.
It's important to consider optimization in function space, not just parameter space. We introduce a learning rule that reduces distance traveled in function space, just like SGD limits distance traveled in parameter space.
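A PyTorch-flavored sketch of the penalty at the heart of Hilbert-constrained gradient descent: estimate the $L^2$ distance travelled in function space on a batch and add it to the loss, so updates that barely move parameters but drastically change the function are discouraged. The batch-based estimate and the weighting below are illustrative.

import torch

def l2_function_distance(outputs_before, outputs_after):
    # Monte Carlo estimate of || f_new - f_old ||^2 in L^2, using the
    # network's outputs on a batch drawn from the input distribution.
    return torch.mean((outputs_after - outputs_before) ** 2)

# Usage inside a training step (schematic):
#   loss = task_loss + lam * l2_function_distance(f_old(x), model(x))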
171
Stochastic Gradient Descent with Biased but Consistent Gradient Estimators
Stochastic gradient descent, which dates back to the 1950s, is one of the most popular and effective approaches for performing stochastic optimization.Research on SGD resurged recently in machine learning for optimizing convex loss functions and training nonconvex deep neural networks.The theory assumes that one can easily compute an unbiased gradient estimator, which is usually the case due to the sample average nature of empirical risk minimization.There exist, however, many scenarios where an unbiased estimator may be as expensive to compute as the full gradient because training examples are interconnected.Recently, Chen et al. proposed using a consistent gradient estimator as an economic alternative.Encouraged by empirical success, we show, in a general setting, that consistent estimators result in the same convergence behavior as do unbiased ones.Our analysis covers strongly convex, convex, and nonconvex objectives.We verify the results with illustrative experiments on synthetic and real-world data.This work opens several new research directions, including the development of more efficient SGD updates with consistent estimators and the design of efficient training algorithms for large-scale graphs.
Convergence theory for biased (but consistent) gradient estimators in stochastic optimization and application to graph convolutional networks
172
Bias-Reduced Uncertainty Estimation for Deep Neural Classifiers
We consider the problem of uncertainty estimation in the context of deep neural classification.In this context, all known methods are based on extracting uncertainty signals from a trained network optimized to solve the classification problem at hand.We demonstrate that such techniques tend to introduce biased estimates for instances whose predictions are supposed to be highly confident.We argue that this deficiency is an artifact of the dynamics of training with SGD-like optimizers, and it has some properties similar to overfitting.Based on this observation, we develop an uncertainty estimation algorithm that selectively estimates the uncertainty of highly confident points, using earlier snapshots of the trained model, before their estimates are jittered.We present extensive experiments indicating that the proposed algorithm provides uncertainty estimates that are consistently better than all known methods.
We use snapshots from the training process to improve any uncertainty estimation method of a DNN classifier.
173
FairFace: A Novel Face Attribute Dataset for Bias Measurement and Mitigation
Existing public face image datasets are strongly biased toward Caucasian faces, and other races are significantly underrepresented.The models trained from such datasets suffer from inconsistent classification accuracy, which limits the applicability of face analytic systems to non-White race groups.To mitigate the race bias problem in these datasets, we constructed a novel face image dataset containing 108,501 images which is balanced on race.We define 7 race groups: White, Black, Indian, East Asian, Southeast Asian, Middle Eastern, and Latino.Images were collected from the YFCC-100M Flickr dataset and labeled with race, gender, and age groups.Evaluations were performed on existing face attribute datasets as well as novel image datasets to measure the generalization performance.We find that the model trained from our dataset is substantially more accurate on novel datasets and the accuracy is consistent across race and gender groups.We also compare several commercial computer vision APIs and report their balanced accuracy across gender, race, and age groups.
A new face image dataset for balanced race, gender, and age which can be used for bias measurement and mitigation
174
A Learned Representation for Scalable Vector Graphics
Dramatic advances in generative models have resulted in near photographic quality for artificially rendered faces, animals and other objects in the natural world. In spite of such advances, a higher level understanding of vision and imagery does not arise from exhaustively modeling an object, but instead from identifying higher-level attributes that best summarize the aspects of an object. In this work we attempt to model the drawing process of fonts by building sequential generative models of vector graphics. This model has the benefit of providing a scale-invariant representation for imagery whose latent representation may be systematically manipulated and exploited to perform style propagation. We demonstrate these results on a large dataset of fonts and highlight how such a model captures the statistical dependencies and richness of this dataset. We envision that our model can find use as a tool for designers to facilitate font design.
We attempt to model the drawing process of fonts by building sequential generative models of vector graphics (SVGs), a highly structured representation of font characters.
175
Unsupervised Discovery of Dynamic Neural Circuits
What can we learn about the functional organization of cortical microcircuits from large-scale recordings of neural activity? To obtain an explicit and interpretable model of time-dependent functional connections between neurons and to establish the dynamics of the cortical information flow, we develop 'dynamic neural relational inference'. We study both synthetic and real-world neural spiking data and demonstrate that the developed method is able to uncover the dynamic relations between neurons more reliably than existing baselines.
We develop 'dynamic neural relational inference', a variational autoencoder model that can explicitly and interpretably represent the hidden dynamic relations between neurons.
176
Exploring the Hidden Dimension in Accelerating Convolutional Neural Networks
DeePa is a deep learning framework that explores parallelism in all parallelizable dimensions to accelerate the training process of convolutional neural networks.DeePa optimizes parallelism at the granularity of each individual layer in the network.We present an elimination-based algorithm that finds an optimal parallelism configuration for every layer.Our evaluation shows that DeePa achieves up to 6.5× speedup compared to state-of-the-art deep learning frameworks and reduces data transfers by up to 23×.
To the best of our knowledge, DeePa is the first deep learning framework that controls and optimizes the parallelism of CNNs in all parallelizable dimensions at the granularity of each layer.
177
Learning Backpropagation-Free Deep Architectures with Kernels
One can substitute each neuron in any neural network with a kernel machine and obtain a counterpart powered by kernel machines.The new network inherits the expressive power and architecture of the original but works in a more intuitive way since each node enjoys the simple interpretation as a hyperplane.Further, using the kernel multilayer perceptron as an example, we prove that in classification, an optimal representation that minimizes the risk of the network can be characterized for each hidden layer.This result removes the need of backpropagation in learning the model and can be generalized to any feedforward kernel network.Moreover, unlike backpropagation, which turns models into black boxes, the optimal hidden representation enjoys an intuitive geometric interpretation, making the dynamics of learning in a deep kernel network simple to understand.Empirical results are provided to validate our theory.
We combine kernel method with connectionist models and show that the resulting deep architectures can be trained layer-wise and have more transparent learning dynamics.
178
Stochastic Learning of Additive Second-Order Penalties with Applications to Fairness
Many notions of fairness may be expressed as linear constraints, and the resulting constrained objective is often optimized by transforming the problem into its Lagrangian dual with additive linear penalties.In non-convex settings, the resulting problem may be difficult to solve as the Lagrangian is not guaranteed to have a deterministic saddle-point equilibrium. In this paper, we propose to modify the linear penalties to second-order ones, and we argue that this results in a more practical training procedure in non-convex, large-data settings.For one, the use of second-order penalties allows training the penalized objective with a fixed value of the penalty coefficient, thus avoiding the instability and potential lack of convergence associated with two-player min-max games.Secondly, we derive a method for efficiently computing the gradients associated with the second-order penalties in stochastic mini-batch settings.Our resulting algorithm performs well empirically, learning an appropriately fair classifier on a number of standard benchmarks.
We propose a method to stochastically optimize second-order penalties and show how this may apply to training fairness-aware classifiers.
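A sketch of the stochastic estimation issue for a squared-expectation penalty $\lambda(\mathbb{E}[g])^2$: its gradient $2\lambda\,\mathbb{E}[g]\,\mathbb{E}[\nabla g]$ is a product of expectations, so estimating the two factors on a shared mini-batch is biased, while two independent mini-batches, as below, keep the product unbiased. Whether this matches the paper's exact estimator is an assumption.

import numpy as np

def penalty_grad(g_batch_a, grad_g_batch_b, lam=1.0):
    # g_batch_a: per-example constraint values g(x) on mini-batch A.
    # grad_g_batch_b: per-example parameter gradients of g on an
    # independent mini-batch B. E_A[g] * E_B[grad g] is unbiased for
    # E[g] * E[grad g] precisely because A and B are independent.
    return 2.0 * lam * g_batch_a.mean() * grad_g_batch_b.mean(axis=0)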
179
Understanding and Exploiting the Low-Rank Structure of Deep Networks
Training methods for deep networks are primarily variants on stochastic gradient descent. Techniques that use second-order information are rarely used because of the computational cost and noise associated with those approaches in deep learning contexts. However, in this paper, we show how feedforward deep networks exhibit a low-rank derivative structure. This low-rank structure makes it possible to use second-order information without needing approximations and without incurring a significantly greater computational cost than gradient descent. To demonstrate this capability, we implement Cubic Regularization on a feedforward deep network with stochastic gradient descent and two of its variants. There, we use CR to calculate learning rates on a per-iteration basis while training on the MNIST and CIFAR-10 datasets. CR proved particularly successful in escaping plateau regions of the objective function. We also found that this approach requires less problem-specific information than other first-order methods in order to perform well.
We show that deep learning network derivatives have a low-rank structure, and this structure allows us to use second-order derivative information to calculate learning rates adaptively and in a computationally feasible manner.
180
Towards A Unified Min-Max Framework for Adversarial Exploration and Robustness
The worst-case training principle that minimizes the maximal adversarial loss, also known as adversarial training, has shown to be a state-of-the-art approach for enhancing adversarial robustness against norm-ball bounded input perturbations.Nonetheless, min-max optimization beyond the purpose of AT has not been rigorously explored in the research of adversarial attack and defense.In particular, given a set of risk sources, minimizing the maximal loss induced from the domain set can be reformulated as a general min-max problem that is different from AT.Examples of this general formulation include attacking model ensembles, devising universal perturbation under multiple inputs or data transformations, and generalized AT over different types of attack models.We show that these problems can be solved under a unified and theoretically principled min-max optimization framework. We also show that the self-adjusted domain weights learned from our method provides a means to explain the difficulty level of attack and defense over multiple domains.Extensive experiments show that our approach leads to substantial performance improvement over the conventional averaging strategy.
A unified min-max optimization framework for adversarial attack and defense
181
Dimensionality Reduction for Representing the Knowledge of Probabilistic Models
Most deep learning models rely on expressive high-dimensional representations to achieve good performance on tasks such as classification.However, the high dimensionality of these representations makes them difficult to interpret and prone to over-fitting.We propose a simple, intuitive and scalable dimension reduction framework that takes into account the soft probabilistic interpretation of standard deep models for classification.When applying our framework to visualization, our representations more accurately reflect inter-class distances than standard visualization techniques such as t-SNE.We show experimentally that our framework improves generalization performance to unseen categories in zero-shot learning.We also provide a finite sample error upper bound guarantee for the method.
Dimensionality reduction for cases where examples can be represented as soft probability distributions.
182
Unsupervised Learning of Goal Spaces for Intrinsically Motivated Goal Exploration
Intrinsically motivated goal exploration algorithms enable machines to discover repertoires of policies that produce a diversity of effects in complex environments.These exploration algorithms have been shown to allow real world robots to acquire skills such as tool use in high-dimensional continuous state and action spaces.However, they have so far assumed that self-generated goals are sampled in a specifically engineered feature space, limiting their autonomy.In this work, we propose an approach using deep representation learning algorithms to learn an adequate goal space.This is a developmental 2-stage approach: first, in a perceptual learning stage, deep learning algorithms use passive raw sensor observations of world changes to learn a corresponding latent space; then goal exploration happens in a second stage by sampling goals in this latent space.We present experiments with a simulated robot arm interacting with an object, and we show that exploration algorithms using such learned representations can closely match, and even sometimes improve, the performance obtained using engineered representations.
We propose a novel Intrinsically Motivated Goal Exploration architecture with unsupervised learning of goal space representations, and evaluate how various implementations enable the discovery of a diversity of policies.
183
Post-training for Deep Learning
One of the main challenges of deep learning methods is the choice of an appropriate training strategy.In particular, additional steps, such as unsupervised pre-training, have been shown to greatly improve the performances of deep structures.In this article, we propose an extra training step, called post-training, which only optimizes the last layer of the network.We show that this procedure can be analyzed in the context of kernel theory, with the first layers computing an embedding of the data and the last layer a statistical model to solve the task based on this embedding.This step makes sure that the embedding, or representation, of the data is used in the best possible way for the considered task.This idea is then tested on multiple architectures with various data sets, showing that it consistently provides a boost in performance.
We propose an additional training step, called post-training, which computes optimal weights for the last layer of the network.
184
Compressing Word Embeddings via Deep Compositional Code Learning
Natural language processing models often require a massive number of parameters for word embeddings, resulting in a large storage or memory footprint. Deploying neural NLP models to mobile devices requires compressing the word embeddings without any significant sacrifices in performance. For this purpose, we propose to construct the embeddings with few basis vectors. For each word, the composition of basis vectors is determined by a hash code. To maximize the compression rate, we adopt the multi-codebook quantization approach instead of a binary coding scheme. Each code is composed of multiple discrete numbers, such as (3, 2, 1, 8), where the value of each component is limited to a fixed range. We propose to directly learn the discrete codes in an end-to-end neural network by applying the Gumbel-softmax trick. Experiments show the compression rate achieves 98% in a sentiment analysis task and 94% ~ 99% in machine translation tasks without performance loss. In both tasks, the proposed method can improve the model performance by slightly lowering the compression rate. Compared to other approaches such as character-level segmentation, the proposed method is language-independent and does not require modifications to the network architecture.
Compressing the word embeddings over 94% without hurting the performance.
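A minimal PyTorch sketch of the compositional coding step: per word, M Gumbel-softmax selections over K codewords pick basis vectors whose sum is the reconstructed embedding. The sizes and the straight-through (hard) setting below are illustrative choices.

import torch
import torch.nn.functional as F

M, K, D = 8, 16, 300                       # M codebooks, K codewords, dim D
codebooks = torch.nn.Parameter(torch.randn(M, K, D))

def compose_embedding(logits, tau=1.0):
    # logits: (batch, M, K) scores for each word's discrete code.
    # Gumbel-softmax yields differentiable (near) one-hot selections, so
    # the discrete codes can be learned end to end.
    codes = F.gumbel_softmax(logits, tau=tau, hard=True, dim=-1)
    return torch.einsum('bmk,mkd->bd', codes, codebooks)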
185
Deep Anomaly Detection with Outlier Exposure
It is important to detect anomalous inputs when deploying machine learning systems.The use of larger and more complex inputs in deep learning magnifies the difficulty of distinguishing between anomalous and in-distribution examples.At the same time, diverse image and text data are available in enormous quantities.We propose leveraging these data to improve deep anomaly detection by training anomaly detectors against an auxiliary dataset of outliers, an approach we call Outlier Exposure.This enables anomaly detectors to generalize and detect unseen anomalies.In extensive experiments on natural language processing and small- and large-scale vision tasks, we find that Outlier Exposure significantly improves detection performance.We also observe that cutting-edge generative models trained on CIFAR-10 may assign higher likelihoods to SVHN images than to CIFAR-10 images; we use OE to mitigate this issue.We also analyze the flexibility and robustness of Outlier Exposure, and identify characteristics of the auxiliary dataset that improve performance.
OE teaches anomaly detectors to learn heuristics for detecting unseen anomalies; experiments are in classification, density estimation, and calibration in NLP and vision settings; we do not tune on test distribution samples, unlike previous work
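In the classification setting, the Outlier Exposure objective amounts to ordinary cross-entropy on in-distribution data plus a term pushing predictions on the auxiliary outliers toward the uniform distribution; a PyTorch sketch, where the weighting lam is a tunable choice:

import torch.nn.functional as F

def oe_loss(logits_in, labels_in, logits_out, lam=0.5):
    # In-distribution term: standard cross-entropy.
    ce = F.cross_entropy(logits_in, labels_in)
    # Outlier term: cross-entropy between the uniform distribution and
    # the model's softmax on auxiliary outlier inputs.
    uniform_ce = -F.log_softmax(logits_out, dim=1).mean()
    return ce + lam * uniform_ce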
186
Beyond GANs: Transforming without a Target Distribution
While generative neural networks can learn to transform a specific input dataset into a specific target dataset, they require having just such a paired set of input/output datasets. For instance, to fool the discriminator, a generative adversarial network exclusively trained to transform images of black-haired *men* to blond-haired *men* would need to change gender-related characteristics as well as hair color when given images of black-haired *women* as input. This is problematic, as often it is possible to obtain *a* pair of distributions but then have a second source distribution where the target distribution is unknown. The computational challenge is that generative models are good at generation within the manifold of the data that they are trained on. However, generating new samples outside of the manifold or extrapolating "out-of-sample" is a much harder problem that has been less well studied. To address this, we introduce a technique called *neuron editing* that learns how neurons encode an edit for a particular transformation in a latent space. We use an autoencoder to decompose the variation within the dataset into activations of different neurons and generate transformed data by defining an editing transformation on those neurons. By performing the transformation in a trained latent space, we encode fairly complex and non-linear transformations to the data with much simpler distribution shifts to the neurons' activations. Our technique is general and works on a wide variety of data domains and applications. We first demonstrate it on image transformations and then move to our two main biological applications: removal of batch artifacts representing unwanted noise and modeling the effect of drug treatments to predict synergy between drugs.
A method for learning a transformation between one pair of source/target datasets and applying it to a separate source dataset for which there is no target dataset.
187
TRUNCATED HORIZON POLICY SEARCH: COMBINING REINFORCEMENT LEARNING & IMITATION LEARNING
In this paper, we propose to combine imitation and reinforcement learning via the idea of reward shaping using an oracle. We study the effectiveness of the near-optimal cost-to-go oracle on the planning horizon and demonstrate that the cost-to-go oracle shortens the learner's planning horizon as a function of its accuracy: a globally optimal oracle can shorten the planning horizon to one, leading to a one-step greedy Markov Decision Process which is much easier to optimize, while an oracle that is far away from optimality requires planning over a longer horizon to achieve near-optimal performance. Hence our new insight bridges the gap and interpolates between imitation learning and reinforcement learning. Motivated by the above mentioned insights, we propose Truncated HORizon Policy Search (THOR), a method that focuses on searching for policies that maximize the total reshaped reward over a finite planning horizon when the oracle is sub-optimal. We experimentally demonstrate that a gradient-based implementation of THOR can achieve superior performance compared to RL baselines and IL baselines even when the oracle is sub-optimal.
Combining Imitation Learning and Reinforcement Learning to learn to outperform the expert
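The reward-shaping mechanism referenced above follows the standard potential-based form, with the cost-to-go oracle playing the role of the potential (generic notation, not necessarily the paper's):

\[
\tilde{r}(s, a, s') \;=\; r(s, a, s') \;+\; \gamma\,\hat{V}(s') \;-\; \hat{V}(s),
\]

so that with a perfect oracle \(\hat{V} = V^*\) the shaped problem is solvable by one-step greedy maximization, while a weaker oracle forces planning over a longer (but still truncated) horizon.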
188
Unsupervised Demixing of Structured Signals from Their Superposition Using GANs
Recently, Generative Adversarial Networks (GANs) have emerged as a popular alternative for modeling complex high dimensional distributions. Most of the existing works implicitly assume that clean samples from the target distribution are easily available. However, in many applications, this assumption is violated. In this paper, we consider the observation setting in which the samples from a target distribution are given by the superposition of two structured components, and leverage GANs for learning the structure of the components. We propose a novel framework, demixing-GAN, which learns the distributions of the two components at the same time. Through extensive numerical experiments, we demonstrate that the proposed framework can generate clean samples from unknown distributions, which can further be used for demixing of unseen test images.
An unsupervised learning approach for separating two structured signals from their superposition
189
On the relationship between Normalising Flows and Variational- and Denoising Autoencoders
Normalising Flows are a class of likelihood-based generative models that have recently gained popularity.They are based on the idea of transforming a simple density into that of the data.We seek to better understand this class of models, and how they compare to previously proposed techniques for generative modeling and unsupervised representation learning.For this purpose we reinterpret NFs in the framework of Variational Autoencoders, and present a new form of VAE that generalises normalising flows.The new generalised model also reveals a close connection to denoising autoencoders, and we therefore call our model the Variational Denoising Autoencoder.Using our unified model, we systematically examine the model space between flows, variational autoencoders, and denoising autoencoders, in a set of preliminary experiments on the MNIST handwritten digits.The experiments shed light on the modeling assumptions implicit in these models, and they suggest multiple new directions for future research in this space.
We explore the relationship between Normalising Flows and Variational- and Denoising Autoencoders, and propose a novel model that generalises them.
190
Multi-agent query reformulation: Challenges and the role of diversity
We investigate methods to efficiently learn diverse strategies in reinforcement learning for a generative structured prediction problem: query reformulation. In the proposed framework an agent consists of multiple specialized sub-agents and a meta-agent that learns to aggregate the answers from sub-agents to produce a final answer. Sub-agents are trained on disjoint partitions of the training data, while the meta-agent is trained on the full training set. Our method makes learning faster, because it is highly parallelizable, and has better generalization performance than strong baselines, such as an ensemble of agents trained on the full data. We evaluate on the tasks of document retrieval and question answering. The improved performance seems due to the increased diversity of reformulation strategies. This suggests that multi-agent, hierarchical approaches might play an important role in structured prediction tasks of this kind. However, we also find that it is not obvious how to characterize diversity in this context, and a first attempt based on clustering did not produce good results. Furthermore, reinforcement learning for the reformulation task is hard in high-performance regimes. At best, it only marginally improves over the state of the art, which highlights the complexity of training models in this framework for end-to-end language understanding problems.
We use reinforcement learning for query reformulation on two tasks and surprisingly find that, when training multiple agents, diversity of the reformulations is more important than specialisation.
191
Safe Policy Learning for Continuous Control
We study continuous action reinforcement learning problems in which it is crucial that the agent interacts with the environment only through safe policies, i.e., policies that keep the agent in desirable situations, both during training and at convergence. We formulate these problems as Markov decision processes and present safe policy optimization algorithms that are based on a Lyapunov approach to solve them. Our algorithms can use any standard policy gradient method, such as deep deterministic policy gradient or proximal policy optimization, to train a neural network policy, while guaranteeing near-constraint satisfaction for every policy update by projecting either the policy parameter or the selected action onto the set of feasible solutions induced by the state-dependent linearized Lyapunov constraints. Compared to the existing constrained PG algorithms, ours are more data efficient as they are able to utilize both on-policy and off-policy data. Moreover, our action-projection algorithm often leads to less conservative policy updates and allows for natural integration into an end-to-end PG training pipeline. We evaluate our algorithms and compare them with the state-of-the-art baselines on several simulated tasks, as well as a real-world robot obstacle-avoidance problem, demonstrating their effectiveness in terms of balancing performance and constraint satisfaction.
A general framework for incorporating long-term safety constraints in policy-based reinforcement learning
192
Evaluation of generative networks through their data augmentation capacity
Generative networks are known to be difficult to assess.Recent works on generative models, especially on generative adversarial networks, produce nice samples of varied categories of images.But the validation of their quality is highly dependent on the method used.A good generator should generate data which contain meaningful and varied information and that fit the distribution of a dataset.This paper presents a new method to assess a generator.Our approach is based on training a classifier with a mixture of real and generated samples.We train a generative model over a labeled training set, then we use this generative model to sample new data points that we mix with the original training data.This mixture of real and generated data is thus used to train a classifier which is afterwards tested on a given labeled test dataset.We compare this result with the score of the same classifier trained on the real training data mixed with noise.By computing the classifier's accuracy with different ratios of samples from both distributions we are able to estimate if the generator successfully fits and is able to generalize the distribution of the dataset.Our experiments compare the results of different generators from the VAE and GAN frameworks on the MNIST and Fashion-MNIST datasets.
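A simplified sketch of the evaluation protocol, with a logistic-regression classifier standing in for the image classifiers used in the paper; `ratio` controls the fraction of generated samples mixed into the real training set, and one would sweep it and compare against a noise baseline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def augmentation_score(X_real, y_real, X_gen, y_gen, X_test, y_test, ratio):
    # Mix generated samples into the real training set at the given
    # ratio, train on the mixture, and report held-out accuracy.
    n_gen = int(ratio * len(X_real))
    idx = np.random.choice(len(X_gen), size=n_gen, replace=True)
    X_mix = np.concatenate([X_real, X_gen[idx]])
    y_mix = np.concatenate([y_real, y_gen[idx]])
    clf = LogisticRegression(max_iter=1000).fit(X_mix, y_mix)
    return clf.score(X_test, y_test)
```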
Evaluating generative networks through their data augmentation capacity on discriminative models.
193
Improving Neural Abstractive Summarization Using Transfer Learning and Factuality-Based Evaluation: Towards Automating Science Journalism
We propose Automating Science Journalism, the process of producing a press release from a scientific paper, as a novel task that can serve as a new benchmark for neural abstractive summarization.ASJ is a challenging task as it requires long source texts to be summarized to long target texts, while also paraphrasing complex scientific concepts to be understood by the general audience.For this purpose, we introduce a specialized dataset for ASJ that contains scientific papers and their press releases from Science Daily.While state-of-the-art sequence-to-sequence models could easily generate convincing press releases for ASJ, these are generally nonfactual and deviate from the source.To address this issue, we improve seq2seq generation via transfer learning by co-training with new targets: scientific abstracts of sources and partitioned press releases.We further design a measure for factuality that scores how pertinent the press releases generated by our seq2seq models are to the source scientific papers.Our quantitative and qualitative evaluation shows sizable improvements over a strong baseline, suggesting that the proposed framework could improve seq2seq summarization beyond ASJ.
New: application of seq2seq modelling to automating science journalism; highly abstractive dataset; transfer learning tricks; automatic evaluation measure.
194
Design for Interpretability
The interpretability of an AI agent's behavior is of utmost importance for effective human-AI interaction.To this end, there has been increasing interest in characterizing and generating interpretable behavior of the agent.An alternative approach to guarantee that the agent generates interpretable behavior would be to design the agent's environment such that uninterpretable behaviors are either prohibitively expensive or unavailable to the agent.To date, there has been work under the umbrella of goal or plan recognition design exploring this notion of environment redesign in some specific instances of interpretable behavior.In this position paper, we scope the landscape of interpretable behavior and environment redesign in all its different flavors.Specifically, we focus on three specific types of interpretable behaviors -- explicability, legibility, and predictability -- and present a general framework for the problem of environment design that can be instantiated to achieve each of the three interpretable behaviors.We also discuss how specific instantiations of this framework correspond to prior works on environment design and identify exciting opportunities for future work.
We present an approach to redesign the environment such that uninterpretable agent behaviors are minimized or eliminated.
195
Learning to Infer
Inference models, which replace an optimization-based inference procedure with a learned model, have been fundamental in advancing Bayesian deep learning, the most notable example being variational auto-encoders.In this paper, we propose iterative inference models, which learn how to optimize a variational lower bound through repeatedly encoding gradients.Our approach generalizes VAEs under certain conditions, and by viewing VAEs in the context of iterative inference, we provide further insight into several recent empirical findings.We demonstrate the inference optimization capabilities of iterative inference models, explore unique aspects of these models, and show that they outperform standard inference models on typical benchmark data sets.
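A hedged PyTorch sketch of the iterative inference loop: the variational parameters are refined by repeatedly encoding gradients of the negative ELBO; `decoder` and `inf_model` are assumed trained networks, and the Gaussian posterior with unit-Gaussian prior is an illustrative choice.

```python
import torch

def iterative_inference(x, decoder, inf_model, num_steps, latent_dim):
    # Start from an uninformative Gaussian posterior N(mu, exp(log_var)).
    mu = torch.zeros(x.size(0), latent_dim, requires_grad=True)
    log_var = torch.zeros(x.size(0), latent_dim, requires_grad=True)
    for _ in range(num_steps):
        # Reparameterized sample and negative ELBO with a N(0, I) prior.
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
        recon = decoder(z)
        kl = 0.5 * torch.sum(mu ** 2 + log_var.exp() - 1.0 - log_var)
        loss = torch.nn.functional.mse_loss(recon, x, reduction='sum') + kl
        g_mu, g_lv = torch.autograd.grad(loss, (mu, log_var))
        # The inference model maps gradients to parameter updates.
        with torch.no_grad():
            d_mu, d_lv = inf_model(g_mu, g_lv)
            mu += d_mu
            log_var += d_lv
    return mu, log_var
```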
We propose a new class of inference models that iteratively encode gradients to estimate approximate posterior distributions.
196
Spike-based causal inference for weight alignment
In artificial neural networks trained with gradient descent, the weights used for processing stimuli are also used during backward passes to calculate gradients.For the real brain to approximate gradients, gradient information would have to be propagated separately, such that one set of synaptic weights is used for processing and another set is used for backward passes.This produces the so-called "weight transport problem" for biological models of learning, where the backward weights used to calculate gradients need to mirror the forward weights used to process stimuli.This weight transport problem has been considered so hard that popular proposals for biological learning assume that the backward weights are simply random, as in the feedback alignment algorithm.However, such random weights do not appear to work well for large networks.Here we show how the discontinuity introduced in a spiking system can lead to a solution to this problem.The resulting algorithm is a special case of an estimator used for causal inference in econometrics, regression discontinuity design.We show empirically that this algorithm rapidly makes the backward weights approximate the forward weights.As the backward weights become correct, this improves learning performance over feedback alignment on tasks such as Fashion-MNIST and CIFAR-10.Our results demonstrate that a simple learning rule in a spiking network can allow neurons to produce the right backward connections and thus solve the weight transport problem.
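The regression discontinuity idea can be sketched in a few lines: fit local linear regressions just below and just above the spiking threshold and read off the jump at the threshold; the variable names and single-neuron setup here are illustrative, not the paper's.

```python
import numpy as np

def rdd_effect(drive, outcome, threshold, bandwidth):
    # Local linear fits on either side of the spiking threshold; the
    # jump between the two fits at the threshold estimates the causal
    # effect of a spike on the downstream outcome.
    below = (drive > threshold - bandwidth) & (drive < threshold)
    above = (drive >= threshold) & (drive < threshold + bandwidth)
    fit_lo = np.polyfit(drive[below], outcome[below], deg=1)
    fit_hi = np.polyfit(drive[above], outcome[above], deg=1)
    return np.polyval(fit_hi, threshold) - np.polyval(fit_lo, threshold)
```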
We present a learning rule for feedback weights in a spiking neural network that addresses the weight transport problem.
197
Variational Diffusion Autoencoders with Random Walk Sampling
Variational inference methods, and especially variational autoencoders, specify scalable generative models that enjoy an intuitive connection to manifold learning: with many default priors, the posterior/likelihood pair can be viewed as an approximate homeomorphism between the data manifold and a latent Euclidean space.However, these approximations are well documented to become degenerate in training.Unless the subjective prior is carefully chosen, the topologies of the prior and data distributions often will not match.Conversely, diffusion maps automatically infer the data topology and enjoy a rigorous connection to manifold learning, but do not scale easily or provide the inverse homeomorphism.In this paper, we propose (1) a principled measure for recognizing the mismatch between data and latent distributions and (2) a method that combines the advantages of variational inference and diffusion maps to learn a homeomorphic generative model.The measure is a sufficient condition for a homeomorphism and is easy to compute and interpret.The method, the variational diffusion autoencoder (VDAE), is a novel generative algorithm that first infers the topology of the data distribution, then models a diffusion random walk over the data.To achieve efficient computation in VDAEs, we use stochastic versions of both variational inference and manifold learning optimization.We prove approximation-theoretic results for the dimension dependence of VDAEs, and show that locally isotropic sampling in the latent space results in a random walk over the reconstructed manifold.Finally, we demonstrate the utility of our method on various real and synthetic datasets, and show that it exhibits performance superior to other generative models.
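A much-simplified sketch of the random-walk sampling loop, assuming trained `encode`/`decode` networks; isotropic Gaussian steps stand in for the learned diffusion kernel, so this is only a caricature of the VDAE sampler.

```python
import numpy as np

def random_walk_samples(x0, encode, decode, num_steps, step_size):
    # Walk in latent space; decoding and re-encoding at each step
    # keeps the walk near the learned data manifold.
    z = encode(x0)
    samples = []
    for _ in range(num_steps):
        z = encode(decode(z)) + step_size * np.random.randn(*np.shape(z))
        samples.append(decode(z))
    return samples
```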
We combine variational inference and manifold learning (specifically VAEs and diffusion maps) to build a generative model based on a diffusion random walk on a data manifold; we generate samples by drawing from the walk's stationary distribution.
198
Gradient Surgery for Multi-Task Learning
While deep learning and deep reinforcement learning systems have demonstrated impressive results in domains such as image classification, game playing, and robotic control, data efficiency remains a major challenge, particularly as these algorithms learn individual tasks from scratch.Multi-task learning has emerged as a promising approach for sharing structure across multiple tasks to enable more efficient learning.However, the multi-task setting presents a number of optimization challenges, making it difficult to realize large efficiency gains compared to learning tasks independently.The reasons why multi-task learning is so challenging compared to single task learning are not fully understood.Motivated by the insight that gradient interference causes optimization challenges, we develop a simple and general approach for avoiding interference between gradients from different tasks, by altering the gradients through a technique we refer to as “gradient surgery”.We propose a form of gradient surgery that projects the gradient of a task onto the normal plane of the gradient of any other task that has a conflicting gradient.On a series of challenging multi-task supervised and multi-task reinforcement learning problems, we find that this approach leads to substantial gains in efficiency and performance. Further, it can be effectively combined with previously-proposed multi-task architectures for enhanced performance in a model-agnostic way.
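A minimal NumPy sketch of the projection step: whenever two task gradients conflict (negative inner product), one is projected onto the normal plane of the other before the gradients are combined; the original method also randomizes the order of projections, which is omitted here.

```python
import numpy as np

def gradient_surgery(grads):
    # For each pair of conflicting task gradients, remove the
    # conflicting component by projecting onto the other gradient's
    # normal plane, then sum the surgically altered gradients.
    projected = [np.array(g, dtype=float) for g in grads]
    for i, g_i in enumerate(projected):
        for j, g_j in enumerate(grads):
            if i == j:
                continue
            dot = np.dot(g_i, g_j)
            if dot < 0.0:  # gradients conflict
                g_i -= (dot / np.dot(g_j, g_j)) * g_j
    return np.sum(projected, axis=0)  # combined update direction
```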
We develop a simple and general approach for avoiding interference between gradients from different tasks, which improves the performance of multi-task learning in both the supervised and reinforcement learning domains.
199
Metropolis-Hastings view on variational inference and adversarial training
In this paper we propose to view the acceptance rate of the Metropolis-Hastings algorithm as a universal objective for learning to sample from a target distribution, given either as a set of samples or in the form of an unnormalized density.This point of view unifies the goals of such approaches as Markov Chain Monte Carlo, Generative Adversarial Networks, and variational inference.To reveal the connection we derive a lower bound on the acceptance rate and treat it as the objective for learning explicit and implicit samplers.The form of the lower bound allows for doubly stochastic gradient optimization in case the target distribution factorizes.We empirically validate our approach on Bayesian inference for neural networks and generative models for images.
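For concreteness, the sketch below estimates the average Metropolis-Hastings acceptance probability of a proposal kernel against an unnormalized target; the paper optimizes a lower bound on this quantity rather than the Monte Carlo estimate itself, and the `log_p`/`log_q` callables are assumptions.

```python
import numpy as np

def mh_acceptance_rate(log_p, log_q, xs, proposals):
    # log_p(x): unnormalized log density of the target.
    # log_q(a, b): log density of proposing a given current state b.
    accept = []
    for x, x_new in zip(xs, proposals):
        # log of the MH ratio p(x') q(x | x') / (p(x) q(x' | x))
        log_ratio = (log_p(x_new) - log_p(x)
                     + log_q(x, x_new) - log_q(x_new, x))
        accept.append(min(1.0, float(np.exp(log_ratio))))
    return float(np.mean(accept))
```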
Learning to sample via lower bounding the acceptance rate of the Metropolis-Hastings algorithm